Sample records for source location algorithm

  1. Probabilistic location estimation of acoustic emission sources in isotropic plates with one sensor

    NASA Astrophysics Data System (ADS)

    Ebrahimkhanlou, Arvin; Salamone, Salvatore

    2017-04-01

    This paper presents a probabilistic acoustic emission (AE) source localization algorithm for isotropic plate structures. The proposed algorithm requires only one sensor and uniformly monitors the entire area of such plates without any blind zones. In addition, it takes a probabilistic approach and quantifies localization uncertainties. The algorithm combines a modal acoustic emission (MAE) technique and a reflection-based technique to obtain information pertaining to the location of AE sources. To estimate confidence contours for the location of sources, uncertainties are quantified and propagated through the two techniques. The approach was validated using standard pencil lead break (PLB) tests on an aluminum plate. The results demonstrate that the proposed source localization algorithm successfully estimates confidence contours for the location of AE sources.

  2. Contributed Review: Source-localization algorithms and applications using time of arrival and time difference of arrival measurements

    NASA Astrophysics Data System (ADS)

    Li, Xinya; Deng, Zhiqun Daniel; Rauchenstein, Lynn T.; Carlson, Thomas J.

    2016-04-01

    Locating the position of fixed or mobile sources (i.e., transmitters) based on measurements obtained from sensors (i.e., receivers) is an important research area that is attracting much interest. In this paper, we review several representative localization algorithms that use time of arrivals (TOAs) and time difference of arrivals (TDOAs) to achieve high signal source position estimation accuracy when a transmitter is in the line-of-sight of a receiver. Circular (TOA) and hyperbolic (TDOA) position estimation approaches both use nonlinear equations that relate the known locations of receivers and unknown locations of transmitters. Estimation of the location of transmitters using the standard nonlinear equations may not be very accurate because of receiver location errors, receiver measurement errors, and computational efficiency challenges that result in high computational burdens. Least squares and maximum likelihood based algorithms have become the most popular computational approaches to transmitter location estimation. In this paper, we summarize the computational characteristics and position estimation accuracies of various positioning algorithms. By improving methods for estimating the time-of-arrival of transmissions at receivers and transmitter location estimation algorithms, transmitter location estimation may be applied across a range of applications and technologies such as radar, sonar, the Global Positioning System, wireless sensor networks, underwater animal tracking, mobile communications, and multimedia.
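
    As a concrete illustration of the circular/hyperbolic formulation, the following is a minimal sketch of TDOA localization by Gauss-Newton least squares, assuming line-of-sight propagation at a known speed; the geometry, speed value, and function name tdoa_locate are invented for the example, not taken from the paper.

      import numpy as np

      def tdoa_locate(sensors, tdoas, c=343.0, iters=50):
          """sensors: (N, 2) positions; tdoas: (N-1,) delays relative to sensor 0."""
          x = sensors.mean(axis=0)                     # initial guess: array centroid
          for _ in range(iters):
              d = np.linalg.norm(sensors - x, axis=1)  # ranges to each sensor
              pred = d[1:] - d[0]                      # predicted range differences
              r = c * np.asarray(tdoas) - pred         # residuals
              g = (x - sensors) / d[:, None]           # gradient of each range
              J = g[1:] - g[0]                         # Jacobian of pred w.r.t. x
              dx, *_ = np.linalg.lstsq(J, r, rcond=None)
              x = x + dx
              if np.linalg.norm(dx) < 1e-9:
                  break
          return x

      # synthetic check: four sensors, one source
      sensors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
      src = np.array([3.0, 7.0])
      d = np.linalg.norm(sensors - src, axis=1)
      print(tdoa_locate(sensors, (d[1:] - d[0]) / 343.0))   # ~ [3. 7.]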

  3. Acoustic Emission Source Location Using a Distributed Feedback Fiber Laser Rosette

    PubMed Central

    Huang, Wenzhu; Zhang, Wentao; Li, Fang

    2013-01-01

    This paper proposes an approach for acoustic emission (AE) source localization in a large marble stone using distributed feedback (DFB) fiber lasers. The aim of this study is to detect damage in structures such as those found in civil applications. The directional sensitivity of the DFB fiber laser is investigated by calculating a location coefficient using digital signal analysis: autocorrelation is used to extract the location coefficient of a periodic AE signal, and wavelet packet energy is calculated to obtain the location coefficient of a burst AE source. Normalization is then applied to eliminate the influence of the distance and intensity of the AE source. A new location algorithm based on the location coefficient is presented and tested to determine the location of an AE source using a Delta (Δ) DFB fiber laser rosette configuration. The advantages of the proposed algorithm over traditional methods based on fiber Bragg gratings (FBGs) include a higher strain resolution for AE detection and the ability to account for two different types of AE source during location. PMID:24141266

  4. Beamspace dual signal space projection (bDSSP): a method for selective detection of deep sources in MEG measurements.

    PubMed

    Sekihara, Kensuke; Adachi, Yoshiaki; Kubota, Hiroshi K; Cai, Chang; Nagarajan, Srikantan S

    2018-06-01

    Magnetoencephalography (MEG) has a well-recognized weakness at detecting deeper brain activities. This paper proposes a novel algorithm for selective detection of deep sources by suppressing interference signals from superficial sources in MEG measurements. The proposed algorithm combines the beamspace preprocessing method with the dual signal space projection (DSSP) interference suppression method. A prerequisite of the proposed algorithm is prior knowledge of the location of the deep sources. The proposed algorithm first derives the basis vectors that span a local region just covering the locations of the deep sources. It then estimates the time-domain signal subspace of the superficial sources by using the projector composed of these basis vectors. Signals from the deep sources are extracted by projecting the row space of the data matrix onto the direction orthogonal to the signal subspace of the superficial sources. Compared with the previously proposed beamspace signal space separation (SSS) method, the proposed algorithm is capable of suppressing much stronger interference from superficial sources. This capability is demonstrated in our computer simulation as well as experiments using phantom data. The proposed bDSSP algorithm can be a powerful tool in studies of physiological functions of midbrain and deep brain structures.
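
    The interference-suppression idea can be conveyed with a minimal sketch: given basis vectors spanning the superficial-source field patterns, the data are projected onto the orthogonal complement. This is only the generic subspace-projection step (the authors' bDSSP additionally works with the temporal row space of the data); all names and dimensions below are invented.

      import numpy as np

      def project_out(B, L_int):
          """B: sensor data (channels x time); L_int: basis whose columns span
          the interference (superficial-source) field patterns."""
          Q, _ = np.linalg.qr(L_int)               # orthonormal interference basis
          P = np.eye(B.shape[0]) - Q @ Q.T         # projector onto the complement
          return P @ B

      rng = np.random.default_rng(0)
      L_int = rng.standard_normal((32, 3))         # 32 channels, 3 interference patterns
      deep = rng.standard_normal((32, 1))          # one deep-source pattern
      B = L_int @ rng.standard_normal((3, 200)) + deep @ np.sin(np.linspace(0, 20, 200))[None]
      # interference is removed exactly; note any deep-field component that
      # happens to lie inside span(L_int) is lost along with it
      B_clean = project_out(B, L_int)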

  5. Beamspace dual signal space projection (bDSSP): a method for selective detection of deep sources in MEG measurements

    NASA Astrophysics Data System (ADS)

    Sekihara, Kensuke; Adachi, Yoshiaki; Kubota, Hiroshi K.; Cai, Chang; Nagarajan, Srikantan S.

    2018-06-01

    Objective. Magnetoencephalography (MEG) has a well-recognized weakness at detecting deeper brain activities. This paper proposes a novel algorithm for selective detection of deep sources by suppressing interference signals from superficial sources in MEG measurements. Approach. The proposed algorithm combines the beamspace preprocessing method with the dual signal space projection (DSSP) interference suppression method. A prerequisite of the proposed algorithm is prior knowledge of the location of the deep sources. The proposed algorithm first derives the basis vectors that span a local region just covering the locations of the deep sources. It then estimates the time-domain signal subspace of the superficial sources by using the projector composed of these basis vectors. Signals from the deep sources are extracted by projecting the row space of the data matrix onto the direction orthogonal to the signal subspace of the superficial sources. Main results. Compared with the previously proposed beamspace signal space separation (SSS) method, the proposed algorithm is capable of suppressing much stronger interference from superficial sources. This capability is demonstrated in our computer simulation as well as experiments using phantom data. Significance. The proposed bDSSP algorithm can be a powerful tool in studies of physiological functions of midbrain and deep brain structures.

  6. Joint Inversion of Source Location and Source Mechanism of Induced Microseismics

    NASA Astrophysics Data System (ADS)

    Liang, C.

    2014-12-01

    Seismic source mechanism is a useful property for indicating source physics and the stress and strain distribution at regional, local and micro scales. In this study we jointly invert source mechanisms and locations for microseismic events induced by fluid-fracturing treatments in the oil and gas industry. For events large enough to show clear waveforms, quite a few techniques can be applied to invert the source mechanism, including waveform inversion, first-polarity inversion and many other methods and variants based on them. However, for events too small to identify in seismic traces, such as microseismic events induced by fluid fracturing, a source scanning algorithm (SSA) with waveform stacking is usually applied. A joint inversion of location and source mechanism is also possible, but at the cost of a high computational budget. The algorithm is thereby called the Source Location and Mechanism Scanning Algorithm (SLMSA). For a given velocity structure, all possible combinations of source locations (X, Y and Z) and source mechanisms (strike, dip and rake) are used to compute travel times and waveform polarities. After correcting normal moveout times and polarities and stacking all waveforms, the (X, Y, Z, strike, dip, rake) combination that gives the strongest stacked waveform is identified as the solution. To address the high computational cost, CPU-GPU programming is applied. Numerical datasets are used to test the algorithm. The SLMSA has also been applied to a fluid-fracturing dataset and reveals several advantages over the location-only method: (1) for shear sources, the location-only approach can hardly locate events because positively and negatively polarized traces cancel out, whereas SLMSA successfully picks up those events; (2) microseismic locations alone may not be enough to indicate the directionality of micro-fractures, and statistics of source mechanisms can certainly provide more knowledge of fracture orientation; (3) in our practice, the joint inversion almost always yields more events than the location-only method, and for events also picked by the SSA method, the stacking power of SLMSA is always higher than that obtained with SSA.
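
    A toy version of the scanning-and-stacking idea is sketched below, restricted to the location grid (the mechanism scan over strike/dip/rake and the polarity corrections that define SLMSA are omitted); the grid, velocity and traces are synthetic.

      import numpy as np

      def source_scan(traces, dt, sensors, grid, v):
          """Return the grid point whose moveout-corrected stack is strongest.
          traces: (nrec, nt); sensors: (nrec, 2); grid: (ng, 2); v: wave speed."""
          best, best_power = None, -np.inf
          nt = traces.shape[1]
          for g in grid:
              tt = np.linalg.norm(sensors - g, axis=1) / v
              shifts = np.round((tt - tt.min()) / dt).astype(int)
              n = nt - shifts.max()
              stack = sum(traces[i, s:s + n] for i, s in enumerate(shifts))
              power = np.max(np.abs(stack))
              if power > best_power:
                  best, best_power = g, power
          return best

      # synthetic spike arrivals from a source at (2, 3)
      sensors = np.array([[0.0, 0.0], [10.0, 0.0], [5.0, 10.0]])
      src, v, dt, nt = np.array([2.0, 3.0]), 2.0, 0.01, 1000
      traces = np.zeros((3, nt))
      arrivals = np.linalg.norm(sensors - src, axis=1) / v
      traces[np.arange(3), np.round(arrivals / dt).astype(int)] = 1.0
      xs, ys = np.meshgrid(np.arange(0, 10.5, 0.5), np.arange(0, 10.5, 0.5))
      grid = np.column_stack([xs.ravel(), ys.ravel()])
      print(source_scan(traces, dt, sensors, grid, v))   # ~ [2. 3.]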

  7. Locating the source of diffusion in complex networks by time-reversal backward spreading.

    PubMed

    Shen, Zhesi; Cao, Shinan; Wang, Wen-Xu; Di, Zengru; Stanley, H Eugene

    2016-03-01

    Locating the source that triggers a dynamical process is a fundamental but challenging problem in complex networks, ranging from epidemic spreading in society and on the Internet to cancer metastasis in the human body. An accurate localization of the source is inherently limited by our ability to simultaneously access the information of all nodes in a large-scale complex network. This thus raises two critical questions: how do we locate the source from incomplete information and can we achieve full localization of sources at any possible location from a given set of observable nodes. Here we develop a time-reversal backward spreading algorithm to locate the source of a diffusion-like process efficiently and propose a general locatability condition. We test the algorithm by employing epidemic spreading and consensus dynamics as typical dynamical processes and apply it to the H1N1 pandemic in China. We find that the sources can be precisely located in arbitrary networks insofar as the locatability condition is assured. Our tools greatly improve our ability to locate the source of diffusion in complex networks based on limited accessibility of nodal information. Moreover, they have implications for controlling a variety of dynamical processes taking place on complex networks, such as inhibiting epidemics, slowing the spread of rumors, pollution control, and environmental protection.
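
    The gist of the approach can be sketched on a small unweighted graph, assuming unit propagation time per hop: the candidate node at which the reversed arrival times collapse onto a single origin time, i.e. with minimum variance of (arrival time minus graph distance), is declared the source. The toy network and observer times below are invented.

      import numpy as np
      from collections import deque

      def bfs_distances(adj, src):
          dist = {src: 0}
          q = deque([src])
          while q:
              u = q.popleft()
              for w in adj[u]:
                  if w not in dist:
                      dist[w] = dist[u] + 1
                      q.append(w)
          return dist

      def locate_source(adj, observed):
          """observed: {observer_node: arrival_time}; returns the node whose
          backward-propagated arrivals are most consistent (lowest variance)."""
          best, best_var = None, np.inf
          for s in adj:
              dist = bfs_distances(adj, s)
              resid = [t - dist[o] for o, t in observed.items()]
              v = np.var(resid)
              if v < best_var:
                  best, best_var = s, v
          return best

      # path network 0-1-2-3-4; true source is node 1, origin time 0
      adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
      print(locate_source(adj, {0: 1.0, 3: 2.0, 4: 3.0}))   # -> 1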

  8. Locating the source of diffusion in complex networks by time-reversal backward spreading

    NASA Astrophysics Data System (ADS)

    Shen, Zhesi; Cao, Shinan; Wang, Wen-Xu; Di, Zengru; Stanley, H. Eugene

    2016-03-01

    Locating the source that triggers a dynamical process is a fundamental but challenging problem in complex networks, ranging from epidemic spreading in society and on the Internet to cancer metastasis in the human body. An accurate localization of the source is inherently limited by our ability to simultaneously access the information of all nodes in a large-scale complex network. This thus raises two critical questions: how do we locate the source from incomplete information and can we achieve full localization of sources at any possible location from a given set of observable nodes. Here we develop a time-reversal backward spreading algorithm to locate the source of a diffusion-like process efficiently and propose a general locatability condition. We test the algorithm by employing epidemic spreading and consensus dynamics as typical dynamical processes and apply it to the H1N1 pandemic in China. We find that the sources can be precisely located in arbitrary networks insofar as the locatability condition is assured. Our tools greatly improve our ability to locate the source of diffusion in complex networks based on limited accessibility of nodal information. Moreover, they have implications for controlling a variety of dynamical processes taking place on complex networks, such as inhibiting epidemics, slowing the spread of rumors, pollution control, and environmental protection.

  9. Toward a probabilistic acoustic emission source location algorithm: A Bayesian approach

    NASA Astrophysics Data System (ADS)

    Schumacher, Thomas; Straub, Daniel; Higgins, Christopher

    2012-09-01

    Acoustic emissions (AE) are stress waves initiated by sudden strain releases within a solid body. These can be caused by internal mechanisms such as crack opening or propagation, crushing, or rubbing of crack surfaces. One application for the AE technique in the field of Structural Engineering is Structural Health Monitoring (SHM). With piezo-electric sensors mounted to the surface of the structure, stress waves can be detected, recorded, and stored for later analysis. An important step in quantitative AE analysis is the estimation of the stress wave source locations. Commonly, source location results are presented in a rather deterministic manner as spatial and temporal points, excluding information about uncertainties and errors. Due to variability in the material properties and uncertainty in the mathematical model, measures of uncertainty are needed beyond best-fit point solutions for source locations. This paper introduces a novel holistic framework for the development of a probabilistic source location algorithm. Bayesian analysis methods with Markov Chain Monte Carlo (MCMC) simulation are employed where all source location parameters are described with posterior probability density functions (PDFs). The proposed methodology is applied to an example employing data collected from a realistic section of a reinforced concrete bridge column. The selected approach is general and has the advantage that it can be extended and refined efficiently. Results are discussed and future steps to improve the algorithm are suggested.
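
    A minimal sketch of the Bayesian idea, using random-walk Metropolis sampling over source position and origin time and assuming a known wave speed, Gaussian picking noise, and a uniform prior over a unit plate (the paper's framework also treats material and model uncertainties, which this sketch omits):

      import numpy as np

      rng = np.random.default_rng(0)
      sensors = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
      v, sigma = 4.0, 1e-3                              # wave speed, timing noise (assumed)
      true_src = np.array([0.3, 0.6])
      t_obs = np.linalg.norm(sensors - true_src, axis=1) / v \
              + rng.normal(0.0, sigma, len(sensors))    # origin time taken as 0

      def log_post(theta):
          x, y, t0 = theta
          if not (0.0 <= x <= 1.0 and 0.0 <= y <= 1.0): # uniform prior on the plate
              return -np.inf
          pred = t0 + np.linalg.norm(sensors - np.array([x, y]), axis=1) / v
          return -0.5 * np.sum((t_obs - pred) ** 2) / sigma ** 2

      theta, lp, samples = np.array([0.5, 0.5, 0.0]), -np.inf, []
      for k in range(20000):
          prop = theta + rng.normal(0.0, [0.02, 0.02, 0.002])
          lp_prop = log_post(prop)
          if np.log(rng.uniform()) < lp_prop - lp:      # Metropolis accept/reject
              theta, lp = prop, lp_prop
          if k >= 5000:                                 # discard burn-in
              samples.append(theta.copy())
      samples = np.array(samples)
      print(samples[:, :2].mean(axis=0))                # posterior mean location
      print(samples[:, :2].std(axis=0))                 # marginal location uncertainty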

  10. Microseismic response characteristics modeling and locating of underground water supply pipe leak

    NASA Astrophysics Data System (ADS)

    Wang, J.; Liu, J.

    2015-12-01

    In traditional methods of pipeline leak location, geophones must be located on the pipe wall. If the exact location of the pipeline is unknown, leaks cannot be identified accurately. To solve this problem, taking into account the characteristics of pipeline leaks, we propose a continuous random seismic source model and construct geological models to investigate the proposed method for locating underground pipeline leaks. Based on two-dimensional (2D) viscoacoustic equations and the staggered-grid finite-difference (FD) algorithm, the microseismic wave field generated by a leaking pipe is modeled. Cross-correlation analysis and the simulated annealing (SA) algorithm were utilized to obtain the time difference and the leak location. We also analyze and discuss the effect of the number of recorded traces, the survey layout, and the offset and interval of the traces on the accuracy of the estimated location. The preliminary results of the simulation and field data experiment indicate that (1) a continuous random source can realistically represent the leak microseismic wave field in a simulation using 2D viscoacoustic equations and a staggered-grid FD algorithm; (2) the cross-correlation method is effective for calculating the time difference of the direct wave relative to the reference trace, although outside the refraction blind zone the accuracy of the time difference is reduced by the effects of the refracted wave; and (3) the time-difference acquisition method based on microseismic theory and the SA algorithm has great potential for locating leaks in underground pipelines from an array located on the ground surface. Keywords: viscoacoustic finite-difference simulation; continuous random source; simulated annealing algorithm; pipeline leak location
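
    The cross-correlation step used to extract the time difference of a trace relative to the reference trace can be sketched in a few lines of numpy; the pulse and sampling interval are invented:

      import numpy as np

      def xcorr_lag(sig, ref, dt):
          """Delay of sig relative to ref from the peak of the full cross-correlation."""
          cc = np.correlate(sig, ref, mode="full")
          return (np.argmax(cc) - (len(ref) - 1)) * dt

      rng = np.random.default_rng(1)
      t = np.arange(200)
      pulse = np.exp(-0.5 * ((t - 60) / 4.0) ** 2)        # smooth toy arrival
      ref = pulse + 0.05 * rng.standard_normal(t.size)
      sig = np.roll(pulse, 25) + 0.05 * rng.standard_normal(t.size)
      print(xcorr_lag(sig, ref, dt=0.001))                # ~ 0.025 s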

  11. Estimation of multiple sound sources with data and model uncertainties using the EM and evidential EM algorithms

    NASA Astrophysics Data System (ADS)

    Wang, Xun; Quost, Benjamin; Chazot, Jean-Daniel; Antoni, Jérôme

    2016-01-01

    This paper considers the problem of identifying multiple sound sources from acoustical measurements obtained by an array of microphones. The problem is solved via maximum likelihood. In particular, an expectation-maximization (EM) approach is used to estimate the sound source locations and strengths, the pressure measured by a microphone being interpreted as a mixture of latent signals emitted by the sources. This work also considers two kinds of uncertainties pervading the sound propagation and measurement process: uncertain microphone locations and an uncertain wavenumber. These uncertainties are transposed to the data in the belief functions framework. The source locations and strengths can then be estimated using a variant of the EM algorithm known as the evidential EM (E2M) algorithm. Finally, both simulations and real experiments illustrate the advantage of using the EM algorithm in the case without uncertainty and the E2M algorithm in the case of uncertain measurements.

  12. Combined mine tremors source location and error evaluation in the Lubin Copper Mine (Poland)

    NASA Astrophysics Data System (ADS)

    Leśniak, Andrzej; Pszczoła, Grzegorz

    2008-08-01

    A modified method of mine tremor location used in the Lubin Copper Mine is presented in this paper. In mines where intensive exploitation is carried out, a high-accuracy source location technique is usually required. The flatness of the geophone array, the complex geological structure of the rock mass, and intense exploitation make location results in such mines ambiguous. In the present paper an effective method of source location and location error evaluation is presented, combining data from two different arrays of geophones. The first consists of uniaxial geophones spaced over the whole mine area. The second is installed in one of the mining panels and consists of triaxial geophones. Using the data obtained from the triaxial geophones increases the precision of the hypocenter's vertical coordinate. The presented two-step location procedure combines standard location methods: P-wave directions and P-wave arrival times. The efficiency of the resulting algorithm was tested using computer simulations. The algorithm is fully non-linear and was tested on a multilayered rock mass model of the Lubin Copper Mine, showing better computational efficiency than the traditional P-wave arrival-time location algorithm. In this paper we present the complete procedure that effectively solves the non-linear location problems, i.e. mine tremor location and measurement of the error propagation.

  13. Contributed Review: Source-localization algorithms and applications using time of arrival and time difference of arrival measurements

    DOE PAGES

    Li, Xinya; Deng, Zhiqun Daniel; Rauchenstein, Lynn T.; ...

    2016-04-01

    Locating the position of fixed or mobile sources (i.e., transmitters) based on received measurements from sensors is an important research area that is attracting much research interest. In this paper, we present localization algorithms using time of arrivals (TOA) and time difference of arrivals (TDOA) to achieve high accuracy under line-of-sight conditions. The circular (TOA) and hyperbolic (TDOA) location systems both use nonlinear equations that relate the locations of the sensors and tracked objects. These nonlinear equations can develop accuracy challenges because of the existence of measurement errors and efficiency challenges that lead to high computational burdens. Least squares-based and maximum likelihood-based algorithms have become the most popular categories of location estimators. We also summarize the advantages and disadvantages of various positioning algorithms. By improving measurement techniques and localization algorithms, localization applications can be extended into the signal-processing-related domains of radar, sonar, the Global Positioning System, wireless sensor networks, underwater animal tracking, mobile communications, and multimedia.

  14. Contributed Review: Source-localization algorithms and applications using time of arrival and time difference of arrival measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Xinya; Deng, Zhiqun Daniel; Rauchenstein, Lynn T.

    Locating the position of fixed or mobile sources (i.e., transmitters) based on received measurements from sensors is an important research area that is attracting much research interest. In this paper, we present localization algorithms using time of arrivals (TOA) and time difference of arrivals (TDOA) to achieve high accuracy under line-of-sight conditions. The circular (TOA) and hyperbolic (TDOA) location systems both use nonlinear equations that relate the locations of the sensors and tracked objects. These nonlinear equations can develop accuracy challenges because of the existence of measurement errors and efficiency challenges that lead to high computational burdens. Least squares-based and maximum likelihood-based algorithms have become the most popular categories of location estimators. We also summarize the advantages and disadvantages of various positioning algorithms. By improving measurement techniques and localization algorithms, localization applications can be extended into the signal-processing-related domains of radar, sonar, the Global Positioning System, wireless sensor networks, underwater animal tracking, mobile communications, and multimedia.

  15. Alignment of leading-edge and peak-picking time of arrival methods to obtain accurate source locations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Roussel-Dupre, R.; Symbalisty, E.; Fox, C.

    2009-08-01

    The location of a radiating source can be determined by time-tagging the arrival of the radiated signal at a network of spatially distributed sensors. The accuracy of this approach depends strongly on the particular time-tagging algorithm employed at each of the sensors. If different techniques are used across the network, then the time tags must be referenced to a common fiducial for maximum location accuracy. In this report we derive the time corrections needed to temporally align leading-edge, time-tagging techniques with peak-picking algorithms. We focus on broadband radio frequency (RF) sources, an ionospheric propagation channel, and narrowband receivers, but the final results can be generalized to apply to any source, propagation environment, and sensor. Our analytic results are checked against numerical simulations for a number of representative cases and agree with the specific leading-edge algorithm studied independently by Kim and Eng (1995) and Pongratz (2005 and 2007).
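
    The correction the report derives can be pictured with a toy pulse: a leading-edge tag fires at a threshold crossing, a peak-picking tag at the waveform maximum, and their difference is the deterministic offset that must be removed before both tag types feed one location solution. The threshold fraction and pulse shape below are invented.

      import numpy as np

      t = np.linspace(0.0, 1.0, 2000)
      pulse = np.exp(-0.5 * ((t - 0.4) / 0.05) ** 2)     # smooth toy waveform

      def leading_edge_tag(sig, t, frac=0.1):
          """First time the signal exceeds frac of its peak."""
          return t[np.argmax(sig >= frac * sig.max())]

      def peak_tag(sig, t):
          return t[np.argmax(sig)]

      correction = peak_tag(pulse, t) - leading_edge_tag(pulse, t)
      print(correction)   # subtract from peak tags to align with leading-edge tags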

  16. Locating hazardous gas leaks in the atmosphere via modified genetic, MCMC and particle swarm optimization algorithms

    NASA Astrophysics Data System (ADS)

    Wang, Ji; Zhang, Ru; Yan, Yuting; Dong, Xiaoqiang; Li, Jun Ming

    2017-05-01

    Hazardous gas leaks in the atmosphere can cause significant economic losses in addition to environmental hazards, such as fires and explosions. A three-stage hazardous gas leak source localization method was developed that uses movable and stationary gas concentration sensors. The method calculates a preliminary source inversion with a modified genetic algorithm (MGA) that can cross over with individuals eliminated from the population, following selection of the best candidate. The method then determines a search zone using Markov Chain Monte Carlo (MCMC) sampling with a partial evaluation strategy. The leak source is then accurately localized using a modified guaranteed-convergence particle swarm optimization algorithm with several poorly performing individuals, following selection of the most successful individual with dynamic updates. The first two stages are based on data collected by motionless sensors, and the last stage is based on data from movable robots with sensors. The adaptability to measurement error and the effect of the leak source location were analyzed. The test results showed that this three-stage localization process can localize a leak source to within 1.0 m of the source for different leak source locations, with a measurement error standard deviation smaller than 2.0.

  17. A Radio-Map Automatic Construction Algorithm Based on Crowdsourcing

    PubMed Central

    Yu, Ning; Xiao, Chenxian; Wu, Yinfeng; Feng, Renjian

    2016-01-01

    Traditional radio-map-based localization methods need to sample a large number of location fingerprints offline, which requires huge amount of human and material resources. To solve the high sampling cost problem, an automatic radio-map construction algorithm based on crowdsourcing is proposed. The algorithm employs the crowd-sourced information provided by a large number of users when they are walking in the buildings as the source of location fingerprint data. Through the variation characteristics of users’ smartphone sensors, the indoor anchors (doors) are identified and their locations are regarded as reference positions of the whole radio-map. The AP-Cluster method is used to cluster the crowdsourced fingerprints to acquire the representative fingerprints. According to the reference positions and the similarity between fingerprints, the representative fingerprints are linked to their corresponding physical locations and the radio-map is generated. Experimental results demonstrate that the proposed algorithm reduces the cost of fingerprint sampling and radio-map construction and guarantees the localization accuracy. The proposed method does not require users’ explicit participation, which effectively solves the resource-consumption problem when a location fingerprint database is established. PMID:27070623

  18. Localization of short-range acoustic and seismic wideband sources: Algorithms and experiments

    NASA Astrophysics Data System (ADS)

    Stafsudd, J. Z.; Asgari, S.; Hudson, R.; Yao, K.; Taciroglu, E.

    2008-04-01

    We consider the determination of the location (source localization) of a disturbance source which emits acoustic and/or seismic signals. We devise an enhanced approximate maximum-likelihood (AML) algorithm to process data collected at acoustic sensors (microphones) belonging to an array of, non-collocated but otherwise identical, sensors. The approximate maximum-likelihood algorithm exploits the time-delay-of-arrival of acoustic signals at different sensors, and yields the source location. For processing the seismic signals, we investigate two distinct algorithms, both of which process data collected at a single measurement station comprising a triaxial accelerometer, to determine direction-of-arrival. The direction-of-arrivals determined at each sensor station are then combined using a weighted least-squares approach for source localization. The first of the direction-of-arrival estimation algorithms is based on the spectral decomposition of the covariance matrix, while the second is based on surface wave analysis. Both of the seismic source localization algorithms have their roots in seismology; and covariance matrix analysis had been successfully employed in applications where the source and the sensors (array) are typically separated by planetary distances (i.e., hundreds to thousands of kilometers). Here, we focus on very-short distances (e.g., less than one hundred meters) instead, with an outlook to applications in multi-modal surveillance, including target detection, tracking, and zone intrusion. We demonstrate the utility of the aforementioned algorithms through a series of open-field tests wherein we successfully localize wideband acoustic and/or seismic sources. We also investigate a basic strategy for fusion of results yielded by acoustic and seismic arrays.
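
    The fusion step, combining the per-station direction-of-arrival estimates by weighted least squares, amounts to intersecting bearing lines; a minimal sketch with invented geometry and uniform weights:

      import numpy as np

      def triangulate(stations, bearings, weights=None):
          """stations: (N, 2); bearings: radians from each station toward the
          source. Minimizes sum w_i * (n_i . (x - p_i))^2, n_i normal to bearing i."""
          n = np.column_stack([-np.sin(bearings), np.cos(bearings)])
          w = np.ones(len(stations)) if weights is None else np.asarray(weights)
          A = (w[:, None] * n).T @ n
          b = (w[:, None] * n).T @ np.einsum("ij,ij->i", n, stations)
          return np.linalg.solve(A, b)

      stations = np.array([[0.0, 0.0], [50.0, 0.0], [0.0, 50.0]])
      src = np.array([20.0, 30.0])
      bearings = np.arctan2(src[1] - stations[:, 1], src[0] - stations[:, 0])
      print(triangulate(stations, bearings))   # ~ [20. 30.]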

  19. Improving a maximum horizontal gradient algorithm to determine geological body boundaries and fault systems based on gravity data

    NASA Astrophysics Data System (ADS)

    Van Kha, Tran; Van Vuong, Hoang; Thanh, Do Duc; Hung, Duong Quoc; Anh, Le Duc

    2018-05-01

    The maximum horizontal gradient method was first proposed by Blakely and Simpson (1986) for determining the boundaries between geological bodies with different densities. The method involves the comparison of a center point with its eight nearest neighbors in four directions within each 3 × 3 calculation grid. The horizontal location and magnitude of the maximum values are found by interpolating a second-order polynomial through the trio of points provided that the magnitude of the middle point is greater than its two nearest neighbors in one direction. In theoretical models of multiple sources, however, the above condition does not allow the maximum horizontal locations to be fully located, and it could be difficult to correlate the edges of complicated sources. In this paper, the authors propose an additional condition to identify more maximum horizontal locations within the calculation grid. This additional condition will improve the method algorithm for interpreting the boundaries of magnetic and/or gravity sources. The improved algorithm was tested on gravity models and applied to gravity data for the Phu Khanh basin on the continental shelf of the East Vietnam Sea. The results show that the additional locations of the maximum horizontal gradient could be helpful for connecting the edges of complicated source bodies.
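
    A compact implementation of the original Blakely-Simpson test is sketched below (the paper's contribution, an additional condition that recovers more maxima, is not reproduced here); the grid field and spacing are invented.

      import numpy as np

      def max_horizontal_gradient(field, dx=1.0):
          """Local maxima of the horizontal gradient magnitude on a regular grid,
          with sub-cell position from a second-order polynomial fit."""
          gy, gx = np.gradient(field, dx)
          h = np.hypot(gx, gy)
          peaks = []
          for i in range(1, h.shape[0] - 1):
              for j in range(1, h.shape[1] - 1):
                  for di, dj in [(0, 1), (1, 0), (1, 1), (1, -1)]:  # four directions
                      a, b, c = h[i - di, j - dj], h[i, j], h[i + di, j + dj]
                      if b > a and b > c:                 # middle point is largest
                          denom = a - 2.0 * b + c         # parabola through the trio
                          off = 0.5 * (a - c) / denom     # vertex offset in cells
                          val = b - 0.25 * (a - c) * off  # interpolated magnitude
                          peaks.append((i, j, di, dj, off, val))
          return peaks

      # demo: a buried density step produces a gradient ridge along column 32
      x = np.arange(64.0)
      field = np.tanh((x[None, :] - 32.0) / 4.0) * np.ones((64, 1))
      print(len(max_horizontal_gradient(field)))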

  20. Locating the source of spreading in temporal networks

    NASA Astrophysics Data System (ADS)

    Huang, Qiangjuan; Zhao, Chengli; Zhang, Xue; Yi, Dongyun

    2017-02-01

    The topological structure of many real networks changes with time. Thus, locating the sources of a temporal network is a creative and challenging problem, as the enormous size of many real networks makes it unfeasible to observe the state of all nodes. In this paper, we propose an algorithm to solve this problem, named the backward temporal diffusion process. The proposed algorithm calculates the shortest temporal distance to locate the transmission source. We assume that the spreading process can be modeled as a simple diffusion process and by consensus dynamics. To improve the location accuracy, we also adopt four strategies to select which nodes should be observed by ranking their importance in the temporal network. Our paper proposes a highly accurate method for locating the source in temporal networks and is, to the best of our knowledge, a frontier work in this field. Moreover, our framework has important significance for controlling the transmission of diseases or rumors and formulating immediate immunization strategies.

  1. Data fusion for a vision-aided radiological detection system: Calibration algorithm performance

    NASA Astrophysics Data System (ADS)

    Stadnikia, Kelsey; Henderson, Kristofer; Martin, Allan; Riley, Phillip; Koppal, Sanjeev; Enqvist, Andreas

    2018-05-01

    In order to improve the ability to detect, locate, track and identify nuclear/radiological threats, the University of Florida nuclear detection community has teamed up with the 3D vision community to collaborate on a low-cost data fusion system. The key is to develop an algorithm to fuse the data from multiple radiological and 3D vision sensors as one system. The system under development at the University of Florida is being assessed with various types of radiological detectors and widely available visual sensors. A series of experiments was devised utilizing two EJ-309 liquid organic scintillation detectors (one primary and one secondary), a Microsoft Kinect for Windows v2 sensor and a Velodyne HDL-32E High Definition LiDAR sensor, a highly sensitive vision sensor primarily used to generate data for self-driving cars. Each experiment consisted of 27 static measurements of a Cf-252 source arranged in a cube with three different distances in each dimension. The calibration algorithm developed is used to calibrate the relative 3D location of the two different types of sensors without the need to measure it by hand, thus preventing operator manipulation and human error. The algorithm can also account for the facility-dependent deviation from ideal data fusion correlation. Using the vision sensor alone to determine a detector's location would limit the possible locations and would not allow for room dependence (facility-dependent deviation) in generating a detector pseudo-location for later data analysis. Using manually measured source location data, our algorithm predicted the offset detector location to within an average calibration-difference of 20 cm of its actual location, where calibration-difference is the Euclidean distance from the algorithm-predicted detector location to the measured detector location. The Kinect vision sensor data produced an average calibration-difference of 35 cm and the HDL-32E produced an average calibration-difference of 22 cm. Using NaI and He-3 detectors in place of the EJ-309, the calibration-difference was 52 cm for NaI and 75 cm for He-3. The algorithm is not detector dependent; however, these results show that detector-dependent adjustments are required.

  2. An almost-parameter-free harmony search algorithm for groundwater pollution source identification.

    PubMed

    Jiang, Simin; Zhang, Yali; Wang, Pei; Zheng, Maohui

    2013-01-01

    The spatiotemporal characterization of unknown sources of groundwater pollution is frequently encountered in environmental problems. This study adopts a simulation-optimization approach that combines a contaminant transport simulation model with a heuristic harmony search algorithm to identify unknown pollution sources. In the proposed methodology, an almost-parameter-free harmony search algorithm is developed. The performance of this methodology is evaluated on an illustrative groundwater pollution source identification problem, and the identified results indicate that the proposed almost-parameter-free harmony search algorithm-based optimization model can give satisfactory estimations, even when the irregular geometry, erroneous monitoring data, and prior information shortage of potential locations are considered.
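
    For readers unfamiliar with the heuristic, a bare-bones harmony search minimizer is sketched below; the paper's almost-parameter-free variant adapts the control parameters automatically, which this sketch does not attempt, and the objective and bounds are invented.

      import numpy as np

      rng = np.random.default_rng(5)

      def harmony_search(f, bounds, hms=20, iters=2000, hmcr=0.9, par=0.3):
          """Minimize f over box bounds. hmcr: memory-considering rate;
          par: pitch-adjusting rate; bandwidth derived from the bounds."""
          lo, hi = np.asarray(bounds, float).T
          dim = len(lo)
          mem = lo + (hi - lo) * rng.random((hms, dim))   # harmony memory
          fit = np.array([f(x) for x in mem])
          bw = 0.01 * (hi - lo)                           # pitch bandwidth
          for _ in range(iters):
              new = np.empty(dim)
              for j in range(dim):
                  if rng.random() < hmcr:                 # draw from memory
                      new[j] = mem[rng.integers(hms), j]
                      if rng.random() < par:              # pitch adjustment
                          new[j] += bw[j] * (2.0 * rng.random() - 1.0)
                  else:                                   # fresh random value
                      new[j] = lo[j] + (hi[j] - lo[j]) * rng.random()
              new = np.clip(new, lo, hi)
              f_new = f(new)
              worst = np.argmax(fit)
              if f_new < fit[worst]:                      # replace worst harmony
                  mem[worst], fit[worst] = new, f_new
          return mem[np.argmin(fit)]

      target = np.array([3.2, -1.7])                      # stand-in "true source"
      print(harmony_search(lambda x: np.sum((x - target) ** 2),
                           [(-10.0, 10.0), (-10.0, 10.0)]))   # ~ [3.2 -1.7]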

  3. Towards an accurate real-time locator of infrasonic sources

    NASA Astrophysics Data System (ADS)

    Pinsky, V.; Blom, P.; Polozov, A.; Marcillo, O.; Arrowsmith, S.; Hofstetter, A.

    2017-11-01

    Infrasonic signals propagate from an atmospheric source via media with stochastic and fast space-varying conditions. Hence, their travel time, the amplitude at sensor recordings and even manifestation in the so-called "shadow zones" are random. Therefore, the traditional least-squares technique for locating infrasonic sources is often not effective, and the problem for the best solution must be formulated in probabilistic terms. Recently, a series of papers has been published about Bayesian Infrasonic Source Localization (BISL) method based on the computation of the posterior probability density function (PPDF) of the source location, as a convolution of a priori probability distribution function (APDF) of the propagation model parameters with likelihood function (LF) of observations. The present study is devoted to the further development of BISL for higher accuracy and stability of the source location results and decreasing of computational load. We critically analyse previous algorithms and propose several new ones. First of all, we describe the general PPDF formulation and demonstrate that this relatively slow algorithm might be among the most accurate algorithms, provided the adequate APDF and LF are used. Then, we suggest using summation instead of integration in a general PPDF calculation for increased robustness, but this leads us to the 3D space-time optimization problem. Two different forms of APDF approximation are considered and applied for the PPDF calculation in our study. One of them is previously suggested, but not yet properly used is the so-called "celerity-range histograms" (CRHs). Another is the outcome from previous findings of linear mean travel time for the four first infrasonic phases in the overlapping consecutive distance ranges. This stochastic model is extended here to the regional distance of 1000 km, and the APDF introduced is the probabilistic form of the junction between this travel time model and range-dependent probability distributions of the phase arrival time picks. To illustrate the improvements in both computation time and location accuracy achieved, we compare location results for the new algorithms, previously published BISL-type algorithms and the least-squares location technique. This comparison is provided via a case study of different typical spatial data distributions and statistical experiment using the database of 36 ground-truth explosions from the Utah Test and Training Range (UTTR) recorded during the US summer season at USArray transportable seismic stations when they were near the site between 2006 and 2008.

  4. Wearable Sensor Localization Considering Mixed Distributed Sources in Health Monitoring Systems

    PubMed Central

    Wan, Liangtian; Han, Guangjie; Wang, Hao; Shu, Lei; Feng, Nanxing; Peng, Bao

    2016-01-01

    In health monitoring systems, the base station (BS) and the wearable sensors communicate with each other to construct a virtual multiple input and multiple output (VMIMO) system. In real applications, the signal that the BS receives is a distributed source because of the scattering, reflection, diffraction and refraction in the propagation path. In this paper, a 2D direction-of-arrival (DOA) estimation algorithm for incoherently-distributed (ID) and coherently-distributed (CD) sources is proposed based on multiple VMIMO systems. ID and CD sources are separated through the second-order blind identification (SOBI) algorithm. The traditional estimating signal parameters via rotational invariance techniques (ESPRIT)-based algorithm is valid only for one-dimensional (1D) DOA estimation for the ID source. By constructing the signal subspace, two rotational invariance relationships are constructed. Then, we extend ESPRIT to estimate 2D DOAs for ID sources. For DOA estimation of CD sources, two rotational invariance relationships are constructed based on the application of generalized steering vectors (GSVs). Then, the ESPRIT-based algorithm is used for estimating the eigenvalues of the two rotational invariance matrices, which contain the angular parameters. The expressions of azimuth and elevation for ID and CD sources have closed forms, which means that spectrum peak searching is avoided. Therefore, compared to traditional 2D DOA estimation algorithms, the proposed algorithm imposes significantly lower computational complexity. The intersection point of two rays coming from two different directions, measured by two uniform rectangular arrays (URAs), can be regarded as the location of the biosensor (wearable sensor). Three BSs adopting the smart antenna (SA) technique cooperate with each other to locate the wearable sensors using the angulation positioning method. Simulation results demonstrate the effectiveness of the proposed algorithm. PMID:26985896
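
    The rotational-invariance step at the heart of ESPRIT can be shown in its classic form, 1D DOA estimation of point sources on a uniform linear array (the paper's extension to 2D DOAs of distributed sources via GSVs is not reproduced here); array size, angles and noise level are invented.

      import numpy as np

      rng = np.random.default_rng(2)

      def esprit_doa(X, n_src, d=0.5):
          """LS-ESPRIT for a uniform linear array. X: (n_ant, n_snapshots);
          d: element spacing in wavelengths. Returns DOAs in degrees."""
          R = X @ X.conj().T / X.shape[1]
          _, vecs = np.linalg.eigh(R)                    # ascending eigenvalues
          Es = vecs[:, -n_src:]                          # signal subspace
          Phi = np.linalg.pinv(Es[:-1]) @ Es[1:]         # rotational invariance
          w = np.angle(np.linalg.eigvals(Phi))
          return np.degrees(np.arcsin(w / (2.0 * np.pi * d)))

      n_ant, n_snap = 8, 400
      doas = np.radians([-10.0, 25.0])
      A = np.exp(2j * np.pi * 0.5 * np.outer(np.arange(n_ant), np.sin(doas)))
      S = rng.standard_normal((2, n_snap)) + 1j * rng.standard_normal((2, n_snap))
      N = rng.standard_normal((n_ant, n_snap)) + 1j * rng.standard_normal((n_ant, n_snap))
      print(np.sort(esprit_doa(A @ S + 0.1 * N, 2)))     # ~ [-10. 25.]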

  5. Wearable Sensor Localization Considering Mixed Distributed Sources in Health Monitoring Systems.

    PubMed

    Wan, Liangtian; Han, Guangjie; Wang, Hao; Shu, Lei; Feng, Nanxing; Peng, Bao

    2016-03-12

    In health monitoring systems, the base station (BS) and the wearable sensors communicate with each other to construct a virtual multiple input and multiple output (VMIMO) system. In real applications, the signal that the BS receives is a distributed source because of the scattering, reflection, diffraction and refraction in the propagation path. In this paper, a 2D direction-of-arrival (DOA) estimation algorithm for incoherently-distributed (ID) and coherently-distributed (CD) sources is proposed based on multiple VMIMO systems. ID and CD sources are separated through the second-order blind identification (SOBI) algorithm. The traditional estimating signal parameters via rotational invariance techniques (ESPRIT)-based algorithm is valid only for one-dimensional (1D) DOA estimation for the ID source. By constructing the signal subspace, two rotational invariance relationships are constructed. Then, we extend ESPRIT to estimate 2D DOAs for ID sources. For DOA estimation of CD sources, two rotational invariance relationships are constructed based on the application of generalized steering vectors (GSVs). Then, the ESPRIT-based algorithm is used for estimating the eigenvalues of the two rotational invariance matrices, which contain the angular parameters. The expressions of azimuth and elevation for ID and CD sources have closed forms, which means that spectrum peak searching is avoided. Therefore, compared to traditional 2D DOA estimation algorithms, the proposed algorithm imposes significantly lower computational complexity. The intersection point of two rays coming from two different directions, measured by two uniform rectangular arrays (URAs), can be regarded as the location of the biosensor (wearable sensor). Three BSs adopting the smart antenna (SA) technique cooperate with each other to locate the wearable sensors using the angulation positioning method. Simulation results demonstrate the effectiveness of the proposed algorithm.

  6. Biologically inspired binaural hearing aid algorithms: Design principles and effectiveness

    NASA Astrophysics Data System (ADS)

    Feng, Albert

    2002-05-01

    Despite rapid advances in the sophistication of hearing aid technology and microelectronics, listening in noise remains problematic for people with hearing impairment. To solve this problem two algorithms were designed for use in binaural hearing aid systems. The signal processing strategies are based on principles in auditory physiology and psychophysics: (a) the location/extraction (L/E) binaural computational scheme determines the directions of source locations and cancels noise by applying a simple subtraction method over every frequency band; and (b) the frequency-domain minimum-variance (FMV) scheme extracts a target sound from a known direction amidst multiple interfering sound sources. Both algorithms were evaluated using standard metrics such as signal-to-noise-ratio gain and articulation index. Results were compared with those from conventional adaptive beam-forming algorithms. In free-field tests with multiple interfering sound sources our algorithms performed better than conventional algorithms. Preliminary intelligibility and speech reception results in multitalker environments showed gains for every listener with normal or impaired hearing when the signals were processed in real time with the FMV binaural hearing aid algorithm. [Work supported by NIH-NIDCD Grant No. R21DC04840 and the Beckman Institute.]

  7. Algorithms for System Identification and Source Location.

    NASA Astrophysics Data System (ADS)

    Nehorai, Arye

    This thesis deals with several topics in least squares estimation and applications to source location. It begins with a derivation of a mapping between Wiener theory and Kalman filtering for nonstationary autoregressive moving average (ARMA) processes. Applying time domain analysis, connections are found between time-varying state space realizations and the input-output impulse response by matrix fraction description (MFD). Using these connections, the whitening filters are derived by the two approaches, and the Kalman gain is expressed in terms of Wiener theory. Next, fast estimation algorithms are derived in a unified way as special cases of the Conjugate Direction Method. The fast algorithms included are the block Levinson, fast recursive least squares, ladder (or lattice) and fast Cholesky algorithms. The results give a novel derivation and interpretation for all these methods, which are efficient alternatives to available recursive system identification algorithms. Multivariable identification algorithms are usually designed only for left MFD models. In this work, recursive multivariable identification algorithms are derived for right MFD models with diagonal denominator matrices. The algorithms are of prediction error and model reference type. Convergence analysis results obtained by the Ordinary Differential Equation (ODE) method are presented along with simulations. Sources of energy can be located by estimating time differences of arrival (TDOAs) of waves between the receivers. A new method for TDOA estimation is proposed for multiple unknown ARMA sources and additive correlated receiver noise. The method is based on a formula that uses only the receiver cross-spectra and the source poles. Two algorithms are suggested that allow tradeoffs between computational complexity and accuracy. A new time delay model is derived and used to show the applicability of the methods for non-integer TDOAs. Results from simulations illustrate the performance of the algorithms. The last chapter analyzes the response of exact least squares predictors for enhancement of sinusoids with additive colored noise. Using the matrix inversion lemma and the Christoffel-Darboux formula, the frequency response and amplitude gain of the sinusoids are expressed as functions of the signal and noise characteristics. The results generalize the available white noise case.

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, Qishi; Berry, M. L.; Grieme, M.

    We propose a localization-based radiation source detection (RSD) algorithm using the Ratio of Squared Distance (ROSD) method. Compared with the triangulation-based method, the advantages of this ROSD method are multi-fold: i) source location estimates based on four detectors have improved accuracy, ii) ROSD provides closed-form source location estimates and thus eliminates the imaginary-roots issue, and iii) ROSD produces a unique source location estimate, as opposed to the two real roots (if any) of triangulation, and obviates the need to distinguish real from phantom roots during clustering.
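
    One way a closed form can arise from squared-distance ratios is sketched below, assuming an idealized isotropic inverse-square count model c_i = S/d_i^2 with background already removed; the published ROSD algorithm may differ in detail. Writing R = |x|^2, the three pairwise ratio equations are linear in (x, R), and substituting x = a + b*R into R = |x|^2 leaves a single quadratic in R.

      import numpy as np

      def rosd_candidates(P, counts):
          """P: (4, 3) detector positions; counts: background-corrected rates.
          Returns candidate source positions from the closed-form quadratic."""
          p, c = np.asarray(P, float), np.asarray(counts, float)
          # pairwise ratio equations: M x = u + w*R, with R = |x|^2
          M = 2.0 * (c[1:, None] * p[1:] - c[0] * p[0])
          u = c[1:] * np.sum(p[1:] ** 2, axis=1) - c[0] * np.sum(p[0] ** 2)
          w = c[1:] - c[0]
          a, b = np.linalg.solve(M, u), np.linalg.solve(M, w)   # x = a + b*R
          # R = |a + b*R|^2  ->  (b.b) R^2 + (2 a.b - 1) R + (a.a) = 0
          roots = np.roots([b @ b, 2.0 * (a @ b) - 1.0, a @ a])
          return [a + b * r.real for r in roots
                  if abs(r.imag) < 1e-9 and r.real > 0.0]

      P = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0],
                    [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]])
      src = np.array([0.3, 0.2, 0.5])
      counts = 1000.0 / np.sum((P - src) ** 2, axis=1)
      print(rosd_candidates(P, counts))   # true source among the candidates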

  9. Closed-Form 3-D Localization for Single Source in Uniform Circular Array with a Center Sensor

    NASA Astrophysics Data System (ADS)

    Bae, Eun-Hyon; Lee, Kyun-Kyung

    A novel closed-form algorithm is presented for estimating the 3-D location (azimuth angle, elevation angle, and range) of a single source in a uniform circular array (UCA) with a center sensor. Based on the centrosymmetry of the UCA and noncircularity of the source, the proposed algorithm decouples and estimates the 2-D direction of arrival (DOA), i.e. azimuth and elevation angles, and then estimates the range of the source. Notwithstanding a low computational complexity, the proposed algorithm provides an estimation performance close to that of the benchmark estimator 3-D MUSIC.

  10. EEG and MEG source localization using recursively applied (RAP) MUSIC

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mosher, J.C.; Leahy, R.M.

    1996-12-31

    The multiple signal characterization (MUSIC) algorithm locates multiple asynchronous dipolar sources from electroencephalography (EEG) and magnetoencephalography (MEG) data. A signal subspace is estimated from the data, then the algorithm scans a single dipole model through a three-dimensional head volume and computes projections onto this subspace. To locate the sources, the user must search the head volume for local peaks in the projection metric. Here we describe a novel extension of this approach which we refer to as RAP (Recursively APplied) MUSIC. This new procedure automatically extracts the locations of the sources through a recursive use of subspace projections, which uses the metric of principal correlations as a multidimensional form of correlation analysis between the model subspace and the data subspace. The dipolar orientations, a form of 'diverse polarization', are easily extracted using the associated principal vectors.
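
    The projection-and-scan core of MUSIC can be sketched in bare numpy, here as a one-pass DOA scan for a linear array rather than a dipole scan over a head volume; RAP-MUSIC, as described above, recursively projects out each source found and rescans. All array parameters are invented.

      import numpy as np

      rng = np.random.default_rng(3)

      def music_spectrum(X, n_src, A_grid):
          """X: (n_sensors, n_snapshots); A_grid: candidate steering vectors."""
          R = X @ X.conj().T / X.shape[1]
          _, vecs = np.linalg.eigh(R)                    # ascending eigenvalues
          En = vecs[:, : X.shape[0] - n_src]             # noise subspace
          a = A_grid / np.linalg.norm(A_grid, axis=0)
          return 1.0 / np.sum(np.abs(En.conj().T @ a) ** 2, axis=0)

      # 8-element half-wavelength array, two sources at -20 and 15 degrees
      m = np.arange(8)[:, None]
      grid = np.radians(np.arange(-90.0, 90.5, 0.5))
      A_grid = np.exp(1j * np.pi * m * np.sin(grid))
      A = np.exp(1j * np.pi * m * np.sin(np.radians([-20.0, 15.0])))
      S = rng.standard_normal((2, 300)) + 1j * rng.standard_normal((2, 300))
      N = rng.standard_normal((8, 300)) + 1j * rng.standard_normal((8, 300))
      spec = music_spectrum(A @ S + 0.1 * N, 2, A_grid)
      peaks = [i for i in range(1, len(spec) - 1)
               if spec[i] > spec[i - 1] and spec[i] > spec[i + 1]]
      top2 = sorted(peaks, key=lambda i: spec[i])[-2:]
      print(sorted(np.degrees(grid[top2])))              # ~ [-20.0, 15.0]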

  11. Evaluation of Electroencephalography Source Localization Algorithms with Multiple Cortical Sources.

    PubMed

    Bradley, Allison; Yao, Jun; Dewald, Jules; Richter, Claus-Peter

    2016-01-01

    Source localization algorithms often show multiple active cortical areas as the source of electroencephalography (EEG). Yet, there is little data quantifying the accuracy of these results. In this paper, the performance of current source density source localization algorithms for the detection of multiple cortical sources of EEG data has been characterized. EEG data were generated by simulating multiple cortical sources (2-4) with the same strength or two sources with relative strength ratios of 1:1 to 4:1, and adding noise. These data were used to reconstruct the cortical sources using current source density (CSD) algorithms: sLORETA, MNLS, and LORETA using a p-norm with p equal to 1, 1.5 and 2. Precision (percentage of the reconstructed activity corresponding to simulated activity) and Recall (percentage of the simulated sources reconstructed) of each of the CSD algorithms were calculated. While sLORETA has the best performance when only one source is present, when two or more sources are present LORETA with p equal to 1.5 performs better. When the relative strength of one of the sources is decreased, all algorithms have more difficulty reconstructing that source. However, LORETA 1.5 continues to outperform other algorithms. If only the strongest source is of interest sLORETA is recommended, while LORETA with p equal to 1.5 is recommended if two or more of the cortical sources are of interest. These results provide guidance for choosing a CSD algorithm to locate multiple cortical sources of EEG and for interpreting the results of these algorithms.
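
    The estimators compared here all build on the regularized minimum-norm inverse; below is a minimal sketch of that common core together with the paper's two metrics (sLORETA adds statistical standardization and LORETA a Laplacian prior, neither reproduced here; the toy leadfield, shapes and regularization are invented):

      import numpy as np

      def min_norm_inverse(L, y, lam=1.0):
          """L2 minimum-norm estimate x = L^T (L L^T + lam*I)^(-1) y."""
          return L.T @ np.linalg.solve(L @ L.T + lam * np.eye(L.shape[0]), y)

      def precision_recall(est_active, true_active):
          est, true = set(est_active), set(true_active)
          tp = len(est & true)
          return tp / max(len(est), 1), tp / max(len(true), 1)

      rng = np.random.default_rng(6)
      L = rng.standard_normal((64, 300))        # 64 electrodes, 300 cortical patches
      x_true = np.zeros(300)
      x_true[[40, 220]] = [1.0, 0.7]            # two sources, unequal strengths
      y = L @ x_true + 0.01 * rng.standard_normal(64)
      x_hat = min_norm_inverse(L, y)
      est = np.argsort(np.abs(x_hat))[-2:]      # two strongest reconstructed patches
      print(precision_recall(est, [40, 220]))   # typically (1.0, 1.0) for this toy case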

  12. Evaluation of Electroencephalography Source Localization Algorithms with Multiple Cortical Sources

    PubMed Central

    Bradley, Allison; Yao, Jun; Dewald, Jules; Richter, Claus-Peter

    2016-01-01

    Background Source localization algorithms often show multiple active cortical areas as the source of electroencephalography (EEG). Yet, there is little data quantifying the accuracy of these results. In this paper, the performance of current source density source localization algorithms for the detection of multiple cortical sources of EEG data has been characterized. Methods EEG data were generated by simulating multiple cortical sources (2–4) with the same strength or two sources with relative strength ratios of 1:1 to 4:1, and adding noise. These data were used to reconstruct the cortical sources using current source density (CSD) algorithms: sLORETA, MNLS, and LORETA using a p-norm with p equal to 1, 1.5 and 2. Precision (percentage of the reconstructed activity corresponding to simulated activity) and Recall (percentage of the simulated sources reconstructed) of each of the CSD algorithms were calculated. Results While sLORETA has the best performance when only one source is present, when two or more sources are present LORETA with p equal to 1.5 performs better. When the relative strength of one of the sources is decreased, all algorithms have more difficulty reconstructing that source. However, LORETA 1.5 continues to outperform other algorithms. If only the strongest source is of interest sLORETA is recommended, while LORETA with p equal to 1.5 is recommended if two or more of the cortical sources are of interest. These results provide guidance for choosing a CSD algorithm to locate multiple cortical sources of EEG and for interpreting the results of these algorithms. PMID:26809000

  13. North Alabama Lightning Mapping Array (LMA): VHF Source Retrieval Algorithm and Error Analyses

    NASA Technical Reports Server (NTRS)

    Koshak, W. J.; Solakiewicz, R. J.; Blakeslee, R. J.; Goodman, S. J.; Christian, H. J.; Hall, J.; Bailey, J.; Krider, E. P.; Bateman, M. G.; Boccippio, D.

    2003-01-01

    Two approaches are used to characterize how accurately the North Alabama Lightning Mapping Array (LMA) is able to locate lightning VHF sources in space and in time. The first method uses a Monte Carlo computer simulation to estimate source retrieval errors. The simulation applies a VHF source retrieval algorithm that was recently developed at the NASA Marshall Space Flight Center (MSFC) and that is similar, but not identical to, the standard New Mexico Tech retrieval algorithm. The second method uses a purely theoretical technique (i.e., chi-squared Curvature Matrix Theory) to estimate retrieval errors. Both methods assume that the LMA system has an overall rms timing error of 50 ns, but all other possible errors (e.g., multiple sources per retrieval attempt) are neglected. The detailed spatial distributions of retrieval errors are provided. Given that the two methods are completely independent of one another, it is shown that they provide remarkably similar results. However, for many source locations, the Curvature Matrix Theory produces larger altitude error estimates than the (more realistic) Monte Carlo simulation.

  14. An FBG acoustic emission source locating system based on PHAT and GA

    NASA Astrophysics Data System (ADS)

    Shen, Jing-shi; Zeng, Xiao-dong; Li, Wei; Jiang, Ming-shun

    2017-09-01

    Using acoustic emission locating technology to monitor structural health is important for ensuring the continuous and healthy operation of complex engineering structures and large mechanical equipment. In this paper, four fiber Bragg grating (FBG) sensors are used to establish a sensor array to locate the acoustic emission source. Firstly, the nonlinear locating equations are established based on the principle of acoustic emission, and the solution of these equations is transformed into an optimization problem. Secondly, a time-difference extraction algorithm based on the phase transform (PHAT) weighted generalized cross correlation provides the necessary conditions for accurate localization. Finally, the genetic algorithm (GA) is used to solve the optimization model. In this paper, twenty points are tested on the marble plate surface, and the results show that the absolute locating error is within 10 mm, which demonstrates the accuracy of this locating method.
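
    The PHAT-weighted generalized cross correlation used for time-difference extraction can be sketched in a few lines of numpy (the sampling rate and test delay are invented; in the paper this step feeds the GA-based solver):

      import numpy as np

      def gcc_phat(sig, ref, fs):
          """Time delay of sig relative to ref via PHAT-weighted GCC: the cross-
          spectrum is whitened so only phase remains, sharpening the peak."""
          n = len(sig) + len(ref)
          S = np.fft.rfft(sig, n=n) * np.conj(np.fft.rfft(ref, n=n))
          S /= np.maximum(np.abs(S), 1e-12)          # phase transform weighting
          cc = np.fft.irfft(S, n=n)
          half = n // 2
          cc = np.concatenate((cc[-half:], cc[:half]))
          return (np.argmax(np.abs(cc)) - half) / fs

      rng = np.random.default_rng(4)
      ref = rng.standard_normal(1024)
      sig = np.roll(ref, 40)                         # 40-sample delay
      print(gcc_phat(sig, ref, fs=1.0e6))            # ~ 4e-05 s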

  15. Application Research of Horn Array Multi-Beam Antenna in Reference Source System for Satellite Interference Location

    NASA Astrophysics Data System (ADS)

    Zhou, Ping; Lin, Hui; Zhang, Qi

    2018-01-01

    The reference source system is a key factor in ensuring the successful location of a satellite interference source. Traditional systems use a mechanically rotating antenna, which leads to the disadvantages of slow rotation and a high failure rate; this seriously restricts the system's positioning timeliness and is an obvious weakness. In this paper, a multi-beam antenna scheme based on a horn array is proposed as a reference source for satellite interference location, to be used as an alternative to the traditional reference source antenna. The new scheme designs a small circularly polarized horn antenna as an element and proposes a multi-beamforming algorithm based on a planar array. Moreover, simulation analyses of the horn antenna pattern, the multi-beamforming algorithm and the simulated satellite-link cross-ambiguity calculation have been carried out. Finally, the cross-ambiguity calculation of the traditional reference source system has also been tested. The comparison between the computer simulation results and the actual test results shows that the scheme is scientific and feasible, and clearly superior to the traditional reference source system.

  16. Developing a system for blind acoustic source localization and separation

    NASA Astrophysics Data System (ADS)

    Kulkarni, Raghavendra

    This dissertation presents innovative methodologies for locating, extracting, and separating multiple incoherent sound sources in three-dimensional (3D) space, and applications of the time reversal (TR) algorithm to pinpoint the hyperactive neural activity inside the brain auditory structure that is correlated with the tinnitus pathology. Specifically, an acoustic modeling based method is developed for locating arbitrary and incoherent sound sources in 3D space in real time by using a minimal number of microphones, and the Point Source Separation (PSS) method is developed for extracting target signals from directly measured mixed signals. Combining these two approaches leads to a novel technology known as Blind Sources Localization and Separation (BSLS) that enables one to locate multiple incoherent sound signals in 3D space and separate the original individual sources simultaneously, based on the directly measured mixed signals. These technologies have been validated through numerical simulations and experiments conducted in various non-ideal environments with non-negligible, unspecified sound reflections and reverberation as well as interference from random background noise. Another innovation presented in this dissertation concerns applications of the TR algorithm to pinpoint the exact locations of hyperactive neurons in the brain auditory structure that are directly correlated with the tinnitus perception. Benchmark tests conducted on normal rats have confirmed the localization results provided by the TR algorithm. Results demonstrate that the spatial resolution of this source localization can be as high as the micrometer level. This high-precision localization may lead to a paradigm shift in tinnitus diagnosis, which may in turn produce a more cost-effective treatment for tinnitus than any of the existing ones.

  17. Subspace-based analysis of the ERT inverse problem

    NASA Astrophysics Data System (ADS)

    Ben Hadj Miled, Mohamed Khames; Miller, Eric L.

    2004-05-01

    In a previous work, we proposed a source-type formulation to the electrical resistance tomography (ERT) problem. Specifically, we showed that inhomogeneities in the medium can be viewed as secondary sources embedded in the homogeneous background medium and located at positions associated with variation in electrical conductivity. Assuming a piecewise constant conductivity distribution, the support of equivalent sources is equal to the boundary of the inhomogeneity. The estimation of the anomaly shape takes the form of an inverse source-type problem. In this paper, we explore the use of subspace methods to localize the secondary equivalent sources associated with discontinuities in the conductivity distribution. Our first alternative is the multiple signal classification (MUSIC) algorithm, which is commonly used in the localization of multiple sources. The idea is to project a finite collection of plausible pole (or dipole) sources onto an estimated signal subspace and select those with the largest correlations. In ERT, secondary sources are excited simultaneously but in different ways, i.e., with distinct amplitude patterns, depending on the locations and amplitudes of primary sources. If the number of receivers is "large enough", different source configurations can lead to a set of observation vectors that span the data subspace. However, since sources that are spatially close to each other have highly correlated signatures, separation of such signals becomes very difficult in the presence of noise. To overcome this problem we consider iterative MUSIC algorithms like R-MUSIC and RAP-MUSIC. These recursive algorithms pose a computational burden as they require multiple large combinatorial searches. Results obtained with these algorithms using simulated data of different conductivity patterns are presented.
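
    For readers unfamiliar with MUSIC, the core computation is small: form a data covariance, split its eigenvectors into signal and noise subspaces, and score each candidate source signature by how nearly orthogonal it is to the noise subspace. The sketch below is a generic narrowband MUSIC toy (a uniform linear array, not an ERT configuration); all parameters are invented for illustration.

    ```python
    # Hedged sketch of the MUSIC pseudospectrum on a toy 8-element array.
    import numpy as np

    def music_spectrum(R, steering, n_sources):
        """R: (m, m) covariance; steering: (m, k) candidate signatures."""
        eigvals, eigvecs = np.linalg.eigh(R)          # ascending eigenvalues
        En = eigvecs[:, : R.shape[0] - n_sources]     # noise-subspace basis
        proj = np.linalg.norm(En.conj().T @ steering, axis=0) ** 2
        return 1.0 / proj                             # peaks at likely sources

    rng = np.random.default_rng(0)
    m, snapshots = 8, 500
    angles = np.deg2rad(np.arange(90))                # 90 candidate directions
    A = np.exp(1j * np.outer(np.arange(m), np.pi * np.sin(angles)))
    X = A[:, [20, 55]] @ rng.normal(size=(2, snapshots)) + 0.1 * (
        rng.normal(size=(m, snapshots)) + 1j * rng.normal(size=(m, snapshots)))
    R = X @ X.conj().T / snapshots
    print(np.argsort(music_spectrum(R, A, 2))[-2:])   # recovers indices 20, 55
    ```

    R-MUSIC and RAP-MUSIC wrap this scoring step in a recursion that projects out each located source before searching again.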

  18. Reconstructing cortical current density by exploring sparseness in the transform domain

    NASA Astrophysics Data System (ADS)

    Ding, Lei

    2009-05-01

    In the present study, we have developed a novel electromagnetic source imaging approach to reconstruct extended cortical sources by means of cortical current density (CCD) modeling and a novel EEG imaging algorithm which explores sparseness in cortical source representations through the use of L1-norm in objective functions. The new sparse cortical current density (SCCD) imaging algorithm is unique since it reconstructs cortical sources by attaining sparseness in a transform domain (the variation map of cortical source distributions). While large variations are expected to occur along boundaries (sparseness) between active and inactive cortical regions, cortical sources can be reconstructed and their spatial extents can be estimated by locating these boundaries. We studied the SCCD algorithm using numerous simulations to investigate its capability in reconstructing cortical sources with different extents and in reconstructing multiple cortical sources with different extent contrasts. The SCCD algorithm was compared with two L2-norm solutions, i.e. weighted minimum norm estimate (wMNE) and cortical LORETA. Our simulation data from the comparison study show that the proposed sparse source imaging algorithm is able to accurately and efficiently recover extended cortical sources and is promising to provide high-accuracy estimation of cortical source extents.

  19. A multi-reference filtered-x-Newton narrowband algorithm for active isolation of vibration and experimental investigations

    NASA Astrophysics Data System (ADS)

    Wang, Chun-yu; He, Lin; Li, Yan; Shuai, Chang-geng

    2018-01-01

    In engineering applications, ship machinery vibration may be induced by multiple rotating machines sharing a common vibration isolation platform and operating at the same time, so that multiple sinusoidal components may be excited. These components may be located at frequencies with large differences or at very close frequencies. A multi-reference filtered-x Newton narrowband (MRFx-Newton) algorithm is proposed to control these multiple sinusoidal components in an MIMO (multiple input and multiple output) system, especially those located at very close frequencies. The proposed MRFx-Newton algorithm can decouple and suppress multiple sinusoidal components located in the same narrow frequency band even though such components cannot be separated from each other by a narrowband-pass filter. As with the Fx-Newton algorithm, good real-time performance is achieved through the fast convergence provided by the 2nd-order inverse secondary-path filter in the time domain. Experiments are also conducted to verify the feasibility and test the performance of the proposed algorithm, installed in an active-passive vibration isolation system, in suppressing the vibration excited by an artificial source and air compressors. The results show that the proposed algorithm not only has a convergence rate comparable to that of the Fx-Newton algorithm but also has better real-time performance and robustness than the Fx-Newton algorithm in active control of the vibration induced by multiple sound sources/rotating machines operating on a shared platform.
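
    A minimal single-reference filtered-x LMS loop, the simpler relative of the multi-reference Fx-Newton scheme above, conveys the structure: the reference is filtered through a secondary-path model before the weight update, and the error is the sum of the primary vibration and the secondary-path output. All signals, path coefficients, and step sizes below are invented for illustration.

    ```python
    # Hedged sketch of narrowband filtered-x LMS control of one sinusoid.
    import numpy as np

    fs, f0, n = 2000, 50.0, 20000
    t = np.arange(n) / fs
    x = np.sin(2 * np.pi * f0 * t)                  # reference signal
    d = 0.8 * np.sin(2 * np.pi * f0 * t + 0.6)      # primary vibration at sensor
    sec = np.array([0.0, 0.5, 0.3])                 # assumed secondary-path FIR

    xq = np.vstack([x, np.roll(x, 1)])              # two-tap reference basis
    xf = np.array([np.convolve(xi, sec)[:n] for xi in xq])  # filtered-x signals

    w = np.zeros(2)                                 # control-filter weights
    mu = 0.01
    ybuf = np.zeros(n)
    e_hist = np.zeros(n)
    for i in range(n):
        ybuf[i] = w @ xq[:, i]                      # control output
        ys = sum(sec[k] * ybuf[i - k] for k in range(len(sec)) if i - k >= 0)
        e = d[i] + ys                               # residual at error sensor
        e_hist[i] = e
        w -= mu * e * xf[:, i]                      # FxLMS gradient step
    print(np.abs(e_hist[-500:]).max() / np.abs(d[-500:]).max())  # << 1 once converged
    ```

    The Fx-Newton family replaces the gradient step with a (near) inverse of the secondary path to speed convergence, and the multi-reference version runs one such update per closely spaced component.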

  20. An Automated Method for Navigation Assessment for Earth Survey Sensors Using Island Targets

    NASA Technical Reports Server (NTRS)

    Patt, F. S.; Woodward, R. H.; Gregg, W. W.

    1997-01-01

    An automated method has been developed for performing navigation assessment on satellite-based Earth sensor data. The method utilizes islands as targets which can be readily located in the sensor data and identified with reference locations. The essential elements are an algorithm for classifying the sensor data according to source, a reference catalogue of island locations, and a robust pattern-matching algorithm for island identification. The algorithms were developed and tested for the Sea-viewing Wide Field-of-view Sensor (SeaWiFS), an ocean colour sensor. This method will allow navigation error statistics to be automatically generated for large numbers of points, supporting analysis over large spatial and temporal ranges.

  2. Characterization of Methane Emission Sources Using Genetic Algorithms and Atmospheric Transport Modeling

    NASA Astrophysics Data System (ADS)

    Cao, Y.; Cervone, G.; Barkley, Z.; Lauvaux, T.; Deng, A.; Miles, N.; Richardson, S.

    2016-12-01

    Fugitive methane emission rates for the Marcellus shale area are estimated using a genetic algorithm that finds optimal weights to minimize the error between simulated and observed concentrations. The overall goal is to understand the relative contribution of methane due to shale gas extraction. Methane sensors were installed on four towers located in northeastern Pennsylvania to measure atmospheric concentrations starting in May 2015. Inverse Lagrangian dispersion model runs are performed from each of these tower locations for each hour of 2015. Simulated methane concentrations at each of the four towers are computed by multiplying the resulting footprints from the atmospheric simulations by thousands of emission sources grouped into 11 classes. The emission sources were identified using GIS techniques, and include conventional and unconventional wells, different types of compressor stations, pipelines, landfills, farming and wetlands. Initial estimates for each source are calculated based on emission factors from the EPA and a few regional studies. A genetic algorithm is then used to identify optimal emission rates for the 11 classes of methane emissions and to explore extreme events and spatial and temporal structures in the emissions associated with natural gas activities.
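
    A stripped-down version of such a GA is sketched below: candidate rate vectors for the 11 classes are scored by the mismatch between footprint-weighted simulated concentrations and observations, then recombined and mutated. The footprint matrix, observations, and GA settings are synthetic placeholders, not the study's configuration.

    ```python
    # Hedged sketch: GA search for 11 emission-class rates that minimize the
    # misfit between simulated (footprint @ rates) and observed concentrations.
    import numpy as np

    rng = np.random.default_rng(1)
    n_classes, n_obs = 11, 200
    footprint = rng.uniform(0.0, 1.0, (n_obs, n_classes))   # placeholder footprints
    true_rates = rng.uniform(0.5, 2.0, n_classes)
    observed = footprint @ true_rates

    def fitness(pop):
        return -np.linalg.norm(footprint @ pop.T - observed[:, None], axis=0)

    pop = rng.uniform(0.1, 3.0, (60, n_classes))             # initial population
    for _ in range(300):
        keep = pop[np.argsort(fitness(pop))][-30:]           # fittest half survives
        mates = keep[rng.integers(0, 30, 30)]
        children = 0.5 * (keep + mates)                      # blend crossover
        children += rng.normal(0.0, 0.05, children.shape)    # Gaussian mutation
        pop = np.vstack([keep, children])

    best = pop[np.argmax(fitness(pop))]
    print(np.round(best - true_rates, 2))                    # per-class residuals
    ```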

  3. Target-oriented imaging of hydraulic fractures by applying the staining algorithm for downhole microseismic migration

    NASA Astrophysics Data System (ADS)

    Lin, Ye; Zhang, Haijiang; Jia, Xiaofeng

    2018-03-01

    For microseismic monitoring of hydraulic fracturing, microseismic migration can be used to image the fracture network with scattered microseismic waves. Compared with conventional microseismic location-based fracture characterization methods, microseismic migration can better constrain the stimulated reservoir volume regardless of the completeness of detected and located microseismic sources. However, the imaging results from microseismic migration may suffer from contamination by other structures, and thus the target fracture zones may not be illuminated properly. To solve this issue, in this study we propose a target-oriented staining algorithm for microseismic reverse-time migration. In the staining algorithm, the target area is first stained by constructing an imaginary velocity field, and then a synchronized source wavefield concerning only the target structure is produced. As a result, a synchronized image obtained by imaging with the synchronized source wavefield mainly contains the target structures. Synthetic tests based on a downhole microseismic monitoring system show that the target-oriented microseismic reverse-time migration method improves the illumination of target areas.

  4. Modeling the Volcanic Source at Long Valley, CA, Using a Genetic Algorithm Technique

    NASA Technical Reports Server (NTRS)

    Tiampo, Kristy F.

    1999-01-01

    In this project, we attempted to model the deformation pattern due to the magmatic source at Long Valley caldera using a real-value coded genetic algorithm (GA) inversion similar to that found in Michalewicz, 1992. The project has been both successful and rewarding. The genetic algorithm, coded in the C programming language, performs stable inversions over repeated trials, with varying initial and boundary conditions. The original model used a GA in which the geophysical information was coded into the fitness function through the computation of surface displacements for a Mogi point source in an elastic half-space. The program was designed to invert for a spherical magmatic source - its depth, horizontal location and volume - using the known surface deformations. It also included the capability of inverting for multiple sources.
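
    The forward model inside such a GA fitness function is compact. The sketch below implements the standard Mogi point-source surface displacements for a volume change at depth in an elastic half-space, plus an rms misfit that a GA could minimize; the observation arrays are assumed inputs, and this is not the project's C implementation.

    ```python
    # Hedged sketch of the Mogi forward model and a GA-style misfit function.
    import numpy as np

    def mogi_displacement(x, y, xs, ys, depth, dvol, nu=0.25):
        """Surface displacement of a Mogi source at (xs, ys, depth).

        dvol: source volume change (m^3); nu: Poisson's ratio.
        Returns (radial, vertical) displacement at surface points (x, y).
        """
        dx, dy = x - xs, y - ys
        r = np.hypot(dx, dy)
        R3 = (r**2 + depth**2) ** 1.5
        coeff = (1.0 - nu) * dvol / np.pi
        return coeff * r / R3, coeff * depth / R3

    def misfit(params, x, y, uz_obs):
        """rms error against observed uplift; usable as a GA objective."""
        _, uz = mogi_displacement(x, y, *params)
        return np.sqrt(np.mean((uz - uz_obs) ** 2))
    ```

    A real-value-coded GA then evolves (xs, ys, depth, dvol) tuples, exactly the depth, horizontal location, and volume parameters mentioned above.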

  5. Acoustic Source Elevation Angle Estimates Using Two Microphones

    DTIC Science & Technology

    2014-06-01

    …elevated. Elevation angles are successfully estimated, under certain conditions, for a loudspeaker broadcasting band-limited white noise. The U.S. Army uses acoustic arrays to track and locate various sources, including ground and airborne vehicles, small arms, mortars, and rockets. The tracking and locating algorithms often used with these acoustic arrays perform…

  6. Hybrid optimization and Bayesian inference techniques for a non-smooth radiation detection problem

    DOE PAGES

    Stefanescu, Razvan; Schmidt, Kathleen; Hite, Jason; ...

    2016-12-12

    In this paper, we propose several algorithms to recover the location and intensity of a radiation source located in a simulated 250 × 180 m block of an urban center based on synthetic measurements. Radioactive decay and detection are Poisson random processes, so we employ likelihood functions based on this distribution. Owing to the domain geometry and the proposed response model, the negative logarithm of the likelihood is only piecewise continuously differentiable, and it has multiple local minima. To address these difficulties, we investigate three hybrid algorithms composed of mixed optimization techniques. For global optimization, we consider simulated annealing, particle swarm, and genetic algorithms, which rely solely on objective function evaluations; that is, they do not evaluate the gradient of the objective function. By employing early stopping criteria for the global optimization methods, a pseudo-optimum point is obtained. This is subsequently utilized as the initial value by the deterministic implicit filtering method, which is able to find local extrema of non-smooth functions, to finish the search in a narrow domain. These new hybrid techniques, combining global optimization and implicit filtering, address difficulties associated with the non-smooth response, and they are shown to significantly decrease the computational time relative to the global optimization methods alone. To quantify uncertainties associated with the source location and intensity, we employ the delayed rejection adaptive Metropolis and DiffeRential Evolution Adaptive Metropolis algorithms. Finally, marginal densities of the source properties are obtained, and the means of the chains agree closely with the estimates produced by the hybrid algorithms.

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mosher, J.C.; Leahy, R.M.

    A new method for source localization is described that is based on a modification of the well known multiple signal classification (MUSIC) algorithm. In classical MUSIC, the array manifold vector is projected onto an estimate of the signal subspace, but errors in the estimate can make location of multiple sources difficult. Recursively applied and projected (RAP) MUSIC uses each successively located source to form an intermediate array gain matrix, and projects both the array manifold and the signal subspace estimate into its orthogonal complement. The MUSIC projection is then performed in this reduced subspace. Using the metric of principal angles, the authors describe a general form of the RAP-MUSIC algorithm for the case of diversely polarized sources. Through a uniform linear array simulation, the authors demonstrate the improved Monte Carlo performance of RAP-MUSIC relative to MUSIC and two other sequential subspace methods, S and IES-MUSIC.

  8. Localization of source with unknown amplitude using IPMC sensor arrays

    NASA Astrophysics Data System (ADS)

    Abdulsadda, Ahmad T.; Zhang, Feitian; Tan, Xiaobo

    2011-04-01

    The lateral line system, consisting of arrays of neuromasts functioning as flow sensors, is an important sensory organ for fish that enables them to detect predators, locate prey, perform rheotaxis, and coordinate schooling. Creating artificial lateral line systems is of significant interest since it will provide a new sensing mechanism for the control and coordination of underwater robots and vehicles. In this paper we propose recursive algorithms for localizing a vibrating sphere, also known as a dipole source, based on measurements from an array of flow sensors. A dipole source is frequently used in the study of biological lateral lines as a surrogate for underwater motion sources such as a flapping fish fin. We first formulate a nonlinear estimation problem based on an analytical model for the dipole-generated flow field. Two algorithms are presented to estimate both the source location and the vibration amplitude, one based on the least squares method and the other based on the Newton-Raphson method. Simulation results show that both methods deliver comparable performance in source localization. A prototype artificial lateral line system comprising four ionic polymer-metal composite (IPMC) sensors is built, and experimental results are further presented to demonstrate the effectiveness of IPMC lateral line systems and the proposed estimation algorithms.

  9. Stream Kriging: Incremental and recursive ordinary Kriging over spatiotemporal data streams

    NASA Astrophysics Data System (ADS)

    Zhong, Xu; Kealy, Allison; Duckham, Matt

    2016-05-01

    Ordinary Kriging is widely used for geospatial interpolation and estimation. Due to the O(n³) time complexity of solving the system of linear equations, ordinary Kriging for a large set of source points is computationally intensive. Conducting real-time Kriging interpolation over continuously varying spatiotemporal data streams can therefore be especially challenging. This paper develops and tests two new strategies for improving the performance of an ordinary Kriging interpolator adapted to a stream-processing environment. These strategies rely on the expectation that, over time, source data points will frequently refer to the same spatial locations (for example, where static sensor nodes are generating repeated observations of a dynamic field). First, an incremental strategy improves efficiency in cases where a relatively small proportion of previously processed spatial locations are absent from the source points at any given iteration. Second, a recursive strategy improves efficiency in cases where there is substantial overlap between the sets of spatial locations of source points at the current and previous iterations. These two strategies are evaluated in terms of their computational efficiency in comparison to the standard ordinary Kriging algorithm. The results show that the two strategies can reduce the time taken to perform the interpolation by up to 90%, and approach an average-case time complexity of O(n²) when most but not all source points refer to the same locations over time. By combining the approaches developed in this paper with existing heuristic ordinary Kriging algorithms, the conclusions indicate how further efficiency gains could potentially be accrued. The work ultimately contributes to the development of online ordinary Kriging interpolation algorithms capable of real-time spatial interpolation with large streaming data sets.
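
    For context, a single ordinary-Kriging query is sketched below; the dense (n+1) x (n+1) solve is the O(n³) bottleneck that the incremental and recursive strategies amortize when source locations repeat. The exponential variogram and the sample points are arbitrary choices for illustration.

    ```python
    # Hedged sketch of one ordinary-Kriging interpolation (semivariogram form).
    import numpy as np

    def ordinary_krige(pts, vals, query,
                       variogram=lambda h: 1.0 - np.exp(-h / 500.0)):
        n = len(pts)
        h = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=2)
        K = np.ones((n + 1, n + 1))
        K[:n, :n] = variogram(h)                 # semivariances between points
        K[n, n] = 0.0                            # unbiasedness-constraint corner
        k = np.ones(n + 1)
        k[:n] = variogram(np.linalg.norm(pts - query, axis=1))
        w = np.linalg.solve(K, k)                # weights + Lagrange multiplier
        return w[:n] @ vals

    pts = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0], [120.0, 90.0]])
    vals = np.array([1.0, 2.0, 1.5, 2.5])
    print(ordinary_krige(pts, vals, np.array([50.0, 50.0])))
    ```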

  10. Gossip-based solutions for discrete rendezvous in populations of communicating agents.

    PubMed

    Hollander, Christopher D; Wu, Annie S

    2014-01-01

    The objective of the rendezvous problem is to construct a method that enables a population of agents to agree on a spatial (and possibly temporal) meeting location. We introduce the buffered gossip algorithm as a general solution to the rendezvous problem in a discrete domain with direct communication between decentralized agents. We compare the performance of the buffered gossip algorithm against the well known uniform gossip algorithm. We believe that a buffered solution is preferable to an unbuffered solution, such as the uniform gossip algorithm, because the use of a buffer allows an agent to use multiple information sources when determining its desired rendezvous point, and that access to multiple information sources may improve agent decision making by reinforcing or contradicting an initial choice. To show that the buffered gossip algorithm is an actual solution for the rendezvous problem, we construct a theoretical proof of convergence and derive the conditions under which the buffered gossip algorithm is guaranteed to produce a consensus on rendezvous location. We use these results to verify that the uniform gossip algorithm also solves the rendezvous problem. We then use a multi-agent simulation to conduct a series of simulation experiments to compare the performance between the buffered and uniform gossip algorithms. Our results suggest that the buffered gossip algorithm can solve the rendezvous problem faster than the uniform gossip algorithm; however, the relative performance between these two solutions depends on the specific constraints of the problem and the parameters of the buffered gossip algorithm.

  12. A Robust Sound Source Localization Approach for Microphone Array with Model Errors

    NASA Astrophysics Data System (ADS)

    Xiao, Hua; Shao, Huai-Zong; Peng, Qi-Cong

    In this paper, a robust sound source localization approach is proposed. The approach retains good performance even when model errors exist. Compared with previous work in this field, the contributions of this paper are as follows. First, an improved broad-band and near-field array model is proposed. It takes array gain and phase perturbations into account and is based on the actual positions of the elements. It can be used with arbitrary planar geometry arrays. Second, a subspace model-errors estimation algorithm and a Weighted 2-Dimension Multiple Signal Classification (W2D-MUSIC) algorithm are proposed. The subspace model-errors estimation algorithm estimates the unknown parameters of the array model, i.e., gain, phase perturbations, and element positions, with high accuracy. The performance of this algorithm improves as the SNR or the number of snapshots increases. The W2D-MUSIC algorithm, based on the improved array model, is implemented to locate sound sources. Together, these two algorithms constitute the robust sound source localization approach. The resulting, more accurate steering vectors can be provided for further processing such as adaptive beamforming. Numerical examples confirm the effectiveness of the proposed approach.

  13. Pollution source localization in an urban water supply network based on dynamic water demand.

    PubMed

    Yan, Xuesong; Zhu, Zhixin; Li, Tian

    2017-10-27

    Urban water supply networks are susceptible to intentional or accidental chemical and biological pollution, which poses a threat to the health of consumers. In recent years, drinking-water pollution incidents have occurred frequently, seriously endangering social stability and security. Real-time monitoring of water quality can be implemented effectively by placing sensors in the water supply network. However, locating the source of pollution from the detection data obtained by water quality sensors is a challenging problem. The difficulty lies in the limited number of sensors, the large number of water supply network nodes, and the dynamic user demand for water, which together make pollution source localization an uncertain, large-scale, and dynamic optimization problem. In this paper, we mainly study the dynamics of the pollution source localization problem. Previous studies of pollution source localization assume that hydraulic inputs (e.g., water demand of consumers) are known. However, because of the inherent variability of urban water demand, the problem is essentially dynamic, driven by fluctuations in consumers' water demand. In this paper, the water demand is considered to be stochastic in nature and is described using a Gaussian or an autoregressive model. On this basis, an optimization algorithm based on these two dynamic water demand models is proposed to locate the pollution source. The objective of the proposed algorithm is to find the locations and concentrations of pollution sources that minimize the discrepancy between the simulated and detected sensor values. Simulation experiments were conducted using two different sizes of urban water supply network data, and the experimental results were compared with those of the standard genetic algorithm.

  14. LEAP: Looking beyond pixels with continuous-space EstimAtion of Point sources

    NASA Astrophysics Data System (ADS)

    Pan, Hanjie; Simeoni, Matthieu; Hurley, Paul; Blu, Thierry; Vetterli, Martin

    2017-12-01

    Context. Two main classes of imaging algorithms have emerged in radio interferometry: the CLEAN algorithm and its multiple variants, and compressed-sensing inspired methods. They are both discrete in nature, and estimate source locations and intensities on a regular grid. For the traditional CLEAN-based imaging pipeline, the resolution power of the tool is limited by the width of the synthesized beam, which is inversely proportional to the largest baseline. The finite rate of innovation (FRI) framework is a robust method to find the locations of point-sources in a continuum without grid imposition. The continuous formulation makes the FRI recovery performance only dependent on the number of measurements and the number of sources in the sky. FRI can theoretically find sources below the perceived tool resolution. To date, FRI had never been tested in the extreme conditions inherent to radio astronomy: weak signal / high noise, huge data sets, large numbers of sources. Aims: The aims were (i) to adapt FRI to radio astronomy, (ii) verify it can recover sources in radio astronomy conditions with more accurate positioning than CLEAN, and possibly resolve some sources that would otherwise be missed, (iii) show that sources can be found using less data than would otherwise be required to find them, and (iv) show that FRI does not lead to an augmented rate of false positives. Methods: We implemented a continuous domain sparse reconstruction algorithm in Python. The angular resolution performance of the new algorithm was assessed under simulation, and with visibility measurements from the LOFAR telescope. Existing catalogs were used to confirm the existence of sources. Results: We adapted the FRI framework to radio interferometry, and showed that it is possible to determine accurate off-grid point-source locations and their corresponding intensities. In addition, FRI-based sparse reconstruction required less integration time and smaller baselines to reach a comparable reconstruction quality compared to a conventional method. The achieved angular resolution is higher than the perceived instrument resolution, and very close sources can be reliably distinguished. The proposed approach has cubic complexity in the total number (typically around a few thousand) of uniform Fourier data of the sky image estimated from the reconstruction. It is also demonstrated that the method is robust to the presence of extended-sources, and that false-positives can be addressed by choosing an adequate model order to match the noise level.

  16. Characterization of pollutant dispersion near elongated buildings based on wind tunnel simulations

    NASA Astrophysics Data System (ADS)

    Perry, S. G.; Heist, D. K.; Brouwer, L. H.; Monbureau, E. M.; Brixey, L. A.

    2016-10-01

    This paper presents a wind tunnel study of the effects of elongated rectangular buildings on the dispersion of pollutants from nearby stacks. The study examines the influence of source location, building aspect ratio, and wind direction on pollutant dispersion with the goal of developing improved algorithms within dispersion models. The paper also examines the current AERMOD/PRIME modeling capabilities compared to wind tunnel observations. Differences in the amount of plume material entrained in the wake region downwind of a building for various source locations and source heights are illustrated with vertical and lateral concentration profiles. These profiles were parameterized using the Gaussian equation and show the influence of building/source configurations on those parameters. When the building is oriented at 45° to the approach flow, for example, the effective plume height descends more rapidly than it does for a perpendicular building, enhancing the resulting surface concentrations in the wake region. Buildings at angles to the wind cause a cross-wind shift in the location of the plume resulting from a lateral mean flow established in the building wake. These and other effects that are not well represented in many dispersion models are important considerations when developing improved algorithms to estimate the location and magnitude of concentrations downwind of elongated buildings.

  17. A Preliminary ZEUS Lightning Location Error Analysis Using a Modified Retrieval Theory

    NASA Technical Reports Server (NTRS)

    Elander, Valjean; Koshak, William; Phanord, Dieudonne

    2004-01-01

    The ZEUS long-range VLF arrival time difference lightning detection network now covers both Europe and Africa, and there are plans for further expansion into the western hemisphere. In order to fully optimize and assess ZEUS lightning location retrieval errors and to determine the best placement of future receivers expected to be added to the network, a software package is being developed jointly between the NASA Marshall Space Flight Center (MSFC) and the University of Nevada Las Vegas (UNLV). The software package, called the ZEUS Error Analysis for Lightning (ZEAL), will be used to obtain global-scale lightning location retrieval error maps using both a Monte Carlo approach and chi-squared curvature matrix theory. At the core of ZEAL will be an implementation of an Iterative Oblate (IO) lightning location retrieval method recently developed at MSFC. The IO method will be appropriately modified to account for variable wave propagation speed, and the new retrieval results will be compared with the current ZEUS retrieval algorithm to assess potential improvements. In this preliminary ZEAL work effort, we defined 5000 source locations evenly distributed across the Earth. We then used the existing (as well as potential future) ZEUS sites to simulate arrival time data between each source and each ZEUS site. A total of 100 sources were considered at each of the 5000 locations, and timing errors were selected from a normal distribution having a mean of 0 seconds and a standard deviation of 20 microseconds. This simulated "noisy" dataset was analyzed using the IO algorithm to estimate source locations. The exact locations were compared with the retrieved locations, and the results are summarized via several color-coded "error maps."

  18. Two-Microphone Spatial Filtering Improves Speech Reception for Cochlear-Implant Users in Reverberant Conditions With Multiple Noise Sources

    PubMed Central

    2014-01-01

    This study evaluates a spatial-filtering algorithm as a method to improve speech reception for cochlear-implant (CI) users in reverberant environments with multiple noise sources. The algorithm was designed to filter sounds using phase differences between two microphones situated 1 cm apart in a behind-the-ear hearing-aid capsule. Speech reception thresholds (SRTs) were measured using a Coordinate Response Measure for six CI users in 27 listening conditions including each combination of reverberation level (T60 = 0, 270, and 540 ms), number of noise sources (1, 4, and 11), and signal-processing algorithm (omnidirectional response, dipole-directional response, and spatial-filtering algorithm). Noise sources were time-reversed speech segments randomly drawn from the Institute of Electrical and Electronics Engineers sentence recordings. Target speech and noise sources were processed using a room simulation method allowing precise control over reverberation times and sound-source locations. The spatial-filtering algorithm was found to provide improvements in SRTs on the order of 6.5 to 11.0 dB across listening conditions compared with the omnidirectional response. This result indicates that such phase-based spatial filtering can improve speech reception for CI users even in highly reverberant conditions with multiple noise sources. PMID:25330772

  19. Rapidly locating and characterizing pollutant releases in buildings.

    PubMed

    Sohn, Michael D; Reynolds, Pamela; Singh, Navtej; Gadgil, Ashok J

    2002-12-01

    Releases of airborne contaminants in or near a building can lead to significant human exposures unless prompt response measures are taken. However, possible responses can include conflicting strategies, such as shutting the ventilation system off versus running it in a purge mode or having occupants evacuate versus sheltering in place. The proper choice depends in part on knowing the source locations, the amounts released, and the likely future dispersion routes of the pollutants. We present an approach that estimates this information in real time. It applies Bayesian statistics to interpret measurements of airborne pollutant concentrations from multiple sensors placed in the building and computes best estimates and uncertainties of the release conditions. The algorithm is fast, capable of continuously updating the estimates as measurements stream in from sensors. We demonstrate the approach using a hypothetical pollutant release in a five-room building. Unknowns to the interpretation algorithm include location, duration, and strength of the source, and some building and weather conditions. Two sensor sampling plans and three levels of data quality are examined. Data interpretation in all examples is rapid; however, locating and characterizing the source with high probability depends on the amount and quality of data and the sampling plan.
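
    The recursive Bayesian update at the heart of such an interpreter fits in a few lines. The sketch below rescores a grid of (release room, release strength) hypotheses as sensor readings arrive; the forward dispersion model is a crude stand-in, since the paper's multizone airflow modeling is far richer.

    ```python
    # Hedged sketch: sequential Bayesian update over source hypotheses.
    import numpy as np

    n_rooms = 5
    strengths = np.linspace(10.0, 200.0, 20)          # candidate release rates
    post = np.full((n_rooms, len(strengths)), 1.0 / (n_rooms * len(strengths)))

    def predicted(room, q, sensor_room):
        """Stand-in forward model: concentration decays with room separation."""
        return q * np.exp(-abs(room - sensor_room))

    def bayes_update(post, sensor_room, reading, sigma=5.0):
        like = np.empty_like(post)
        for r in range(n_rooms):                      # Gaussian measurement model
            z = (predicted(r, strengths, sensor_room) - reading) / sigma
            like[r] = np.exp(-0.5 * z**2)
        post = post * like
        return post / post.sum()

    for sensor_room, reading in [(2, 40.0), (3, 15.0), (2, 42.0)]:
        post = bayes_update(post, sensor_room, reading)
    print(np.unravel_index(post.argmax(), post.shape))  # most probable hypothesis
    ```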

  20. Performance evaluation of the Champagne source reconstruction algorithm on simulated and real M/EEG data.

    PubMed

    Owen, Julia P; Wipf, David P; Attias, Hagai T; Sekihara, Kensuke; Nagarajan, Srikantan S

    2012-03-01

    In this paper, we present an extensive performance evaluation of a novel source localization algorithm, Champagne. It is derived in an empirical Bayesian framework that yields sparse solutions to the inverse problem. It is robust to correlated sources and learns the statistics of non-stimulus-evoked activity to suppress the effect of noise and interfering brain activity. We tested Champagne on both simulated and real M/EEG data. The source locations used for the simulated data were chosen to test the performance on challenging source configurations. In simulations, we found that Champagne outperforms the benchmark algorithms in terms of both the accuracy of the source localizations and the correct estimation of source time courses. We also demonstrate that Champagne is more robust to correlated brain activity present in real MEG data and is able to resolve many distinct and functionally relevant brain areas with real MEG and EEG data.

  1. Algorithm for astronomical, point source, signal to noise ratio calculations

    NASA Technical Reports Server (NTRS)

    Jayroe, R. R.; Schroeder, D. J.

    1984-01-01

    An algorithm was developed to simulate the expected signal to noise ratios as a function of observation time in the charge coupled device detector plane of an optical telescope located outside the Earth's atmosphere for a signal star, and an optional secondary star, embedded in a uniform cosmic background. By choosing the appropriate input values, the expected point source signal to noise ratio can be computed for the Hubble Space Telescope using the Wide Field/Planetary Camera science instrument.
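
    The underlying photon-budget arithmetic is standard. The sketch below evaluates the usual CCD point-source SNR expression as a function of integration time; every rate and noise figure is a placeholder, not an HST or Wide Field/Planetary Camera calibration value.

    ```python
    # Hedged sketch of the standard CCD point-source SNR budget.
    import numpy as np

    def ccd_snr(t, star_rate, sky_rate, dark_rate, read_noise, n_pix):
        """SNR after integrating for t seconds.

        star_rate: source electrons/s in the aperture
        sky_rate, dark_rate: background and dark electrons/s/pixel
        read_noise: rms electrons/pixel/read; n_pix: aperture pixels
        """
        signal = star_rate * t
        variance = signal + n_pix * ((sky_rate + dark_rate) * t + read_noise**2)
        return signal / np.sqrt(variance)

    for t in (10, 100, 1000):
        print(t, round(ccd_snr(t, star_rate=50.0, sky_rate=2.0,
                               dark_rate=0.1, read_noise=5.0, n_pix=25), 1))
    ```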

  2. A search for rapidly modulated emission in bright X-ray sources using the HEAO A-1 data base

    NASA Technical Reports Server (NTRS)

    Fairbank, William M.

    1987-01-01

    A search was performed in the HEAO A-1 Data Base (located at the Naval Research Laboratory in Washington, D.C.) for evidence of rapidly-rotating neutron stars that could be sources of coherent gravitational radiation. A newly developed data analysis algorithm is described. The algorithm was applied to data from observations of Cyg X-2, Cyg X-3, and 1820-30. Upper limits on pulse fraction were derived and reported.

  3. Sound source tracking device for telematic spatial sound field reproduction

    NASA Astrophysics Data System (ADS)

    Cardenas, Bruno

    This research describes an algorithm that localizes sound sources for use in telematic applications. The localization algorithm is based on amplitude differences between the various channels of a microphone array of directional shotgun microphones. The amplitude differences are used to locate multiple performers and to reproduce their voices, recorded at close distance with lavalier microphones, with spatial correction using a loudspeaker rendering system. In order to track multiple sound sources in parallel, the information gained from the lavalier microphones is utilized to estimate the signal-to-noise ratio between each performer and the concurrent performers.

  4. Comparison of imaging modalities and source-localization algorithms in locating the induced activity during deep brain stimulation of the STN.

    PubMed

    Mideksa, K G; Singh, A; Hoogenboom, N; Hellriegel, H; Krause, H; Schnitzler, A; Deuschl, G; Raethjen, J; Schmidt, G; Muthuraman, M

    2016-08-01

    One of the most commonly used therapies to treat patients with Parkinson's disease (PD) is deep brain stimulation (DBS) of the subthalamic nucleus (STN). Identifying the optimal target area for the placement of the DBS electrodes has become an intensive research area. In this study, the first aim is to investigate the capabilities of different source-analysis techniques in detecting deep sources located at the sub-cortical level, validated using the a priori information about the location of the source, that is, the STN. Secondly, we aim to investigate whether EEG or MEG is better suited to mapping the DBS-induced brain activity. To do this, simultaneous EEG and MEG measurements were used to record the DBS-induced electromagnetic potentials and fields. The boundary-element method (BEM) has been used to solve the forward problem. The position of the DBS electrodes was then estimated using dipole (moving, rotating, and fixed MUSIC) and current-density-reconstruction (CDR) (minimum-norm and sLORETA) approaches. The source-localization results from the dipole approaches demonstrated that the fixed MUSIC algorithm best localizes deep focal sources, whereas the moving dipole detects not only the region of interest but also neighboring regions that are affected by stimulating the STN. The results from the CDR approaches validated the capability of sLORETA in detecting the STN compared to minimum-norm. Moreover, the source-localization results using the EEG modality outperformed those of the MEG by locating the DBS-induced activity in the STN.

  5. An improved chaotic fruit fly optimization based on a mutation strategy for simultaneous feature selection and parameter optimization for SVM and its applications.

    PubMed

    Ye, Fei; Lou, Xin Yuan; Sun, Lin Fu

    2017-01-01

    This paper proposes a new support vector machine (SVM) optimization scheme based on an improved chaotic fruit fly optimization algorithm (FOA) with a mutation strategy to simultaneously perform parameter tuning for the SVM and feature selection. In the improved FOA, a chaotic particle initializes the fruit fly swarm location and replaces the expression of distance for the fruit fly to find the food source. The proposed mutation strategy uses two distinct generative mechanisms for new food sources at the osphresis phase, allowing the algorithm procedure to search for the optimal solution both in the whole solution space and within the local solution space containing the fruit fly swarm location. In an evaluation based on a group of ten benchmark problems, the proposed algorithm's performance is compared with that of other well-known algorithms, and the results support the superiority of the proposed algorithm. Moreover, this algorithm is successfully applied in an SVM to perform both parameter tuning and feature selection to solve real-world classification problems. This method, called the chaotic fruit fly optimization algorithm (CIFOA)-SVM, has been shown to be a more robust and effective optimization method than other well-known methods, particularly in terms of solving the medical diagnosis problem and the credit card problem.

  6. Supercontinuum optimization for dual-soliton based light sources using genetic algorithms in a grid platform.

    PubMed

    Arteaga-Sierra, F R; Milián, C; Torres-Gómez, I; Torres-Cisneros, M; Moltó, G; Ferrando, A

    2014-09-22

    We present a numerical strategy to design fiber based dual pulse light sources exhibiting two predefined spectral peaks in the anomalous group velocity dispersion regime. The frequency conversion is based on the soliton fission and soliton self-frequency shift occurring during supercontinuum generation. The optimization process is carried out by a genetic algorithm that provides the optimum input pulse parameters: wavelength, temporal width and peak power. This algorithm is implemented in a Grid platform in order to take advantage of distributed computing. These results are useful for optical coherence tomography applications where bell-shaped pulses located in the second near-infrared window are needed.

  7. Real-Time Localization of Moving Dipole Sources for Tracking Multiple Free-Swimming Weakly Electric Fish

    PubMed Central

    Jun, James Jaeyoon; Longtin, André; Maler, Leonard

    2013-01-01

    In order to survive, animals must quickly and accurately locate prey, predators, and conspecifics using the signals they generate. The signal source location can be estimated using multiple detectors and the inverse relationship between the received signal intensity (RSI) and the distance, but difficulty of the source localization increases if there is an additional dependence on the orientation of a signal source. In such cases, the signal source could be approximated as an ideal dipole for simplification. Based on a theoretical model, the RSI can be directly predicted from a known dipole location; but estimating a dipole location from RSIs has no direct analytical solution. Here, we propose an efficient solution to the dipole localization problem by using a lookup table (LUT) to store RSIs predicted by our theoretically derived dipole model at many possible dipole positions and orientations. For a given set of RSIs measured at multiple detectors, our algorithm found a dipole location having the closest matching normalized RSIs from the LUT, and further refined the location at higher resolution. Studying the natural behavior of weakly electric fish (WEF) requires efficiently computing their location and the temporal pattern of their electric signals over extended periods. Our dipole localization method was successfully applied to track single or multiple freely swimming WEF in shallow water in real-time, as each fish could be closely approximated by an ideal current dipole in two dimensions. Our optimized search algorithm found the animal’s positions, orientations, and tail-bending angles quickly and accurately under various conditions, without the need for calibrating individual-specific parameters. Our dipole localization method is directly applicable to studying the role of active sensing during spatial navigation, or social interactions between multiple WEF. Furthermore, our method could be extended to other application areas involving dipole source localization. PMID:23805244
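
    The LUT matching step can be conveyed with a toy model. The sketch below precomputes normalized RSIs over a coarse grid of dipole positions and orientations using a simplified 2-D dipole field (not the paper's calibrated model or its coarse-to-fine refinement) and recovers the nearest tabulated pose for a simulated measurement; the sensor layout is invented.

    ```python
    # Hedged sketch of lookup-table dipole localization from normalized RSIs.
    import numpy as np

    sensors = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0],
                        [1.0, 1.0], [0.5, 1.5]])        # assumed sensor layout

    def dipole_rsi(pos, theta):
        """Simplified 2-D dipole intensity at each sensor (unit moment)."""
        d = sensors - pos
        r = np.linalg.norm(d, axis=1)
        cos_a = (d @ np.array([np.cos(theta), np.sin(theta)])) / r
        return np.abs(cos_a) / r**3 + 1e-12             # ~ |cos(angle)| / r^3

    xs = ys = np.linspace(0.1, 0.9, 30)                 # candidate positions
    thetas = np.linspace(0.0, np.pi, 18, endpoint=False)
    poses, table = [], []
    for x in xs:
        for y in ys:
            for th in thetas:
                rsi = dipole_rsi(np.array([x, y]), th)
                poses.append((x, y, th))
                table.append(rsi / np.linalg.norm(rsi)) # normalize out amplitude
    table = np.array(table)

    meas = dipole_rsi(np.array([0.42, 0.61]), 0.7)      # simulated measurement
    meas /= np.linalg.norm(meas)
    print(poses[np.argmin(np.linalg.norm(table - meas, axis=1))])
    ```

    Normalizing the RSIs removes the unknown source amplitude, which is the property that lets the table be searched without calibrating individual-specific parameters.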

  8. Explosion localization and characterization via infrasound using numerical modeling

    NASA Astrophysics Data System (ADS)

    Fee, D.; Kim, K.; Iezzi, A. M.; Matoza, R. S.; Jolly, A. D.; De Angelis, S.; Diaz Moreno, A.; Szuberla, C.

    2017-12-01

    Numerous methods have been applied to locate, detect, and characterize volcanic and anthropogenic explosions using infrasound. Far-field localization techniques typically use back-azimuths from multiple arrays (triangulation) or Reverse Time Migration (RTM, or back-projection). At closer ranges, networks surrounding a source may use Time Difference of Arrival (TDOA), semblance, station-pair double difference, etc. However, at volcanoes and in regions with topography or obstructions that block the direct path of sound, recent studies have shown that numerical modeling is necessary to provide an accurate source location. A heterogeneous and moving atmosphere (winds) may also affect the location. The time reversal mirror (TRM) application of Kim et al. (2015) back-propagates the wavefield using a Finite Difference Time Domain (FDTD) algorithm, with the source corresponding to the location of peak convergence. Although it provides high-resolution source localization and can account for complex wave propagation, TRM is computationally expensive and limited to individual events. Here we present a new technique, termed RTM-FDTD, which integrates TRM and FDTD. Travel time and transmission loss information is computed from each station to the entire potential source grid from 3-D Green's functions derived via FDTD. The wave energy is then back-projected and stacked at each grid point, with the maximum corresponding to the likely source. We apply our method to detect and characterize thousands of explosions from Yasur Volcano, Vanuatu, and Etna Volcano, Italy, which both feature complex wave propagation and multiple source locations. We compare our results with those from more traditional methods (e.g., semblance), and suggest our method is preferable as it is computationally less expensive than TRM but still integrates numerical modeling. RTM-FDTD could be applied to volcanic and other anthropogenic sources at a wide variety of ranges and scenarios. Kim, K., Lees, J.M., 2015. Imaging volcanic infrasound sources using time reversal mirror algorithm. Geophysical Journal International 202, 1663-1676.

  9. Chemical Source Localization Fusing Concentration Information in the Presence of Chemical Background Noise.

    PubMed

    Pomareda, Víctor; Magrans, Rudys; Jiménez-Soto, Juan M; Martínez, Dani; Tresánchez, Marcel; Burgués, Javier; Palacín, Jordi; Marco, Santiago

    2017-04-20

    We present the estimation of a likelihood map for the location of the source of a chemical plume dispersed under atmospheric turbulence and uniform wind conditions. The main contribution of this work is to extend previous proposals based on Bayesian inference with binary detections to the use of concentration information while at the same time being robust against the presence of background chemical noise. For that, the algorithm builds a background model with robust statistical measurements to assess the posterior probability that a given chemical concentration reading comes from the background or from a source emitting at a distance with a specific release rate. In addition, our algorithm allows multiple mobile gas sensors to be used. Ten realistic simulations and ten real data experiments are used for evaluation purposes. For the simulations, we assume that the sensors are mounted on cars whose main tasks do not include navigating toward the source. To collect the real dataset, a special arena with induced wind was built, and an autonomous vehicle equipped with several sensors, including a photoionization detector (PID) for sensing chemical concentration, was used. Simulation results show that our algorithm provides a better estimate of the source location, even for low background levels that favor the performance of the binary version. The improvement is clear for the synthetic data, while for real data the estimate is only slightly better, probably because our exploration arena is not able to provide uniform wind conditions. Finally, an estimate of the computational cost of the algorithmic proposal is presented.

  10. INPUFF: A SINGLE SOURCE GAUSSIAN PUFF DISPERSION ALGORITHM. USER'S GUIDE

    EPA Science Inventory

    INPUFF is a Gaussian INtegrated PUFF model. The Gaussian puff diffusion equation is used to compute the contribution to the concentration at each receptor from each puff every time step. Computations in INPUFF can be made for a single point source at up to 25 receptor locations. ...
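
    The puff contribution that such a model sums at each receptor every time step is the three-dimensional Gaussian below. The sketch uses fixed dispersion coefficients and omits ground reflection, whereas INPUFF's coefficients grow with travel time or distance according to stability; all numbers are illustrative.

    ```python
    # Hedged sketch of a single Gaussian puff's concentration contribution.
    import numpy as np

    def puff_concentration(q, puff_xyz, receptor_xyz, sx, sy, sz):
        """Concentration at a receptor from one puff of mass q.

        (sx, sy, sz): Gaussian spreads of the puff (m). Ground reflection
        (image-source term) is omitted for brevity.
        """
        dx, dy, dz = np.subtract(receptor_xyz, puff_xyz)
        norm = q / ((2.0 * np.pi) ** 1.5 * sx * sy * sz)
        return norm * np.exp(-0.5 * ((dx / sx) ** 2 + (dy / sy) ** 2 + (dz / sz) ** 2))

    u, dt, q = 5.0, 10.0, 1.0        # wind speed (m/s), time step (s), puff mass (kg)
    for step in range(20):           # puff advects past a receptor at x = 500 m
        x_puff = u * dt * step
        c = puff_concentration(q, (x_puff, 0.0, 50.0), (500.0, 0.0, 0.0),
                               sx=40.0, sy=40.0, sz=20.0)
        print(step, f"{c:.3e}")
    ```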

  11. The origin of infrasonic ionosphere oscillations over tropospheric thunderstorms

    NASA Astrophysics Data System (ADS)

    Shao, Xuan-Min; Lay, Erin H.

    2016-07-01

    Thunderstorms have been observed to introduce infrasonic oscillations in the ionosphere, but it is not clear what processes or which parts of a thunderstorm generate the oscillations. In this paper, we present a new technique that uses an array of ground-based GPS total electron content (TEC) measurements to locate the source of the infrasonic oscillations, and we compare the source locations with thunderstorm features to understand the possible source mechanisms. The location technique utilizes instantaneous phase differences between pairs of GPS-TEC measurements and an algorithm that best fits the measured and expected phase differences for assumed source positions and other related parameters. In this preliminary study, the infrasound waves are assumed to propagate along simple geometric raypaths from the source to the measurement locations to avoid extensive computations. The located sources are compared in time and space with thunderstorm development and lightning activity. Sources are often found near the main storm cells, but they are more likely related to the downdraft process than to the updraft process. The sources are also commonly found in the convectively quiet stratiform regions behind active cells and coincide well with extensive lightning discharges and inferred high-altitude sprite discharges.

  12. Performance Evaluation of Block Acquisition and Tracking Algorithms Using an Open Source GPS Receiver Platform

    NASA Technical Reports Server (NTRS)

    Ramachandran, Ganesh K.; Akopian, David; Heckler, Gregory W.; Winternitz, Luke B.

    2011-01-01

    Location technologies have many applications in wireless communications, military and space missions, etc. The US Global Positioning System (GPS) and other existing and emerging Global Navigation Satellite Systems (GNSS) are expected to provide accurate location information to enable such applications. While GNSS systems perform very well in strong-signal conditions, their operation in many urban, indoor, and space applications is not robust, or is even impossible, due to weak signals and strong distortions. The search for less costly, faster, and more sensitive receivers is still in progress. As the research community addresses more and more complicated phenomena, there is a demand for flexible multimode reference receivers, associated SDKs, and development platforms that may accelerate and facilitate the research. One such concept is the software GPS/GNSS receiver (GPS SDR), which permits easy access to algorithmic libraries and the possibility of integrating more advanced algorithms without hardware and essential software updates. The GNU-SDR and GPS-SDR open source receiver platforms are popular examples. This paper evaluates the performance of recently proposed block-correlator techniques for acquisition and tracking of GPS signals using the open source GPS-SDR platform.

  13. Planar location of the simulative acoustic source based on fiber optic sensor array

    NASA Astrophysics Data System (ADS)

    Liang, Yi-Jun; Liu, Jun-feng; Zhang, Qiao-ping; Mu, Lin-lin

    2010-06-01

    A fiber optic sensor array structured from four Sagnac fiber optic sensors is proposed to detect and locate a simulated acoustic emission (AE) source. The sensing loops of the Sagnac interferometer (SI) are regarded as point sensors owing to their small size. Based on the derived output light intensity expression of the SI, the optimum working condition of the Sagnac fiber optic sensor is discussed through MATLAB simulation. Four sensors are placed on a steel plate to form the sensor array, and the location algorithms are described in detail. When an impact is generated by an artificial AE source at any position on the plate, the AE signal is detected by the four sensors at different times. With the help of a single chip microcomputer (SCM), which calculates the position of the AE source and displays it on an LED, we have implemented intelligent detection and location.
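
    The essence of the four-sensor planar location problem can be sketched as a small nonlinear least-squares fit on arrival-time differences (one of several possible formulations; the paper's SCM implementation is not reproduced here, and the plate wave speed is an assumed constant).

    ```python
    # Hedged sketch: planar AE source location from four arrival times (TDOA).
    import numpy as np
    from scipy.optimize import least_squares

    sensors = np.array([[0.0, 0.0], [0.5, 0.0], [0.5, 0.5], [0.0, 0.5]])  # m
    v = 3000.0                                   # assumed plate wave speed, m/s
    src_true = np.array([0.31, 0.18])            # used to generate test data

    t_arr = np.linalg.norm(sensors - src_true, axis=1) / v   # simulated arrivals
    tdoa = t_arr - t_arr[0]                      # differences relative to sensor 0

    def residuals(p):
        d = np.linalg.norm(sensors - p, axis=1) / v
        return (d - d[0]) - tdoa

    print(least_squares(residuals, x0=[0.25, 0.25]).x)       # ~ (0.31, 0.18)
    ```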

  14. An improved chaotic fruit fly optimization based on a mutation strategy for simultaneous feature selection and parameter optimization for SVM and its applications

    PubMed Central

    Lou, Xin Yuan; Sun, Lin Fu

    2017-01-01

    This paper proposes a new support vector machine (SVM) optimization scheme based on an improved chaotic fruit fly optimization algorithm (FOA) with a mutation strategy to simultaneously perform parameter tuning for the SVM and feature selection. In the improved FOA, the chaotic particle initializes the fruit fly swarm location and replaces the expression of the distance the fruit fly travels to find the food source. In addition, the proposed mutation strategy uses two distinct generative mechanisms for new food sources at the osphresis phase, allowing the algorithm to search for the optimal solution both in the whole solution space and within the local solution space containing the fruit fly swarm location. In an evaluation on a group of ten benchmark problems, the proposed algorithm's performance is compared with that of other well-known algorithms, and the results support its superiority. Moreover, the algorithm is successfully applied to an SVM to perform both parameter tuning and feature selection for real-world classification problems. This method, called the chaotic fruit fly optimization algorithm (CIFOA)-SVM, is shown to be a more robust and effective optimization method than other well-known methods, particularly for the medical diagnosis problem and the credit card problem. PMID:28369096
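
    For readers unfamiliar with the underlying FOA, the basic loop is small: flies take random steps around a swarm location, the best fly relocates the swarm, and the process repeats. The sketch below shows only that basic loop on a stand-in objective; the chaotic initialization, mutation strategy, and SVM fitness of the paper are omitted, and all names and parameters are illustrative.

        import numpy as np

        def sphere(x):
            """Stand-in objective to minimize (the paper would use the SVM
            cross-validation error here)."""
            return np.sum(x ** 2)

        def foa(dim=2, n_flies=30, iters=200, step=1.0, seed=0):
            """Minimal basic fruit fly optimization: random steps around the
            swarm location; the best fly greedily relocates the swarm."""
            rng = np.random.default_rng(seed)
            swarm = rng.uniform(-10, 10, dim)      # initial swarm location
            best_val = sphere(swarm)
            for _ in range(iters):
                flies = swarm + step * rng.uniform(-1, 1, (n_flies, dim))
                vals = np.apply_along_axis(sphere, 1, flies)
                i = np.argmin(vals)
                if vals[i] < best_val:             # greedy swarm update
                    best_val, swarm = vals[i], flies[i]
            return swarm, best_val

        loc, val = foa()
        print("best location:", loc, "objective:", val)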

  15. Acoustic emission source location in composite structure by Voronoi construction using geodesic curve evolution.

    PubMed

    Gangadharan, R; Prasanna, G; Bhat, M R; Murthy, C R L; Gopalakrishnan, S

    2009-11-01

    Conventional analytical/numerical methods employing triangulation are suitable for locating an acoustic emission (AE) source in a planar structure without structural discontinuities. However, these methods cannot be extended to structures with complicated geometry, and the problem is compounded if the structure's material is anisotropic, warranting complex analytical velocity models. A geodesic approach using Voronoi construction is proposed in this work to locate the AE source in a composite structure. The approach is based on the fact that a wave takes the minimum-energy path to travel from the source to any other point in the connected domain. The geodesics are computed on the meshed surface of the structure using graph theory based on Dijkstra's algorithm. By virtually propagating the waves in reverse from the sensors along the geodesic paths and locating the first intersection point of these waves, one obtains the AE source location. The geodesic approach is shown to be more suitable for a practical source location solution in a composite structure with an arbitrary surface containing finite discontinuities. Experiments have been conducted on composite plate specimens of simple and complex geometry to validate the method.
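
    The computational core of the approach, shortest (geodesic) paths on a meshed surface plus back-propagation from the sensors, can be sketched on a small grid graph with a hole standing in for a structural discontinuity. Everything below (mesh, wave speed, sensor layout) is an illustrative assumption, not the authors' mesh or data; the source estimate is taken as the node where the back-projected origin times agree best.

        import heapq
        import numpy as np

        def dijkstra(adj, start):
            """Shortest geodesic distances over {node: [(nbr, weight), ...]}."""
            dist = {start: 0.0}
            pq = [(0.0, start)]
            while pq:
                d, u = heapq.heappop(pq)
                if d > dist.get(u, np.inf):
                    continue
                for v, w in adj[u]:
                    if d + w < dist.get(v, np.inf):
                        dist[v] = d + w
                        heapq.heappush(pq, (d + w, v))
            return dist

        # Toy "meshed plate": a grid graph with a rectangular hole standing in
        # for a structural discontinuity that the geodesics must route around.
        nx = ny = 30
        h = 0.01   # grid spacing, m
        hole = {(i, j) for i in range(12, 18) for j in range(10, 20)}
        nodes = [(i, j) for i in range(nx) for j in range(ny) if (i, j) not in hole]
        adj = {p: [] for p in nodes}
        for (i, j) in nodes:
            for di, dj in [(1, 0), (0, 1), (1, 1), (1, -1)]:
                q = (i + di, j + dj)
                if q in adj:
                    w = h * np.hypot(di, dj)
                    adj[(i, j)].append((q, w))
                    adj[q].append(((i, j), w))

        sensors = [(0, 0), (29, 0), (29, 29), (0, 29)]
        speed = 3000.0                                 # assumed wave speed, m/s
        true_src = (5, 25)
        geo = [dijkstra(adj, s) for s in sensors]
        t = [geo[k][true_src] / speed for k in range(4)]  # synthetic arrivals

        # Propagate in reverse from each sensor: the source is the node where
        # the back-projected origin times t_k - d_k/speed agree best.
        est = min(nodes,
                  key=lambda p: np.std([t[k] - geo[k][p] / speed for k in range(4)]))
        print("estimated source node:", est, "true:", true_src)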

  16. Research on Knowledge-Based Optimization Method of Indoor Location Based on Low Energy Bluetooth

    NASA Astrophysics Data System (ADS)

    Li, C.; Li, G.; Deng, Y.; Wang, T.; Kang, Z.

    2017-09-01

    With the rapid development of LBS (Location-Based Services), demand for the commercialization of indoor location has been increasing, but the technology is not yet mature. Currently, the accuracy of indoor location, the complexity of the algorithms, and the cost of positioning are difficult to balance simultaneously, which still restricts the choice and application of a mainstream positioning technology. This paper therefore proposes a knowledge-based optimization method for indoor location based on low energy Bluetooth. The main steps include: 1) establishment and application of a priori and a posteriori knowledge bases; 2) primary selection of signal sources; 3) elimination of gross positioning errors; 4) accumulation of positioning knowledge. The experimental results show that the proposed algorithm can eliminate outlier signal sources and improve the accuracy of single-point positioning on the simulation data. The proposed scheme is a dynamic knowledge-accumulation process rather than a single positioning step. The scheme uses inexpensive equipment and offers a new idea for the theory and methods of indoor positioning. Moreover, the high-accuracy positioning results on the simulation data show that the scheme has application value in commercial settings.
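
    Step 3, the elimination of gross positioning errors, can be sketched as a residual test on a least-squares fix. The snippet below assumes a log-distance path-loss model and hypothetical beacon positions, RSSI values, and threshold; in the paper's scheme the knowledge base would supply these quantities.

        import numpy as np
        from scipy.optimize import least_squares

        # Hypothetical beacon positions (m) and RSSI readings (dBm); the fifth
        # reading is a deliberate gross error (outlier).
        beacons = np.array([[0, 0], [8, 0], [8, 6], [0, 6], [4, 3]], dtype=float)
        rssi = np.array([-62.3, -66.1, -67.7, -65.4, -95.0])
        tx0, n = -50.0, 2.2   # assumed 1 m reference power and path-loss exponent

        dist = 10 ** ((tx0 - rssi) / (10 * n))   # log-distance path-loss model

        def solve(bs, ds, **kw):
            res = lambda p: np.linalg.norm(bs - p, axis=1) - ds
            return least_squares(res, x0=bs.mean(axis=0), **kw).x

        # Primary fix with a robust loss, then gross-error elimination: drop any
        # beacon whose range residual exceeds a threshold and re-solve.
        p0 = solve(beacons, dist, loss="soft_l1")
        resid = np.abs(np.linalg.norm(beacons - p0, axis=1) - dist)
        keep = resid < 10.0   # metres; illustrative threshold
        p = solve(beacons[keep], dist[keep]) if keep.sum() >= 3 else p0
        print("kept beacons:", np.where(keep)[0], "position estimate (m):", p)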

  17. Micro-seismic waveform matching inversion based on gravitational search algorithm and parallel computation

    NASA Astrophysics Data System (ADS)

    Jiang, Y.; Xing, H. L.

    2016-12-01

    Micro-seismic events induced by water injection, mining activity, or oil/gas extraction are quite informative; their interpretation can be applied to the reconstruction of underground stress and the monitoring of hydraulic fracturing progress in oil/gas reservoirs. The source characteristics and locations are crucial parameters required for these purposes, and they can be obtained through the waveform matching inversion (WMI) method. It is therefore imperative to develop a WMI algorithm with high accuracy and convergence speed. Heuristic algorithms, as a category of nonlinear methods, possess very high convergence speed and a good capacity to escape local minima, and they have been applied in many areas (e.g. image processing, artificial intelligence). However, their effectiveness for micro-seismic WMI is still poorly investigated; very few studies address this subject. In this research an advanced heuristic algorithm, the gravitational search algorithm (GSA), is proposed to estimate the focal mechanism (strike, dip, and rake angles) and source locations in three dimensions. Unlike traditional inversion methods, the heuristic inversion does not require the approximation of a Green's function. The method interacts directly with a CPU-parallelized finite difference forward modelling engine and updates the model parameters according to GSA criteria. The effectiveness of this method is tested with synthetic data from a multi-layered elastic model; the results indicate that GSA can be applied successfully to WMI and has unique advantages. Keywords: micro-seismicity, waveform matching inversion, gravitational search algorithm, parallel computation
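
    A minimal GSA loop, shown on a stand-in benchmark objective rather than the waveform misfit of the paper, illustrates the mass/force/velocity updates that drive the search; all parameter values are illustrative.

        import numpy as np

        def gsa(f, dim, bounds, n=20, iters=100, g0=100.0, seed=1):
            """Minimal gravitational search algorithm: agents attract each
            other with 'gravity' proportional to fitness-derived masses."""
            rng = np.random.default_rng(seed)
            lo, hi = bounds
            x = rng.uniform(lo, hi, (n, dim))
            v = np.zeros((n, dim))
            for t in range(iters):
                fit = np.apply_along_axis(f, 1, x)
                best, worst = fit.min(), fit.max()
                span = best - worst
                m = (fit - worst) / span if span else np.full(n, 1.0 / n)
                m = m / m.sum()                        # normalized masses
                g = g0 * np.exp(-20.0 * t / iters)     # decaying gravity constant
                acc = np.zeros_like(x)
                for i in range(n):
                    for j in range(n):
                        if i != j:
                            r = np.linalg.norm(x[j] - x[i]) + 1e-12
                            acc[i] += rng.random(dim) * g * m[j] * (x[j] - x[i]) / r
                v = rng.random((n, dim)) * v + acc     # stochastic velocity update
                x = np.clip(x + v, lo, hi)
            i = np.argmin(np.apply_along_axis(f, 1, x))
            return x[i], f(x[i])

        # Stand-in objective for the waveform misfit (assumed): Rastrigin.
        rastrigin = lambda z: 10 * len(z) + np.sum(z**2 - 10 * np.cos(2 * np.pi * z))
        best, val = gsa(rastrigin, dim=3, bounds=(-5.12, 5.12))
        print("best model parameters:", best, "misfit:", val)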

  18. Study on Data Clustering and Intelligent Decision Algorithm of Indoor Localization

    NASA Astrophysics Data System (ADS)

    Liu, Zexi

    2018-01-01

    Indoor positioning technology gives human beings the ability of positional perception in architectural space, but it suffers from incomplete single-network coverage and redundancy in location data. This article therefore studies data clustering and intelligent decision algorithms for indoor localization. It designs the basic framework of multi-source indoor positioning technology and analyzes fingerprint localization algorithms based on distance measurement together with position and orientation integration from inertial devices. By optimizing the clustering of massive indoor location data, data normalization pretreatment, multi-dimensional controllable clustering centers, and multi-factor clustering are realized, and the redundancy of the location data is reduced. In addition, path planning based on neural network inference and decision-making is proposed, with a sparse data input layer, a dynamic feedback hidden layer, and an output layer; the low-dimensional results improve intelligent navigation path planning.

  19. Microseismic source locations with deconvolution migration

    NASA Astrophysics Data System (ADS)

    Wu, Shaojiang; Wang, Yibo; Zheng, Yikang; Chang, Xu

    2018-03-01

    Identifying and locating microseismic events are critical problems in hydraulic fracturing monitoring for unconventional resource exploration. In contrast to active seismic data, microseismic data are usually recorded with unknown source excitation time and source location. In this study, we introduce deconvolution migration by combining deconvolution interferometry with interferometric cross-correlation migration (CCM). This method avoids the need for the source excitation time and enhances both spatial resolution and robustness by eliminating the squared source-wavelet term from CCM. The proposed algorithm consists of three steps: (1) generate virtual gathers by deconvolving the master trace from all other traces in the microseismic gather to remove the unknown excitation time; (2) migrate the virtual gather to obtain a single image of the source location; and (3) stack all of these images to obtain the final estimated image of the source location. We test the proposed method on complex synthetic and field data sets from surface hydraulic fracturing monitoring and compare the results with those obtained by interferometric CCM. The results demonstrate that the proposed method obtains a 50 per cent higher spatial resolution image of the source location and more robust location estimates with smaller errors, especially in the presence of velocity model errors. The method is also beneficial for source mechanism inversion and global seismology applications.
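
    Step (1) amounts to a frequency-domain deconvolution of the master trace from every other trace. A minimal sketch with water-level regularization and a synthetic two-trace check (illustrative, not the authors' code):

        import numpy as np

        def deconvolve(master, trace, eps=1e-3):
            """Frequency-domain deconvolution of a master trace from another
            trace with water-level regularization; this removes the unknown
            excitation time and the squared wavelet term that plain
            cross-correlation retains."""
            M, T = np.fft.rfft(master), np.fft.rfft(trace)
            denom = np.maximum(np.abs(M) ** 2, eps * np.max(np.abs(M) ** 2))
            return np.fft.irfft(T * np.conj(M) / denom, n=len(trace))

        # Synthetic check: the same wavelet delayed by 30 samples deconvolves
        # to a spike near lag 30, independent of the unknown excitation time.
        rng = np.random.default_rng(0)
        w = rng.standard_normal(64)
        master = np.zeros(512); master[100:164] = w   # arrival at sample 100
        trace = np.zeros(512); trace[130:194] = w     # arrival at sample 130
        virt = deconvolve(master, trace)
        print("estimated lag:", np.argmax(virt))      # ~30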

  20. Confidence range estimate of extended source imagery acquisition algorithms via computer simulations [in optical communication systems]

    NASA Technical Reports Server (NTRS)

    Chen, Chien-C.; Hui, Elliot; Okamoto, Garret

    1992-01-01

    Spatial acquisition using the sun-lit Earth as a beacon source provides several advantages over active beacon-based systems for deep-space optical communication systems. However, since the angular extent of the Earth image is large compared to the laser beam divergence, the acquisition subsystem must be capable of resolving the image to derive the proper pointing orientation. The algorithms used must be capable of deducing the receiver location given the blurring introduced by the imaging optics and the large Earth albedo fluctuation. Furthermore, because of the complexity of modelling the Earth and the tracking algorithms, an accurate estimate of algorithm accuracy can only be made via simulation using realistic Earth images. An image simulator was constructed for this purpose, and the results of the simulation runs are reported.

  1. Application of the Approximate Bayesian Computation methods in the stochastic estimation of atmospheric contamination parameters for mobile sources

    NASA Astrophysics Data System (ADS)

    Kopka, Piotr; Wawrzynczak, Anna; Borysiewicz, Mieczyslaw

    2016-11-01

    In this paper the Bayesian methodology known as Approximate Bayesian Computation (ABC) is applied to the problem of atmospheric contamination source identification. The algorithm's input data are concentrations of the released substance registered by the distributed sensor network and arriving on-line. This paper presents the Sequential ABC algorithm in detail and tests its efficiency in estimating the probability distributions of the atmospheric release parameters of a mobile contamination source. The developed algorithms are tested using data from the Over-Land Atmospheric Diffusion (OLAD) field tracer experiment. The paper demonstrates estimation of seven parameters characterizing the contamination source: the source's starting position (x, y), its direction of motion (d), its velocity (v), the release rate (q), the start time of the release (ts), and its duration (td). Newly arriving concentrations dynamically update the probability distributions of the search parameters. The atmospheric dispersion Second-order Closure Integrated PUFF (SCIPUFF) model is used as the forward model to predict the concentrations at the sensor locations.

  2. Far-field DOA estimation and source localization for different scenarios in a distributed sensor network

    NASA Astrophysics Data System (ADS)

    Asgari, Shadnaz

    Recent developments in integrated circuits and wireless communications not only open up many possibilities but also introduce challenging issues for the collaborative processing of signals for source localization and beamforming in an energy-constrained distributed sensor network. In signal processing, various sensor array processing algorithms and concepts have been adopted, but they must be further tailored to match the communication and computational constraints. Sometimes the constraints are such that none of the existing algorithms is an efficient option for the defined problem, and as a result the necessity of developing a new algorithm becomes undeniable. In this dissertation, we present the theoretical and practical issues of Direction-Of-Arrival (DOA) estimation and source localization using the Approximate-Maximum-Likelihood (AML) algorithm for different scenarios. We first investigate a robust algorithm design for coherent source DOA estimation in a limited reverberant environment. Then, we provide a least-squares (LS) solution for source localization based on our newly proposed virtual array model. In another scenario, we consider determining the location of a disturbance source which emits both wideband acoustic and seismic signals. We devise an enhanced AML algorithm to process the data collected at the acoustic sensors. For processing the seismic signals, two distinct algorithms are investigated to determine the DOAs. Then, we consider a basic algorithm for fusion of the results yielded by the acoustic and seismic arrays. We also investigate the theoretical and practical issues of DOA estimation in a three-dimensional (3D) scenario. We show that the performance of the proposed 3D AML algorithm converges to the Cramer-Rao Bound. We use the concept of an isotropic array to reduce the complexity of the proposed algorithm by advocating a decoupled 3D version. We also explore a modified version of the decoupled 3D AML algorithm which can be used for DOA estimation with non-isotropic arrays. For each scenario, efficient numerical implementations of the corresponding AML algorithm are derived and applied in a real-time sensor network testbed. Extensive simulations as well as experimental results are presented to verify the effectiveness of the proposed algorithms.

  3. Automated source term and wind parameter estimation for atmospheric transport and dispersion applications

    NASA Astrophysics Data System (ADS)

    Bieringer, Paul E.; Rodriguez, Luna M.; Vandenberghe, Francois; Hurst, Jonathan G.; Bieberbach, George; Sykes, Ian; Hannan, John R.; Zaragoza, Jake; Fry, Richard N.

    2015-12-01

    Accurate simulations of the atmospheric transport and dispersion (AT&D) of hazardous airborne materials rely heavily on the source term parameters necessary to characterize the initial release and on the meteorological conditions that drive the downwind dispersion. In many cases the source parameters are not known and are consequently based on rudimentary assumptions. This is particularly true of accidental releases and the intentional releases associated with terrorist incidents. When available, meteorological observations are often not representative of the conditions at the location of the release, and the use of these non-representative meteorological conditions can result in significant errors in the hazard assessments downwind of the sensors, even when the other source parameters are accurately characterized. Here, we describe a computationally efficient methodology to characterize both the release source parameters and the low-level winds (e.g. winds near the surface) required to produce a refined downwind hazard. This methodology, known as the Variational Iterative Refinement Source Term Estimation (STE) Algorithm (VIRSA), consists of a combination of modeling systems. These systems include a back-trajectory based source inversion method, a forward Gaussian puff dispersion model, and a variational refinement algorithm that uses both a simple forward AT&D model that is a surrogate for the more complex Gaussian puff model and a formal adjoint of this surrogate model. The back-trajectory based method is used to calculate a "first guess" source estimate based on the available observations of the airborne contaminant plume and atmospheric conditions. The variational refinement algorithm is then used to iteratively refine the first-guess STE parameters and meteorological variables. The algorithm has been evaluated across a wide range of scenarios of varying complexity. It has been shown to improve the source location estimate by several hundred percent (normalized by the distance from the source to the closest sampler) and to improve mass estimates by several orders of magnitude. Furthermore, it can operate in scenarios with inconsistencies between the wind and airborne contaminant sensor observations and adjust the wind to provide a better match between the hazard prediction and the observations.

  4. A quantitative acoustic emission study on fracture processes in ceramics based on wavelet packet decomposition

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ning, J. G.; Chu, L.; Ren, H. L., E-mail: huilanren@bit.edu.cn

    2014-08-28

    We base a quantitative acoustic emission (AE) study of fracture processes in alumina ceramics on wavelet packet decomposition and AE source location. According to the frequency characteristics, as well as the energy and ringdown counts of the AE, the fracture process is divided into four stages: crack closure, nucleation, development, and critical failure. Each AE signal is decomposed by a 2-level wavelet packet decomposition into four frequency bands, from low to high: AA2, AD2, DA2, and DD2. The energy eigenvalues P0, P1, P2, and P3 corresponding to these four frequency bands are calculated. By analyzing changes in P0 and P3 across the four stages, we determine the inverse relationship between AE frequency and crack source size during ceramic fracture. AE signals associated with crack nucleation can be identified when P0 is less than 5 and P3 is more than 60, whereas AE signals associated with dangerous crack propagation can be identified when more than 92% of P0 values are greater than 4 and more than 95% of P3 values are less than 45. The Geiger location algorithm is used to locate AE sources and cracks in the sample. The results of this location algorithm are consistent with the positions of fractures observed in the sample under a scanning electron microscope; thus the fracture locations obtained with Geiger's method can reflect the fracture process. The stage division by location results is in good agreement with the division based on AE frequency characteristics. We find that both wavelet packet decomposition and Geiger's AE source location are suitable for identifying the evolution of cracks in alumina ceramics.

  5. The origin of infrasonic ionosphere oscillations over tropospheric thunderstorms

    DOE PAGES

    Shao, Xuan -Min; Lay, Erin Hoffmann

    2016-07-01

    Thunderstorms have been observed to introduce infrasonic oscillations in the ionosphere, but it is not clear what processes or which parts of the thunderstorm generate the oscillations. In this paper, we present a new technique that uses an array of ground-based GPS total electron content (TEC) measurements to locate the source of the infrasonic oscillations and compare the source locations with thunderstorm features to understand the possible source mechanisms. The location technique utilizes instantaneous phase differences between pairs of GPS-TEC measurements and an algorithm to best fit the measured and the expected phase differences for assumed source positions and other related parameters. In this preliminary study, the infrasound waves are assumed to propagate along simple geometric raypaths from the source to the measurement locations to avoid extensive computations. The located sources are compared in time and space with thunderstorm development and lightning activity. Sources are often found near the main storm cells, but they are more likely related to the downdraft process than to the updraft process. As a result, the sources are also commonly found in the convectively quiet stratiform regions behind active cells, where they coincide well with extensive lightning discharges and inferred high-altitude sprite discharges.

  6. The origin of infrasonic ionosphere oscillations over tropospheric thunderstorms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shao, Xuan -Min; Lay, Erin Hoffmann

    Thunderstorms have been observed to introduce infrasonic oscillations in the ionosphere, but it is not clear what processes or which parts of the thunderstorm generate the oscillations. In this paper, we present a new technique that uses an array of ground-based GPS total electron content (TEC) measurements to locate the source of the infrasonic oscillations and compare the source locations with thunderstorm features to understand the possible source mechanisms. The location technique utilizes instantaneous phase differences between pairs of GPS-TEC measurements and an algorithm to best fit the measured and the expected phase differences for assumed source positions and other related parameters. In this preliminary study, the infrasound waves are assumed to propagate along simple geometric raypaths from the source to the measurement locations to avoid extensive computations. The located sources are compared in time and space with thunderstorm development and lightning activity. Sources are often found near the main storm cells, but they are more likely related to the downdraft process than to the updraft process. As a result, the sources are also commonly found in the convectively quiet stratiform regions behind active cells, where they coincide well with extensive lightning discharges and inferred high-altitude sprite discharges.

  7. Definition of an Enhanced Map-Matching Algorithm for Urban Environments with Poor GNSS Signal Quality.

    PubMed

    Jiménez, Felipe; Monzón, Sergio; Naranjo, Jose Eugenio

    2016-02-04

    Vehicle positioning is a key factor for the numerous information and assistance applications included in vehicles, for which satellite positioning is mainly used. However, this positioning process can result in errors and measurement uncertainties. These errors come mainly from two sources: errors and simplifications in digital maps, and errors in locating the vehicle. From that inaccurate data, the task of assigning the vehicle's location to a link on the digital map at every instant is carried out by map-matching algorithms. These algorithms have been developed to fulfil that need; they attempt to amend these errors to offer the user a suitable position. In this research, an algorithm is developed that attempts to resolve positioning errors when Global Navigation Satellite System (GNSS) signal reception is frequently lost. The algorithm has been tested with satisfactory results in a complex urban environment of narrow streets and tall buildings, where errors and signal reception losses of the GPS receiver are frequent.

  8. Definition of an Enhanced Map-Matching Algorithm for Urban Environments with Poor GNSS Signal Quality

    PubMed Central

    Jiménez, Felipe; Monzón, Sergio; Naranjo, Jose Eugenio

    2016-01-01

    Vehicle positioning is a key factor for the numerous information and assistance applications included in vehicles, for which satellite positioning is mainly used. However, this positioning process can result in errors and measurement uncertainties. These errors come mainly from two sources: errors and simplifications in digital maps, and errors in locating the vehicle. From that inaccurate data, the task of assigning the vehicle's location to a link on the digital map at every instant is carried out by map-matching algorithms. These algorithms have been developed to fulfil that need; they attempt to amend these errors to offer the user a suitable position. In this research, an algorithm is developed that attempts to resolve positioning errors when Global Navigation Satellite System (GNSS) signal reception is frequently lost. The algorithm has been tested with satisfactory results in a complex urban environment of narrow streets and tall buildings, where errors and signal reception losses of the GPS receiver are frequent. PMID:26861320

  9. A Novel Artificial Bee Colony Algorithm Based on Internal-Feedback Strategy for Image Template Matching

    PubMed Central

    Gong, Li-Gang

    2014-01-01

    Image template matching refers to the technique of locating a given reference image within a source image such that they are the most similar. It is a fundamental task in the field of visual target recognition. In general, there are two critical aspects of a template matching scheme: similarity measurement and best-match location search. In this work, we choose the well-known normalized cross-correlation model as the similarity criterion. The search for the best-match location is carried out through an internal-feedback artificial bee colony (IF-ABC) algorithm. The IF-ABC algorithm is distinguished by its effort to fight premature convergence. This is achieved by discarding the conventional roulette selection procedure of the ABC algorithm, giving each employed bee an equal chance to be followed by the onlooker bees in the local search phase. Besides that, we also suggest efficiently utilizing the internal convergence states as feedback guidance for the search intensity in subsequent iteration cycles. We investigated four ideal template matching cases as well as four real-world cases using different search algorithms. Our simulation results show that the IF-ABC algorithm is more effective and robust for this template matching task than the conventional ABC and two state-of-the-art modified ABC algorithms. PMID:24892107
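
    The two ingredients, the NCC similarity measure and a location-search objective, can be sketched independently of the bee colony itself. The snippet below evaluates NCC at candidate top-left positions (here exhaustively, where IF-ABC would sample candidates instead), with all data synthetic.

        import numpy as np

        def ncc(window, template):
            """Normalized cross-correlation between a window and a template."""
            w = window - window.mean()
            t = template - template.mean()
            denom = np.sqrt((w ** 2).sum() * (t ** 2).sum()) + 1e-12
            return (w * t).sum() / denom

        def match_score(image, template, x, y):
            """Fitness of a candidate top-left location (x, y); this is the
            objective an IF-ABC food source would be evaluated on."""
            h, w = template.shape
            if x < 0 or y < 0 or x + h > image.shape[0] or y + w > image.shape[1]:
                return -1.0
            return ncc(image[x:x + h, y:y + w], template)

        # Tiny synthetic check: plant the template and confirm the planted
        # location scores ~1.0 under exhaustive search.
        rng = np.random.default_rng(3)
        img = rng.random((64, 64))
        tpl = img[20:28, 35:43].copy()
        scores = [(match_score(img, tpl, i, j), (i, j))
                  for i in range(57) for j in range(57)]
        print(max(scores))   # (~1.0, (20, 35))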

  10. Evaluating the Real-time and Offline Performance of the Virtual Seismologist Earthquake Early Warning Algorithm

    NASA Astrophysics Data System (ADS)

    Cua, G.; Fischer, M.; Heaton, T.; Wiemer, S.

    2009-04-01

    The Virtual Seismologist (VS) algorithm is a Bayesian approach to regional, network-based earthquake early warning (EEW). Bayes' theorem as applied in the VS algorithm states that the most probable source estimate at any given time is a combination of contributions from relatively static prior information that does not change over the timescale of earthquake rupture and a likelihood function that evolves with time to take into account incoming pick and amplitude observations from the on-going earthquake. Potentially useful types of prior information include network topology or station health status, regional hazard maps, earthquake forecasts, and the Gutenberg-Richter magnitude-frequency relationship. The VS codes provide magnitude and location estimates once picks are available at 4 stations; these source estimates are subsequently updated each second. The algorithm predicts the geographical distribution of peak ground acceleration and velocity using the estimated magnitude and location and appropriate ground motion prediction equations; the peak ground motion estimates are also updated each second. Implementation of the VS algorithm in California and Switzerland is funded by the Seismic Early Warning for Europe (SAFER) project. The VS method is one of three EEW algorithms whose real-time performance is being evaluated and tested by the California Integrated Seismic Network (CISN) EEW project. A crucial component of operational EEW algorithms is the ability to distinguish between noise and earthquake-related signals in real time. We discuss various empirical approaches that allow the VS algorithm to operate in the presence of noise. Real-time operation of the VS codes at the Southern California Seismic Network (SCSN) began in July 2008. On average, the VS algorithm provides initial magnitude, location, origin time, and ground motion distribution estimates within 17 seconds of the earthquake origin time. These initial estimate times are dominated by the time needed for 4 acceptable picks to become available and are thus heavily influenced by the station density in a given region; they also include the effects of telemetry delay, which ranges between 6 and 15 seconds at the SCSN, and processing time (~1 second). Other relevant performance statistics include: 95% of initial real-time location estimates are within 20 km of the actual epicenter, and 97% of initial real-time magnitude estimates are within one magnitude unit of the network magnitude. Extension of real-time VS operations to networks in Northern California is an on-going effort. In Switzerland, the VS codes have been run on offline waveform data from over 125 earthquakes recorded by the Swiss Digital Seismic Network (SDSN) and the Swiss Strong Motion Network (SSMS). We discuss the performance of the VS algorithm on these datasets in terms of magnitude, location, and ground motion estimation.

  11. 2D joint inversion of CSAMT and magnetic data based on cross-gradient theory

    NASA Astrophysics Data System (ADS)

    Wang, Kun-Peng; Tan, Han-Dong; Wang, Tao

    2017-06-01

    A two-dimensional forward modeling and inversion algorithm for the controlled-source audio-frequency magnetotelluric (CSAMT) method is developed to invert data from the entire region (near, transition, and far field) and deal with the effects of artificial sources. First, a regularization factor is introduced into the 2D magnetic inversion, and the magnetic susceptibility is updated in logarithmic form so that the inverted magnetic susceptibility is always positive. Second, the joint inversion of the CSAMT and magnetic methods is completed with the introduction of the cross gradient. By searching for the weight of the cross-gradient term in the objective function, mutual influence between the two different physical properties at different locations is avoided. Model tests show that the joint inversion based on cross-gradient theory offers better results than single-method inversion. The 2D forward and inverse algorithm for CSAMT with a source can effectively deal with artificial sources and ensures the reliability of the final joint inversion algorithm.

  12. Interpolating between random walks and optimal transportation routes: Flow with multiple sources and targets

    NASA Astrophysics Data System (ADS)

    Guex, Guillaume

    2016-05-01

    In recent articles about graphs, different models have proposed a formalism to find a type of path between two nodes, the source and the target, at a crossroads between the shortest path and the random-walk path. These models include a freely adjustable parameter, allowing one to tune the behavior of the path toward randomized movements or direct routes. This article presents a natural generalization of these models, namely a model with multiple sources and targets. In this context, source nodes can be viewed as locations with a supply of a certain good (e.g. people, money, information) and target nodes as locations with a demand for the same good. An algorithm is constructed to display the flow of goods in the network between sources and targets. With, again, a freely adjustable parameter, this flow can be tuned to follow routes of minimum cost, thus displaying the flow in the context of the optimal transportation problem, or, by contrast, a random flow, known to be similar to the electrical current flow if the random walk is reversible. Moreover, a source-target coupling can be retrieved from this flow, offering an optimal assignment for the transportation problem. The algorithm is described in the first part of the article and then illustrated with case studies.

  13. Selective Listening Point Audio Based on Blind Signal Separation and Stereophonic Technology

    NASA Astrophysics Data System (ADS)

    Niwa, Kenta; Nishino, Takanori; Takeda, Kazuya

    A sound field reproduction method is proposed that uses blind source separation and a head-related transfer function. In the proposed system, multichannel acoustic signals captured at distant microphones are decomposed into a set of location/signal pairs of virtual sound sources based on frequency-domain independent component analysis. After estimating the locations and signals of the virtual sources, the spatial sound at the selected listening point is constructed by convolving the controlled acoustic transfer functions with each signal. In experiments, a sound field produced by six sound sources is captured using 48 distant microphones and decomposed into sets of virtual sound sources. Since subjective evaluation shows no significant difference between natural and reconstructed sound when six virtual sources are used, the effectiveness of the decomposition algorithm as well as the virtual source representation is confirmed.

  14. A probabilistic framework for single-station location of seismicity on Earth and Mars

    NASA Astrophysics Data System (ADS)

    Böse, M.; Clinton, J. F.; Ceylan, S.; Euchner, F.; van Driel, M.; Khan, A.; Giardini, D.; Lognonné, P.; Banerdt, W. B.

    2017-01-01

    Locating the source of seismic energy from a single three-component seismic station is associated with large uncertainties, originating from challenges in identifying seismic phases, as well as inevitable pick and model uncertainties. The challenge is even higher for planets such as Mars, where interior structure is a priori largely unknown. In this study, we address the single-station location problem by developing a probabilistic framework that combines location estimates from multiple algorithms to estimate the probability density function (PDF) for epicentral distance, back azimuth, and origin time. Each algorithm uses independent and complementary information in the seismic signals. Together, the algorithms allow locating seismicity ranging from local to teleseismic quakes. Distances and origin times of large regional and teleseismic events (M > 5.5) are estimated from observed and theoretical body- and multi-orbit surface-wave travel times. The latter are picked from the maxima in the waveform envelopes in various frequency bands. For smaller events at local and regional distances, only first arrival picks of body waves are used, possibly in combination with fundamental Rayleigh R1 waveform maxima where detectable; depth phases, such as pP or PmP, help constrain source depth and improve distance estimates. Back azimuth is determined from the polarization of the Rayleigh- and/or P-wave phases. When seismic signals are good enough for multiple approaches to be used, estimates from the various methods are combined through the product of their PDFs, resulting in an improved event location and reduced uncertainty range estimate compared to the results obtained from each algorithm independently. To verify our approach, we use both earthquake recordings from existing Earth stations and synthetic Martian seismograms. The Mars synthetics are generated with a full-waveform scheme (AxiSEM) using spherically-symmetric seismic velocity, density and attenuation models of Mars that incorporate existing knowledge of Mars internal structure, and include expected ambient and instrumental noise. While our probabilistic framework is developed mainly for application to Mars in the context of the upcoming InSight mission, it is also relevant for locating seismic events on Earth in regions with sparse instrumentation.
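
    The combination rule itself is simple: on a common grid, multiply the per-algorithm PDFs and renormalize. A minimal sketch for epicentral distance, with illustrative Gaussian PDFs standing in for the body-wave and surface-wave estimates:

        import numpy as np

        # Hypothetical per-algorithm PDFs for epicentral distance (degrees),
        # e.g. one from body-wave travel times, one from R1/R2 envelope timing.
        x = np.linspace(0.0, 180.0, 1801)
        gauss = lambda mu, sig: np.exp(-0.5 * ((x - mu) / sig) ** 2)

        pdf_body = gauss(42.0, 6.0)      # broad body-wave estimate
        pdf_surface = gauss(45.0, 3.0)   # tighter surface-wave estimate

        combined = pdf_body * pdf_surface           # product of independent PDFs
        combined /= combined.sum() * (x[1] - x[0])  # renormalize to unit area

        print("combined MAP distance (deg):", x[np.argmax(combined)])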

  15. Evaluating an image-fusion algorithm with synthetic-image-generation tools

    NASA Astrophysics Data System (ADS)

    Gross, Harry N.; Schott, John R.

    1996-06-01

    An algorithm that combines spectral mixing and nonlinear optimization is used to fuse multiresolution images. Image fusion merges images of different spatial and spectral resolutions to create a high spatial resolution multispectral combination. High spectral resolution allows identification of materials in the scene, while high spatial resolution locates those materials. In this algorithm, conventional spectral mixing estimates the percentage of each material (called endmembers) within each low-resolution pixel. Three spectral mixing models are compared: unconstrained, partially constrained, and fully constrained. In the partially constrained case, the endmember fractions are required to sum to one. In the fully constrained case, all fractions are additionally required to lie between zero and one. While negative fractions seem inappropriate, they can arise from random spectral realizations of the materials. In the second part of the algorithm, the low-resolution fractions are used as inputs to a constrained nonlinear optimization that calculates the endmember fractions for the high-resolution pixels. The constraints mirror the low-resolution constraints and maintain consistency with the low-resolution fraction results. The algorithm can use one or more higher-resolution sharpening images to locate the endmembers with high spatial accuracy. The algorithm was evaluated with synthetic image generation (SIG) tools. A SIG-derived image can be used to control the various error sources that are likely to impair algorithm performance. These error sources include atmospheric effects, mismodeled spectral endmembers, and variability in topography and illumination. By controlling the introduction of these errors, the robustness of the algorithm can be studied and improved. The motivation for this research is to take advantage of the next generation of multi/hyperspectral sensors. Although the hyperspectral images will be of modest to low resolution, fusing them with high-resolution sharpening images will produce a higher spatial resolution land cover or material map.
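
    The fully constrained mixing model is a small constrained least-squares problem per pixel. A minimal sketch with synthetic endmember spectra (assumed; not the SIG data), using a sum-to-one equality constraint and [0, 1] bounds:

        import numpy as np
        from scipy.optimize import minimize

        # Hypothetical endmember spectra (columns of E) and one observed
        # low-resolution pixel mixed from them.
        rng = np.random.default_rng(4)
        E = rng.random((10, 3))                    # 10 bands, 3 endmembers
        f_true = np.array([0.6, 0.3, 0.1])
        pixel = E @ f_true + 0.01 * rng.standard_normal(10)

        obj = lambda f: np.sum((E @ f - pixel) ** 2)

        # Fully constrained mixing: fractions sum to one and lie in [0, 1].
        res = minimize(obj, x0=np.full(3, 1 / 3), method="SLSQP",
                       bounds=[(0.0, 1.0)] * 3,
                       constraints={"type": "eq", "fun": lambda f: f.sum() - 1.0})
        print("estimated endmember fractions:", res.x)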

  16. The effect of the Earth's oblate spheroid shape on the accuracy of a time-of-arrival lightning ground strike locating system

    NASA Technical Reports Server (NTRS)

    Casper, Paul W.; Bent, Rodney B.

    1991-01-01

    The algorithm used in previous-generation time-of-arrival lightning mapping systems was based on the assumption that the earth is a perfect sphere. These systems yield highly accurate lightning locations, which is their major strength. However, extensive analysis of tower strike data has revealed occasionally significant (one to two kilometer) systematic offset errors which are not explained by the usual error sources. It was determined that these systematic errors reduce dramatically (in some cases) when the oblate shape of the earth is taken into account. The oblate spheroid correction algorithm is presented along with a case example.

  17. Wireless Sensor Network for Radiometric Detection and Assessment of Partial Discharge in High-Voltage Equipment

    NASA Astrophysics Data System (ADS)

    Upton, D. W.; Saeed, B. I.; Mather, P. J.; Lazaridis, P. I.; Vieira, M. F. Q.; Atkinson, R. C.; Tachtatzis, C.; Garcia, M. S.; Judd, M. D.; Glover, I. A.

    2018-03-01

    Monitoring of partial discharge (PD) activity within high-voltage electrical environments is increasingly used for the assessment of insulation condition. Traditional measurement techniques employ technologies that either require off-line installation or have high power consumption and are hence costly. A wireless sensor network is proposed that utilizes only received signal strength to locate areas of PD activity within a high-voltage electricity substation. The network comprises low-power and low-cost radiometric sensor nodes which receive the radiation propagated from a source of PD. Results are reported from several empirical tests performed within a large indoor environment and a substation environment using a network of nine sensor nodes. A portable PD source emulator was placed at multiple locations within the network. Signal strength measured by the nodes is reported via WirelessHART to a data collection hub where it is processed using a location algorithm. The results obtained place the measured location within 2 m of the actual source location.

  18. EMRECORD v. 1.0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aldridge, David F.

    2016-07-06

    Program EMRECORD is a utility program designed to facilitate introduction of a 3D electromagnetic (EM) data acquisition configuration (or a “source-receiver recording geometry”) into EM forward modeling algorithms EMHOLE and FDEM. A precise description of the locations (in 3D space), orientations, types, and amplitudes/sensitivities, of all sources and receivers is an essential ingredient for forward modeling of EM wavefields.

  19. Imaging of heart acoustic based on the sub-space methods using a microphone array.

    PubMed

    Moghaddasi, Hanie; Almasganj, Farshad; Zoroufian, Arezoo

    2017-07-01

    Heart disease is one of the leading causes of death around the world. The phonocardiogram (PCG) is an important bio-signal which represents the acoustic activity of the heart, typically without any spatiotemporal information about the involved acoustic sources. The aim of this study is to analyze the PCG by employing a microphone array, with which the heart's internal sound sources can also be localized. In this paper, we propose a modality by which the locations of the active sources in the heart can be investigated over a cardiac cycle. A microphone array with six microphones is employed as the recording setup, placed on the human chest. The Group Delay MUSIC algorithm, a sub-space based localization method, is then used to estimate the location of the heart sources in different phases of the PCG. We achieved a 0.14 cm mean error for the sources of a first heart sound (S1) simulator and a 0.21 cm mean error for the sources of a second heart sound (S2) simulator with the Group Delay MUSIC algorithm. The acoustic maps created for human subjects show distinct patterns in the various phases of the cardiac cycle, such as the first and second heart sounds. Moreover, the estimated source locations for the heart valves match those obtained via 4-dimensional (4D) echocardiography applied to a real human case. Imaging of the heart's acoustic map presents a new way to characterize the acoustic properties of the cardiovascular system and disorders of the valves and could, in the future, be used as a new diagnostic tool.

  20. An adaptive Bayesian inference algorithm to estimate the parameters of a hazardous atmospheric release

    NASA Astrophysics Data System (ADS)

    Rajaona, Harizo; Septier, François; Armand, Patrick; Delignon, Yves; Olry, Christophe; Albergel, Armand; Moussafir, Jacques

    2015-12-01

    In the event of an accidental or intentional atmospheric release, the reconstruction of the source term using measurements from a set of sensors is an important and challenging inverse problem. A rapid and accurate estimation of the source allows faster and more efficient action by first-response teams, in addition to providing better damage assessment. This paper presents a Bayesian probabilistic approach to estimate the location and temporal emission profile of a pointwise source. The release rate is evaluated analytically using a Gaussian assumption on its prior distribution, enhanced with a positivity constraint to improve the estimation. The source location is obtained by means of an advanced iterative Monte Carlo technique called Adaptive Multiple Importance Sampling (AMIS), which recycles samples at each iteration to accelerate convergence. The proposed methodology is tested using synthetic and real concentration data in the framework of the Fusion Field Trials 2007 (FFT-07) experiment. The quality of the obtained results is comparable to that of the Markov Chain Monte Carlo (MCMC) algorithm, a popular Bayesian method used for source estimation. Moreover, the adaptive processing of AMIS provides better sampling efficiency by reusing all generated samples.
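
    The adaptive importance sampling idea, refit the proposal to the weighted samples and recycle past draws, can be sketched in a simplified form: a single Gaussian proposal rather than the full AMIS mixture reweighting, and a toy forward model in place of the dispersion model. All values below are illustrative.

        import numpy as np
        from scipy.stats import multivariate_normal as mvn

        # Toy forward model (assumed): concentration ~ q / (1 + distance^2);
        # a dispersion model would replace this in practice.
        sensors = np.array([[0.0, 0.0], [2.0, 0.0], [0.0, 2.0], [2.0, 2.0]])
        true_src, q, sigma = np.array([1.4, 0.6]), 5.0, 0.05

        def forward(src):
            return q / (1.0 + np.sum((sensors - src) ** 2, axis=1))

        obs = forward(true_src) + sigma * np.random.default_rng(7).standard_normal(4)
        log_post = lambda s: -0.5 * np.sum((obs - forward(s)) ** 2) / sigma ** 2

        rng = np.random.default_rng(0)
        mu, cov = np.zeros(2), 4.0 * np.eye(2)
        xs, lw = [], []
        for it in range(5):
            draw = rng.multivariate_normal(mu, cov, size=500)
            logw = np.array([log_post(s) for s in draw]) - mvn.logpdf(draw, mu, cov)
            xs.append(draw); lw.append(logw)
            allx, allw = np.vstack(xs), np.concatenate(lw)   # recycle all samples
            w = np.exp(allw - allw.max()); w /= w.sum()
            mu = w @ allx                                    # refit the proposal
            cov = (allx - mu).T @ ((allx - mu) * w[:, None]) + 1e-6 * np.eye(2)
        print("posterior mean source location:", mu)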

  1. Remote listening and passive acoustic detection in a 3-D environment

    NASA Astrophysics Data System (ADS)

    Barnhill, Colin

    Teleconferencing environments are a necessity in business, education, and personal communication. They allow for the communication of information to remote locations without the need for travel and the time and expense travel requires. Visual information can be communicated using cameras and monitors. The advantage of visual communication is that an image can capture multiple objects and convey them, using a monitor, to a large group of people regardless of the receiver's location. This is not the case for audio. Currently, most experimental teleconferencing systems' audio is based on stereo recording and reproduction techniques. The problem with this solution is that it is only effective for one or two receivers. Accurately capturing a sound environment consisting of multiple sources, and recreating it for a group of people, is an unsolved problem. This work focuses on new methods of multiple-source 3-D environment sound capture and applications using these captured environments. Using spherical microphone arrays, it is now possible to capture a true 3-D environment. A spherical harmonic transform on the array's surface allows us to determine the basis functions (spherical harmonics) for all spherical wave solutions (up to a fixed order). This spherical harmonic decomposition (SHD) allows us to examine not only the time and frequency characteristics of an audio signal but also its spatial characteristics. In this way, a spherical harmonic transform is analogous to a Fourier transform: a Fourier transform transforms a signal into the frequency domain, and a spherical harmonic transform transforms a signal into the spatial domain. The SHD also decouples the input signals from the microphone locations. Using the SHD of a soundfield, new algorithms are available for remote listening, acoustic detection, and signal enhancement. The new algorithms presented in this paper show distinct advantages over previous detection and listening algorithms, especially for multiple speech sources and room environments. The algorithms use high-order (spherical harmonic) beamforming and power signal characteristics for source localization and signal enhancement. These methods are applied to remote listening, surveillance, and teleconferencing.

  2. Two-microphone spatial filtering provides speech reception benefits for cochlear implant users in difficult acoustic environments

    PubMed Central

    Goldsworthy, Raymond L.; Delhorne, Lorraine A.; Desloge, Joseph G.; Braida, Louis D.

    2014-01-01

    This article introduces and provides an assessment of a spatial-filtering algorithm based on two closely-spaced (∼1 cm) microphones in a behind-the-ear shell. The evaluated spatial-filtering algorithm used fast (∼10 ms) temporal-spectral analysis to determine the location of incoming sounds and to enhance sounds arriving from straight ahead of the listener. Speech reception thresholds (SRTs) were measured for eight cochlear implant (CI) users using consonant and vowel materials under three processing conditions: An omni-directional response, a dipole-directional response, and the spatial-filtering algorithm. The background noise condition used three simultaneous time-reversed speech signals as interferers located at 90°, 180°, and 270°. Results indicated that the spatial-filtering algorithm can provide speech reception benefits of 5.8 to 10.7 dB SRT compared to an omni-directional response in a reverberant room with multiple noise sources. Given the observed SRT benefits, coupled with an efficient design, the proposed algorithm is promising as a CI noise-reduction solution. PMID:25096120

  3. Impact localization in dispersive waveguides based on energy-attenuation of waves with the traveled distance

    NASA Astrophysics Data System (ADS)

    Alajlouni, Sa'ed; Albakri, Mohammad; Tarazaga, Pablo

    2018-05-01

    An algorithm is introduced to solve the general multilateration (source localization) problem in a dispersive waveguide. The algorithm is designed with the intention of localizing impact forces on a dispersive floor, and can potentially be used to localize and track occupants in a building using vibration sensors attached to the underside of the walking surface. The lower the wave frequencies generated by the impact force, the more accurate the localization is expected to be. An impact force acting on a floor generates a seismic wave that is distorted as it travels away from the source. This distortion is noticeable even over relatively short distances and is caused mainly by dispersion, so conventional localization/multilateration methods produce localization errors that are highly variable and occasionally large. The proposed approach is based on the fact that the wave's energy, calculated over some time window, decays exponentially as the wave travels away from the source. Although localization methods that assume exponential decay exist in the wireless communications literature, they have only been considered for wave propagation in non-dispersive media and carry the limiting assumption that the source must not coincide with a sensor location. As a result, these methods cannot be applied to the indoor localization problem in their current form. We show how our proposed method differs from these methods and how it overcomes the source-sensor coincidence limitation. Theoretical analysis and experimental data are used to motivate and justify the proposed approach for localization in a dispersive medium. Additionally, hammer impacts on an instrumented floor section inside an operational building, as well as finite element model simulations, are used to evaluate the performance of the algorithm. The algorithm produces promising results, providing a foundation for further development and optimization.
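
    Under the assumed model E_i = A * exp(-k * d_i), the log-energies are linear in the source-sensor distances, so a candidate location can be scored by how well a straight line fits log E against distance; the unknown A and k are absorbed by the fit, and a candidate may coincide with a sensor. A minimal grid-search sketch with synthetic values (not the authors' floor data):

        import numpy as np

        # Assumed model: windowed energy E_i = A * exp(-k * d_i) at sensor i,
        # so log E_i is linear in the source-sensor distance d_i.
        rng = np.random.default_rng(1)
        sensors = np.array([[0, 0], [10, 0], [10, 8], [0, 8], [5, 4]], dtype=float)
        true_src, A, k = np.array([3.0, 6.0]), 1.0, 0.35
        d_true = np.linalg.norm(sensors - true_src, axis=1)
        E = A * np.exp(-k * d_true) * (1 + 0.05 * rng.standard_normal(5))

        def misfit(src):
            """Residual of a straight-line fit of log-energy vs. distance;
            the fit absorbs the unknown A and k."""
            d = np.linalg.norm(sensors - src, axis=1)
            coef = np.polyfit(d, np.log(E), 1)   # slope ~ -k, intercept ~ log A
            return np.sum((np.log(E) - np.polyval(coef, d)) ** 2)

        gx, gy = np.meshgrid(np.linspace(0, 10, 101), np.linspace(0, 8, 81))
        cand = np.column_stack([gx.ravel(), gy.ravel()])
        best = cand[np.argmin([misfit(c) for c in cand])]
        print("estimated impact location:", best, "true:", true_src)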

  4. Improved bioluminescence and fluorescence reconstruction algorithms using diffuse optical tomography, normalized data, and optimized selection of the permissible source region

    PubMed Central

    Naser, Mohamed A.; Patterson, Michael S.

    2011-01-01

    Reconstruction algorithms are presented for two-step solutions of the bioluminescence tomography (BLT) and fluorescence tomography (FT) problems. In the first step, a continuous wave (cw) diffuse optical tomography (DOT) algorithm is used to reconstruct the tissue optical properties, assuming known anatomical information provided by x-ray computed tomography or other methods. Minimization problems are formed based on L1-norm objective functions, where normalized values of the light fluence rates and the corresponding Green's functions are used. An iterative minimization solution then shrinks the permissible region where sources are allowed by selecting points with higher probability of contributing to the source distribution. Throughout this process the permissible region shrinks from the entire object to just a few points. The optimal reconstructed bioluminescence and fluorescence distributions are chosen as the results of the iteration corresponding to the permissible region where the objective function has its global minimum. This provides efficient BLT and FT reconstruction algorithms without the need for a priori information about the bioluminescence sources or the fluorophore concentration. Multiple small sources and large distributed sources can be reconstructed with good accuracy for the location and the total source power for BLT, and for the total number of fluorophore molecules for FT. For non-uniform distributed sources, the size and magnitude become degenerate because of the degrees of freedom available to possible solutions. However, increasing the number of data points by increasing the number of excitation sources can improve reconstruction accuracy for non-uniform fluorophore distributions. PMID:21326647
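
    The permissible-region idea can be sketched with a linearized stand-in model: solve a nonnegative least-squares problem on the currently allowed voxels, then keep only the strongest ones and repeat. The snippet below uses a random matrix in place of the DOT-derived Green's-function matrix, so it illustrates only the shrinking loop, not the paper's reconstruction.

        import numpy as np
        from scipy.optimize import nnls

        # Stand-in linear model: surface measurements y = G @ s with s >= 0;
        # G is random here, where the paper would use Green's functions.
        rng = np.random.default_rng(2)
        n_det, n_vox = 40, 200
        G = rng.random((n_det, n_vox))
        s_true = np.zeros(n_vox)
        s_true[[30, 31, 120]] = [1.0, 0.8, 1.5]
        y = G @ s_true

        # Iteratively shrink the permissible region: solve nonnegative least
        # squares on the allowed voxels, keep the strongest half, repeat.
        allowed = np.arange(n_vox)
        for it in range(6):
            s, _ = nnls(G[:, allowed], y)
            order = np.argsort(s)[::-1]
            allowed = allowed[order[: max(3, len(allowed) // 2)]]
        print("final permissible voxels:", np.sort(allowed))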

  5. Noise reduction technologies implemented in head-worn preprocessors for improving cochlear implant performance in reverberant noise fields.

    PubMed

    Chung, King; Nelson, Lance; Teske, Melissa

    2012-09-01

    The purpose of this study was to investigate whether a multichannel adaptive directional microphone and a modulation-based noise reduction algorithm could enhance cochlear implant performance in reverberant noise fields. A hearing aid was modified to output electrical signals (ePreprocessor), and a cochlear implant speech processor was modified to receive electrical signals (eProcessor). The ePreprocessor was programmed for a flat frequency response and linear amplification. Cochlear implant listeners wore the ePreprocessor-eProcessor system in three reverberant noise fields: 1) one noise source with variable locations; 2) three noise sources with variable locations; and 3) eight evenly spaced noise sources from 0° to 360°. Listeners' speech recognition scores were tested when the ePreprocessor was programmed as an omnidirectional microphone (OMNI), an omnidirectional microphone plus noise reduction algorithm (OMNI + NR), and an adaptive directional microphone plus noise reduction algorithm (ADM + NR). They were also tested with their own cochlear implant speech processor (CI_OMNI) in the three noise fields. Additionally, listeners rated overall sound quality preferences on recordings made in the noise fields. Results indicated that ADM + NR produced the highest speech recognition scores and the most preferred ratings in all noise fields. Factors requiring attention in the hearing aid-cochlear implant integration process are discussed.

  6. Deformation of Copahue volcano: Inversion of InSAR data using a genetic algorithm

    NASA Astrophysics Data System (ADS)

    Velez, Maria Laura; Euillades, Pablo; Caselli, Alberto; Blanco, Mauro; Díaz, Jose Martínez

    2011-04-01

    The Copahue volcano is one of the most active volcanoes in Argentina, with eruptions reported as recently as 1992, 1995, and 2000. A deformation analysis using the Differential Synthetic Aperture Radar (DInSAR) technique was performed on the Copahue-Caviahue Volcanic Complex (CCVC) from Envisat radar images between 2002 and 2007. A deformation rate of approximately 2 cm/yr was calculated, located mostly on the north-eastern flank of Copahue volcano and assumed to be constant during the period of the interferograms. The geometry of the source responsible for the deformation was evaluated from an inversion of the mean velocity deformation measurements using two different models based on pressure sources embedded in an elastic homogeneous half-space. A genetic algorithm was applied as an optimization tool to find the best-fit source. Results from the inverse modelling indicate that a source located beneath the volcanic edifice at a mean depth of 4 km is producing a volume change of approximately 0.0015 km3/yr. This source was analysed in light of the available studies of the area, and a conceptual model of the volcanic-hydrothermal system was designed. The source of deformation is related to a depressurisation of the system that results from the release of magmatic fluids across the boundary between the brittle and plastic domains. These leakages are considered responsible for the weak phreatic eruptions recently registered at the Copahue volcano.

  7. A spatial model to aggregate point-source and nonpoint-source water-quality data for large areas

    USGS Publications Warehouse

    White, D.A.; Smith, R.A.; Price, C.V.; Alexander, R.B.; Robinson, K.W.

    1992-01-01

    More objective and consistent methods are needed to assess water quality over large areas. A spatial model, one that capitalizes on the topologic relationships among spatial entities, is described for aggregating pollution sources from upstream drainage areas; it can be implemented on land surfaces having heterogeneous water-pollution effects. An infrastructure of stream networks and drainage basins, derived from 1:250,000-scale digital elevation models, defines the hydrologic system in this spatial model. The spatial relationships between point- and nonpoint-pollution sources and measurement locations are referenced to the hydrologic infrastructure with the aid of a geographic information system. A maximum-branching algorithm has been developed to simulate the effects of distance from a pollutant source to an arbitrary downstream location, a function traditionally employed in deterministic water-quality models.
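
    The upstream aggregation can be sketched on a toy reach network, processing reaches in upstream-to-downstream (topological) order, with an assumed per-reach attenuation factor standing in for the distance function:

        from collections import defaultdict

        # Hypothetical stream network as a directed acyclic graph: each reach
        # drains to exactly one downstream reach (None at the outlet).
        downstream = {"A": "C", "B": "C", "C": "E", "D": "E", "E": None}
        local_load = {"A": 2.0, "B": 1.0, "C": 0.5, "D": 3.0, "E": 0.0}  # kg/day
        decay = 0.8   # assumed fraction surviving each reach-to-reach transfer

        # Accumulate attenuated loads from all upstream sources, visiting each
        # reach only after all of its upstream contributors are finished.
        indeg = defaultdict(int)
        for r, d in downstream.items():
            if d is not None:
                indeg[d] += 1
        order = [r for r in downstream if indeg[r] == 0]
        accum = dict(local_load)
        for r in order:
            d = downstream[r]
            if d is None:
                continue
            accum[d] += decay * accum[r]   # deliver attenuated upstream load
            indeg[d] -= 1
            if indeg[d] == 0:
                order.append(d)
        print(accum)   # aggregated load at each reach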

  8. New approaches for automatic three-dimensional source localization of acoustic emissions--Applications to concrete specimens.

    PubMed

    Kurz, Jochen H

    2015-12-01

    The task of locating a source in space by measuring travel time differences of elastic or electromagnetic waves from the source to several sensors is evident in varying fields. The new concepts of automatic acoustic emission localization presented in this article are based on developments from geodesy and seismology. A detailed description of source location determination in space is given, with the focus on acoustic emission data from concrete specimens. Direct and iterative solvers are compared. A concept based on direct solvers from geodesy, extended by a statistical approach, is described which allows a stable source location determination even for partly erroneous onset times. The developed approach is validated with acoustic emission data from a large specimen leading to travel paths of up to 1 m and therefore to noisy data with errors in the determined onsets. The adaptation of the algorithms from geodesy to the localization procedure for sources of elastic waves offers new possibilities concerning the stability, automation and performance of localization results. Fracture processes can be assessed more accurately. Copyright © 2015 Elsevier B.V. All rights reserved.
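
    One family of direct (non-iterative) solvers of the kind compared above linearizes the arrival-time equations by subtracting a reference sensor's equation, so the source coordinates and origin time follow from a single least-squares solve. The sketch below assumes a homogeneous wave speed and noise-free onsets; it is a textbook baseline, not the article's statistically extended method.

      import numpy as np

      def locate_direct(sensors, t, v):
          # sensors: n x 3 positions; t: onset times; v: wave speed.
          # Subtracting the first sensor's equation makes the system
          # linear in the source position and the origin time tau.
          s0, t0 = sensors[0], t[0]
          A = np.hstack([2.0 * (sensors[1:] - s0),
                         -2.0 * v**2 * (t[1:] - t0)[:, None]])
          b = (np.sum(sensors[1:]**2, axis=1) - np.sum(s0**2)
               - v**2 * (t[1:]**2 - t0**2))
          m, *_ = np.linalg.lstsq(A, b, rcond=None)
          return m[:3], m[3]  # source coordinates, origin time

      # Synthetic check with six sensors on a hypothetical 1 m specimen:
      rng = np.random.default_rng(1)
      S = rng.uniform(0.0, 1.0, (6, 3))                 # sensor positions, m
      src, tau, v = np.array([0.6, 0.4, 0.2]), 0.01, 4000.0
      t = tau + np.linalg.norm(S - src, axis=1) / v
      print(locate_direct(S, t, v))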

  9. Application of genetic algorithm for the simultaneous identification of atmospheric pollution sources

    NASA Astrophysics Data System (ADS)

    Cantelli, A.; D'Orta, F.; Cattini, A.; Sebastianelli, F.; Cedola, L.

    2015-08-01

    A computational model is developed for retrieving the positions and the emission rates of unknown pollution sources, under steady-state conditions, starting from measurements of the concentration of the pollutants. The approach is based on the minimization of a fitness function employing a genetic algorithm paradigm. The model is tested considering both pollutant concentrations generated through a Gaussian model at 25 points in a 3-D test case domain (1000 m × 1000 m × 50 m) and experimental data, such as the data from the Prairie Grass field experiments, in which about 600 receptors were located along five concentric semicircle arcs, and from the Fusion Field Trials 2007. The results show that the computational model is capable of efficiently retrieving up to three different unknown sources.
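
    The fitness function minimized by such a genetic algorithm compares measured concentrations with those predicted by a dispersion model for a candidate set of sources. A minimal sketch with a steady-state Gaussian plume follows; the dispersion coefficients, wind speed and release-height defaults are placeholders rather than the study's settings.

      import numpy as np

      def gaussian_plume(q, xs, ys, x, y, z, u=3.0, H=10.0):
          # Concentration at receptors (x, y, z) from a source of rate q
          # at (xs, ys), release height H, wind u along +x (toy sigmas).
          dx, dy = x - xs, y - ys
          down = dx > 0.0                       # receptors downwind only
          sy = 0.08 * np.where(down, dx, 1.0)
          sz = 0.06 * np.where(down, dx, 1.0)
          c = (q / (2.0 * np.pi * u * sy * sz)
               * np.exp(-dy**2 / (2.0 * sy**2))
               * (np.exp(-(z - H)**2 / (2.0 * sz**2))
                  + np.exp(-(z + H)**2 / (2.0 * sz**2))))
          return np.where(down, c, 0.0)

      def fitness(sources, x, y, z, observed):
          # sources: list of (q, xs, ys) candidates encoded in a chromosome.
          model = sum(gaussian_plume(q, xs, ys, x, y, z) for q, xs, ys in sources)
          return np.sum((model - observed)**2)  # to be minimized by the GA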

  10. Bio-inspired UAV routing, source localization, and acoustic signature classification for persistent surveillance

    NASA Astrophysics Data System (ADS)

    Burman, Jerry; Hespanha, Joao; Madhow, Upamanyu; Pham, Tien

    2011-06-01

    A team consisting of Teledyne Scientific Company, the University of California at Santa Barbara and the Army Research Laboratory* is developing technologies in support of automated data exfiltration from heterogeneous battlefield sensor networks to enhance situational awareness for dismounts and command echelons. Unmanned aerial vehicles (UAV) provide an effective means to autonomously collect data from a sparse network of unattended ground sensors (UGSs) that cannot communicate with each other. UAVs are used to reduce the system reaction time by generating autonomous collection routes that are data-driven. Bio-inspired techniques for search provide a novel strategy to detect, capture and fuse data. A fast and accurate method has been developed to localize an event by fusing data from a sparse number of UGSs. This technique uses a bio-inspired algorithm based on chemotaxis or the motion of bacteria seeking nutrients in their environment. A unique acoustic event classification algorithm was also developed based on using swarm optimization. Additional studies addressed the problem of routing multiple UAVs, optimally placing sensors in the field and locating the source of gunfire at helicopters. A field test was conducted in November of 2009 at Camp Roberts, CA. The field test results showed that a system controlled by bio-inspired software algorithms can autonomously detect and locate the source of an acoustic event with very high accuracy and visually verify the event. In nine independent test runs of a UAV, the system autonomously located the position of an explosion nine times with an average accuracy of 3 meters. The time required to perform source localization using the UAV was on the order of a few minutes based on UAV flight times. In June 2011, additional field tests of the system will be performed and will include multiple acoustic events, optimal sensor placement based on acoustic phenomenology and the use of the International Technology Alliance (ITA) Sensor Network Fabric (IBM).

  11. Joint probabilistic determination of earthquake location and velocity structure: application to local and regional events

    NASA Astrophysics Data System (ADS)

    Beucler, E.; Haugmard, M.; Mocquet, A.

    2016-12-01

    The most widely used inversion schemes to locate earthquakes are based on iterative linearized least-squares algorithms and use a priori knowledge of the propagation medium. When only a small number of observations is available, for moderate events for instance, these methods may lead to large trade-offs between the outputs and both the velocity model and the initial set of hypocentral parameters. We present a joint structure-source determination approach using Bayesian inference. Monte Carlo continuous samplings, using Markov chains, generate models within a broad range of parameters, distributed according to the unknown posterior distributions. The non-linear exploration of both the seismic structure (velocity and thickness) and the source parameters relies on a fast forward problem using 1-D travel time computations. The a posteriori covariances between parameters (hypocentre depth, origin time and seismic structure, among others) are computed and explicitly documented. This method decreases the influence of the surrounding seismic network geometry (sparse and/or azimuthally inhomogeneous) and of an overly constrained velocity structure by inferring realistic distributions of the hypocentral parameters. Our algorithm is successfully used to accurately locate events in the Armorican Massif (western France), which is characterized by moderate and apparently diffuse local seismicity.
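
    A stripped-down version of such a Markov chain Monte Carlo exploration can be written in a few lines. The sketch below samples hypocentre coordinates, origin time and a single homogeneous velocity under a Gaussian misfit; the actual method uses a layered 1-D structure and a richer parameterization, so everything here (bounds, step sizes, noise level) is a simplified stand-in.

      import numpy as np

      rng = np.random.default_rng(0)

      def log_post(m, stations, t_obs, sigma=0.05):
          # m = (x, y, depth, tau, v); flat priors inside loose bounds.
          x, y, z, tau, v = m
          if not (0.0 < z < 30.0 and 2.0 < v < 8.0):
              return -np.inf
          d = np.sqrt((stations[:, 0] - x)**2 + (stations[:, 1] - y)**2 + z**2)
          r = t_obs - (tau + d / v)
          return -0.5 * np.sum(r**2) / sigma**2

      def metropolis(m0, stations, t_obs, n=50000,
                     step=(0.5, 0.5, 0.5, 0.05, 0.1)):
          m = np.asarray(m0, float)
          lp = log_post(m, stations, t_obs)
          chain = []
          for _ in range(n):
              cand = m + np.asarray(step) * rng.standard_normal(m.size)
              lpc = log_post(cand, stations, t_obs)
              if np.log(rng.random()) < lpc - lp:   # Metropolis acceptance
                  m, lp = cand, lpc
              chain.append(m.copy())
          return np.array(chain)  # marginals give hypocentral uncertainties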

  12. Localization of non-stationary sources of electromagnetic radiation with the aid of phasometry

    NASA Technical Reports Server (NTRS)

    Mersov, G. A.

    1978-01-01

    The possibility of localizing sources of electromagnetic radiation by measuring the time of passage of the radiation, or its phase, at various points in cosmic space where satellite observatories are located is examined. Algorithms are proposed for localization using two, three, and four astronomical observatories. The precision of the localization and several partial results of practical significance are deduced.

  13. Axial Cone-Beam Reconstruction by Weighted BPF/DBPF and Orthogonal Butterfly Filtering.

    PubMed

    Tang, Shaojie; Tang, Xiangyang

    2016-09-01

    The backprojection-filtration (BPF) and the derivative backprojection filtered (DBPF) algorithms, in which Hilbert filtering is the common algorithmic feature, were originally derived for exact helical reconstruction from cone-beam (CB) scan data and axial reconstruction from fan-beam data, respectively. These two algorithms can be heuristically extended for image reconstruction from axial CB scan data, but induce severe artifacts in images located away from the central plane determined by the circular source trajectory. We propose an algorithmic solution herein to eliminate the artifacts. The solution is an integration of the three-dimensional (3-D) weighted axial CB-BPF/DBPF algorithm with orthogonal butterfly filtering, namely axial CB-BPF/DBPF cascaded with orthogonal butterfly filtering. Using the computer-simulated Forbild head and thoracic phantoms, which are rigorous in inspecting reconstruction accuracy, and an anthropomorphic thoracic phantom with projection data acquired by a CT scanner, we evaluate the performance of the proposed algorithm. Preliminary results show that the orthogonal butterfly filtering can eliminate the severe streak artifacts existing at off-central planes in the images reconstructed by the 3-D weighted axial CB-BPF/DBPF algorithm. Integrated with orthogonal butterfly filtering, the 3-D weighted CB-BPF/DBPF algorithm can perform at least as well as the 3-D weighted CB-FBP algorithm in image reconstruction from axial CB scan data. The proposed 3-D weighted axial CB-BPF/DBPF cascaded with orthogonal butterfly filtering can be an algorithmic solution for CT imaging in extensive clinical and preclinical applications.

  14. Multi-Sensor Integration to Map Odor Distribution for the Detection of Chemical Sources.

    PubMed

    Gao, Xiang; Acar, Levent

    2016-07-04

    This paper addresses the problem of mapping odor distribution derived from a chemical source using multi-sensor integration and reasoning system design. Odor localization is the problem of finding the source of an odor or other volatile chemical. Most localization methods require a mobile vehicle to follow an odor plume along its entire path, which is time consuming and may be especially difficult in a cluttered environment. To address both of the above challenges, this paper proposes a novel algorithm that combines data from odor and anemometer sensors, and combines sensor data from different positions. Initially, a multi-sensor integration method, together with the path of airflow, is used to map the pattern of odor particle movement. Then, more sensors are introduced at specific regions to determine the probable location of the odor source. Finally, the results of an odor source location simulation and a real experiment are presented.

  15. Iterative Strategies for Aftershock Classification in Automatic Seismic Processing Pipelines

    NASA Astrophysics Data System (ADS)

    Gibbons, Steven J.; Kværna, Tormod; Harris, David B.; Dodge, Douglas A.

    2016-04-01

    Aftershock sequences following very large earthquakes present enormous challenges to near-real-time generation of seismic bulletins. The increase in analyst resources needed to relocate an inflated number of events is compounded by failures of phase association algorithms and a significant deterioration in the quality of underlying fully automatic event bulletins. Current processing pipelines were designed a generation ago and, due to computational limitations of the time, are usually limited to single passes over the raw data. With current processing capability, multiple passes over the data are feasible. Processing the raw data at each station currently generates parametric data streams which are then scanned by a phase association algorithm to form event hypotheses. We consider the scenario where a large earthquake has occurred and propose to define a region of likely aftershock activity in which events are detected and accurately located using a separate specially targeted semi-automatic process. This effort may focus on so-called pattern detectors, but here we demonstrate a more general grid search algorithm which may cover wider source regions without requiring waveform similarity. Given many well-located aftershocks within our source region, we may remove all associated phases from the original detection lists prior to a new iteration of the phase association algorithm. We provide a proof-of-concept example for the 2015 Gorkha sequence, Nepal, recorded on seismic arrays of the International Monitoring System. Even with very conservative conditions for defining event hypotheses within the aftershock source region, we can automatically remove over half of the original detections which could have been generated by Nepal earthquakes and reduce the likelihood of false associations and spurious event hypotheses. Further reductions in the number of detections in the parametric data streams are likely using correlation and subspace detectors and/or empirical matched field processing.

  16. Near-Field Noise Source Localization in the Presence of Interference

    NASA Astrophysics Data System (ADS)

    Liang, Guolong; Han, Bo

    In order to suppress the influence of interference sources on noise source localization in the near field, near-field broadband source localization in the presence of interference is studied. An oblique projection is constructed from the array measurements and the steering manifold of the interference sources, and is used to filter out the interference signals. The 2D-MUSIC algorithm is utilized to process the data at each frequency, and the results for each frequency are then averaged to achieve positioning of the broadband noise sources. The simulations show that this method suppresses the interference sources effectively and is capable of locating a source that lies in the same direction as the interference source.

  17. A new statistical PCA-ICA algorithm for location of R-peaks in ECG.

    PubMed

    Chawla, M P S; Verma, H K; Kumar, Vinod

    2008-09-16

    The success of ICA in separating independent components from a mixture depends on the properties of the electrocardiogram (ECG) recordings. This paper discusses some of the conditions of independent component analysis (ICA) that could affect the reliability of the separation and evaluates issues related to the properties of the signals and the number of sources. Principal component analysis (PCA) scatter plots are plotted to indicate the diagnostic features in the presence and absence of base-line wander when interpreting the ECG signals. In this analysis, a newly developed statistical algorithm by the authors, based on the use of combined PCA-ICA for two correlated channels of 12-channel ECG data, is proposed. The ICA technique has been successfully implemented in identifying and removing noise and artifacts from ECG signals. Cleaned ECG signals are obtained using statistical measures such as kurtosis and variance of variance after ICA processing. This paper also deals with the detection of QRS complexes in electrocardiograms using the combined PCA-ICA algorithm. The efficacy of the combined PCA-ICA algorithm lies in the fact that the location of the R-peaks is bounded from above and below by the location of the cross-over points, hence none of the peaks are ignored or missed.
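
    A compact illustration of such a PCA-ICA pipeline: whiten two correlated ECG channels, unmix them with FastICA, pick the spikiest (highest-kurtosis) component as the QRS source, and locate R-peaks on it. The thresholds and refractory window are assumptions for the sketch, not the authors' tuned values.

      import numpy as np
      from scipy.signal import find_peaks
      from scipy.stats import kurtosis
      from sklearn.decomposition import PCA, FastICA

      def locate_r_peaks(X, fs):
          # X: n_samples x 2 array of two correlated ECG channels;
          # fs: sampling rate in Hz.
          Xw = PCA(whiten=True).fit_transform(X)          # decorrelate
          S = FastICA(n_components=2, random_state=0).fit_transform(Xw)
          qrs = np.abs(S[:, np.argmax(kurtosis(S))])      # spiky component
          peaks, _ = find_peaks(qrs, distance=int(0.3 * fs),
                                height=3.0 * qrs.std())   # assumed threshold
          return peaks  # sample indices of the R-peaks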

  18. All-Direction Random Routing for Source-Location Privacy Protecting against Parasitic Sensor Networks.

    PubMed

    Wang, Na; Zeng, Jiwen

    2017-03-17

    Wireless sensor networks are deployed to monitor the surrounding physical environments and they also act as the physical environments of parasitic sensor networks, whose purpose is analyzing the contextual privacy and obtaining valuable information from the original wireless sensor networks. Recently, contextual privacy issues associated with wireless communication in open spaces have not been thoroughly addressed and one of the most important challenges is protecting the source locations of the valuable packages. In this paper, we design an all-direction random routing algorithm (ARR) for source-location protecting against parasitic sensor networks. For each package, the routing process of ARR is divided into three stages, i.e., selecting a proper agent node, delivering the package to the agent node from the source node, and sending it to the final destination from the agent node. In ARR, the agent nodes are randomly chosen in all directions by the source nodes using only local decisions, rather than knowing the whole topology of the networks. ARR can control the distributions of the routing paths in a very flexible way and it can guarantee that the routing paths with the same source and destination are totally different from each other. Therefore, it is extremely difficult for the parasitic sensor nodes to trace the packages back to the source nodes. Simulation results illustrate that ARR perfectly confuses the parasitic nodes and obviously outperforms traditional routing-based schemes in protecting source-location privacy, with a marginal increase in the communication overhead and energy consumption. In addition, ARR also requires much less energy than the cloud-based source-location privacy protection schemes.

  19. Optimal Siting and Sizing of Multiple DG Units for the Enhancement of Voltage Profile and Loss Minimization in Transmission Systems Using Nature Inspired Algorithms

    PubMed Central

    Ramamoorthy, Ambika; Ramachandran, Rajeswari

    2016-01-01

    Power grids are becoming smarter along with technological development. The benefits of a smart grid can be enhanced through the integration of renewable energy sources. In this paper, several studies have been made to reconfigure a conventional network into a smart grid. Amongst all the renewable sources, solar power takes the prominent position due to its availability in abundance. The proposed methodology presented in this paper is aimed at minimizing network power losses and at improving voltage stability within the framework of system operation and security constraints in a transmission system. Locations and capacities of DGs have a significant impact on system losses in a transmission system. In this paper, combined nature-inspired algorithms are presented for optimal location and sizing of DGs. This paper proposes a two-step optimization technique in order to integrate DG. In a first step, the best size of DG is determined through PSO metaheuristics, and the results obtained through PSO are tested for reverse power flow by a negative load approach to find possible bus locations. Then, the optimal location is found by Loss Sensitivity Factor (LSF) and weak (WK) bus methods and the results are compared. In a second step, optimal sizing of DGs is determined by PSO, GSA, and hybrid PSOGSA algorithms. Apart from optimal sizing and siting of DGs, different scenarios with the number of DGs (3, 4, and 5) and PQ capacities of DGs (P alone, Q alone, and both P and Q) are also analyzed and the results are discussed in this paper. A detailed performance analysis is carried out on the IEEE 30-bus system to demonstrate the effectiveness of the proposed methodology. PMID:27057557
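
    The PSO stage of such a two-step scheme reduces to the standard velocity-position update, with the loss function standing in for a power-flow evaluation of network losses for a candidate vector of DG sizes. The generic sketch below uses textbook inertia and acceleration constants, which are illustrative rather than the paper's settings.

      import numpy as np

      rng = np.random.default_rng(0)

      def pso(loss, lo, hi, n=30, iters=200, w=0.7, c1=1.5, c2=1.5):
          lo, hi = np.asarray(lo, float), np.asarray(hi, float)
          x = rng.uniform(lo, hi, (n, len(lo)))           # DG size candidates
          v = np.zeros_like(x)
          pbest, pval = x.copy(), np.array([loss(p) for p in x])
          g = pbest[np.argmin(pval)]
          for _ in range(iters):
              r1, r2 = rng.random((2, n, len(lo)))
              v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
              x = np.clip(x + v, lo, hi)
              f = np.array([loss(p) for p in x])
              better = f < pval
              pbest[better], pval[better] = x[better], f[better]
              g = pbest[np.argmin(pval)]
          return g, pval.min()

      # Placeholder loss standing in for a power-flow loss calculation:
      best, val = pso(lambda p: np.sum((p - 1.0)**2), np.zeros(3), 5.0 * np.ones(3))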

  20. Optimal Siting and Sizing of Multiple DG Units for the Enhancement of Voltage Profile and Loss Minimization in Transmission Systems Using Nature Inspired Algorithms.

    PubMed

    Ramamoorthy, Ambika; Ramachandran, Rajeswari

    2016-01-01

    Power grids are becoming smarter along with technological development. The benefits of a smart grid can be enhanced through the integration of renewable energy sources. In this paper, several studies have been made to reconfigure a conventional network into a smart grid. Amongst all the renewable sources, solar power takes the prominent position due to its availability in abundance. The proposed methodology presented in this paper is aimed at minimizing network power losses and at improving voltage stability within the framework of system operation and security constraints in a transmission system. Locations and capacities of DGs have a significant impact on system losses in a transmission system. In this paper, combined nature-inspired algorithms are presented for optimal location and sizing of DGs. This paper proposes a two-step optimization technique in order to integrate DG. In a first step, the best size of DG is determined through PSO metaheuristics, and the results obtained through PSO are tested for reverse power flow by a negative load approach to find possible bus locations. Then, the optimal location is found by Loss Sensitivity Factor (LSF) and weak (WK) bus methods and the results are compared. In a second step, optimal sizing of DGs is determined by PSO, GSA, and hybrid PSOGSA algorithms. Apart from optimal sizing and siting of DGs, different scenarios with the number of DGs (3, 4, and 5) and PQ capacities of DGs (P alone, Q alone, and both P and Q) are also analyzed and the results are discussed in this paper. A detailed performance analysis is carried out on the IEEE 30-bus system to demonstrate the effectiveness of the proposed methodology.

  1. Station Correction Uncertainty in Multiple Event Location Algorithms and the Effect on Error Ellipses

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Erickson, Jason P.; Carlson, Deborah K.; Ortiz, Anne

    Accurate location of seismic events is crucial for nuclear explosion monitoring. There are several sources of error in seismic location that must be taken into account to obtain high confidence results. Most location techniques account for uncertainties in the phase arrival times (measurement error) and the bias of the velocity model (model error), but they do not account for the uncertainty of the velocity model bias. By determining and incorporating this uncertainty in the location algorithm we seek to improve the accuracy of the calculated locations and uncertainty ellipses. In order to correct for deficiencies in the velocity model, it is necessary to apply station specific corrections to the predicted arrival times. Both master event and multiple event location techniques assume that the station corrections are known perfectly, when in reality there is an uncertainty associated with these corrections. For multiple event location algorithms that calculate station corrections as part of the inversion, it is possible to determine the variance of the corrections. The variance can then be used to weight the arrivals associated with each station, thereby giving more influence to stations with consistent corrections. We have modified an existing multiple event location program (based on PMEL, Pavlis and Booker, 1983). We are exploring weighting arrivals with the inverse of the station correction standard deviation as well as using the conditional probability of the calculated station corrections. This is in addition to the weighting already given to the measurement and modeling error terms. We re-locate a group of mining explosions that occurred at Black Thunder, Wyoming, and compare the results to those generated without accounting for station correction uncertainty.
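
    The weighting idea reduces to combining the measurement variance with the station-correction variance into per-arrival weights before the linearized location solve. A generic weighted least-squares step might look like the sketch below; the variable names are illustrative, and this is not the modified PMEL code itself.

      import numpy as np

      def weighted_location_step(G, r, sigma_meas, sigma_corr):
          # G: Jacobian of predicted arrival times w.r.t. hypocentral
          # parameters; r: arrival-time residuals. Arrivals from stations
          # with inconsistent corrections (large sigma_corr) are down-weighted.
          w = 1.0 / np.sqrt(sigma_meas**2 + sigma_corr**2)
          dm, *_ = np.linalg.lstsq(w[:, None] * G, w * r, rcond=None)
          return dm  # model update for the current iteration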

  2. Optimal Design for Placements of Tsunami Observing Systems to Accurately Characterize the Inducing Earthquake

    NASA Astrophysics Data System (ADS)

    Mulia, Iyan E.; Gusman, Aditya Riadi; Satake, Kenji

    2017-12-01

    Recently, numerous tsunami observation networks have been deployed in several major tsunamigenic regions. However, guidance on where to optimally place the measurement devices is limited. This study presents a methodological approach to select strategic observation locations for the purpose of tsunami source characterization, particularly in terms of the fault slip distribution. Initially, we identify favorable locations and determine the initial number of observations. These locations are selected based on the extrema of empirical orthogonal function (EOF) spatial modes. To further improve the accuracy, we apply an optimization algorithm called mesh adaptive direct search to remove redundant measurement locations from the EOF-generated points. We test the proposed approach using multiple hypothetical tsunami sources around the Nankai Trough, Japan. The results suggest that the optimized observation points can produce more accurate fault slip estimates with a considerably smaller number of observations than the existing tsunami observation networks.
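
    Selecting the initial observation points from EOF spatial modes can be sketched directly with an SVD over an ensemble of simulated tsunami fields; the grid geometry and mode count below are arbitrary assumptions, and the subsequent pruning by mesh adaptive direct search is omitted.

      import numpy as np

      def eof_candidate_sites(fields, n_modes=5):
          # fields: n_scenarios x n_gridpoints matrix of simulated tsunami
          # amplitudes. Returns grid indices at the extrema of the leading
          # EOF spatial modes as candidate observation locations.
          X = fields - fields.mean(axis=0)
          _, _, Vt = np.linalg.svd(X, full_matrices=False)
          sites = []
          for mode in Vt[:n_modes]:
              sites += [int(np.argmax(mode)), int(np.argmin(mode))]
          return sorted(set(sites))  # to be pruned by the optimizer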

  3. Acoustic emission source location in complex structures using full automatic delta T mapping technique

    NASA Astrophysics Data System (ADS)

    Al-Jumaili, Safaa Kh.; Pearson, Matthew R.; Holford, Karen M.; Eaton, Mark J.; Pullin, Rhys

    2016-05-01

    An easy to use, fast to apply, cost-effective, and very accurate non-destructive testing (NDT) technique for damage localisation in complex structures is key for the uptake of structural health monitoring systems (SHM). Acoustic emission (AE) is a viable technique that can be used for SHM and one of the most attractive features is the ability to locate AE sources. The time of arrival (TOA) technique is traditionally used to locate AE sources, and relies on the assumption of constant wave speed within the material and uninterrupted propagation path between the source and the sensor. In complex structural geometries and complex materials such as composites, this assumption is no longer valid. Delta T mapping was developed in Cardiff in order to overcome these limitations; this technique uses artificial sources on an area of interest to create training maps. These are used to locate subsequent AE sources. However operator expertise is required to select the best data from the training maps and to choose the correct parameter to locate the sources, which can be a time consuming process. This paper presents a new and improved fully automatic delta T mapping technique where a clustering algorithm is used to automatically identify and select the highly correlated events at each grid point whilst the "Minimum Difference" approach is used to determine the source location. This removes the requirement for operator expertise, saving time and preventing human errors. A thorough assessment is conducted to evaluate the performance and the robustness of the new technique. In the initial test, the results showed excellent reduction in running time as well as improved accuracy of locating AE sources, as a result of the automatic selection of the training data. Furthermore, because the process is performed automatically, this is now a very simple and reliable technique due to the prevention of the potential source of error related to manual manipulation.

  4. Gas Source Localization via Behaviour Based Mobile Robot and Weighted Arithmetic Mean

    NASA Astrophysics Data System (ADS)

    Yeon, Ahmad Shakaff Ali; Kamarudin, Kamarulzaman; Visvanathan, Retnam; Mamduh Syed Zakaria, Syed Muhammad; Zakaria, Ammar; Munirah Kamarudin, Latifah

    2018-03-01

    This work is concerned with the localization of a gas source in a dynamic indoor environment using a single mobile robot system. Algorithms such as Braitenberg, Zig-Zag and the combination of the two were implemented on the mobile robot as gas plume searching and tracing behaviours. To calculate the gas source location, a weighted arithmetic mean strategy was used. All experiments were done on an experimental testbed consisting of a large gas sensor array (LGSA) to monitor real-time gas concentration within the testbed. Ethanol gas was released within the testbed and the source location was marked using a pattern that can be tracked by a pattern tracking system. A pattern template was also mounted on the mobile robot to track its trajectory. Measurements taken by the mobile robot and the LGSA were then compared to verify the experiments. A combined total of 36.5 hours of real-time experimental runs was completed, and typical results from these experiments are presented in this paper. From the results, we obtained gas source localization errors between 0.4 m and 1.2 m from the real source location.
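
    The weighted arithmetic mean strategy itself is essentially a one-liner: positions visited by the robot are averaged with weights proportional to the gas concentrations measured there. The sketch below is a literal reading of that idea, not necessarily the authors' exact weighting.

      import numpy as np

      def weighted_source_estimate(positions, concentrations):
          # positions: n x 2 robot positions; concentrations: n readings.
          w = np.asarray(concentrations, float)
          return (w / w.sum()) @ np.asarray(positions, float)

      # Example: readings peak near (1, 0), so the estimate leans that way.
      print(weighted_source_estimate([[0, 0], [1, 0], [0, 1]], [0.1, 0.7, 0.2]))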

  5. Locating and Quantifying Broadband Fan Sources Using In-Duct Microphones

    NASA Technical Reports Server (NTRS)

    Dougherty, Robert P.; Walker, Bruce E.; Sutliff, Daniel L.

    2010-01-01

    In-duct beamforming techniques have been developed for locating broadband noise sources on a low-speed fan and quantifying the acoustic power in the inlet and aft fan ducts. The NASA Glenn Research Center's Advanced Noise Control Fan was used as a test bed. Several of the blades were modified to provide a broadband source to evaluate the efficacy of the in-duct beamforming technique. Phased arrays consisting of rings and line arrays of microphones were employed. For the imaging, the data were mathematically resampled in the frame of reference of the rotating fan. For both the imaging and power measurement steps, array steering vectors were computed using annular duct modal expansions, selected subsets of the cross spectral matrix elements were used, and the DAMAS and CLEAN-SC deconvolution algorithms were applied.

  6. Marine Controlled-Source Electromagnetic 2D Inversion for synthetic models.

    NASA Astrophysics Data System (ADS)

    Liu, Y.; Li, Y.

    2016-12-01

    We present a 2D inverse algorithm for frequency-domain marine controlled-source electromagnetic (CSEM) data, which is based on the regularized Gauss-Newton approach. As a forward solver, our parallel adaptive finite element forward modeling program is employed. It is a self-adaptive, goal-oriented grid refinement algorithm in which a finite element analysis is performed on a sequence of refined meshes. The mesh refinement process is guided by a dual error estimate weighting to bias refinement towards elements that affect the solution at the EM receiver locations. With the use of the direct solver (MUMPS), we can effectively compute the electromagnetic fields for multiple sources as well as the parametric sensitivities. We also implement the parallel data domain decomposition approach of Key and Ovall (2011), with the goal of being able to compute accurate responses in parallel for complicated models and a full suite of data parameters typical of offshore CSEM surveys. All minimizations are carried out using the Gauss-Newton algorithm, and model perturbations at each iteration step are obtained using the Inexact Conjugate Gradient iteration method. Synthetic test inversions are presented.

  7. Fractal Complexity-Based Feature Extraction Algorithm of Communication Signals

    NASA Astrophysics Data System (ADS)

    Wang, Hui; Li, Jingchao; Guo, Lili; Dou, Zheng; Lin, Yun; Zhou, Ruolin

    How to analyze and identify the characteristics of radiation sources and estimate the threat level by means of detection, interception and location has been the central issue of electronic support in electronic warfare, and communication signal recognition is one of the key points in solving this issue. Aiming at accurately extracting the individual characteristics of the radiation source in an increasingly complex communication electromagnetic environment, a novel feature extraction algorithm for the individual characteristics of communication radiation sources, based on the fractal complexity of the signal, is proposed. According to the complexity of the received signal and the environmental noise conditions, fractal dimension characteristics of different complexity are used to depict the subtle characteristics of the signal and to establish a characteristic database; different broadcasting stations are then identified using gray relation theory. The simulation results demonstrate that the algorithm can achieve a recognition rate of 94% even in an environment with an SNR of -10 dB, which provides an important theoretical basis for the accurate identification of the subtle features of signals at low SNR in the field of information confrontation.

  8. Location-allocation models and new solution methodologies in telecommunication networks

    NASA Astrophysics Data System (ADS)

    Dinu, S.; Ciucur, V.

    2016-08-01

    When designing a telecommunications network topology, three types of interdependent decisions are combined: location, allocation and routing, which are expressed by the following design considerations: how many interconnection devices - consolidation points/concentrators - should be used and where they should be located; how to allocate terminal nodes to concentrators; and how the voice, video or data traffic should be routed and what transmission links (capacitated or not) should be built into the network. Including these three components of the decision in a single model generates a problem whose complexity makes it difficult to solve. A first method to address the overall problem is the sequential one, whereby the first step deals with the location-allocation problem and, based on this solution, the subsequent sub-problem (routing the network traffic) is solved. The issue of location and allocation in a telecommunications network, called the capacitated concentrator location-allocation (CCLA) problem, is based on one of the general location models on a network in which the clients/demand nodes are the terminals and the facilities are the concentrators. As in a location model, each client node has a traffic demand that must be served, and the facilities can serve these demands within their capacity limit. In this study, the CCLA problem is modeled as a single-source capacitated location-allocation model whose optimization objective is to determine the minimum network cost, consisting of fixed costs for establishing the locations of concentrators, costs for operating concentrators and costs for allocating terminals to concentrators. The problem is known to be a difficult combinatorial optimization problem for which powerful algorithms are required. Our approach proposes a Fuzzy Genetic Algorithm combined with a local search procedure to calculate the optimal values of the location and allocation variables. To confirm the efficiency of the proposed algorithm with respect to the quality of solutions, test problems of significant size were considered: up to 100 terminal nodes and 50 concentrators on a 100 × 100 square grid. The performance of this hybrid intelligent algorithm was evaluated by measuring the quality of its solutions with respect to the following statistics: the standard deviation and the ratio of the best solution obtained.

  9. A novel multi-segment path analysis based on a heterogeneous velocity model for the localization of acoustic emission sources in complex propagation media.

    PubMed

    Gollob, Stephan; Kocur, Georg Karl; Schumacher, Thomas; Mhamdi, Lassaad; Vogel, Thomas

    2017-02-01

    In acoustic emission analysis, common source location algorithms assume, independently of the nature of the propagation medium, a straight (shortest) wave path between the source and the sensors. For heterogeneous media such as concrete, the wave travels in complex paths due to the interaction with the dissimilar material contents and with the possible geometrical and material irregularities present in these media. For instance, cracks and large air voids present in concrete influence significantly the way the wave travels, by causing wave path deviations. Neglecting these deviations by assuming straight paths can introduce significant errors to the source location results. In this paper, a novel source localization method called FastWay is proposed. It accounts, contrary to most available shortest path-based methods, for the different effects of material discontinuities (cracks and voids). FastWay, based on a heterogeneous velocity model, uses the fastest rather than the shortest travel paths between the source and each sensor. The method was evaluated both numerically and experimentally and the results from both evaluation tests show that, in general, FastWay was able to locate sources of acoustic emissions more accurately and reliably than the traditional source localization methods. Copyright © 2016 Elsevier B.V. All rights reserved.
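
    Fastest-path travel times through a heterogeneous medium can be computed with a shortest-time graph search over a gridded velocity model; cracks and voids become cells of very large slowness that the wave path must detour around. The 4-neighbour Dijkstra sketch below illustrates the principle only and is not the FastWay implementation.

      import heapq
      import numpy as np

      def travel_times(slowness, src):
          # slowness: 2-D grid of per-cell slowness (s per cell width);
          # src: (i, j) source cell. Returns fastest arrival time per cell.
          # Cracks/voids can be modelled as cells of very large slowness.
          ny, nx = slowness.shape
          t = np.full((ny, nx), np.inf)
          t[src] = 0.0
          pq = [(0.0, src)]
          while pq:
              ti, (i, j) = heapq.heappop(pq)
              if ti > t[i, j]:
                  continue
              for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                  u, v = i + di, j + dj
                  if 0 <= u < ny and 0 <= v < nx:
                      cost = 0.5 * (slowness[i, j] + slowness[u, v])
                      if ti + cost < t[u, v]:
                          t[u, v] = ti + cost
                          heapq.heappush(pq, (t[u, v], (u, v)))
          return t  # a grid search over candidate sources then compares
                    # these times against the observed sensor arrivals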

  10. Trust index based fault tolerant multiple event localization algorithm for WSNs.

    PubMed

    Xu, Xianghua; Gao, Xueyong; Wan, Jian; Xiong, Naixue

    2011-01-01

    This paper investigates the use of wireless sensor networks for multiple event source localization using binary information from the sensor nodes. The events could continually emit signals whose strength is attenuated inversely proportional to the distance from the source. In this context, faults occur due to various reasons and are manifested when a node reports a wrong decision. In order to reduce the impact of node faults on the accuracy of multiple event localization, we introduce a trust index model to evaluate the fidelity of the information which the nodes report and use in the event detection process, and propose the Trust Index based Subtract on Negative Add on Positive (TISNAP) localization algorithm, which reduces the impact of faulty nodes on the event localization by decreasing their trust index, to improve the accuracy of event localization and the performance of fault tolerance for multiple event source localization. The algorithm includes three phases: first, the sink identifies the cluster nodes to determine the number of events that occurred in the entire region by analyzing the binary data reported by all nodes; then, it constructs the likelihood matrix related to the cluster nodes and estimates the location of all events according to the alarmed status and trust index of the nodes around the cluster nodes. Finally, the sink updates the trust index of all nodes according to the fidelity of their information in the previous reporting cycle. The algorithm improves the accuracy of localization and the performance of fault tolerance in multiple event source localization. The experiment results show that when the probability of node fault is close to 50%, the algorithm can still accurately determine the number of events and achieves better localization accuracy than other algorithms.

  11. Trust Index Based Fault Tolerant Multiple Event Localization Algorithm for WSNs

    PubMed Central

    Xu, Xianghua; Gao, Xueyong; Wan, Jian; Xiong, Naixue

    2011-01-01

    This paper investigates the use of wireless sensor networks for multiple event source localization using binary information from the sensor nodes. The events could continually emit signals whose strength is attenuated inversely proportional to the distance from the source. In this context, faults occur due to various reasons and are manifested when a node reports a wrong decision. In order to reduce the impact of node faults on the accuracy of multiple event localization, we introduce a trust index model to evaluate the fidelity of the information which the nodes report and use in the event detection process, and propose the Trust Index based Subtract on Negative Add on Positive (TISNAP) localization algorithm, which reduces the impact of faulty nodes on the event localization by decreasing their trust index, to improve the accuracy of event localization and the performance of fault tolerance for multiple event source localization. The algorithm includes three phases: first, the sink identifies the cluster nodes to determine the number of events that occurred in the entire region by analyzing the binary data reported by all nodes; then, it constructs the likelihood matrix related to the cluster nodes and estimates the location of all events according to the alarmed status and trust index of the nodes around the cluster nodes. Finally, the sink updates the trust index of all nodes according to the fidelity of their information in the previous reporting cycle. The algorithm improves the accuracy of localization and the performance of fault tolerance in multiple event source localization. The experiment results show that when the probability of node fault is close to 50%, the algorithm can still accurately determine the number of events and achieves better localization accuracy than other algorithms. PMID:22163972

  12. Application of near real-time radial semblance to locate the shallow magmatic conduit at Kilauea Volcano, Hawaii

    USGS Publications Warehouse

    Dawson, P.; Whilldin, D.; Chouet, B.

    2004-01-01

    Radial Semblance is applied to broadband seismic network data to provide source locations of Very-Long-Period (VLP) seismic energy in near real time. With an efficient algorithm and adequate network coverage, accurate source locations of VLP energy are derived to quickly locate the shallow magmatic conduit system at Kilauea Volcano, Hawaii. During a restart in magma flow following a brief pause in the current eruption, the shallow magmatic conduit is pressurized, resulting in elastic radiation from various parts of the conduit system. A steeply dipping distribution of VLP hypocenters outlines a region extending from sea level to about 550 m elevation below and just east of the Halemaumau Pit Crater. The distinct hypocenters suggest the shallow plumbing system beneath Halemaumau consists of a complex plexus of sills and dikes. An unconstrained location for a section of the conduit is also observed beneath the region between Kilauea Caldera and Kilauea Iki Crater.

  13. Axial Cone Beam Reconstruction by Weighted BPF/DBPF and Orthogonal Butterfly Filtering

    PubMed Central

    Tang, Shaojie; Tang, Xiangyang

    2016-01-01

    Goal: The backprojection-filtration (BPF) and the derivative backprojection filtered (DBPF) algorithms, in which Hilbert filtering is the common algorithmic feature, were originally derived for exact helical reconstruction from cone beam (CB) scan data and axial reconstruction from fan beam data, respectively. These two algorithms can be heuristically extended for image reconstruction from axial CB scan data, but induce severe artifacts in images located away from the central plane determined by the circular source trajectory. We propose an algorithmic solution herein to eliminate the artifacts. Methods: The solution is an integration of the three-dimensional (3D) weighted axial CB-BPF/DBPF algorithm with orthogonal butterfly filtering, namely axial CB-BPF/DBPF cascaded with orthogonal butterfly filtering. Using the computer-simulated Forbild head and thoracic phantoms that are rigorous in inspecting reconstruction accuracy and an anthropomorphic thoracic phantom with projection data acquired by a CT scanner, we evaluate the performance of the proposed algorithm. Results: Preliminary results show that the orthogonal butterfly filtering can eliminate the severe streak artifacts existing at off-central planes in the images reconstructed by the 3D weighted axial CB-BPF/DBPF algorithm. Conclusion: Integrated with orthogonal butterfly filtering, the 3D weighted CB-BPF/DBPF algorithm can perform at least as well as the 3D weighted CB-FBP algorithm in image reconstruction from axial CB scan data. Significance: The proposed 3D weighted axial CB-BPF/DBPF cascaded with orthogonal butterfly filtering can be an algorithmic solution for CT imaging in extensive clinical and preclinical applications. PMID:26660512

  14. Location identification for indoor instantaneous point contaminant source by probability-based inverse Computational Fluid Dynamics modeling.

    PubMed

    Liu, X; Zhai, Z

    2008-02-01

    Indoor pollution jeopardizes human health and welfare and may even cause serious morbidity and mortality under extreme conditions. Effectively controlling and improving indoor environment quality requires immediate interpretation of pollutant sensor readings and accurate identification of indoor pollution history and source characteristics (e.g. source location and release time). This procedure is complicated by non-uniform and dynamic indoor contaminant dispersion behaviors as well as diverse sensor network distributions. This paper introduces a probability-based inverse modeling method that is able to identify the source location for an instantaneous point source placed in an enclosed environment with a known source release time. The study presents the mathematical models that address three different sensing scenarios: sensors without concentration readings, sensors with spatial concentration readings, and sensors with temporal concentration readings. The paper demonstrates the inverse modeling method and algorithm with two case studies: air pollution in an office space and in an aircraft cabin. The predictions were successfully verified against the forward simulation settings, indicating a good capability of the method in finding indoor pollutant sources. The research lays a solid ground for further study of the method for more complicated indoor contamination problems. The method developed can help track indoor contaminant source locations with limited sensor outputs. This will ensure effective and prompt execution of building control strategies and thus achieve a healthy and safe indoor environment. The method can also assist the design of optimal sensor networks.

  15. Seismic envelope-based detection and location of ground-coupled airwaves from volcanoes in Alaska

    USGS Publications Warehouse

    Fee, David; Haney, Matt; Matoza, Robin S.; Szuberla, Curt A.L.; Lyons, John; Waythomas, Christopher F.

    2016-01-01

    Volcanic explosions and other infrasonic sources frequently produce acoustic waves that are recorded by seismometers. Here we explore multiple techniques to detect, locate, and characterize ground‐coupled airwaves (GCA) on volcano seismic networks in Alaska. GCA waveforms are typically incoherent between stations, thus we use envelope‐based techniques in our analyses. For distant sources and planar waves, we use f‐k beamforming to estimate back azimuth and trace velocity parameters. For spherical waves originating within the network, we use two related time difference of arrival (TDOA) methods to detect and localize the source. We investigate a modified envelope function to enhance the signal‐to‐noise ratio and emphasize both high energies and energy contrasts within a spectrogram. We apply these methods to recent eruptions from Cleveland, Veniaminof, and Pavlof Volcanoes, Alaska. Array processing of GCA from Cleveland Volcano on 4 May 2013 produces robust detection and wave characterization. Our modified envelopes substantially improve the short‐term average/long‐term average ratios, enhancing explosion detection. We detect GCA within both the Veniaminof and Pavlof networks from the 2007 and 2013–2014 activity, indicating repeated volcanic explosions. Event clustering and forward modeling suggests that high‐resolution localization is possible for GCA on typical volcano seismic networks. These results indicate that GCA can be used to help detect, locate, characterize, and monitor volcanic eruptions, particularly in difficult‐to‐monitor regions. We have implemented these GCA detection algorithms into our operational volcano‐monitoring algorithms at the Alaska Volcano Observatory.

  16. Location error uncertainties - an advanced using of probabilistic inverse theory

    NASA Astrophysics Data System (ADS)

    Debski, Wojciech

    2016-04-01

    The spatial location of sources of seismic waves is one of the first tasks when transient waves from natural (uncontrolled) sources are analyzed in many branches of physics, including seismology and oceanology, to name a few. Source activity and its spatial variability in time, the geometry of the recording network, and the complexity and heterogeneity of the wave velocity distribution are all factors influencing the performance of location algorithms and the accuracy of the achieved results. While estimating the earthquake focus location is relatively simple, a quantitative estimation of the location accuracy is a truly challenging task, even if the probabilistic inverse method is used, because it requires knowledge of the statistics of observational, modelling, and a priori uncertainties. In this presentation we address this task for the situation in which the statistics of observational and/or modeling errors are unknown. This common situation requires the introduction of a priori constraints on the likelihood (misfit) function, which significantly influence the estimated errors. Based on the results of an analysis of 120 seismic events from the Rudna copper mine operating in southwestern Poland, we illustrate an approach based on an analysis of the Shannon entropy calculated for the a posteriori distribution. We show that this meta-characteristic of the a posteriori distribution carries some information on the uncertainties of the solution found.
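
    The Shannon entropy of a gridded a posteriori location distribution is straightforward to evaluate; low entropy indicates a sharply peaked (well-constrained) solution, high entropy a diffuse one. A minimal sketch, assuming the posterior has already been evaluated on a regular grid:

      import numpy as np

      def shannon_entropy(pdf, cell):
          # pdf: posterior values on a regular grid; cell: cell volume.
          p = pdf / (pdf.sum() * cell)       # normalize to a density
          nz = p > 0.0
          return -np.sum(p[nz] * np.log(p[nz])) * cell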

  17. All-Direction Random Routing for Source-Location Privacy Protecting against Parasitic Sensor Networks

    PubMed Central

    Wang, Na; Zeng, Jiwen

    2017-01-01

    Wireless sensor networks are deployed to monitor the surrounding physical environments and they also act as the physical environments of parasitic sensor networks, whose purpose is analyzing the contextual privacy and obtaining valuable information from the original wireless sensor networks. Recently, contextual privacy issues associated with wireless communication in open spaces have not been thoroughly addressed and one of the most important challenges is protecting the source locations of the valuable packages. In this paper, we design an all-direction random routing algorithm (ARR) for source-location protecting against parasitic sensor networks. For each package, the routing process of ARR is divided into three stages, i.e., selecting a proper agent node, delivering the package to the agent node from the source node, and sending it to the final destination from the agent node. In ARR, the agent nodes are randomly chosen in all directions by the source nodes using only local decisions, rather than knowing the whole topology of the networks. ARR can control the distributions of the routing paths in a very flexible way and it can guarantee that the routing paths with the same source and destination are totally different from each other. Therefore, it is extremely difficult for the parasitic sensor nodes to trace the packages back to the source nodes. Simulation results illustrate that ARR perfectly confuses the parasitic nodes and obviously outperforms traditional routing-based schemes in protecting source-location privacy, with a marginal increase in the communication overhead and energy consumption. In addition, ARR also requires much less energy than the cloud-based source-location privacy protection schemes. PMID:28304367

  18. Assimilation of concentration measurements for retrieving multiple point releases in atmosphere: A least-squares approach to inverse modelling

    NASA Astrophysics Data System (ADS)

    Singh, Sarvesh Kumar; Rani, Raj

    2015-10-01

    The study addresses the identification of multiple point sources, emitting the same tracer, from their limited set of merged concentration measurements. The identification, here, refers to the estimation of locations and strengths of a known number of simultaneous point releases. The source-receptor relationship is described in the framework of adjoint modelling by using an analytical Gaussian dispersion model. A least-squares minimization framework, free from an initialization of the release parameters (locations and strengths), is presented to estimate the release parameters. This utilizes the distributed source information observable from the given monitoring design and number of measurements. The technique leads to an exact retrieval of the true release parameters when measurements are noise free and exactly described by the dispersion model. The inversion algorithm is evaluated using the real data from multiple (two, three and four) releases conducted during Fusion Field Trials in September 2007 at Dugway Proving Ground, Utah. The release locations are retrieved, on average, within 25-45 m of the true sources with the distance from retrieved to true source ranging from 0 to 130 m. The release strengths are also estimated within a factor of three to the true release rates. The average deviations in retrieval of source locations are observed relatively large in two release trials in comparison to three and four release trials.
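
    For a trial set of source locations, the release strengths enter the concentration predictions linearly, so they can be retrieved by a (nonnegative) least-squares solve; scanning trial geometries for the smallest residual then yields the locations. The sketch below uses SciPy's nonnegative least squares as a stand-in for the paper's minimization framework; build_A is a hypothetical helper that evaluates the dispersion model.

      import numpy as np
      from scipy.optimize import nnls

      def retrieve_strengths(A, c_obs):
          # A[i, j]: modelled concentration at receptor i per unit release
          # from candidate source j (from the adjoint dispersion model).
          q, resid = nnls(A, c_obs)
          return q, resid  # release rates and residual norm

      def best_geometry(candidate_sets, c_obs, build_A):
          # Pick the trial source geometry whose linear strength fit
          # leaves the smallest residual norm.
          return min(candidate_sets, key=lambda locs: nnls(build_A(locs), c_obs)[1])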

  19. Toward morphological thoracic EIT: major signal sources correspond to respective organ locations in CT.

    PubMed

    Ferrario, Damien; Grychtol, Bartłomiej; Adler, Andy; Solà, Josep; Böhm, Stephan H; Bodenstein, Marc

    2012-11-01

    Lung and cardiovascular monitoring applications of electrical impedance tomography (EIT) require localization of relevant functional structures or organs of interest within the reconstructed images. We describe an algorithm for automatic detection of heart and lung regions in a time series of EIT images. Using EIT reconstruction based on anatomical models, candidate regions are identified in the frequency domain and image-based classification techniques applied. The algorithm was validated on a set of simultaneously recorded EIT and CT data in pigs. In all cases, identified regions in EIT images corresponded to those manually segmented in the matched CT image. Results demonstrate the ability of EIT technology to reconstruct relevant impedance changes at their anatomical locations, provided that information about the thoracic boundary shape (and electrode positions) are used for reconstruction.

  20. Algae Biofuels Co-Location Assessment Tool for Canada

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2011-11-29

    The Algae Biofuels Co-Location Assessment Tool for Canada uses chemical stoichiometry to estimate Nitrogen, Phosphorous, and Carbon atom availability from waste water and carbon dioxide emissions streams, and requirements for those same elements to produce a unit of algae. This information is then combined to find limiting nutrient information and estimate potential productivity associated with waste water and carbon dioxide sources. Output is visualized in terms of distributions or spatial locations. Distances are calculated between points of interest in the model using the great circle distance equation, and the smallest distances found by an exhaustive search and sort algorithm.

  1. An Optimal Image-Based Method for Identification of Acoustic Emission (AE) Sources in Plate-Like Structures Using a Lead Zirconium Titanate (PZT) Sensor Array.

    PubMed

    Yan, Gang; Zhou, Li

    2018-02-21

    This paper proposes an innovative method for identifying the locations of multiple simultaneous acoustic emission (AE) events in plate-like structures from the view of image processing. By using a linear lead zirconium titanate (PZT) sensor array to record the AE wave signals, a reverse-time frequency-wavenumber (f-k) migration is employed to produce images displaying the locations of AE sources by back-propagating the AE waves. Lamb wave theory is included in the f-k migration to consider the dispersive property of the AE waves. Since the exact occurrence time of the AE events is usually unknown when recording the AE wave signals, a heuristic artificial bee colony (ABC) algorithm combined with an optimal criterion using minimum Shannon entropy is used to find the image with the identified AE source locations and occurrence time that mostly approximate the actual ones. Experimental studies on an aluminum plate with AE events simulated by PZT actuators are performed to validate the applicability and effectiveness of the proposed optimal image-based AE source identification method.

  2. An Optimal Image-Based Method for Identification of Acoustic Emission (AE) Sources in Plate-Like Structures Using a Lead Zirconium Titanate (PZT) Sensor Array

    PubMed Central

    Yan, Gang; Zhou, Li

    2018-01-01

    This paper proposes an innovative method for identifying the locations of multiple simultaneous acoustic emission (AE) events in plate-like structures from the view of image processing. By using a linear lead zirconium titanate (PZT) sensor array to record the AE wave signals, a reverse-time frequency-wavenumber (f-k) migration is employed to produce images displaying the locations of AE sources by back-propagating the AE waves. Lamb wave theory is included in the f-k migration to consider the dispersive property of the AE waves. Since the exact occurrence time of the AE events is usually unknown when recording the AE wave signals, a heuristic artificial bee colony (ABC) algorithm combined with an optimal criterion using minimum Shannon entropy is used to find the image with the identified AE source locations and occurrence time that mostly approximate the actual ones. Experimental studies on an aluminum plate with AE events simulated by PZT actuators are performed to validate the applicability and effectiveness of the proposed optimal image-based AE source identification method. PMID:29466310

  3. Time-reversal imaging techniques applied to tremor waveforms near Cholame, California to locate tectonic tremor

    NASA Astrophysics Data System (ADS)

    Horstmann, T.; Harrington, R. M.; Cochran, E. S.

    2012-12-01

    Frequently, the lack of distinctive phase arrivals makes locating tectonic tremor more challenging than locating earthquakes. Classic location algorithms based on travel times cannot be directly applied because impulsive phase arrivals are often difficult to recognize. Traditional location algorithms are often modified to use phase arrivals identified from stacks of recurring low-frequency events (LFEs) observed within tremor episodes, rather than single events. Stacking the LFE waveforms improves the signal-to-noise ratio for the otherwise non-distinct phase arrivals. In this study, we apply a different method to locate tectonic tremor: a modified time-reversal imaging approach that potentially exploits the information from the entire tremor waveform instead of phase arrivals from individual LFEs. Time-reversal imaging uses the waveforms of a given seismic source recorded by multiple seismometers at discrete points on the surface and a 3D velocity model to rebroadcast the waveforms back into the medium to identify the seismic source location. In practice, the method works by reversing the seismograms recorded at each of the stations in time, and back-propagating them from the receiver location individually into the sub-surface as a new source time function. We use a staggered-grid, finite-difference code with 2.5 ms time steps and a grid node spacing of 50 m to compute the rebroadcast wavefield. We calculate the time-dependent curl field at each grid point of the model volume for each back-propagated seismogram. To locate the tremor, we assume that the source time function back-propagated from each individual station produces a similar curl field at the source position. We then cross-correlate the time-dependent curl field functions and calculate a median cross-correlation coefficient at each grid point. The highest median cross-correlation coefficient in the model volume is expected to represent the source location. For our analysis, we use the velocity model of Thurber et al. (2006) interpolated to a grid spacing of 50 m. Such grid spacing corresponds to frequencies of up to 8 Hz, which is suitable for calculating the wave propagation of tremor. Our dataset contains continuous broadband data from 13 STS-2 seismometers deployed from May 2010 to July 2011 along the Cholame segment of the San Andreas Fault as well as data from the HRSN and PBO networks. Initial synthetic results from tests on a 2D plane using a line of 15 receivers suggest that we are able to recover accurate event locations to within 100 m horizontally and 300 m in depth. We conduct additional synthetic tests to determine the influence of the signal-to-noise ratio, the number of stations used, and the uncertainty in the velocity model on the location result by adding noise to the seismograms and perturbations to the velocity model. Preliminary results show location accuracy to within 400 m with a median signal-to-noise ratio of 3.5 and 5% perturbations in the velocity model. The next steps will entail performing the synthetic tests on the 3D velocity model and applying the method to tremor waveforms. Furthermore, we will determine the spatial and temporal distribution of the source locations and compare our results to those of Sumy and others.

  4. Pinpointing the North Korea Nuclear tests with body waves scattered by surface topography

    NASA Astrophysics Data System (ADS)

    Wang, N.; Shen, Y.; Bao, X.; Flinders, A. F.

    2017-12-01

    On September 3, 2017, North Korea conducted its sixth and by far largest nuclear test at the Punggye-ri test site. In this work, we apply a novel full-wave location method that combines a non-linear grid-search algorithm with a 3D strain Green's tensor database to locate this event. We use the first arrivals (Pn waves) and their immediate codas, which are likely dominated by waves scattered by the surface topography near the source, to pinpoint the source location. We assess the solutions in the search volume using a least-squares misfit between the observed and synthetic waveforms, which are computed with the collocated-grid finite-difference method on curvilinear grids. We calculate the one-standard-deviation level of the 'best' solution as a posterior error estimate. Our results show that the waveform-based location method yields accurate solutions with a small number of stations. The solutions are absolute locations, as opposed to relative locations based on relative travel times, because topography-scattered waves depend on the geometric relation between the source and the unique topography near the source. Moreover, we use both differential waveforms and traveltimes to locate pairs of the North Korea tests of 2016 and 2017 to further reduce the effects of inaccuracies in the reference velocity model (CRUST 1.0). Finally, we compare our solutions with those of other studies based on satellite images and relative traveltimes.
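
    The grid-search core of such a method is compact. In the sketch below, `synthetic(loc)` is a hypothetical function returning the waveform predicted at a candidate grid node (standing in for the strain Green's tensor synthetics), and the returned misfit field is the quantity from which a posterior error estimate would be derived.

    ```python
    import numpy as np

    def locate_by_grid_search(candidates, observed, synthetic):
        """Least-squares waveform misfit over a list of candidate locations."""
        misfit = np.array([np.sum((observed - synthetic(loc)) ** 2)
                           for loc in candidates])
        best = int(np.argmin(misfit))
        return candidates[best], misfit   # misfit field supports error analysis
    ```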

  5. Error Analyses of the North Alabama Lightning Mapping Array (LMA)

    NASA Technical Reports Server (NTRS)

    Koshak, W. J.; Solokiewicz, R. J.; Blakeslee, R. J.; Goodman, S. J.; Christian, H. J.; Hall, J. M.; Bailey, J. C.; Krider, E. P.; Bateman, M. G.; Boccippio, D. J.

    2003-01-01

    Two approaches are used to characterize how accurately the North Alabama Lightning Mapping Array (LMA) is able to locate lightning VHF sources in space and time. The first method uses a Monte Carlo computer simulation to estimate source retrieval errors. The simulation applies a VHF source retrieval algorithm that was recently developed at NASA-MSFC and that is similar, but not identical, to the standard New Mexico Tech retrieval algorithm. The second method uses a purely theoretical technique (i.e., chi-squared Curvature Matrix theory) to estimate retrieval errors. Both methods assume that the LMA system has an overall rms timing error of 50 ns, but all other possible errors (e.g., multiple sources per retrieval attempt) are neglected. The detailed spatial distributions of retrieval errors are provided. Given that the two methods are completely independent of one another, it is shown that they provide remarkably similar results, except that the chi-squared theory produces larger altitude error estimates than the (more realistic) Monte Carlo simulation.
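
    A miniature version of the Monte Carlo error analysis is easy to reproduce. The sketch below perturbs exact arrival times with 50 ns rms Gaussian timing noise and re-runs a generic Gauss-Newton time-of-arrival retrieval; the solver, network geometry, and starting guess are illustrative assumptions, not the NASA-MSFC or New Mexico Tech algorithms.

    ```python
    import numpy as np

    C = 2.998e8  # propagation speed (m/s)

    def predict(src, stations):
        """Predicted arrival times for a source src = [x, y, z, t0]."""
        dist = np.linalg.norm(stations - src[:3], axis=1)
        return src[3] + dist / C

    def retrieve(stations, t_obs, guess, n_iter=25):
        """Generic Gauss-Newton retrieval of [x, y, z, t0] from arrival times."""
        src = np.asarray(guess, dtype=float).copy()
        for _ in range(n_iter):
            dist = np.linalg.norm(stations - src[:3], axis=1)
            J = np.empty((len(stations), 4))       # Jacobian of predicted times
            J[:, :3] = (src[:3] - stations) / (C * dist[:, None])
            J[:, 3] = 1.0
            r = t_obs - predict(src, stations)     # time residuals
            src += np.linalg.lstsq(J, r, rcond=None)[0]
        return src

    rng = np.random.default_rng(0)
    stations = rng.uniform(-20e3, 20e3, size=(10, 3))
    stations[:, 2] = 0.0                           # sensors on the ground
    true = np.array([5e3, -3e3, 8e3, 0.0])         # x, y, z (m), t0 (s)
    errors = []
    for _ in range(1000):                          # Monte Carlo trials
        t_noisy = predict(true, stations) + rng.normal(0.0, 50e-9, len(stations))
        est = retrieve(stations, t_noisy, guess=np.array([0.0, 0.0, 5e3, 0.0]))
        errors.append(est[:3] - true[:3])
    print("rms location error (m):", np.sqrt(np.mean(np.square(errors), axis=0)))
    ```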

  6. Earthquake source parameters from GPS-measured static displacements with potential for real-time application

    NASA Astrophysics Data System (ADS)

    O'Toole, Thomas B.; Valentine, Andrew P.; Woodhouse, John H.

    2013-01-01

    We describe a method for determining an optimal centroid-moment tensor solution of an earthquake from a set of static displacements measured using a network of Global Positioning System receivers. Using static displacements observed after the 4 April 2010, MW 7.2 El Mayor-Cucapah, Mexico, earthquake, we perform an iterative inversion to obtain the source mechanism and location that minimize the least-squares difference between data and synthetics. The efficiency of our algorithm for forward modeling static displacements in a layered elastic medium allows the inversion to be performed in real time on a single processor, without the need for precomputed libraries of excitation kernels; we present simulated real-time results for the El Mayor-Cucapah earthquake. The only a priori information that our inversion scheme needs is a crustal model and an approximate source location, so the method proposed here may represent an improvement on existing early warning approaches that rely on foreknowledge of fault locations and geometries.

  7. Nonnegative Matrix Factorization for identification of unknown number of sources emitting delayed signals

    PubMed Central

    Iliev, Filip L.; Stanev, Valentin G.; Vesselinov, Velimir V.

    2018-01-01

    Factor analysis is broadly used as a powerful unsupervised machine learning tool for reconstruction of hidden features in recorded mixtures of signals. In the case of a linear approximation, the mixtures can be decomposed by a variety of model-free Blind Source Separation (BSS) algorithms. Most of the available BSS algorithms consider an instantaneous mixing of signals, while the case when the mixtures are linear combinations of signals with delays is less explored. Especially difficult is the case when the number of sources of the delayed signals is unknown and has to be determined from the data as well. To address this problem, we present a new method based on Nonnegative Matrix Factorization (NMF) that is capable of identifying: (a) the unknown number of sources, (b) the delays and speed of propagation of the signals, and (c) the locations of the sources. Our method can be used to decompose records of mixtures of delayed signals emitted by an unknown number of sources in a nondispersive medium, based only on the recorded data. This is the case, for example, when electromagnetic signals from multiple antennas are received asynchronously, when mixtures of acoustic or seismic signals are recorded by sensors located at different positions, or when a shift in frequency is induced by the Doppler effect. By applying our method to synthetic datasets, we demonstrate its ability to identify the unknown number of sources as well as the waveforms, the delays, and the strengths of the signals. Using Bayesian analysis, we also evaluate estimation uncertainties and identify the region of likelihood where the positions of the sources can be found. PMID:29518126
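
    For orientation, the factorization underlying this family of methods can be sketched with plain multiplicative-update NMF plus a crude rank-selection loop; the delay and propagation-speed estimation that distinguishes the authors' method is deliberately not reproduced here.

    ```python
    import numpy as np

    def nmf(V, rank, n_iter=500, eps=1e-9, seed=0):
        """Multiplicative-update NMF (Frobenius norm): V ~= W @ H."""
        rng = np.random.default_rng(seed)
        W = rng.random((V.shape[0], rank))
        H = rng.random((rank, V.shape[1]))
        for _ in range(n_iter):
            H *= (W.T @ V) / (W.T @ W @ H + eps)
            W *= (V @ H.T) / (W @ H @ H.T + eps)
        return W, H

    def rank_scan(V, ranks):
        """Crude model selection: relative reconstruction error per rank; the
        'elbow' where the error stops improving suggests the source count."""
        errs = {}
        for r in ranks:
            W, H = nmf(V, r)
            errs[r] = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
        return errs
    ```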

  9. Sequential Monte Carlo Instant Radiosity.

    PubMed

    Hedman, Peter; Karras, Tero; Lehtinen, Jaakko

    2017-05-01

    Instant Radiosity and its derivatives are interactive methods for efficiently estimating global (indirect) illumination. They represent the last indirect bounce of illumination before the camera as the composite radiance field emitted by a set of virtual point light sources (VPLs). In complex scenes, current algorithms suffer from a difficult combination of two issues: it remains a challenge to distribute VPLs in a manner that simultaneously gives a high-quality indirect illumination solution for each frame, and to do so in a temporally coherent manner. We address both issues by building, and maintaining over time, an adaptive and temporally coherent distribution of VPLs in locations where they bring indirect light to the image. We introduce a novel heuristic sampling method that strives to only move as few of the VPLs between frames as possible. The result is, to the best of our knowledge, the first interactive global illumination algorithm that works in complex, highly-occluded scenes, suffers little from temporal flickering, supports moving cameras and light sources, and is output-sensitive in the sense that it places VPLs in locations that matter most to the final result.

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Blom, Philip Stephen; Marcillo, Omar Eduardo; Euler, Garrett Gene

    InfraPy is a Python-based analysis toolkit being developed at LANL. The algorithms are intended for ground-based nuclear detonation detection applications to detect, locate, and characterize explosive sources using infrasonic observations. The implementation is usable as a stand-alone Python library or as a command-line-driven tool operating directly on a database. With multiple scientists working on the project, we've begun using a LANL git repository for collaborative development and version control. Current and planned work on InfraPy focuses on the development of new algorithms and propagation models. Collaboration with Southern Methodist University (SMU) has helped identify bugs and limitations of the algorithms. Current usage development focuses on library imports and the CLI.

  11. An Energy-Efficient Target-Tracking Strategy for Mobile Sensor Networks.

    PubMed

    Mahboubi, Hamid; Masoudimansour, Walid; Aghdam, Amir G; Sayrafian-Pour, Kamran

    2017-02-01

    In this paper, an energy-efficient strategy is proposed for tracking a moving target in an environment with obstacles, using a network of mobile sensors. Typically, the most dominant sources of energy consumption in a mobile sensor network are sensing, communication, and movement. The proposed algorithm first divides the field into a grid of sufficiently small cells. The grid is then represented by a graph whose edges are properly weighted to reflect the energy consumption of sensors. The proposed technique searches for near-optimal locations for the sensors at different time instants to route information from the target to the destination, using a shortest-path algorithm. Simulations confirm the efficacy of the proposed algorithm.
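
    The routing step described here is a standard shortest-path computation over the weighted grid graph. A minimal Dijkstra sketch follows; the graph encoding and the edge weights are placeholders for the paper's energy model.

    ```python
    import heapq

    def dijkstra(adj, start, goal):
        """adj[u] -> {v: edge_energy_cost}; returns (path, total_cost)."""
        dist = {start: 0.0}
        prev = {}
        pq = [(0.0, start)]
        while pq:
            d, u = heapq.heappop(pq)
            if u == goal:
                break
            if d > dist.get(u, float("inf")):
                continue                      # stale queue entry
            for v, w in adj[u].items():
                nd = d + w
                if nd < dist.get(v, float("inf")):
                    dist[v], prev[v] = nd, u
                    heapq.heappush(pq, (nd, v))
        path = [goal]                         # walk predecessors back to start
        while path[-1] != start:
            path.append(prev[path[-1]])
        return path[::-1], dist[goal]

    # toy 2x2 grid of cells labelled by (row, col)
    adj = {(0, 0): {(0, 1): 1.0, (1, 0): 2.5},
           (0, 1): {(0, 0): 1.0, (1, 1): 1.2},
           (1, 0): {(0, 0): 2.5, (1, 1): 1.0},
           (1, 1): {(0, 1): 1.2, (1, 0): 1.0}}
    print(dijkstra(adj, (0, 0), (1, 1)))      # ([(0,0),(0,1),(1,1)], 2.2)
    ```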

  12. Using meta-information of a posteriori Bayesian solutions of the hypocentre location task for improving accuracy of location error estimation

    NASA Astrophysics Data System (ADS)

    Debski, Wojciech

    2015-06-01

    The spatial location of sources of seismic waves is one of the first tasks when transient waves from natural (uncontrolled) sources are analysed in many branches of physics, including seismology and oceanology, to name a few. Source activity and its spatial variability in time, the geometry of the recording network, and the complexity and heterogeneity of the wave velocity distribution are all factors influencing the performance of location algorithms and the accuracy of the achieved results. Although estimating the location of earthquake foci is relatively simple, quantitative estimation of the location accuracy is a genuinely challenging task, even when a probabilistic inverse method is used, because it requires knowledge of the statistics of observational, modelling and a priori uncertainties. In this paper, we address this task for the common situation in which the statistics of observational and/or modelling errors are unknown. This situation requires the introduction of a priori constraints on the likelihood (misfit) function, which significantly influence the estimated errors. Based on the results of an analysis of 120 seismic events from the Rudna copper mine operating in southwestern Poland, we propose an approach based on the Shannon entropy of the a posteriori distribution. We show that this meta-characteristic of the a posteriori distribution carries information on the uncertainties of the solution found.

  13. Near real-time estimation of the seismic source parameters in a compressed domain

    NASA Astrophysics Data System (ADS)

    Rodriguez, Ismael A. Vera

    Seismic events can be characterized by their origin time, location and moment tensor. Fast estimates of these source parameters are important in areas of geophysics like earthquake seismology and the monitoring of seismic activity produced by volcanoes, mining operations and hydraulic injections in geothermal and oil and gas reservoirs. Most available monitoring systems estimate the source parameters in a sequential procedure: first determining origin time and location (e.g., epicentre, hypocentre or centroid of the stress glut density), and then using this information to initialize the evaluation of the moment tensor. A more efficient estimation of the source parameters requires a concurrent evaluation of the three variables. The main objective of the present thesis is to address the simultaneous estimation of origin time, location and moment tensor of seismic events. The proposed method has the benefits of being: 1) automatic, 2) continuous and, 3) depending on the scale of application, capable of providing results in real time or near real time. The inversion algorithm is based on theoretical results from sparse representation theory and compressive sensing. The feasibility of implementation is determined through the analysis of synthetic and real data examples. The numerical experiments focus on the microseismic monitoring of hydraulic fractures in oil and gas wells; however, an example using real earthquake data is also presented for validation. The thesis is complemented with a resolvability analysis of the moment tensor, targeting common monitoring geometries employed in hydraulic fracturing in oil wells. Additionally, an application of sparse representation theory to the denoising of one-component and three-component microseismicity records is presented, together with an algorithm for improved automatic time-picking using non-linear inversion constraints.

  14. A fast and accurate frequency estimation algorithm for sinusoidal signal with harmonic components

    NASA Astrophysics Data System (ADS)

    Hu, Jinghua; Pan, Mengchun; Zeng, Zhidun; Hu, Jiafei; Chen, Dixiang; Tian, Wugang; Zhao, Jianqiang; Du, Qingfa

    2016-10-01

    Frequency estimation is a fundamental problem in many applications, such as traditional vibration measurement, power system supervision, and microelectromechanical system sensor control. In this paper, a fast and accurate frequency estimation algorithm is proposed to address the low efficiency of traditional methods. The proposed algorithm consists of coarse and fine frequency estimation steps, and we demonstrate that applying a modified zero-crossing technique achieves the coarse frequency estimate (locating the peak of the FFT amplitude spectrum) more efficiently than conventional searching methods. The proposed estimation algorithm therefore requires fewer hardware and software resources and achieves even higher efficiency as the amount of experimental data increases. Experimental results with a modulated magnetic signal show that the root mean square error of frequency estimation is below 0.032 Hz with the proposed algorithm, which has lower computational complexity and better global performance than conventional frequency estimation methods.
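
    The two-step idea lends itself to a textbook sketch: a zero-crossing count supplies the coarse estimate, and a three-point parabolic interpolation of the FFT peak refines it. This is a generic illustration, not the authors' modified zero-crossing algorithm, and the test signal is invented.

    ```python
    import numpy as np

    def coarse_zero_crossing(x, fs):
        """Coarse frequency from the zero-crossing count of the signal."""
        signs = np.signbit(x - x.mean())
        crossings = np.count_nonzero(signs[1:] != signs[:-1])
        return 0.5 * crossings * fs / len(x)

    def fine_fft_parabolic(x, fs):
        """Fine frequency via parabolic interpolation around the FFT peak."""
        X = np.abs(np.fft.rfft(x * np.hanning(len(x))))
        k = int(np.argmax(X[1:-1])) + 1
        denom = X[k - 1] - 2 * X[k] + X[k + 1]
        delta = 0.5 * (X[k - 1] - X[k + 1]) / denom
        return (k + delta) * fs / len(x)

    fs, f0 = 1000.0, 123.4
    t = np.arange(4096) / fs
    x = np.sin(2 * np.pi * f0 * t) + 0.3 * np.sin(2 * np.pi * 3 * f0 * t)
    print(coarse_zero_crossing(x, fs), fine_fft_parabolic(x, fs))
    ```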

  15. Open-source sea ice drift algorithm for Sentinel-1 SAR imagery using a combination of feature-tracking and pattern-matching

    NASA Astrophysics Data System (ADS)

    Muckenhuber, Stefan; Sandven, Stein

    2017-04-01

    An open-source sea ice drift algorithm for Sentinel-1 SAR imagery is introduced based on the combination of feature tracking and pattern matching. A computationally efficient feature-tracking algorithm produces an initial drift estimate and limits the search area for the pattern matching, which provides small- to medium-scale drift adjustments and normalised cross-correlation values as a quality measure. The algorithm is designed to exploit the respective advantages of the two approaches and allows drift calculation at user-defined locations. The pre-processing of the Sentinel-1 data has been optimised to retrieve a feature distribution that depends less on SAR backscatter peak values. A recommended parameter set for the algorithm was found using a representative image pair over Fram Strait and 350 manually derived drift vectors as validation. Applying the algorithm with this parameter setting, sea ice drift retrieval with a vector spacing of 8 km on Sentinel-1 images covering 400 km x 400 km takes less than 3.5 minutes on a standard 2.7 GHz processor with 8 GB memory. For validation, buoy GPS data, collected in 2015 between 15 January and 22 April and covering an area from 81° N to 83.5° N and 12° E to 27° E, were compared to drift results calculated from 261 corresponding Sentinel-1 image pairs. We found a logarithmic distribution of the error with a peak at 300 m. All software requirements necessary for applying the presented sea ice drift algorithm are open-source to ensure free implementation and easy distribution.
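
    The pattern-matching stage can be illustrated with plain normalized cross-correlation: a template from the first image is scanned over a small search window in the second image, centred on the feature-tracking first guess. This pure-numpy sketch omits the real algorithm's pre-processing and parameter choices.

    ```python
    import numpy as np

    def ncc(template, window):
        """Normalized cross-correlation of two equal-sized patches."""
        t = template - template.mean()
        w = window - window.mean()
        denom = np.sqrt((t * t).sum() * (w * w).sum())
        return float((t * w).sum() / denom) if denom > 0 else 0.0

    def match(template, image, guess_rc, search=10):
        """Best (row, col) of `template` in `image` near the first guess;
        the NCC score doubles as the quality measure."""
        th, tw = template.shape
        best_score, best_rc = -1.0, guess_rc
        for r in range(guess_rc[0] - search, guess_rc[0] + search + 1):
            for c in range(guess_rc[1] - search, guess_rc[1] + search + 1):
                if (r < 0 or c < 0 or
                        r + th > image.shape[0] or c + tw > image.shape[1]):
                    continue
                score = ncc(template, image[r:r + th, c:c + tw])
                if score > best_score:
                    best_score, best_rc = score, (r, c)
        return best_rc, best_score
    ```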

  16. Combining Multiple Rupture Models in Real-Time for Earthquake Early Warning

    NASA Astrophysics Data System (ADS)

    Minson, S. E.; Wu, S.; Beck, J. L.; Heaton, T. H.

    2015-12-01

    The ShakeAlert earthquake early warning system for the west coast of the United States is designed to combine information from multiple independent earthquake analysis algorithms in order to provide the public with robust predictions of shaking intensity at each user's location before they are affected by strong shaking. The current contributing analyses come from algorithms that determine the origin time, epicenter, and magnitude of an earthquake (On-site, ElarmS, and Virtual Seismologist). A second generation of algorithms will provide seismic line source information (FinDer), as well as geodetically-constrained slip models (BEFORES, GPSlip, G-larmS, G-FAST). These new algorithms will provide more information about the spatial extent of the earthquake rupture and thus improve the quality of the resulting shaking forecasts. Each of the contributing algorithms exploits different features of the observed seismic and geodetic data, and thus each algorithm may perform differently for different data availability and earthquake source characteristics. The ShakeAlert system therefore requires a central mediator, called the Central Decision Module (CDM). The CDM acts to combine disparate earthquake source information into one unified shaking forecast. Here we will present a new design for the CDM that uses a Bayesian framework to combine earthquake reports from multiple analysis algorithms and compares them to observed shaking information, in order both to assess the relative plausibility of each earthquake report and to create an improved unified shaking forecast complete with appropriate uncertainties. We will describe how these probabilistic shaking forecasts can be used to provide each user with a personalized decision-making tool that can help decide whether or not to take a protective action (such as opening fire house doors or stopping trains) based on that user's distance to the earthquake, vulnerability to shaking, false alarm tolerance, and time required to act.

  17. Source localization of temporal lobe epilepsy using PCA-LORETA analysis on ictal EEG recordings.

    PubMed

    Stern, Yaki; Neufeld, Miriam Y; Kipervasser, Svetlana; Zilberstein, Amir; Fried, Itzhak; Teicher, Mina; Adi-Japha, Esther

    2009-04-01

    Localizing the source of an epileptic seizure using noninvasive EEG suffers from inaccuracies produced by other generators not related to the epileptic source. The authors isolated the ictal epileptic activity and applied a source localization algorithm to identify its estimated location. Ten ictal EEG scalp recordings from five different patients were analyzed. The patients were known to have temporal lobe epilepsy with a single epileptic focus that had a concordant MRI lesion, and they had become seizure-free following partial temporal lobectomy. A mid-interval period (approximately 5 seconds) of ictal activity, starting at ictal onset, was used for Principal Component Analysis. The level of epileptic activity at each electrode (i.e., the eigenvector of the component that manifested epileptic characteristics) was used as input for low-resolution tomography analysis for the EEG inverse solution (Zilberstein et al., 2004). The algorithm accurately and robustly identified the epileptic focus in these patients. Principal component analysis and source localization methods can be used in the future to monitor the progression of an epileptic seizure and its expansion to other areas.

  18. User's guide for RAM. Volume II. Data preparation and listings

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Turner, D.B.; Novak, J.H.

    1978-11-01

    The information presented in this user's guide is directed to air pollution scientists having an interest in applying air quality simulation models. RAM is a method of estimating short-term dispersion using the Gaussian steady-state model. These algorithms can be used for estimating air quality concentrations of relatively nonreactive pollutants for averaging times from an hour to a day from point and area sources. The algorithms are applicable for locations with level or gently rolling terrain where a single wind vector for each hour is a good approximation to the flow over the source area considered. Calculations are performed for each hour. Hourly meteorological data required are wind direction, wind speed, temperature, stability class, and mixing height. Emission information required of point sources consists of source coordinates, emission rate, physical height, stack diameter, stack gas exit velocity, and stack gas temperature. Emission information required of area sources consists of southwest corner coordinates, source side length, total area emission rate and effective area source-height. Computation time is kept to a minimum by the manner in which concentrations from area sources are estimated, using a narrow plume hypothesis and using the area source squares as given rather than breaking down all sources into an area of uniform elements. Options are available to the user to allow use of three different types of receptor locations: (1) those whose coordinates are input by the user, (2) those whose coordinates are determined by the model and are downwind of significant point and area sources where maxima are likely to occur, and (3) those whose coordinates are determined by the model to give good area coverage of a specific portion of the region. Computation time is also decreased by keeping the number of receptors to a minimum. Volume II presents RAM example outputs, typical run streams, variable glossaries, and Fortran source codes.

  19. Pattern-projected schlieren imaging method using a diffractive optics element

    NASA Astrophysics Data System (ADS)

    Min, Gihyeon; Lee, Byung-Tak; Kim, Nac Woo; Lee, Munseob

    2018-04-01

    We propose a novel schlieren imaging method based on projecting a random dot pattern, which is generated in a light source module that includes a diffractive optical element. All apparatus is located on the source side, which enables one-body sensor applications. The pattern is distorted by the deflections of schlieren objects, so that the displacement vectors of the random dots can be obtained using a particle image velocimetry algorithm. The air turbulence induced by a burning candle, boiling pot, heater, and gas torch was successfully imaged, and it was shown that imaging up to a size of 0.7 m  ×  0.57 m is possible. An algorithm to correct the non-uniform sensitivity according to the position of a schlieren object was analytically derived and applied to schlieren images of lenses. Comparing the corrected versions to the original schlieren images, we showed a uniform sensitivity correction of 14.15 times on average.

  20. An Enhanced Method for Scheduling Observations of Large Sky Error Regions for Finding Optical Counterparts to Transients

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rana, Javed; Singhal, Akshat; Gadre, Bhooshan

    2017-04-01

    The discovery and subsequent study of optical counterparts to transient sources is crucial for their complete astrophysical understanding. Various gamma-ray burst (GRB) detectors, and more notably the ground-based gravitational wave detectors, typically have large uncertainties in the sky positions of detected sources. Searching these large sky regions spanning hundreds of square degrees is a formidable challenge for most ground-based optical telescopes, which can usually image less than tens of square degrees of the sky in a single night. We present algorithms for better scheduling of such follow-up observations in order to maximize the probability of imaging the optical counterpart, based on the all-sky probability distribution of the source position. We incorporate realistic observing constraints such as the diurnal cycle, telescope pointing limitations, available observing time, and the rising/setting of the target at the observatory’s location. We use simulations to demonstrate that our proposed algorithms outperform the default greedy observing schedule used by many observatories. Our algorithms are applicable for follow-up of other transient sources with large positional uncertainties, such as Fermi-detected GRBs, and can easily be adapted for scheduling radio or space-based X-ray follow-up.

  1. Electric field mill network products to improve detection of the lightning hazard

    NASA Technical Reports Server (NTRS)

    Maier, Launa M.

    1987-01-01

    An electric field mill network has been used at Kennedy Space Center for over 10 years as part of the thunderstorm detection system. Several algorithms are currently available to improve the informational output of the electric field mill data. The charge distributions of roughly 50 percent of all lightning can be modeled as if the flash reduced the cloud charge by a point charge or a point dipole. Using these models, the spatial differences in the lightning-induced electric field changes, and a least-squares algorithm to obtain an optimum solution, the three-dimensional locations of the lightning charge centers can be determined. During the lifetime of a thunderstorm, dynamically induced charging, modeled as a current source, can be located spatially with measurements of Maxwell current density. The electric field mills can be used to calculate the Maxwell current density at times when it is equal to the displacement current density. These improvements will produce more accurate assessments of the potential electrical activity, identify active cells, and help forecast thunderstorm termination.

  2. Dynamic Source Inversion of a M6.5 Intraslab Earthquake in Mexico: Application of a New Parallel Genetic Algorithm

    NASA Astrophysics Data System (ADS)

    Díaz-Mojica, J. J.; Cruz-Atienza, V. M.; Madariaga, R.; Singh, S. K.; Iglesias, A.

    2013-05-01

    We introduce a novel approach for imaging earthquake rupture dynamics from ground-motion records based on a parallel genetic algorithm (GA). The method follows the elliptical dynamic-rupture-patch approach introduced by Di Carli et al. (2010) and has been carefully verified through different numerical tests (Díaz-Mojica et al., 2012). Apart from the five model parameters defining the patch geometry, our dynamic source description has four more parameters: the stress drop inside the nucleation and elliptical patches, and two friction parameters, the slip-weakening distance and the change of the friction coefficient. These parameters are constant within the rupture surface. The forward dynamic source problem, involved in the GA inverse method, uses a highly accurate computational solver, namely the staggered-grid split-node method. The synthetic inversion presented here shows that the source model parameterization is suitable for the GA, and that short-scale source dynamic features are well resolved in spite of low-pass filtering of the data for periods comparable to the source duration. Since there is always uncertainty in the propagation medium as well as in the source location and focal mechanism, we have introduced a statistical approach that generates a set of solution models, so that the envelope of the corresponding synthetic waveforms explains the observed data as much as possible. We applied the method to the 2012 Mw 6.5 intraslab Zumpango, Mexico, earthquake and determined several fundamental source parameters that are in accordance with different, completely independent estimates for Mexican and worldwide earthquakes. Our weighted-average final model satisfactorily explains the eastward rupture directivity observed in the recorded data. Parameters found for the Zumpango earthquake include: Δτ = 30.2±6.2 MPa, Er = 0.68±0.36×10^15 J, G = 1.74±0.44×10^15 J, η = 0.27±0.11, Vr/Vs = 0.52±0.09 and Mw = 6.64±0.07, for the stress drop, radiated energy, fracture energy, radiation efficiency, rupture velocity and moment magnitude, respectively. (Figure: Mw 6.5 intraslab Zumpango earthquake location, station locations, and tectonic setting in central Mexico.)

  3. An open-source framework for stress-testing non-invasive foetal ECG extraction algorithms.

    PubMed

    Andreotti, Fernando; Behar, Joachim; Zaunseder, Sebastian; Oster, Julien; Clifford, Gari D

    2016-05-01

    Over the past decades, many studies have been published on the extraction of non-invasive foetal electrocardiogram (NI-FECG) from abdominal recordings. Most of these contributions claim to obtain excellent results in detecting foetal QRS (FQRS) complexes in terms of location. A small subset of authors have investigated the extraction of morphological features from the NI-FECG. However, due to the shortage of available public databases, the large variety of performance measures employed and the lack of open-source reference algorithms, most contributions cannot be meaningfully assessed. This article attempts to address these issues by presenting a standardised methodology for stress testing NI-FECG algorithms, including absolute data, as well as extraction and evaluation routines. To that end, a large database of realistic artificial signals was created, totaling 145.8 h of multichannel data and over one million FQRS complexes. An important characteristic of this dataset is the inclusion of several non-stationary events (e.g. foetal movements, uterine contractions and heart rate fluctuations) that are critical for evaluating extraction routines. To demonstrate our testing methodology, three classes of NI-FECG extraction algorithms were evaluated: blind source separation (BSS), template subtraction (TS) and adaptive methods (AM). Experiments were conducted to benchmark the performance of eight NI-FECG extraction algorithms on the artificial database focusing on: FQRS detection and morphological analysis (foetal QT and T/QRS ratio). The overall median FQRS detection accuracies (i.e. considering all non-stationary events) for the best performing methods in each group were 99.9% for BSS, 97.9% for AM and 96.0% for TS. Both FQRS detections and morphological parameters were shown to heavily depend on the extraction techniques and signal-to-noise ratio. Particularly, it is shown that their evaluation in the source domain, obtained after using a BSS technique, should be avoided. Data, extraction algorithms and evaluation routines were released as part of the fecgsyn toolbox on Physionet under a GNU GPL open-source license. This contribution provides a standard framework for benchmarking and regulatory testing of NI-FECG extraction algorithms.

  4. Design of a TDOA location engine and development of a location system based on chirp spread spectrum.

    PubMed

    Wang, Rui-Rong; Yu, Xiao-Qing; Zheng, Shu-Wang; Ye, Yang

    2016-01-01

    Location-based services (LBS) provided by wireless sensor networks have garnered a great deal of attention from researchers and developers in recent years. Chirp spread spectrum (CSS) signal formatting with time difference of arrival (TDOA) ranging technology is an effective LBS technique with regard to positioning accuracy, cost, and power consumption. The design and implementation of the location engine and location management based on TDOA location algorithms were the focus of this study; as the core of the system, the location engine was designed as a series of location algorithms and smoothing algorithms. To enhance the location accuracy, a Kalman filter algorithm and a moving weighted average technique were applied, respectively, to smooth the TDOA range measurements and the location results, which are calculated by the cooperation of a Kalman TDOA algorithm and a Taylor TDOA algorithm. The location management server, the information center of the system, was designed with Data Server and Mclient. To evaluate the performance of the location algorithms and the stability of the system software, we used a Nanotron nanoLOC Development Kit 3.0 to conduct indoor and outdoor location experiments. The results indicated that the location system runs stably and with high accuracy, with absolute error below 0.6 m.
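
    For reference, a textbook Taylor-series TDOA iteration of the kind named here linearizes the range-difference equations around the current estimate and solves a least-squares update; receiver 0 is the time reference, the geometry below is invented, and the Kalman smoothing stages are omitted.

    ```python
    import numpy as np

    def taylor_tdoa(receivers, rdiff, guess, n_iter=20):
        """receivers: (n, 2) positions; rdiff[i] = r_i - r_0 measured (m)."""
        x = np.asarray(guess, dtype=float).copy()
        for _ in range(n_iter):
            r = np.linalg.norm(receivers - x, axis=1)
            h = rdiff - (r[1:] - r[0])            # range-difference residuals
            u = (x - receivers) / r[:, None]      # unit vectors receiver -> x
            G = u[1:] - u[0]                      # Jacobian of (r_i - r_0)
            x += np.linalg.lstsq(G, h, rcond=None)[0]
        return x

    rx = np.array([[0, 0], [100, 0], [0, 100], [100, 100]], dtype=float)
    true = np.array([37.0, 61.0])
    r = np.linalg.norm(rx - true, axis=1)
    print(taylor_tdoa(rx, r[1:] - r[0], guess=np.array([50.0, 50.0])))
    ```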

  5. Eliminating the influence of source spectrum of white light scanning interferometry through time-delay estimation algorithm

    NASA Astrophysics Data System (ADS)

    Zhou, Yunfei; Cai, Hongzhi; Zhong, Liyun; Qiu, Xiang; Tian, Jindong; Lu, Xiaoxu

    2017-05-01

    In white light scanning interferometry (WLSI), the accuracy of profile measurement achieved with the conventional zero optical path difference (ZOPD) position locating method is closely related to the shape of the interference signal envelope (ISE), which is mainly determined by the spectral distribution of the illumination source. For a broadband light with a Gaussian spectral distribution, the ISE is symmetric, so the ZOPD position can be located accurately and easily. However, if the source spectrum is irregular, the ISE becomes asymmetric or develops a complex multi-peak distribution, and WLSI no longer works well with the ZOPD position locating method. To address this problem, we propose a time-delay estimation (TDE) based WLSI method, in which the surface profile is obtained from the relative displacement of the interference signal between different pixels instead of from the conventional ZOPD position locating method. Because all the spectral information of the interference signal (envelope and phase) is utilized, the proposed method not only offers high accuracy but also achieves accurate profile measurement in cases where the ISE is irregular and the ZOPD position locating method fails. That is to say, the proposed method can effectively eliminate the influence of the source spectrum.
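
    Generic time-delay estimation by cross-correlation, with parabolic sub-sample refinement of the correlation peak, conveys the kind of TDE such a method builds on; the WLSI signal model itself is not reproduced here.

    ```python
    import numpy as np

    def time_delay(a, b, dt):
        """Delay of b relative to a (same length, sample spacing dt)."""
        a = a - a.mean()
        b = b - b.mean()
        c = np.correlate(b, a, mode="full")
        k = int(np.argmax(c))
        if 0 < k < len(c) - 1:                 # parabolic sub-sample refinement
            denom = c[k - 1] - 2 * c[k] + c[k + 1]
            k = k + (0.5 * (c[k - 1] - c[k + 1]) / denom if denom else 0.0)
        return (k - (len(a) - 1)) * dt

    dt = 1e-3
    t = np.arange(0, 1, dt)
    a = np.sin(2 * np.pi * 5 * t)
    b = np.roll(a, 7)                          # b lags a by 7 samples
    print(time_delay(a, b, dt))                # ~ 0.007
    ```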

  6. General Electromagnetic Model for the Analysis of Complex Systems (GEMACS) Computer Code Documentation (Version 3). Volume 3. Part 2.

    DTIC Science & Technology

    1983-09-01

    NAME: PLAINT (GTD). PURPOSE: To determine if a ray traveling from a given source location ... determine if a source ray reflection from plate MP occurs. If a ray traveling from the source image location in the reflected ray direction passes through ...

  7. Scalable Domain Decomposed Monte Carlo Particle Transport

    NASA Astrophysics Data System (ADS)

    O'Brien, Matthew Joseph

    In this dissertation, we present the parallel algorithms necessary to run domain-decomposed Monte Carlo particle transport on large numbers of processors (millions of processors). Previous algorithms were not scalable, and the parallel overhead became more computationally costly than the numerical simulation. The main algorithms we consider are: • Domain decomposition of constructive solid geometry: enables extremely large calculations in which the background geometry is too large to fit in the memory of a single computational node. • Load balancing: keeps the workload per processor as even as possible so the calculation runs efficiently. • Global particle find: if particles are on the wrong processor, globally resolve their locations to the correct processor based on particle coordinate and background domain. • Supporting algorithms: visualizing constructive solid geometry, sourcing particles, deciding when particle streaming communication is complete, and spatial redecomposition. These algorithms are some of the most important parallel algorithms required for domain-decomposed Monte Carlo particle transport. We demonstrate that our previous algorithms were not scalable, prove that our new algorithms are scalable, and run some of the algorithms on up to 2 million MPI processes on the Sequoia supercomputer.
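
    The "global particle find" step reduces to mapping each particle coordinate to the rank that owns the containing subdomain and then exchanging the misplaced particles. In this sketch, a uniform 1-D slab decomposition stands in for the real background geometry.

    ```python
    import numpy as np

    def owner_rank(coords, x_min, x_max, n_ranks):
        """Rank owning each particle under a uniform 1-D slab decomposition
        of [x_min, x_max) along the x axis."""
        frac = (coords[:, 0] - x_min) / (x_max - x_min)
        return np.clip((frac * n_ranks).astype(int), 0, n_ranks - 1)

    # Particles whose computed owner differs from the current rank would be
    # packed into per-rank send buffers and exchanged (e.g. MPI all-to-all).
    coords = np.array([[0.1, 0.0, 0.0], [7.9, 0.0, 0.0], [4.2, 0.0, 0.0]])
    print(owner_rank(coords, 0.0, 8.0, 4))     # -> [0 3 2]
    ```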

  8. Energy to the Edge (E2E) U.S. Army Rapid Equipping Force

    DTIC Science & Technology

    2014-03-21

    generators, parallel multiple sources, prioritize loads, and balance loads. Smart grids are based on complex algorithms and controls. 3. Reduce... stations are not able to be serviced by prime power because of their location in the middle of a very active airfield and a fueling system that ...

  9. An integrated system for dynamic control of auditory perspective in a multichannel sound field

    NASA Astrophysics Data System (ADS)

    Corey, Jason Andrew

    An integrated system providing dynamic control of sound source azimuth, distance and proximity to a room boundary within a simulated acoustic space is proposed for use in multichannel music and film sound production. The system has been investigated, implemented, and psychoacoustically tested within the ITU-R BS.775 recommended five-channel (3/2) loudspeaker layout. The work brings together physical and perceptual models of room simulation to allow dynamic placement of virtual sound sources at any location of a simulated space within the horizontal plane. The control system incorporates a number of modules including simulated room modes, "fuzzy" sources, and tracking early reflections, whose parameters are dynamically changed according to sound source location within the simulated space. The control functions of the basic elements, derived from theories of perception of a source in a real room, have been carefully tuned to provide efficient, effective, and intuitive control of a sound source's perceived location. Seven formal listening tests were conducted to evaluate the effectiveness of the algorithm design choices. The tests evaluated: (1) loudness calibration of multichannel sound images; (2) the effectiveness of distance control; (3) the resolution of distance control provided by the system; (4) the effectiveness of the proposed system when compared to a commercially available multichannel room simulation system in terms of control of source distance and proximity to a room boundary; (5) the role of tracking early reflection patterns on the perception of sound source distance; (6) the role of tracking early reflection patterns on the perception of lateral phantom images. The listening tests confirm the effectiveness of the system for control of perceived sound source distance, proximity to room boundaries, and azimuth, through fine, dynamic adjustment of parameters according to source location. All of the parameters are grouped and controlled together to create a perceptually strong impression of source location and movement within a simulated space.

  10. Identifying Rhodamine Dye Plume Sources in Near-Shore Oceanic Environments by Integration of Chemical and Visual Sensors

    PubMed Central

    Tian, Yu; Kang, Xiaodong; Li, Yunyi; Li, Wei; Zhang, Aiqun; Yu, Jiangchen; Li, Yiping

    2013-01-01

    This article presents a strategy for identifying the source location of a chemical plume in near-shore oceanic environments, where the plume develops under the influence of turbulence, tides and waves. The strategy includes two modules, source declaration (or identification) and source verification, embedded in a subsumption architecture. Algorithms for source identification are derived from moth-inspired plume tracing strategies based on a chemical sensor. The in-water test missions, conducted in November 2002 at San Clemente Island (California, USA), in June 2003 at Duck (North Carolina, USA), and in October 2010 at Dalian Bay (China), successfully identified the source locations after autonomous underwater vehicles tracked the rhodamine dye plumes, with significant meander, over 100 meters. The objective of the verification module is to verify the declared plume source using a visual sensor. Because images taken in near-shore oceanic environments are very vague and colors in the images are not well defined, we adopt a fuzzy color extractor to segment the color components and recognize the chemical plume and its source by measuring color similarity. The source verification module was tested with images taken during the chemical plume tracing (CPT) missions. PMID:23507823

  11. McSKY: A hybrid Monte-Carlo line-beam code for shielded gamma skyshine calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shultis, J.K.; Faw, R.E.; Stedry, M.H.

    1994-07-01

    McSKY evaluates skyshine dose from an isotropic, monoenergetic, point photon source collimated into either a vertical cone or a vertical structure with an N-sided polygonal cross section. The code assumes an overhead shield of two materials, though the user can specify zero shield thickness for an unshielded calculation. The code uses a Monte Carlo algorithm to evaluate transport through the source shields and the integral line-beam method to describe photon transport through the atmosphere. The source energy must be between 0.02 and 100 MeV. For heavily shielded sources with energies above 20 MeV, McSKY results must be used cautiously, especially at detector locations near the source.

  12. Iterative Strategies for Aftershock Classification in Automatic Seismic Processing Pipelines

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gibbons, Steven J.; Kvaerna, Tormod; Harris, David B.

    We report that aftershock sequences following very large earthquakes present enormous challenges to the near-real-time generation of seismic bulletins. The increase in analyst resources needed to relocate an inflated number of events is compounded by failures of phase-association algorithms and a significant deterioration in the quality of the underlying, fully automatic event bulletins. Current processing pipelines were designed a generation ago and, owing to the computational limitations of the time, are usually limited to single passes over the raw data. With current processing capability, multiple passes over the data are feasible. Processing the raw data at each station currently generates parametric data streams that are then scanned by a phase-association algorithm to form event hypotheses. We consider the scenario in which a large earthquake has occurred and propose to define a region of likely aftershock activity in which events are detected and accurately located using a separate, specially targeted semiautomatic process. This effort may focus on so-called pattern detectors, but here we demonstrate a more general grid-search algorithm that may cover wider source regions without requiring waveform similarity. Given many well-located aftershocks within our source region, we may remove all associated phases from the original detection lists prior to a new iteration of the phase-association algorithm. We provide a proof-of-concept example for the 2015 Gorkha sequence, Nepal, recorded on seismic arrays of the International Monitoring System. Even with very conservative conditions for defining event hypotheses within the aftershock source region, we can automatically remove about half of the original detections that could have been generated by Nepal earthquakes and reduce the likelihood of false associations and spurious event hypotheses. Further reductions in the number of detections in the parametric data streams are likely using correlation and subspace detectors and/or empirical matched field processing.

  14. Digital stroboscopic holographic interferometry for power flow measurements in acoustically driven membranes

    NASA Astrophysics Data System (ADS)

    Keustermans, William; Pires, Felipe; De Greef, Daniël; Vanlanduit, Steve J. A.; Dirckx, Joris J. J.

    2016-06-01

    Despite the importance of the eardrum and the ossicles in the hearing chain, it remains an open question how acoustical energy is transmitted between them. Identifying the transmission path at different frequencies could provide valuable information for the domain of middle ear surgery. In this work, a setup for stroboscopic holography is combined with an algorithm for power flow calculations. With our method we were able to accurately locate the power sources and sinks in a membrane. The setup enabled us to make amplitude maps of the out-of-plane displacement of a vibrating rubber membrane at successive instants within the vibration period. From these, the amplitude maps of the moments of force and velocities are calculated. The magnitude and phase maps extracted from this amplitude data form the input for the power flow calculations. We present the algorithm used for the measurements and for the power flow calculations. Finite-element models of a circular plate with a local energy source and sink allowed us to test and optimize this algorithm in a controlled way and without the presence of noise, but these models are not discussed here. In the setup, an earphone was connected to a thin tube placed very close to the membrane so that sound impinged locally on the membrane, thereby acting as a local energy source. The energy sink was a little piece of foam carefully placed against the membrane. The laser pulses are fired at selected instants within the vibration period using a 30 mW HeNe continuous-wave laser (red light, 632.8 nm) in combination with an acousto-optic modulator. A function generator controls the phase of these illumination pulses, and the holograms are recorded using a CCD camera. We present the magnitude and phase maps as well as the power flow measurements on the rubber membrane. Calculating the divergence of the power flow map provides a simple and fast way of identifying and locating an energy source or sink. In conclusion, possible future improvements to the setup and the power flow algorithm are discussed.

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nitao, J J

    The goal of the Event Reconstruction Project is to find the location and strength of atmospheric release points, both stationary and moving. Source inversion relies on observational data as input, and the methodology is sufficiently general to allow various forms of data. In this report, the authors focus primarily on concentration measurements obtained at point monitoring locations at various times. The algorithms being investigated in the Project are MCMC (Markov Chain Monte Carlo) and SMC (Sequential Monte Carlo) methods, classical inversion methods, and hybrids of these; the reader is referred to the report by Johannesson et al. (2004) for explanations of these methods. These methods require computing the concentrations at all monitoring locations for a given 'proposed' source characteristic (location and strength history). It is anticipated that the largest portion of the CPU time will be spent performing this computation, and MCMC and SMC require it to be done at least tens of thousands of times. Therefore, an efficient means of computing forward model predictions is important to making the inversion practical. In this report they show how Green's functions and reciprocal Green's functions can significantly accelerate forward model computations. First, instead of computing a plume for each possible source strength history, plumes can be computed for unit impulse sources only; by linear superposition, the response for any strength history is then obtained from the forward Green's function. Second, the law of reciprocity may be used. Suppose that the concentration is required at a single monitoring point x_m due to a potential (unit impulse) source located at x_s. Instead of computing a plume with source location x_s, a 'reciprocal plume' is computed whose (unit impulse) source is at the monitoring location x_m. The reciprocal plume is computed using a reversed-direction wind field; the wind field and transport coefficients must also be appropriately time-reversed. Reciprocity says that the concentration of the reciprocal plume at x_s is related to the desired concentration at x_m. Since there are many fewer monitoring points than potential source locations, the number of forward model computations is drastically reduced.
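
    The superposition step described here is just a discrete convolution with the precomputed unit-impulse response. A minimal sketch follows (the names and the shared time grid are assumptions); by reciprocity, the same `green` array could have been obtained from a single reciprocal plume computed with the reversed wind field.

    ```python
    import numpy as np

    def monitor_concentration(green, strength, dt):
        """Concentration time series at one monitor: convolution of the source
        strength history with the unit-impulse Green's function, both sampled
        on the same time grid of spacing dt."""
        return np.convolve(strength, green)[: len(strength)] * dt
    ```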

  16. Comparison of optimized algorithms in facility location allocation problems with different distance measures

    NASA Astrophysics Data System (ADS)

    Kumar, Rakesh; Chandrawat, Rajesh Kumar; Garg, B. P.; Joshi, Varun

    2017-07-01

    Opening a new firm or branch with the desired performance is closely tied to the facility location problem. For example, when siting new ambulances and firehouses, a government wants to minimize the average response time to emergencies for all residents of a city, so finding the best location is a major practical challenge. Problems of this type are known as facility location problems, and many algorithms have been developed to handle them. In this paper, we review five algorithms that have been applied to facility location problems, and we also present the significance of clustering in such problems. We first compare the fuzzy c-means clustering (FCM) algorithm with an alternating heuristic (AH) algorithm, and then with Particle Swarm Optimization (PSO) algorithms, using different types of distance function. The data were clustered with the help of FCM, and then a median model and a min-max problem model were applied to the clustered data. After finding optimized locations using these algorithms, we compute the distances from the optimized locations to the demand points using different distance measures and compare the results. Finally, we design a general example to validate the feasibility of the five algorithms for facility location optimization and to bring out their respective advantages and drawbacks.
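
    How the distance measure changes the preferred site can be seen with a toy single-facility (1-median style) search over a candidate grid, comparing Euclidean and Manhattan metrics; the data and grid here are invented for illustration and do not reproduce any of the five reviewed algorithms.

    ```python
    import numpy as np

    def total_cost(fac, demand_pts, metric):
        """Total distance from one facility to all demand points."""
        d = demand_pts - fac
        if metric == "euclidean":
            return np.linalg.norm(d, axis=1).sum()
        return np.abs(d).sum()                 # manhattan

    def best_site(candidates, demand_pts, metric):
        costs = [total_cost(c, demand_pts, metric) for c in candidates]
        return candidates[int(np.argmin(costs))]

    rng = np.random.default_rng(1)
    demand = rng.uniform(0, 10, size=(50, 2))
    grid = np.array([[x, y] for x in np.linspace(0, 10, 21)
                            for y in np.linspace(0, 10, 21)])
    for m in ("euclidean", "manhattan"):
        print(m, best_site(grid, demand, m))   # metric shifts the optimum
    ```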

  17. ALDF Data Retrieval Algorithms for Validating the Optical Transient Detector (OTD) and the Lightning Imaging Sensor (LIS)

    NASA Technical Reports Server (NTRS)

    Koshak, W. J.; Blakeslee, R. J.; Bailey, J. C.

    1997-01-01

    A linear algebraic solution is provided for the problem of retrieving the location and time of occurrence of lightning ground strikes from an Advanced Lightning Direction Finder (ALDF) network. The ALDF network measures field strength, magnetic bearing, and arrival time of lightning radio emissions, and solutions for the plane (i.e., no Earth curvature) are provided that implement all of these measurements. The accuracy of the retrieval method is tested using computer-simulated data sets, and the relative influence of bearing and arrival-time data on the outcome of the final solution is formally demonstrated. The algorithm is sufficiently accurate to validate NASA's Optical Transient Detector (OTD) and Lightning Imaging Sensor (LIS). We also introduce a quadratic planar solution that is useful when only three arrival-time measurements are available. The algebra of the quadratic root results is examined in detail to clarify what portions of the analysis region lead to fundamental ambiguities in source location. Complex root results are shown to be associated with the presence of measurement errors when the lightning source lies near an outer sensor baseline of the ALDF network. For arbitrary non-collinear network geometries and in the absence of measurement errors, it is shown that the two quadratic roots are equivalent (no source location ambiguity) on the outer sensor baselines. The accuracy of the quadratic planar method is tested with computer-generated data sets, and the results are generally better than those obtained from the three-station linear planar method when bearing errors are about 2 degrees.

  18. Automatic detection of multiple UXO-like targets using magnetic anomaly inversion and self-adaptive fuzzy c-means clustering

    NASA Astrophysics Data System (ADS)

    Yin, Gang; Zhang, Yingtang; Fan, Hongbo; Ren, Guoquan; Li, Zhining

    2017-12-01

    We have developed a method for automatically detecting UXO-like targets based on magnetic anomaly inversion and self-adaptive fuzzy c-means clustering. Magnetic anomaly inversion methods are used to estimate the initial locations of multiple UXO-like sources. Although these initial locations have some errors with respect to the real positions, they form dense clouds around the actual positions of the magnetic sources. We then use the self-adaptive fuzzy c-means clustering algorithm to cluster these initial locations. The estimated number of cluster centroids represents the number of targets, and the cluster centroids are regarded as the locations of the magnetic targets. The effectiveness of the method has been demonstrated using synthetic datasets. Computational results show that the proposed method can be applied to the case of several UXO-like targets randomly scattered within a confined, shallow subsurface volume. A field test was carried out to check the validity of the proposed method, and the experimental results show that the prearranged magnets can be detected unambiguously and located precisely.

  19. A source-attractor approach to network detection of radiation sources

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, Qishi; Barry, M. L.; Grieme, M.

    Radiation source detection using a network of detectors is an active field of research for homeland security and defense applications. We propose the Source-attractor Radiation Detection (SRD) method, which aggregates measurements from a network of detectors for radiation source detection. SRD models a potential radiation source as a magnet-like attractor that pulls in pre-computed virtual points from the detector locations. A detection decision is made if a sufficient level of attraction, quantified by the increase in the clustering of the shifted virtual points, is observed. Compared with traditional methods, SRD has the following advantages: i) it does not require an accurate estimate of the source location from limited and noise-corrupted sensor readings, unlike localization-based methods, and ii) its virtual-point shifting and clustering calculations involve simple arithmetic operations whose cost scales with the number of detectors, avoiding the high computational complexity of grid-based likelihood estimation methods. We evaluate its detection performance using canonical datasets from the Domestic Nuclear Detection Office's (DNDO) Intelligence Radiation Sensors Systems (IRSS) tests. SRD achieves both a lower false alarm rate and a lower false negative rate compared to three existing algorithms for network source detection.

  20. Application of an Optimal Search Strategy for the DNAPL Source Identification to a Field Site in Nanjing, China

    NASA Astrophysics Data System (ADS)

    Longting, M.; Ye, S.; Wu, J.

    2014-12-01

    Identifying and removing the DNAPL source in an aquifer system is vital to successful remediation and to lowering remediation time and cost. Our work applies an optimal search strategy introduced by Dokou and Pinder [1], with some modifications, to a field site in Nanjing City, China, to define the strength and location of DNAPL sources using the fewest samples. The overall strategy uses Monte Carlo stochastic groundwater flow and transport modeling, incorporates existing sampling data into the search, and determines optimal sampling locations, which are selected according to the reduction in overall uncertainty of the field and the proximity to the candidate source locations. After a sample is taken, the plume estimate is updated using a Kalman filter. The updated plume is then compared, using a fuzzy set technique, with the concentration fields that would emanate from each individual potential source. This comparison provides weights that reflect the degree of truth regarding the location of the source. The above steps are repeated until the optimal source characteristics are determined. For our site, the following specific modifications and work have been carried out. Random fields of hydraulic conductivity (K) are generated after fitting the measured K data to a variogram model. The potential sources, which are given initial weights, are targeted based on the field survey, with multiple potential source locations around the workshops and the wastewater basin. Considering the short history (1999-2010) of manufacturing the optical brightener PF at the site, together with the existing sampling data, a preliminary source strength is estimated, to be optimized later by the simplex method or a genetic algorithm. The algorithm will then guide optimal sampling and updating as the investigation proceeds, until the weights finally stabilize. Reference: [1] Dokou, Zoi, and George F. Pinder. "Optimal search strategy for the definition of a DNAPL source." Journal of Hydrology 376.3 (2009): 542-556. Acknowledgement: Funding support from the National Natural Science Foundation of China (No. 41030746, 40872155) and the DuPont Company is appreciated.

  1. Fast SS-ILM: A Computationally Efficient Algorithm to Discover Socially Important Locations

    NASA Astrophysics Data System (ADS)

    Dokuz, A. S.; Celik, M.

    2017-11-01

    Socially important locations are places that are frequently visited by social media users during their social media lifetime. Discovering socially important locations provides valuable information about user behaviour on social media networking sites. However, discovering socially important locations is challenging due to the volume and dimensionality of the data, the required spatial and temporal calculations, location sparseness in social media datasets, and the inefficiency of current algorithms. Several studies in the literature address the discovery of important locations; however, the proposed approaches are not computationally efficient. In this study, we propose the Fast SS-ILM algorithm, a modification of the SS-ILM algorithm, to mine socially important locations efficiently. Experimental results show that the proposed Fast SS-ILM algorithm decreases the execution time of the socially important location discovery process by up to 20%.

  2. Location of acoustic radiators and inversion for energy density using radio-frequency sources and thunder recordings

    NASA Astrophysics Data System (ADS)

    Anderson, J.; Johnson, J. B.; Arechiga, R. O.; Edens, H. E.; Thomas, R. J.

    2011-12-01

    We use radio frequency (VHF) pulse locations mapped with the New Mexico Tech Lightning Mapping Array (LMA) to study the distribution of thunder sources in lightning channels. A least squares inversion is used to fit channel acoustic energy radiation to broadband (0.01 to 500 Hz) acoustic recordings from microphones deployed close (< 10 km) to the lightning. We model the thunder (acoustic) source as a superposition of line segments connecting the LMA VHF pulses. An optimum branching algorithm is used to reconstruct the conductive channels delineated by the VHF sources, which we discretize as a superposition of finely spaced (0.25 m) acoustic point sources. We treat total radiated thunder as a weighted superposition of acoustic waves from the individual channels, each with a constant current along its length that is presumed to be proportional to the acoustic energy density radiated per unit length. Merged channels are treated as a linear sum of current-carrying branches and radiate proportionally greater acoustic energy. Synthetic energy time series for a given microphone location are calculated for each independent channel. We then use a non-negative least squares inversion to solve for the channel energy densities that match the energy time series determined from broadband acoustic recordings across a 4-station microphone network. Events analyzed by this method have so far included 300-1000 VHF sources, and correlations as high as 0.5 between synthetic and recorded thunder energy were obtained, despite the presence of wind noise and 10-30 m uncertainty in the VHF source locations.
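
    The final inversion step maps naturally onto an off-the-shelf non-negative least squares solver. The Python sketch below assumes a precomputed matrix G of synthetic per-channel energy contributions (its shapes and values are invented for illustration) and solves for physically non-negative channel energy densities.

        import numpy as np
        from scipy.optimize import nnls

        # Hypothetical shapes: n_obs stacked energy samples from the microphone
        # network, n_chan lightning channels. G[i, j] holds the synthetic energy
        # that channel j (at unit energy density) contributes to observation i.
        rng = np.random.default_rng(0)
        n_obs, n_chan = 400, 6
        G = rng.random((n_obs, n_chan))
        true_e = np.array([3.0, 0.0, 1.5, 0.0, 2.2, 0.7])  # per-channel densities
        obs = G @ true_e + 0.01 * rng.standard_normal(n_obs)

        # Non-negative least squares keeps channel energies physical (>= 0):
        e_hat, residual = nnls(G, obs)
        print(e_hat)   # ~ true_e, with exact zeros for silent channels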

  3. Experimental characterization of seasonal variations in infrasonic traveltimes on the Korean Peninsula with implications for infrasound event location

    NASA Astrophysics Data System (ADS)

    Che, Il-Young; Stump, Brian W.; Lee, Hee-Il

    2011-04-01

    The dependence of infrasound propagation on season and path environment was quantified by analysing more than 1000 repetitive infrasonic ground-truth events at an active, open-pit mine over two years. Blast-associated infrasonic signals were analysed from two infrasound arrays (CHNAR and ULDAR) located at similar distances of 181 and 169 km, respectively, from the source, but in different azimuthal directions and with different path environments. The CHNAR array is located to the northwest of the source area with a primarily continental path, whereas ULDAR is located east of the source with a path dominated by open ocean. As a result, CHNAR observations were dominated by stratospheric phases with characteristic celerities of 260-289 m s-1 and large seasonal variations in traveltime, whereas data from ULDAR consisted primarily of tropospheric phases with larger celerities of 322 to 361 m s-1 and larger daily than seasonal variation in traveltime. This interpretation of the observations is verified by ray tracing using atmospheric models that incorporate daily weather-balloon data characterizing the shallow atmosphere over the two years of the study. Finally, experimental celerity models that include seasonal path effects were constructed from the long-term data set. These experimental celerity models were used to constrain traveltime variations in infrasonic location algorithms, providing improved location estimates, as illustrated with the empirical data set.

  4. Error analysis in stereo vision for location measurement of 3D point

    NASA Astrophysics Data System (ADS)

    Li, Yunting; Zhang, Jun; Tian, Jinwen

    2015-12-01

    Location measurement of a 3D point in stereo vision is subject to different sources of uncertainty that propagate to the final result. Most current methods of error analysis are based on an ideal intersection model that computes the uncertainty region of a point location by intersecting the two pixel fields of view, which may produce loose bounds. Moreover, only a few sources of error, such as pixel error or camera position, are taken into account in the analysis. In this paper we present a straightforward and practical method for estimating the location error that takes most sources of error into account. We sum up and simplify all the input errors into five parameters by a rotation transformation. We then use the fast midpoint-method algorithm to derive the mathematical relationships between the target point and these parameters, obtaining the expectation and covariance matrix of the 3D point location, which together constitute the uncertainty region of the point location. Afterwards, we trace the propagation of the primitive input errors through the stereo system and throughout the whole analysis process, from primitive input errors to localization error. Our method has the same level of computational complexity as the state-of-the-art method. Finally, extensive experiments are performed to verify its performance.

  5. Enhancing Positioning Accuracy in Urban Terrain by Fusing Data from a GPS Receiver, Inertial Sensors, Stereo-Camera and Digital Maps for Pedestrian Navigation

    PubMed Central

    Przemyslaw, Baranski; Pawel, Strumillo

    2012-01-01

    The paper presents an algorithm for estimating a pedestrian's location in an urban environment. The algorithm is based on the particle filter and uses different data sources: a GPS receiver, inertial sensors, probability maps and a stereo camera. The inertial sensors are used to estimate the relative displacement of the pedestrian: a gyroscope estimates changes in heading direction, and an accelerometer counts the pedestrian's steps and estimates their lengths. The so-called probability maps help to limit GPS inaccuracy by imposing constraints on pedestrian kinematics, e.g., it is assumed that a pedestrian cannot cross buildings, fences, etc. This limits position inaccuracy to ca. 10 m. Incorporating depth estimates derived from the stereo camera, compared against a 3D model of the environment, enabled a further reduction of positioning errors. As a result, for 90% of the time, the algorithm is able to estimate the pedestrian's location with an error smaller than 2 m, compared to an error of 6.5 m for navigation based solely on GPS.
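
    The fusion logic can be illustrated with a toy particle filter in Python. Everything below is a simplified sketch: walkable() stands in for the probability map, the motion and measurement models are deliberately crude, and all constants are invented.

        import numpy as np

        def walkable(p):
            """Hypothetical map constraint: forbid one rectangular building."""
            inside = ((4.0 < p[:, 0]) & (p[:, 0] < 6.0)
                      & (2.0 < p[:, 1]) & (p[:, 1] < 8.0))
            return ~inside

        def pf_step(particles, step_len, heading, gps, rng, gps_sigma=10.0):
            """One predict-update-resample cycle of the particle filter."""
            # Predict: dead-reckoned displacement from step length and heading.
            motion = step_len * np.array([np.cos(heading), np.sin(heading)])
            particles = particles + motion + rng.normal(0.0, 0.3, particles.shape)
            # Update: weight by GPS likelihood, zero out particles in buildings.
            w = np.exp(-0.5 * np.sum((particles - gps) ** 2, axis=1) / gps_sigma**2)
            w = w * walkable(particles)
            w /= w.sum()
            # Resample to concentrate particles in high-probability regions.
            idx = rng.choice(len(particles), size=len(particles), p=w)
            return particles[idx]

        rng = np.random.default_rng(0)
        particles = rng.normal([0.0, 0.0], 5.0, size=(2000, 2))  # first GPS fix
        particles = pf_step(particles, 0.7, np.pi / 4, np.array([1.0, 1.0]), rng)
        print(particles.mean(axis=0))  # fused position estimate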

  6. Strategies for automatic processing of large aftershock sequences

    NASA Astrophysics Data System (ADS)

    Kvaerna, T.; Gibbons, S. J.

    2017-12-01

    Aftershock sequences following major earthquakes present great challenges to seismic bulletin generation. The analyst resources needed to locate events increase with the number of events, as the quality of the underlying, fully automatic event lists deteriorates. While current pipelines, designed a generation ago, are usually limited to a single pass over the raw data, modern systems also allow multiple passes. Processing the raw data from each station currently generates parametric data streams that are later subject to phase-association algorithms which form event hypotheses. We consider a major earthquake scenario and propose to define a region of likely aftershock activity in which we detect and accurately locate events using a separate, specially targeted, semi-automatic process. This effort may use either pattern detectors or more general algorithms that cover wider source regions without requiring waveform similarity. An iterative procedure to generate automatic bulletins would incorporate all the aftershock event hypotheses generated by the auxiliary process, and filter all phases from these events out of the original detection lists prior to a new iteration of the global phase-association algorithm.

  7. Accurate source location from waves scattered by surface topography: Applications to the Nevada and North Korean test sites

    NASA Astrophysics Data System (ADS)

    Shen, Y.; Wang, N.; Bao, X.; Flinders, A. F.

    2016-12-01

    Scattered waves generated near the source contain energy converted from near-field waves into far-field propagating waves, which can be used to achieve location accuracy beyond the diffraction limit. In this work, we apply a novel full-wave location method that combines a grid-search algorithm with a 3D Green's tensor database to locate the Non-Proliferation Experiment (NPE) at the Nevada test site and the North Korean nuclear tests. We use the first arrivals (Pn/Pg) and their immediate codas, which are likely dominated by waves scattered at the surface topography near the source, to determine the source location. We analyze seismograms in the [1.0, 2.0] Hz frequency band to reduce noise in the data and highlight topography-scattered waves. High-resolution topographic models constructed from 10 m and 90 m grids are used for Nevada and North Korea, respectively. The reference velocity model is based on CRUST 1.0. We use the collocated-grid finite difference method on curvilinear grids to calculate the strain Green's tensor, and obtain synthetic waveforms using source-receiver reciprocity. The 'best' solution is the one that minimizes the least-squares misfit between the observed and synthetic waveforms. To suppress random noise, an optimal weighting method for three-component seismograms is applied in the misfit calculation. Our results show that the scattered waves are crucial in improving resolution and allow us to obtain accurate solutions with a small number of stations. Since the scattered waves depend on topography, which is known at the wavelengths of regional seismic waves, our approach yields absolute, instead of relative, source locations. We compare our solutions with those of the USGS and other studies. Moreover, we use differential waveforms to locate pairs of the North Korean tests from the years 2006, 2009, 2013 and 2016 to further reduce the effects of unmodeled heterogeneities and errors in the reference velocity model.
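
    The core of such a grid search is easy to state in code. The Python sketch below assumes a precomputed bank of synthetic waveforms, one per candidate grid node (here filled with random numbers purely for illustration), and returns the node minimizing the weighted least-squares waveform misfit.

        import numpy as np

        def grid_search_locate(obs, synth_bank, weights):
            """Pick the grid node whose synthetics best match the data.

            obs:        (n_sta, n_t) observed records
            synth_bank: (n_nodes, n_sta, n_t) precomputed synthetics (e.g.,
                        from a strain Green's tensor database via reciprocity)
            weights:    (n_sta,) per-station weights to suppress noisy traces
            """
            misfit = np.einsum("s,nst->n", weights, (synth_bank - obs[None]) ** 2)
            return int(np.argmin(misfit)), misfit

        rng = np.random.default_rng(0)
        n_nodes, n_sta, n_t = 1000, 5, 400
        bank = rng.standard_normal((n_nodes, n_sta, n_t))
        truth = 123
        obs = bank[truth] + 0.1 * rng.standard_normal((n_sta, n_t))
        best, _ = grid_search_locate(obs, bank, np.ones(n_sta))
        print(best)  # -> 123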

  8. Data for giant constrictors - Biological management profiles and an establishment risk assessment for nine large species of pythons, anacondas, and the boa constrictor

    USGS Publications Warehouse

    Jarnevich, C.S.; Rodda, G.H.; Reed, R.N.

    2011-01-01

    Giant Constrictors' Climate Space: This data set represents the information needed to recreate the climate space and climate matching analyses in Reed and Rodda (2009); a detailed methodology and results are included in that report. The data include locations for nine species of large constrictors: Python molurus, Broghammerus reticulatus, P. sebae, P. natalensis, Boa constrictor, Eunectes notaeus, E. deschauenseei, E. beniensis, and E. murinus. The locations are from published sources. Climate data for monthly precipitation and average monthly temperature are included along with the species locations. The individual spreadsheets of location data match the figures in the Reed and Rodda (2009) report, illustrating areas of the mainland United States that match the climate envelope of the native range. The precipitation and temperature data at each location were used to determine the climate space for each species. Graphs of climate space formed the basis for the algorithms in the data set, and more details can be found in Reed and Rodda (2009). These algorithms were used in ArcGIS to generate maps of areas in the United States that match the climate space of the snakes' native-range locations. We discovered a rounding error in the ArcGIS implementation of the algorithms, which has been corrected here. The shapefiles therefore differ slightly from those appearing in the risk assessment figures; however, the suitable localities are not different at the scale of intended use for these maps, although there are more noticeable differences between areas classified as 'too cold' and 'too hot'.

  9. Self characterization of a coded aperture array for neutron source imaging

    NASA Astrophysics Data System (ADS)

    Volegov, P. L.; Danly, C. R.; Fittinghoff, D. N.; Guler, N.; Merrill, F. E.; Wilde, C. H.

    2014-12-01

    The neutron imaging system at the National Ignition Facility (NIF) is an important diagnostic tool for measuring the two-dimensional size and shape of the neutrons produced in the burning deuterium-tritium plasma during the stagnation stage of inertial confinement fusion implosions. Since the neutron source is small (˜100 μm) and neutrons are deeply penetrating (>3 cm) in all materials, the apertures used to achieve the desired 10-μm resolution are 20-cm long, triangular tapers machined in gold foils. These gold foils are stacked to form an array of 20 apertures for pinhole imaging and three apertures for penumbral imaging. These apertures must be precisely aligned to accurately place the field of view of each aperture at the design location, or the location of the field of view for each aperture must be measured. In this paper we present a new technique that has been developed for the measurement and characterization of the precise location of each aperture in the array. We present the detailed algorithms used for this characterization and the results of reconstructed sources from inertial confinement fusion implosion experiments at NIF.

  10. Water Mapping Technology Rebuilds Lives in Arid Regions

    NASA Technical Reports Server (NTRS)

    2015-01-01

    Using NASA Landsat satellite and other remote sensing topographical data, Radar Technologies International developed an algorithm-based software program that can locate underground water sources. Working with international organizations and governments, the firm, which maintains an office in New Braunfels, Texas, is helping to provide water for refugees and other people in drought-stricken regions such as Kenya, Sudan, and Afghanistan.

  11. Data Fusion for a Vision-Radiological System for Source Tracking and Discovery

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Enqvist, Andreas; Koppal, Sanjeev

    2015-07-01

    A multidisciplinary approach to tracking the movement of radioactive sources by fusing data from multiple radiological and visual sensors is under development. The goal is to improve the ability to detect, locate, track and identify nuclear/radiological threats. The key concept is that widely available visual and depth sensors can aid radiological detection, since the intensity fall-off in the count rate can be correlated to movement in three dimensions. To enable this, we pose an important question: what is the right combination of sensing modalities and vision algorithms that can best complement a radiological sensor for the purpose of detecting and tracking radioactive material? Similarly, which radiation detection methods and unfolding algorithms are best suited for data fusion with tracking data? Data fusion of multi-sensor data for radiation detection has seen some interesting developments lately. Significant examples include intelligent radiation sensor systems (IRSS), which are based on large numbers of distributed, similar or identical radiation sensors coupled with position data, forming networks capable of detecting and locating radiation sources. Other developments are gamma-ray imaging systems based on Compton scatter in segmented detector arrays; similar developments using coded apertures or scatter cameras for neutrons have also occurred recently. The main limitation of such systems is not so much their capability as their complexity and cost, which are prohibitive for large-scale deployment. Presented here is a fusion system based on simple, low-cost computer vision and radiological sensors for tracking multiple objects and identifying potential radiological materials being transported or shipped. The main focus of this work is the development of two separate calibration algorithms for characterizing the fused sensor system. The deviation from a simple inverse-square fall-off of radiation intensity is explored and accounted for. In particular, the computer vision system provides a map of the distance dependence of the sources being tracked. Infrared, laser or stereoscopic vision sensors are all options for the computer-vision implementation, depending on interior versus exterior deployment, desired resolution and other factors. Similarly, the radiation sensors focus on gamma-ray or neutron detection because of their long travel lengths and ability to penetrate even moderate shielding. There is a significant difference between vision sensors and radiation sensors in the way the 'source' or signal is generated. A vision sensor needs an external light source to illuminate the object and then detects the re-emitted illumination (or lack thereof); for a radiation detector, the radioactive material is the source itself. The only exception is the field of active interrogation, where radiation is beamed into a material to induce radiation emission beyond what the material would emit spontaneously. Because the nuclear material is itself the source, all other objects in the environment are 'illuminated', or irradiated, by it. Most radiation will readily penetrate ordinary material, scatter in new directions or be absorbed. Thus, if a radiation source is located near a larger object, that object will in turn scatter some radiation initially emitted in directions other than that of the radiation detector, which can add to the observed count rate. The effect of this scattering is a deviation from the traditional distance dependence of the radiation signal, and it is a key challenge that requires a combined system-calibration solution and algorithms. Both an algebraic approach and a statistical approach have therefore been developed and independently evaluated to investigate the sensitivity to this deviation from the simplified radiation fall-off as a function of distance. The resulting calibrated system algorithms are demonstrated in various laboratory scenarios, and later in realistic tracking scenarios. The selection and testing of radiological and computer-vision sensors for additional specific scenarios will be the subject of ongoing and future work.
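
    One simple way to picture the calibration problem is a least-squares fit of count rate versus vision-derived range with an additive term for room scatter. The Python sketch below is a hypothetical toy model, not either of the paper's two calibration algorithms; the model form and all numbers are invented.

        import numpy as np
        from scipy.optimize import curve_fit

        def count_rate_model(r, amplitude, scatter):
            """Toy calibration model: inverse-square fall-off plus a constant
            floor standing in for room-scatter contributions."""
            return amplitude / r**2 + scatter

        # Simulated count rates at ranges reported by the vision system:
        rng = np.random.default_rng(0)
        r = np.linspace(1.0, 8.0, 30)                 # metres, from tracking
        counts = 500.0 / r**2 + 12.0 + rng.normal(0, 2, r.size)

        (amp, scat), _ = curve_fit(count_rate_model, r, counts, p0=(100.0, 1.0))
        print(f"fitted amplitude={amp:.1f}, scatter floor={scat:.1f}")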

  12. Downhole Microseismic Monitoring at a Carbon Capture, Utilization, and Storage Site, Farnsworth Unit, Ochiltree County, Texas

    NASA Astrophysics Data System (ADS)

    Ziegler, A.; Balch, R. S.; van Wijk, J.

    2015-12-01

    Farnsworth Oil Field in North Texas hosts an ongoing carbon capture, utilization, and storage project. This study focuses on passive seismic monitoring at the carbon injection site to measure, locate, and catalog any induced seismic events. A Geometrics Geode system is used for continuous recording of the passive seismic downhole array in a monitoring well. The array consists of 3-component dual Geospace OMNI-2400 15 Hz geophones with a vertical spacing of 30.5 m. Downhole temperature and pressure are also monitored. Seismic data are recorded continuously at a rate of over 900 GB per month and must be archived and reviewed. A Short Term Average/Long Term Average (STA/LTA) algorithm was evaluated for its ability to search for events, including identification and quantification of false positives. It was determined that the algorithm was not appropriate for event detection given the background noise level at the field site and the recording equipment as configured; alternatives are being investigated. The intended outcome of the passive seismic monitoring is to mine the continuous database, develop a catalog of microseismic events and locations, and determine whether there is any relationship to CO2 injection in the field. Identifying the locations of any microseismic events will allow correlation with carbon injection locations and previously characterized geological and structural features such as faults and paleoslopes. Additionally, the borehole array has recorded over 1200 active sources, with three sweeps at each source location, acquired during a nearby 3D VSP. These data were evaluated for usability and for location within an effective radius of the array, stacked to improve the signal-to-noise ratio, and used to calibrate a full-field velocity model to enhance event location accuracy. Funding for this project is provided by the U.S. Department of Energy under Award No. DE-FC26-05NT42591.
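
    For reference, the STA/LTA trigger evaluated above compares short-term to long-term average signal energy. The Python sketch below is a minimal moving-average version with invented window lengths and threshold; production detectors add recursion, de-triggering logic, and per-channel tuning.

        import numpy as np

        def sta_lta(trace, fs, sta_win=0.5, lta_win=10.0):
            """Classic STA/LTA ratio: short-term average energy divided by
            long-term average energy; a trigger is declared where the ratio
            exceeds a threshold tuned to the site's noise."""
            sta_n, lta_n = int(sta_win * fs), int(lta_win * fs)
            energy = trace.astype(float) ** 2
            avg = lambda n: np.convolve(energy, np.ones(n) / n, mode="same")
            return avg(sta_n) / (avg(lta_n) + 1e-12)

        fs = 1000.0
        rng = np.random.default_rng(0)
        trace = rng.normal(0, 1, int(60 * fs))
        trace[30_000:30_200] += 8 * rng.normal(0, 1, 200)   # synthetic event
        triggers = np.nonzero(sta_lta(trace, fs) > 4.0)[0]
        print(triggers[:5] / fs, "s")   # trigger times near the 30 s event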

  13. Updating finite element dynamic models using an element-by-element sensitivity methodology

    NASA Technical Reports Server (NTRS)

    Farhat, Charbel; Hemez, Francois M.

    1993-01-01

    A sensitivity-based methodology for improving the finite element model of a given structure using test modal data and a few sensors is presented. The proposed method searches for both the location and the sources of the mass and stiffness errors, and does not interfere with the theory behind the finite element model while correcting these errors. The updating algorithm is derived from the unconstrained minimization of the squared L sub 2 norms of the modal dynamic residuals via an iterative two-step staggered procedure. At each iteration, the measured mode shapes are first expanded assuming that the model is error-free, then the model parameters are corrected assuming that the expanded mode shapes are exact. The numerical algorithm is implemented in an element-by-element fashion and is capable of 'zooming' on the detected error locations. Several simulation examples which demonstrate the potential of the proposed methodology are discussed.

  14. Atmospheric Modeling of Mars Methane Plumes

    NASA Astrophysics Data System (ADS)

    Mischna, Michael A.; Allen, M.; Lee, S.

    2010-10-01

    We present two complementary methods for isolating and modeling surface source releases of methane in the martian atmosphere. Recent observations provide strong evidence that periodic releases of methane occur from discrete surface locations, although the exact locations and mechanisms of release are still unknown. Numerical simulations with the Mars Weather Research and Forecasting (MarsWRF) general circulation model (GCM) have been applied to the ground-based observations of atmospheric methane by Mumma et al. (2009). The MarsWRF simulations reproduce the natural behavior of trace gas plumes in the martian atmosphere and reveal the development of the plume over time. These results provide constraints on the timing and location of the methane plume's release. Additional detections of methane have been accumulated by the Planetary Fourier Spectrometer (PFS) on board Mars Express. For orbital observations, which generally have higher frequency and resolution, an alternative approach to source isolation has been developed. Drawing on the concept of natural selection in biology, we apply an evolutionary computational model to the problem of isolating source locations. Using genetic algorithms that 'reward' best-fit matches between observations and GCM plume simulations (also from MarsWRF) over many generations, we find that we can potentially isolate source locations to within tens of km, which is within the roving range of future Mars rovers. Together, these methods present viable numerical approaches to restricting the timing, duration and size of methane release events, and can be used for other trace gas plumes on Mars as well as elsewhere in the solar system.
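
    The genetic-algorithm idea can be sketched compactly. In the Python toy below, misfit() stands in for the expensive observation-versus-MarsWRF comparison, the candidate parameters (longitude, latitude, release sol) and all GA settings are invented, and the operators are deliberately minimal.

        import numpy as np

        rng = np.random.default_rng(0)

        def misfit(params):
            """Placeholder for the GCM-vs-observation comparison; in practice
            each candidate (lon, lat, release sol) would drive a plume run."""
            target = np.array([137.4, -4.6, 120.0])   # hypothetical truth
            return np.sum((params - target) ** 2)

        # Minimal GA: truncation selection, blend crossover, Gaussian mutation.
        pop = rng.uniform([0, -90, 0], [360, 90, 668], size=(60, 3))
        for generation in range(200):
            fitness = np.array([misfit(p) for p in pop])
            parents = pop[np.argsort(fitness)[:20]]           # keep the best 20
            mates = parents[rng.integers(0, 20, size=(60, 2))]
            pop = mates.mean(axis=1) + rng.normal(0, 1.0, (60, 3))
        print(pop[np.argmin([misfit(p) for p in pop])])       # ~ target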

  15. LLNL Location and Detection Research

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Myers, S C; Harris, D B; Anderson, M L

    2003-07-16

    We present two LLNL research projects in the topical areas of location and detection. The first project assesses epicenter accuracy using a multiple-event location algorithm, and the second employs waveform subspace correlation to detect and identify events at Fennoscandian mines. Accurately located seismic events are the basis of location calibration: a well-characterized set of calibration events enables new Earth model development, empirical calibration, and validation of models. In a recent study, Bondar et al. (2003) developed network coverage criteria for assessing the accuracy of event locations determined using single-event, linearized inversion methods. These criteria are conservative and are meant for application to large bulletins, where the emphasis is on catalog completeness and any given event location may be improved through detailed analysis or the application of advanced algorithms. Relative event location techniques are touted as advancements that may improve absolute location accuracy by (1) ensuring an internally consistent dataset, (2) constraining a subset of events to known locations, and (3) taking advantage of station and event correlation structure. Here we present the preliminary phase of this work, in which we use Nevada Test Site (NTS) nuclear explosions with known locations to test the effect of travel-time model accuracy on relative location accuracy. Like previous studies, we find that reference velocity-model accuracy and relative-location accuracy are highly correlated. We also find that metrics based on the travel-time residuals of relocated events are not reliable for assessing either velocity-model or relative-location accuracy. In the topical area of detection, we develop specialized correlation (subspace) detectors for the principal mines surrounding the ARCES station in the European Arctic. Our objective is to provide efficient screens for explosions occurring in the mines of the Kola Peninsula (Kovdor, Zapolyarny, Olenogorsk, Khibiny) and the major iron mines of northern Sweden (Malmberget, Kiruna). More than 90% of the events detected by the ARCES station are mining explosions, and a significant fraction are from these northern mining groups. The primary challenge in developing waveform correlation detectors is the degree of variation in the source time histories of the shots, which can result in poor correlation among events even in close proximity. Our approach to this problem is to use lagged subspace correlation detectors, which offer some prospect of compensating for variation and uncertainty in the source time functions.

  16. A Wavelet-Based Algorithm for the Spatial Analysis of Poisson Data

    NASA Astrophysics Data System (ADS)

    Freeman, P. E.; Kashyap, V.; Rosner, R.; Lamb, D. Q.

    2002-01-01

    Wavelets are scalable, oscillatory functions that deviate from zero only within a limited spatial regime and have zero average value, and thus may be used to simultaneously characterize the shape, location, and strength of astronomical sources. But in addition to their use as source characterizers, wavelet functions are rapidly gaining currency within the source detection field. Wavelet-based source detection involves correlating scaled wavelet functions with binned, two-dimensional image data. If the chosen wavelet function exhibits the property of vanishing moments, significantly nonzero correlation coefficients will be observed only where there are high-order variations in the data, e.g., in the vicinity of sources. Source pixels are identified by comparing each correlation coefficient with its probability sampling distribution, which is a function of the (estimated or a priori known) background amplitude. In this paper, we describe the mission-independent, wavelet-based source detection algorithm WAVDETECT, part of the freely available Chandra Interactive Analysis of Observations (CIAO) software package. Our algorithm uses the Marr, or "Mexican hat", wavelet function, but may be adapted for use with other wavelet functions. Aspects of our algorithm include: (1) the computation of local, exposure-corrected, normalized (i.e., flat-fielded) background maps; (2) the correction for exposure variations within the field of view (due to, e.g., telescope support ribs or the edge of the field); (3) its applicability within the low-counts regime, as it does not require a minimum number of background counts per pixel for the accurate computation of source detection thresholds; (4) the generation of a source list in a manner that does not depend upon a detailed knowledge of the point spread function (PSF) shape; and (5) error analysis. These features make our algorithm considerably more general than previous methods developed for the analysis of X-ray image data, especially in the low-counts regime. We demonstrate the robustness of WAVDETECT by applying it to an image from an idealized detector with a spatially invariant Gaussian PSF and an exposure map similar to that of the Einstein IPC; to Pleiades Cluster data collected by the ROSAT PSPC; and to a simulated Chandra ACIS-I image of the Lockman Hole region.
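
    The essential detection step, correlating the image with a zero-mean Mexican hat kernel so that smooth background averages to roughly zero while compact sources stand out, can be illustrated in a few lines of Python. This is a generic demonstration on synthetic Poisson data, not WAVDETECT itself, which adds exposure correction, per-pixel significance thresholds from the sampling distribution, and multiple scales.

        import numpy as np
        from scipy.signal import fftconvolve

        def mexican_hat_2d(scale, half_size):
            """2D Marr ('Mexican hat') wavelet: positive core, negative
            annulus, zero mean over the plane."""
            y, x = np.mgrid[-half_size:half_size + 1, -half_size:half_size + 1]
            r2 = (x**2 + y**2) / scale**2
            return (2.0 - r2) * np.exp(-r2 / 2.0)

        rng = np.random.default_rng(0)
        image = rng.poisson(2.0, (256, 256)).astype(float)  # flat background
        image[100:103, 60:63] += 15.0                       # faint compact source
        coeff = fftconvolve(image, mexican_hat_2d(2.0, 8), mode="same")
        print(np.unravel_index(np.argmax(coeff), coeff.shape))  # near (101, 61)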

  17. Accurate identification of microseismic P- and S-phase arrivals using the multi-step AIC algorithm

    NASA Astrophysics Data System (ADS)

    Zhu, Mengbo; Wang, Liguan; Liu, Xiaoming; Zhao, Jiaxuan; Peng, Ping'an

    2018-03-01

    Identification of P- and S-phase arrivals is the primary task in microseismic monitoring. In this study, a new multi-step AIC algorithm is proposed, consisting of P- and S-phase arrival pickers (the P-picker and the S-picker). The P-picker contains three steps: in step 1, a preliminary P-phase arrival window is determined from the waveform peak; a preliminary P-pick is then identified using the AIC algorithm; finally, the P-phase arrival window is narrowed based on that pick, so that the P-phase arrival can be identified accurately by applying the AIC algorithm again. The S-picker contains five steps: in step 1, a narrow S-phase arrival window is determined based on the P-pick and the AIC curve of the amplitude biquadratic time series. In step 2, the S-picker automatically judges whether the S-phase arrival is clear enough to identify. In steps 3 and 4, the AIC extreme points are extracted, and the relationship between the local minima and the S-phase arrival is investigated. In step 5, the S-phase arrival is picked based on a maximum probability criterion. To evaluate the proposed algorithm, a P- and S-pick classification criterion is also established based on a source location numerical simulation. Field data tests show a considerable improvement of the multi-step AIC algorithm over manual picks and the original AIC algorithm. Furthermore, the technique performs consistently across SNR levels: even in the poor-quality signal group with SNRs below 5, the effective picking rates (corresponding location error < 15 m) of the P- and S-phase arrivals are still 80.9% and 76.4%, respectively.
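
    The single-window AIC picker at the heart of each step is short enough to show in full. The Python sketch below implements the standard variance-based formulation on a synthetic trace; the multi-step scheme described above reapplies it inside progressively narrowed windows, which is omitted here.

        import numpy as np

        def aic_pick(trace):
            """Single-window AIC picker: the arrival is the global minimum of
            AIC(k) = k*log(var(x[:k])) + (N-k-1)*log(var(x[k:]))."""
            n = len(trace)
            ks = np.arange(1, n - 1)
            aic = np.array([
                k * np.log(np.var(trace[:k]) + 1e-12)
                + (n - k - 1) * np.log(np.var(trace[k:]) + 1e-12)
                for k in ks
            ])
            return ks[np.argmin(aic)]          # sample index of the pick

        rng = np.random.default_rng(0)
        noise = rng.normal(0, 0.1, 500)
        trace = noise.copy()
        trace[300:] += rng.normal(0, 1.0, 200)  # arrival at sample 300
        print(aic_pick(trace))                  # ~ 300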

  18. Report on GMI Special Study #15: Radio Frequency Interference

    NASA Technical Reports Server (NTRS)

    Draper, David W.

    2015-01-01

    This report contains the results of GMI special study #15. An analysis is conducted to identify sources of radio frequency interference (RFI) affecting the Global Precipitation Measurement (GPM) Microwave Imager (GMI). The RFI impacts the 10 GHz and 18 GHz channels at both polarizations. Sources of RFI are identified for the following conditions: over water (including major inland water bodies) in the earth view, over land in the earth view, and in the cold-sky view. A best effort is made to identify RFI sources in coastal regions, with noted degradation of flagging performance due to the highly variable earth scene there. A database of such sources is developed, including the latitude, longitude, country and city of earth emitters, and the geosynchronous-orbit position of space emitters. A description of the recommended approach for identifying the sources and locations of RFI in the GMI channels is given in this paper. An algorithm to flag RFI-contaminated pixels, which can be incorporated into the GMI Level 1Base/1B algorithms, is defined; Matlab code to perform the necessary flagging is delivered with this distribution.

  19. Lightning Location Using Acoustic Signals

    NASA Astrophysics Data System (ADS)

    Badillo, E.; Arechiga, R. O.; Thomas, R. J.

    2013-05-01

    In the summers of 2011 and 2012, a network of acoustic arrays was deployed in the Magdalena mountains of central New Mexico to locate lightning flashes. A time-correlation (TC), ray-tracing-based technique was developed to locate lightning flashes near the network. The TC technique locates acoustic sources from lightning; it was developed to complement the RF source locations provided by the Lightning Mapping Array (LMA) developed at Langmuir Laboratory, New Mexico Tech. The network consisted of four arrays with four microphones each. The microphones of each array were placed in a triangular configuration with one microphone at the center of the array; the distance between the central microphone and the others was about 30 m, and the distance between array centers ranged from 500 m to 1500 m. The TC technique uses times of arrival (TOA) of acoustic waves to trace back the location of thunder sources. To obtain the times of arrival, the signals were filtered in a frequency band of 2 to 20 Hz and cross-correlated. Once the times of arrival were obtained, the Levenberg-Marquardt algorithm was applied to estimate the spatial coordinates (x, y, and z) of the thunder sources. Two measures were used and contrasted to quantify the accuracy of the TC method: nearest neighbors (NN) between acoustic and LMA-located sources, and the standard deviation from the curvature matrix of the system as a measure of the dispersion of the results. For the best-case scenario, a triggered lightning event, the TC method applied with four microphones located sources with median errors of 152 m and 142.9 m using nearest neighbors and standard deviation, respectively.
    Figure: results of the TC method for the lightning event recorded at 18:47:35 UTC, August 6, 2012, shown as a map of altitude vs. longitude (in km); black dots are the TC-computed sources and light-colored dots are the LMA data for the same event (MGTM station, four channels).
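
    The TOA location step can be reproduced with a generic Levenberg-Marquardt solver. The Python sketch below is an illustration with an invented microphone geometry and a constant sound speed; a field implementation would use a sound-speed profile and ray tracing, as the paper's technique does.

        import numpy as np
        from scipy.optimize import least_squares

        C_SOUND = 343.0  # m/s; real applications use a sound-speed profile

        def locate_thunder(mics, toas, x0):
            """Levenberg-Marquardt fit of a thunder source position from
            acoustic times of arrival. The TOAs are measured relative to an
            arbitrary origin, so the emission time t0 is solved alongside
            (x, y, z); at least five arrivals are needed for the four
            unknowns."""
            def residuals(p):
                pos, t0 = p[:3], p[3]
                pred = t0 + np.linalg.norm(mics - pos, axis=1) / C_SOUND
                return pred - toas
            sol = least_squares(residuals, np.append(x0, 0.0), method="lm")
            return sol.x[:3]

        mics = np.array([[0, 0, 0], [800, 0, 10], [0, 900, 20],
                         [850, 900, 5], [400, 450, 15]], float)
        src = np.array([300.0, 500.0, 4000.0])
        toas = 2.0 + np.linalg.norm(mics - src, axis=1) / C_SOUND
        print(locate_thunder(mics, toas, x0=np.array([0.0, 0.0, 1000.0])))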

  20. Fault Location Based on Synchronized Measurements: A Comprehensive Survey

    PubMed Central

    Al-Mohammed, A. H.; Abido, M. A.

    2014-01-01

    This paper presents a comprehensive survey of transmission and distribution fault location algorithms that utilize synchronized measurements. Algorithms based on two-end synchronized measurements and fault location algorithms for three-terminal and multiterminal lines are reviewed. Series capacitors equipped with metal oxide varistors (MOVs), when installed on a transmission line, create certain problems for line fault locators; therefore, fault location on series-compensated lines is discussed. The paper reports the work carried out on adaptive fault location algorithms aimed at achieving better fault location accuracy. Work on fault location for power system networks, although limited, is also summarized. Additionally, nonstandard high-frequency fault location techniques based on the wavelet transform are discussed. Finally, the paper highlights areas for future research. PMID:24701191

  1. A geo-computational algorithm for exploring the structure of diffusion progression in time and space.

    PubMed

    Chin, Wei-Chien-Benny; Wen, Tzai-Hung; Sabel, Clive E; Wang, I-Hsiang

    2017-10-03

    A diffusion process can be considered as the movement of linked events through space and time; therefore, the space-time locations of events are key to identifying any diffusion process. However, previous clustering analysis methods have focused only on space-time proximity characteristics, neglecting the temporal lag of the movement of events. We argue that the temporal lag between events is key to understanding the process of diffusion movement, and that using the temporal lag can help to clarify the types of close relationships. This study develops a data exploration algorithm, the TrAcking Progression In Time And Space (TaPiTaS) algorithm, for understanding diffusion processes. Based on the spatial distance and temporal interval between cases, TaPiTaS detects sub-clusters (groups of events that have a high probability of sharing common sources), identifies progression links (the relationships between sub-clusters), and tracks progression chains (the connected components of sub-clusters). Dengue fever case data are used as an illustrative case study: the locations and temporal ranges of the sub-clusters are presented, along with the progression links. The TaPiTaS algorithm contributes a more detailed and in-depth understanding of the development of progression chains, namely the geographic diffusion process.

  2. Nonexposure Accurate Location K-Anonymity Algorithm in LBS

    PubMed Central

    2014-01-01

    This paper tackles location privacy protection in current location-based services (LBS), where mobile users have to report their exact location information to an LBS provider in order to obtain the desired services. Location cloaking has been proposed and well studied to protect user privacy: it blurs the user's accurate coordinates and replaces them with a well-shaped cloaked region. However, to obtain such an anonymous spatial region (ASR), nearly all existing cloaking algorithms require knowledge of the accurate locations of all users. Location cloaking that does not expose the user's accurate location to any party is therefore urgently needed. In this paper, we present two such nonexposure accurate location cloaking algorithms. They are designed for K-anonymity, and cloaking is performed based on the identifications (IDs) of the grid areas reported by the users, instead of directly on their accurate coordinates. Experimental results show that our algorithms are more secure than existing cloaking algorithms, do not require all users to report their locations all the time, and can generate smaller ASRs. PMID:24605060

  3. Active Electro-Location of Objects in the Underwater Environment Based on the Mixed Polarization Multiple Signal Classification Algorithm

    PubMed Central

    Guo, Lili; Qi, Junwei; Xue, Wei

    2018-01-01

    This article proposes a novel active localization method based on the mixed polarization multiple signal classification (MP-MUSIC) algorithm for positioning a metal or insulator target in the underwater environment using a uniform circular antenna (UCA). The boundary element method (BEM) is introduced to analyze the boundary of the target through a matrix equation. In this method, an electric dipole source, as part of the locating system, is set perpendicular to the plane of the UCA, so that the UCA receives only the induction field of the target. The potential of each electrode of the UCA is used as spatial-temporal localization data, and there is no need to obtain the field component in each direction, in contrast to conventional field-based localization methods; this makes the approach easy to implement in practical engineering applications. A simulation model and a physical experiment were constructed. The simulation and experiment results show accurate positioning performance, verifying the effectiveness of the proposed localization method for underwater target location. PMID:29439495

  4. Passive coherent location system simulation and evaluation

    NASA Astrophysics Data System (ADS)

    Slezák, Libor; Kvasnička, Michael; Pelant, Martin; Vávra, Jiří; Plšek, Radek

    2006-02-01

    Passive Coherent Location (PCL) is becoming an important and promising approach to the passive location of non-cooperative and stealth targets. It works with sources of illumination of opportunity. PCL is intended to be part of the mobile Air Command and Control System (ACCS) as a Deployable ACCS Component (DAC). The company ERA has been working on a PCL system parameter verification program, built around the development of a complete PCL simulator, since 2003, with financial participation from the Czech DoD. The simulator currently provides moving-target scenarios, RCS calculation by the method of moments, ground-clutter scattering, and the signal processing method (the bottleneck of PCL). The digital signal processing (DSP) algorithms are applied both to simulated data and to real data measured by the NATO C3 Agency in their Haag experiment. The Institute of Information Theory and Automation of the Academy of Sciences of the Czech Republic takes part in the implementation of the DSP algorithms in FPGA. The paper describes the simulator and the signal processing structure, and presents results on both simulated and measured data.

  5. An alternative subspace approach to EEG dipole source localization

    NASA Astrophysics Data System (ADS)

    Xu, Xiao-Liang; Xu, Bobby; He, Bin

    2004-01-01

    In the present study, we investigate a new approach to electroencephalography (EEG) three-dimensional (3D) dipole source localization by using a non-recursive subspace algorithm called FINES. In estimating source dipole locations, the present approach employs projections onto a subspace spanned by a small set of particular vectors (FINES vector set) in the estimated noise-only subspace instead of the entire estimated noise-only subspace in the case of classic MUSIC. The subspace spanned by this vector set is, in the sense of principal angle, closest to the subspace spanned by the array manifold associated with a particular brain region. By incorporating knowledge of the array manifold in identifying FINES vector sets in the estimated noise-only subspace for different brain regions, the present approach is able to estimate sources with enhanced accuracy and spatial resolution, thus enhancing the capability of resolving closely spaced sources and reducing estimation errors. The present computer simulations show, in EEG 3D dipole source localization, that compared to classic MUSIC, FINES has (1) better resolvability of two closely spaced dipolar sources and (2) better estimation accuracy of source locations. In comparison with RAP-MUSIC, FINES' performance is also better for the cases studied when the noise level is high and/or correlations among dipole sources exist.

  6. An Impact-Location Estimation Algorithm for Subsonic Uninhabited Aircraft

    NASA Technical Reports Server (NTRS)

    Bauer, Jeffrey E.; Teets, Edward

    1997-01-01

    An impact-location estimation algorithm is being used at the NASA Dryden Flight Research Center to support range safety for uninhabited aerial vehicle flight tests. The algorithm computes an impact location based on the descent rate, mass, and altitude of the vehicle and current wind information. The predicted impact location is continuously displayed on the range safety officer's moving map display so that the flightpath of the vehicle can be routed to avoid ground assets if the flight must be terminated. The algorithm easily adapts to different vehicle termination techniques and has been shown to be accurate to the extent required to support range safety for subsonic uninhabited aerial vehicles. This paper describes how the algorithm functions, how the algorithm is used at NASA Dryden, and how various termination techniques are handled by the algorithm. Other approaches to predicting the impact location and the reasons why they were not selected for real-time implementation are also discussed.
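
    As a flavor of such a predictor, the Python sketch below computes a first-order impact point from altitude, descent rate, and a mean wind vector. It is a deliberately simplified, hypothetical model: the operational algorithm also accounts for vehicle mass and the selected termination technique.

        import numpy as np

        def impact_point(pos, alt_agl, descent_rate, wind):
            """First-order drift estimate: time to fall from altitude at the
            steady descent rate, displaced by the mean wind.

            pos:          (x, y) current ground-track position, metres
            alt_agl:      altitude above ground level, metres
            descent_rate: post-termination sink rate, m/s
            wind:         (wx, wy) mean wind over the descent, m/s
            """
            t_fall = alt_agl / descent_rate
            return np.asarray(pos, float) + np.asarray(wind, float) * t_fall

        # Continuously recomputed and redrawn on the moving map display:
        print(impact_point(pos=(0.0, 0.0), alt_agl=1500.0,
                           descent_rate=7.5, wind=(4.0, -2.0)))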

  7. Linear feasibility algorithms for treatment planning in interstitial photodynamic therapy

    NASA Astrophysics Data System (ADS)

    Rendon, A.; Beck, J. C.; Lilge, Lothar

    2008-02-01

    Interstitial photodynamic therapy (IPDT) has been under intense investigation in recent years, with multiple clinical trials underway. This effort has demanded the development of optimization strategies that determine the best locations and output powers for light sources (cylindrical or point diffusers) to achieve optimal light delivery. Furthermore, we have recently introduced cylindrical diffusers with customizable emission profiles, placing additional requirements on the optimization algorithms, particularly in terms of the stability of the inverse problem. Here, we present a general class of linear feasibility algorithms and their properties, and we compare two particular instances of this class that have been used in the context of IPDT: the Cimmino algorithm and a weighted gradient descent (WGD) algorithm. The algorithms were compared in terms of their convergence properties, the cost function they minimize in the infeasible case, their ability to regularize the inverse problem, and the resulting optimal light dose distributions. Our results show that the WGD algorithm overall performs slightly better than the Cimmino algorithm and that it converges to a minimizer of a clinically relevant cost function in the infeasible case. Interestingly, however, the treatment plans resulting from either algorithm were very similar in terms of the resulting fluence maps and dose-volume histograms, once the diffuser powers were adjusted to achieve equal prostate coverage.
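
    To make the linear feasibility framing concrete, the Python sketch below runs Cimmino's simultaneous-projection iteration on an invented dose-kernel matrix, seeking diffuser powers that deliver at least a prescribed fluence to every voxel. The non-negativity clamp on the powers is an extra physical constraint added for the example, and all data are synthetic.

        import numpy as np

        def cimmino(A, b, x0, n_iter=2000, relax=1.5):
            """Cimmino's method for the feasibility problem A x >= b: at each
            step, average the projections onto all violated half-spaces and
            apply them simultaneously."""
            x = x0.astype(float)
            row_norm2 = np.sum(A**2, axis=1)
            for _ in range(n_iter):
                violation = b - A @ x                 # > 0 where unmet
                step = np.clip(violation, 0.0, None) / row_norm2
                x = x + relax * (A.T @ step) / len(b)  # averaged projections
                x = np.maximum(x, 0.0)                 # keep powers physical
            return x

        rng = np.random.default_rng(0)
        A = rng.random((40, 6))        # dose kernel: 40 voxels x 6 diffusers
        b = np.full(40, 1.0)           # minimum prescribed dose per voxel
        powers = cimmino(A, b, x0=np.zeros(6))
        print(powers, float(np.max(b - A @ powers)))  # max remaining violation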

  8. A source-channel coding approach to digital image protection and self-recovery.

    PubMed

    Sarreshtedari, Saeed; Akhaee, Mohammad Ali

    2015-07-01

    Watermarking algorithms have recently been widely applied in the field of image forensics. One such forensic application is the protection of images against tampering. For this purpose, we need to design a watermarking algorithm that fulfills two purposes in case of image tampering: 1) detecting the tampered area of the received image and 2) recovering the lost information in the tampered zones. State-of-the-art techniques accomplish these tasks using watermarks consisting of check bits and reference bits: check bits are used for tampering detection, whereas reference bits carry information about the whole image. The problem of recovering lost reference bits still stands. This paper shows that, with the tampering location known, image tampering can be modeled and dealt with as an erasure error, so that an appropriate channel code design can protect the reference bits against tampering. In the proposed method, the total watermark bit budget is divided among three groups: 1) source encoder output bits; 2) channel code parity bits; and 3) check bits. In the watermark embedding phase, the original image is source coded and the output bit stream is protected using an appropriate channel encoder. For image recovery, the erasure locations detected by the check bits help the channel erasure decoder to retrieve the original source-encoded image. Experimental results show that the proposed scheme significantly outperforms recent techniques in terms of image quality for both the watermarked and the recovered image: the watermarked image quality gain is achieved by spending less of the bit budget on the watermark, while the image recovery quality is considerably improved as a consequence of the consistent performance of the designed source and channel codes.

  9. SELFI: an object-based, Bayesian method for faint emission line source detection in MUSE deep field data cubes

    NASA Astrophysics Data System (ADS)

    Meillier, Céline; Chatelain, Florent; Michel, Olivier; Bacon, Roland; Piqueras, Laure; Bacher, Raphael; Ayasso, Hacheme

    2016-04-01

    We present SELFI, the Source Emission Line FInder, a new Bayesian method optimized for the detection of faint galaxies in Multi Unit Spectroscopic Explorer (MUSE) deep fields. MUSE is the new panoramic integral field spectrograph at the Very Large Telescope (VLT) with unique capabilities for spectroscopic investigation of the deep sky. It has provided data cubes with 324 million voxels over a single 1 arcmin^2 field of view. To address the challenge of faint-galaxy detection in these large data cubes, we developed a new method that processes the 3D data for modeling as well as for estimation and extraction of source configurations. This object-based approach yields a natural sparse representation of the sources in massive data fields such as MUSE data cubes. In the Bayesian framework, the parameters that describe the observed sources are treated as random variables, and the Bayesian model leads to a general and robust algorithm in which the parameters are estimated in a fully data-driven way. The detection algorithm was applied to the MUSE observation of the Hubble Deep Field-South. With 27 h of total integration time, these observations provide a catalog of 189 sources of various categories with secure redshifts. The algorithm retrieved 91% of the galaxies with only 9% false detections. The method also allowed the discovery of three new Lyα emitters and one [OII] emitter, all without any Hubble Space Telescope counterpart. We analyzed the reasons for failure on some targets and found that the most important limitation of the method arises when faint sources are located in the vicinity of bright, spatially resolved galaxies that cannot be approximated by a Sérsic elliptical profile. The software and its documentation are available on the MUSE science web service (muse-vlt.eu/science).

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Petiteau, Antoine; Shang Yu; Babak, Stanislav

    Coalescing massive black hole binaries are the strongest and probably the most important gravitational wave sources in the LISA band. The spin and orbital precessions bring complexity to the waveform and make the likelihood surface richer in structure than in the nonspinning case. We introduce an extended multimodal genetic algorithm that utilizes the properties of the signal and the detector response function to analyze the data from the third round of the Mock LISA Data Challenge (MLDC3.2). The performance of this method is comparable, if not superior, to already existing algorithms. We found all five sources present in MLDC3.2 and recovered the coalescence time, chirp mass, mass ratio, and sky location with reasonable accuracy. As for the orbital angular momentum and the two spins of the black holes, we found a large number of widely separated modes in the parameter space with similar maximum likelihood values.

  11. Development of new tsunami detection algorithms for high frequency radars and application to tsunami warning in British Columbia, Canada

    NASA Astrophysics Data System (ADS)

    Grilli, S. T.; Guérin, C. A.; Shelby, M. R.; Grilli, A. R.; Insua, T. L.; Moran, P., Jr.

    2016-12-01

    A high-frequency (HF) radar was installed by Ocean Networks Canada in Tofino, BC, to detect tsunamis from far- and near-field seismic sources, in particular from the Cascadia Subduction Zone. This HF radar can measure ocean surface currents up to a 70-85 km range, depending on atmospheric conditions, based on the Doppler shift those currents cause in ocean waves at the Bragg frequency. In earlier work, we showed that tsunami currents must reach at least 0.15 m/s to be directly detectable by an HF radar once environmental noise and background currents (from tides and mesoscale circulation) are considered. This limits direct tsunami detection to shallow-water areas where currents are sufficiently strong due to wave shoaling and, hence, to the continental shelf; it follows that, in locations with a narrow shelf, warning times based on a direct inversion method will be short. To detect tsunamis in deeper water, beyond the continental shelf, we proposed a new algorithm that does not require directly inverting currents. Instead, it observes changes in the patterns of spatial correlation of the raw radar signal between two radar cells located along the same wave ray, after time is shifted by the tsunami propagation time along the ray; a pattern change indicates the presence of a tsunami. We previously validated this algorithm for idealized tsunami wave trains propagating over a simple seafloor geometry in a direction normally incident to shore. Here, we further develop, extend, and validate the algorithm for realistic case studies of seismic tsunami sources impacting Vancouver Island, BC. Tsunami currents, computed with a state-of-the-art long-wave model, are spatially averaged over cells aligned along individual wave rays within the radar sweep area, obtained by solving the geometric optics equation for waves; for long waves, these rays and the tsunami propagation times along them depend only on the seafloor bathymetry and hence can be precalculated for different incident tsunami directions. A model simulating the radar backscattered signal in space and time as a function of the simulated tsunami currents is applied to the sweep area. Numerical experiments show that the new algorithm can detect a realistic tsunami farther offshore than a direct detection method. Correlation thresholds for tsunami detection will be derived from the results.
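
    The detection statistic itself reduces to a time-shifted sliding correlation between two cells on the same ray. The Python sketch below illustrates that computation with synthetic signals; the lag in samples would come from the precalculated propagation time, and all signal parameters are invented.

        import numpy as np

        def ray_pair_correlation(sig_a, sig_b, lag_samples, win):
            """Windowed correlation between radar cell A and cell B, with B
            shifted by the tsunami propagation time along their shared ray.
            A sustained change relative to the quiet-time baseline flags a
            possible tsunami."""
            corrs = []
            for start in range(0, len(sig_a) - win - lag_samples, win):
                a = sig_a[start:start + win]
                b = sig_b[start + lag_samples:start + lag_samples + win]
                corrs.append(np.corrcoef(a, b)[0, 1])
            return np.array(corrs)

        rng = np.random.default_rng(0)
        common = rng.standard_normal(20_000)          # signal shared along the ray
        lag = 300                                     # propagation delay, samples
        cell_a = common + 0.5 * rng.standard_normal(20_000)
        cell_b = np.roll(common, lag) + 0.5 * rng.standard_normal(20_000)
        r = ray_pair_correlation(cell_a, cell_b, lag, win=1000)
        print(r.mean())   # high in quiet conditions; a tsunami disrupts this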

  12. Revision of an automated microseismic location algorithm for DAS - 3C geophone hybrid array

    NASA Astrophysics Data System (ADS)

    Mizuno, T.; LeCalvez, J.; Raymer, D.

    2017-12-01

    Application of distributed acoustic sensing (DAS) has been studied in several areas in seismology. One of these areas is microseismic reservoir monitoring (e.g., Molteni et al., 2017, First Break). Considering the present limitations of DAS, which include a relatively low signal-to-noise ratio (SNR) and no 3C polarization measurements, a DAS - 3C geophone hybrid array is a practical option when using a single monitoring well. Considering the large volume of data from distributed sensing, microseismic event detection and location using a source-scanning-type algorithm is a reasonable choice, especially for real-time monitoring. The algorithm must handle both strain rate along the borehole axis for DAS and particle velocity for 3C geophones. Only a small number of high-SNR events will be detected throughout a large aperture encompassing the hybrid array; therefore, the aperture is optimized dynamically to eliminate noisy channels for the majority of events. For such a hybrid array, coalescence microseismic mapping (CMM) (Drew et al., 2005, SPE) was revised. CMM forms a likelihood function of event location and origin time. At each receiver, a time function of event-arrival likelihood is inferred from an SNR function and migrated in time and space to determine the hypocenter and origin-time likelihood. This algorithm was revised to dynamically optimize such a hybrid array by identifying receivers where a microseismic signal is likely detected and using only those receivers to compute the likelihood function. Currently, peak SNR is used to select receivers. To prevent false results due to a small aperture, a minimum aperture threshold is employed. The algorithm refines the location likelihood using 3C geophone polarization. We tested this algorithm using a ray-based synthetic dataset. The method of Leaney (2014, PhD thesis, UBC) is used to compute particle velocity at the receivers. Strain rate along the borehole axis is computed from particle velocity as DAS microseismic synthetic data. The likelihood function formed by both DAS and geophones behaves as expected, with the aperture dynamically selected depending on the SNR of the event. We conclude that this algorithm can be successfully applied to such hybrid arrays to monitor microseismic activity. A study using a recently acquired dataset is planned.

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cabrera-Palmer, Belkis

    Predicting the performance of radiation detection systems at field sites based on measured performance acquired under controlled conditions at test locations, e.g., the Nevada National Security Site (NNSS), remains an unsolved, long-standing issue within DNDO’s testing methodology. Detector performance can be defined in terms of the system’s ability to detect and/or identify a given source or set of sources, and depends on the signal generated by the detector for the given measurement configuration (i.e., source strength, distance, time, surrounding materials, etc.) and on the quality of the detection algorithm. Detector performance is usually evaluated in the performance and operational testing phases, where the measurement configurations are selected to represent radiation source and background configurations of interest to security applications.

  14. Kalman/Map filtering-aided fast normalized cross correlation-based Wi-Fi fingerprinting location sensing.

    PubMed

    Sun, Yongliang; Xu, Yubin; Li, Cheng; Ma, Lin

    2013-11-13

    A Kalman/map filtering (KMF)-aided fast normalized cross correlation (FNCC)-based Wi-Fi fingerprinting location sensing system is proposed in this paper. Compared with conventional neighbor selection algorithms that calculate localization results with received signal strength (RSS) mean samples, the proposed FNCC algorithm makes use of all the on-line RSS samples and reference point RSS variations to achieve higher fingerprinting accuracy. The FNCC computes efficiently while maintaining the same accuracy as the basic normalized cross correlation. Additionally, a KMF is also proposed to process fingerprinting localization results. It employs a new map matching algorithm to nonlinearize the linear location prediction process of Kalman filtering (KF) that takes advantage of spatial proximities of consecutive localization results. With a calibration model integrated into an indoor map, the map matching algorithm corrects unreasonable prediction locations of the KF according to the building interior structure. Thus, more accurate prediction locations are obtained. Using these locations, the KMF considerably improves fingerprinting algorithm performance. Experimental results demonstrate that the FNCC algorithm with reduced computational complexity outperforms other neighbor selection algorithms and the KMF effectively improves location sensing accuracy by using indoor map information and spatial proximities of consecutive localization results.
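
    A minimal sketch of the cross-correlation matching step, assuming an on-line RSS matrix (samples x access points) and one stored fingerprint per reference point; the fast recursion and the Kalman/map post-filter are omitted:

```python
import numpy as np

def ncc_match(online_rss, fingerprints):
    """Pick the reference point whose fingerprint best matches all on-line
    RSS samples by normalized cross correlation (NCC in [-1, 1])."""
    o = online_rss.ravel().astype(float)          # use all on-line samples
    o = (o - o.mean()) / o.std()
    scores = []
    for fp in fingerprints:                       # one fingerprint per point
        f = np.tile(fp, online_rss.shape[0]).astype(float)
        f = (f - f.mean()) / f.std()
        scores.append(float(o @ f) / o.size)
    return int(np.argmax(scores)), np.asarray(scores)
```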

  16. West Texas array experiment: Noise and source characterization of short-range infrasound and acoustic signals, along with lab and field evaluation of Intermountain Laboratories infrasound microphones

    NASA Astrophysics Data System (ADS)

    Fisher, Aileen

    The term infrasound describes atmospheric sound waves with frequencies below 20 Hz, while acoustics are classified within the audible range of 20 Hz to 20 kHz. Infrasound and acoustic monitoring in the scientific community is hampered by low signal-to-noise ratios and a limited number of studies on regional and short-range noise and source characterization. The JASON Report (2005) suggests the infrasound community focus on more broad-frequency, observational studies within a tactical distance of 10 km. In keeping with that recommendation, this paper presents a study of regional and short-range atmospheric acoustic and infrasonic noise characterization, at a desert site in West Texas, covering a broad frequency range of 0.2 to 100 Hz. To spatially sample the band, a large number of infrasound gauges was needed. A laboratory instrument analysis is presented of the set of low-cost infrasound sensors used in this study, manufactured by Inter-Mountain Laboratories (IML). The analysis includes spectra, transfer functions and coherences to assess the stability and range of the gauges, and complements additional instrument testing by Sandia National Laboratories. The IMLs documented here have been found reliably coherent from 0.1 to 7 Hz without instrument correction. Corrections were built using corresponding time series from the commercially available and more expensive Chaparral infrasound gauge, so that the corrected IML outputs were able to closely mimic the Chaparral output. Arrays of gauges are needed for atmospheric sound signal processing. Our West Texas experiment consisted of a 1.5 km aperture, 23-gauge infrasound/acoustic array of IMLs, with a compact, 12 m diameter grid array of rented IMLs at the center. To optimize signal recording, the signal-to-noise ratio needs to be quantified with respect to both frequency band and coherence length. The higher-frequency grid array consisted of 25 microphones arranged in a five-by-five pattern with 3 m spacing, without spatial wind-noise-filtering hoses or pipes. The grid was within the distance limits of a single gauge's normal hose array, and the data were used to perform a spatial noise correlation study. The highest correlation values were not found in the lower frequencies as anticipated, owing to a lack of sources in the lower range and the uncorrelated nature of wind noise. The highest values, with cross-correlation averages between 0.4 and 0.7 from 3 to 17 m between gauges, were found at night from 10 to 20 Hz due to a continuous local noise source and low wind. Data from the larger array were used to identify continuous and impulsive signals in the area that comprise the ambient noise field. Ground-truth infrasound and acoustic time and location data were taken for a highway site, a wind farm, and a natural gas compressor. Close-range sound data were taken with a single IML "traveler" gauge. Spectrograms and spectrum peaks were used to identify their source signatures. Two regional location techniques were also tested with data from the large array by using a propane cannon as a controlled, impulsive source. A comparison is presented of the Multiple Signal Classification (MUSIC) algorithm to a simple, quadratic, circular-wavefront algorithm. MUSIC was unable to effectively separate noise and source eigenvalues and eigenvectors due to spatial aliasing of the propane cannon signal and a lack of incoherent noise. Only 33 out of 80 usable shots were located by MUSIC within 100 m. Future work with the algorithm should focus on the location of impulsive and continuous signals, with development of methods for accurate separation of signal and noise eigenvectors in the presence of coherent noise and possible spatial aliasing. The circular-wavefront algorithm performed better with our specific dataset and successfully located 70 out of 80 propane cannon shots within 100 m of the original location, 66 of which were within 20 m. This method has low computation requirements, making it well suited for real-time automated processing and smaller computers. Future research could focus on development of the method for an automated system and statistical impulsive-noise filtering for higher accuracy.
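
    A sketch of the simple circular-wavefront idea: with arrival times t_i at gauges (x_i, y_i) and sound speed c, differencing the squared range equations against a reference gauge gives a linear least-squares problem for the source position and origin time. This is a standard linearization offered for illustration, not the authors' exact code.

```python
import numpy as np

def locate_circular(xy, t, c=343.0):
    """xy: (n_gauges, 2) positions (m); t: arrival times (s); c: sound
    speed (m/s). Solves ||r_i - s|| = c*(t_i - t0) in least squares after
    subtracting the equation for gauge 0."""
    x, y = xy[:, 0], xy[:, 1]
    x0, y0, t0 = x[0], y[0], t[0]
    A = np.column_stack([2 * (x[1:] - x0),
                         2 * (y[1:] - y0),
                         -2 * c**2 * (t[1:] - t0)])
    b = (x[1:]**2 - x0**2) + (y[1:]**2 - y0**2) - c**2 * (t[1:]**2 - t0**2)
    (xs, ys, ts), *_ = np.linalg.lstsq(A, b, rcond=None)
    return xs, ys, ts   # source coordinates and origin time
```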

  17. Event Discrimination Using Seismoacoustic Catalog Probabilities

    NASA Astrophysics Data System (ADS)

    Albert, S.; Arrowsmith, S.; Bowman, D.; Downey, N.; Koch, C.

    2017-12-01

    Presented here are three seismoacoustic catalogs from various years and locations throughout Utah and New Mexico. To create these catalogs, we combine seismic and acoustic events detected and located using different algorithms. Seismoacoustic events are formed based on similarity of origin time and location. Following seismoacoustic fusion, the data are compared against ground-truth events. Each catalog contains events originating from both natural and anthropogenic sources. By creating these seismoacoustic catalogs, we show that the fusion of seismic and acoustic data leads to a better understanding of the nature of individual events. The probability of an event being a surface blast given its presence in each seismoacoustic catalog is quantified. We use these probabilities to discriminate between events from natural and anthropogenic sources. Sandia National Laboratories is a multimission laboratory managed and operated by National Technology and Engineering Solutions of Sandia, LLC., a wholly owned subsidiary of Honeywell International, Inc., for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-NA-0003525.

  18. Fine-tuning satellite-based rainfall estimates

    NASA Astrophysics Data System (ADS)

    Harsa, Hastuadi; Buono, Agus; Hidayat, Rahmat; Achyar, Jaumil; Noviati, Sri; Kurniawan, Roni; Praja, Alfan S.

    2018-05-01

    Rainfall datasets are available from various sources, including satellite estimates and ground observation. Ground observation locations are scattered sparsely. Therefore, the use of satellite estimates is advantageous, because satellites can provide data for places where ground observations are not present. In general, however, satellite estimates contain bias, since they are products of algorithms that transform the sensor response into rainfall values. Another cause may be the number of ground observations used by the algorithms as the reference in determining rainfall values. This paper describes the application of a bias correction method that modifies the satellite-based dataset by adding a number of ground observation locations that had not been used before by the algorithm. The bias correction was performed using a Quantile Mapping procedure between ground observation data and satellite estimates. Since Quantile Mapping requires the mean and standard deviation of both the reference and the being-corrected data, an Inverse Distance Weighting scheme was applied beforehand to the mean and standard deviation of the observation data in order to provide a spatial composition of them, which were originally scattered. It was therefore possible to provide a reference data point at the same location as the satellite estimates. The results show that the new dataset has a statistically better representation of the rainfall values recorded by the ground observations than the previous dataset.
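
    A compact sketch of the two steps described above, under a Gaussian assumption for the quantile mapping: inverse-distance weighting spreads the observed mean and standard deviation to a satellite grid point, and the satellite series is then mapped onto that distribution.

```python
import numpy as np

def idw(xy_obs, values, xy_target, p=2.0):
    """Inverse Distance Weighting of station statistics to a target point."""
    d = np.linalg.norm(xy_obs - xy_target, axis=1)
    w = 1.0 / np.maximum(d, 1e-9) ** p
    return float(np.sum(w * values) / np.sum(w))

def quantile_map_gaussian(sat_series, obs_mean, obs_std):
    """Map satellite values onto the observed distribution via z-scores
    (a normal-distribution simplification of full quantile mapping)."""
    z = (sat_series - sat_series.mean()) / sat_series.std()
    return obs_mean + z * obs_std
```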

  19. An Improved Source-Scanning Algorithm for Locating Earthquake Clusters or Aftershock Sequences

    NASA Astrophysics Data System (ADS)

    Liao, Y.; Kao, H.; Hsu, S.

    2010-12-01

    The Source-Scanning Algorithm (SSA) was originally introduced in 2004 to locate non-volcanic tremors. Its application was later expanded to the identification of earthquake rupture planes and the near-real-time detection and monitoring of landslides and mud/debris flows. In this study, we further improve SSA for the purpose of locating earthquake clusters or aftershock sequences when only a limited number of waveform observations are available. The main improvements include the application of a ground motion analyzer to separate P and S waves, the automatic determination of resolution based on the grid size and time step of the scanning process, and a modified brightness function that utilizes constraints from multiple phases. Specifically, the improved SSA (named ISSA) addresses two major issues related to locating earthquake clusters/aftershocks. The first is the massive amount of time and labour required to locate a large number of seismic events manually. The second is how to efficiently and correctly identify the same phase across the entire recording array when multiple events occur closely in time and space. To test the robustness of ISSA, we generate synthetic waveforms consisting of three separate events such that individual P and S phases arrive at different stations in different orders, making correct phase picking nearly impossible. Using these very complicated waveforms as the input, ISSA scans the entire model space for possible combinations of time and location for the existence of seismic sources. The scanning results successfully associate the various phases from each event at all stations and correctly recover the input. To further demonstrate the advantage of ISSA, we apply it to waveform data collected by a temporary OBS array for the aftershock sequence of an offshore earthquake southwest of Taiwan. The overall signal-to-noise ratio is inadequate for locating small events, and the precise arrival times of P and S phases are difficult to determine. We use one of the largest aftershocks, which can be located by conventional methods, as our reference event to calibrate the controlling parameters of ISSA. These parameters include the overall Vp/Vs ratio (because a precise S velocity model was unavailable), the length of the scanning time window, and the weighting factor for each station. Our results show that ISSA is not only more efficient in locating earthquake clusters/aftershocks but also capable of identifying many events missed by conventional phase-picking methods.
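
    A bare-bones sketch of the scanning step: for every candidate grid node and origin time, stack waveform-envelope amplitudes at the predicted P and S arrival times and keep the brightest combination. Travel-time tables, P/S separation, and station weighting are assumed to exist upstream; this is an illustration of the brightness-stacking idea, not the ISSA code.

```python
import numpy as np

def scan_brightness(env, tt_p, tt_s, origin_times, dt):
    """env: (n_stations, n_samples) waveform envelopes;
    tt_p, tt_s: (n_grid, n_stations) precomputed travel times (s);
    returns a (n_grid, n_origin_times) brightness matrix."""
    n_grid, n_sta = tt_p.shape
    bright = np.zeros((n_grid, len(origin_times)))
    for g in range(n_grid):
        for it, t0 in enumerate(origin_times):
            s = 0.0
            for k in range(n_sta):
                for tt in (tt_p[g, k], tt_s[g, k]):
                    idx = int(round((t0 + tt) / dt))
                    if 0 <= idx < env.shape[1]:
                        s += env[k, idx]   # stack both phases per station
            bright[g, it] = s / (2.0 * n_sta)
    return bright   # argmax over (node, origin time) locates one event
```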

  20. A General Event Location Algorithm with Applications to Eclipse and Station Line-of-Sight

    NASA Technical Reports Server (NTRS)

    Parker, Joel J. K.; Hughes, Steven P.

    2011-01-01

    A general-purpose algorithm for the detection and location of orbital events is developed. The proposed algorithm reduces the problem to a global root-finding problem by mapping events of interest (such as eclipses, station access events, etc.) to continuous, differentiable event functions. A stepping algorithm and a bracketing algorithm are used to detect and locate the roots. Examples of event functions and the stepping/bracketing algorithms are discussed, along with results indicating performance and accuracy in comparison to commercial tools across a variety of trajectories.
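
    A minimal sketch of the stepping-and-bracketing idea, with bisection as the refinement step; event_fn stands for any user-supplied continuous event function of time (e.g. an eclipse shadow function or station elevation minus mask angle).

```python
def locate_events(event_fn, t0, t1, step=60.0, tol=1e-6):
    """Step along the trajectory, bracket sign changes of the event
    function, and refine each bracketed root by bisection."""
    roots = []
    t, f = t0, event_fn(t0)
    while t < t1:
        t_next = min(t + step, t1)
        f_next = event_fn(t_next)
        if f * f_next < 0:                    # sign change: root bracketed
            a, b, fa = t, t_next, f
            while b - a > tol:                # bisection refinement
                m = 0.5 * (a + b)
                fm = event_fn(m)
                if fa * fm <= 0:
                    b = m
                else:
                    a, fa = m, fm
            roots.append(0.5 * (a + b))
        t, f = t_next, f_next
    return roots   # event epochs (e.g. eclipse entry/exit times)
```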

  2. U.S. Army Research Laboratory (ARL) multimodal signatures database

    NASA Astrophysics Data System (ADS)

    Bennett, Kelly

    2008-04-01

    The U.S. Army Research Laboratory (ARL) Multimodal Signatures Database (MMSDB) is a centralized collection of sensor data of various modalities that are co-located and co-registered. The signatures include ground and air vehicles, personnel, mortar, artillery, small arms gunfire from potential sniper weapons, explosives, and many other high value targets. These data are made available to the Department of Defense (DoD) and DoD contractors, intelligence agencies, other government agencies (OGA), and academia for use in developing target detection, tracking, and classification algorithms and systems to protect our Soldiers. A platform-independent Web interface disseminates the signatures to researchers and engineers within the scientific community. Hierarchical Data Format 5 (HDF5) signature models provide an excellent solution for the sharing of complex multimodal signature data for algorithmic development and database requirements. Many open source tools for viewing and plotting HDF5 signatures are available over the Web. Seamless integration of HDF5 signatures is possible in both proprietary computational environments, such as MATLAB, and Free and Open Source Software (FOSS) computational environments, such as Octave and Python, for performing signal processing, analysis, and algorithm development. Future developments include extending the Web interface into a portal system for accessing ARL algorithms and signatures, High Performance Computing (HPC) resources, and integrating existing database and signature architectures into sensor networking environments.
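
    For example, an HDF5 signature file can be inspected and read from Python with a few lines of h5py; the file name and dataset path below are hypothetical, since the actual layout is defined by ARL's signature models.

```python
import h5py
import numpy as np

# "signature.h5" and "/acoustic/channel_0" are placeholder names.
with h5py.File("signature.h5", "r") as f:
    f.visit(print)                                   # list groups/datasets
    dset = f["/acoustic/channel_0"]                  # assumed dataset path
    data = np.asarray(dset)                          # load the time series
    fs = dset.attrs.get("sample_rate", 1.0)          # assumed attribute name

print(data.shape, fs)   # ready for signal processing or plotting
```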

  3. Variability of ICA decomposition may impact EEG signals when used to remove eyeblink artifacts

    PubMed Central

    Pontifex, Matthew B.; Gwizdala, Kathryn L.; Parks, Andrew C.; Billinger, Martin; Brunner, Clemens

    2017-01-01

    Despite the growing use of independent component analysis (ICA) algorithms for isolating and removing eyeblink-related activity from EEG data, we have limited understanding of how variability associated with ICA uncertainty may be influencing the reconstructed EEG signal after removing the eyeblink artifact components. To characterize the magnitude of this ICA uncertainty and to understand the extent to which it may influence findings within ERP and EEG investigations, ICA decompositions of EEG data from 32 college-aged young adults were repeated 30 times for three popular ICA algorithms. Following each decomposition, eyeblink components were identified and removed. The remaining components were back-projected, and the resulting clean EEG data were further used to analyze ERPs. Findings revealed that ICA uncertainty results in variation in P3 amplitude as well as variation across all EEG sampling points, but differs across ICA algorithms as a function of the spatial location of the EEG channel. This investigation highlights the potential of ICA uncertainty to introduce additional sources of variance when the data are back-projected without artifact components. Careful selection of ICA algorithms and parameters can reduce the extent to which ICA uncertainty may introduce an additional source of variance within ERP/EEG studies. PMID:28026876
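
    The decompose-remove-back-project loop can be sketched with scikit-learn's FastICA as below; the study used EEG-specific ICA implementations, and which component is the eyeblink must be identified per decomposition, so index 0 is only a placeholder.

```python
import numpy as np
from sklearn.decomposition import FastICA

def remove_component(eeg, drop_idx, seed):
    """One decomposition-and-back-projection pass: decompose channel-space
    EEG (n_samples x n_channels), zero the artifact component(s), and
    reconstruct. Repeating with different seeds exposes the run-to-run
    variability discussed above."""
    ica = FastICA(n_components=eeg.shape[1], random_state=seed, max_iter=1000)
    sources = ica.fit_transform(eeg)        # estimated components
    sources[:, drop_idx] = 0.0              # remove 'eyeblink' component(s)
    return ica.inverse_transform(sources)   # back-projected clean EEG

rng = np.random.default_rng(0)
eeg = rng.normal(size=(5000, 8))            # toy data standing in for EEG
cleaned = [remove_component(eeg, drop_idx=[0], seed=s) for s in range(30)]
variability = np.std(np.stack(cleaned), axis=0)   # across-decomposition spread
```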

  4. Cluster Based Location-Aided Routing Protocol for Large Scale Mobile Ad Hoc Networks

    NASA Astrophysics Data System (ADS)

    Wang, Yi; Dong, Liang; Liang, Taotao; Yang, Xinyu; Zhang, Deyun

    Routing algorithms with low overhead, stable links and independence of the total number of nodes in the network are essential for the design and operation of large-scale wireless mobile ad hoc networks (MANET). In this paper, we develop and analyze the Cluster Based Location-Aided Routing Protocol for MANET (C-LAR), a scalable and effective routing algorithm for MANET. C-LAR runs on top of an adaptive cluster cover of the MANET, which can be created and maintained using, for instance, a weight-based distributed algorithm. This algorithm takes into consideration the node degree, mobility, relative distance, battery power and link stability of mobile nodes. The hierarchical structure stabilizes the end-to-end communication paths and improves the network's scalability such that the routing overhead does not become tremendous in a large-scale MANET. The clusterheads form a connected virtual backbone in the network, determine the network's topology and stability, and provide an efficient approach to minimizing the flooding traffic during route discovery and speeding up this process as well. Controlling the total number of nodes participating in a route establishment process is also important for improving the network-layer performance of MANET. C-LAR uses geographical location information provided by the Global Positioning System to assist routing. The location information of the destination node is used to predict a smaller rectangle, isosceles triangle, or circle request zone, selected according to the relative locations of the source and the destination, that covers the estimated region in which the destination may be located. Thus, instead of searching for the route in the entire network blindly, C-LAR confines the route-searching space to a much smaller estimated range. Simulation results have shown that C-LAR outperforms other protocols significantly in route setup time, routing overhead, mean delay and packet collision, and simultaneously maintains low average end-to-end delay, high success delivery ratio, low control overhead, as well as low route discovery frequency.

  5. Data Retrieval Algorithms for Validating the Optical Transient Detector and the Lightning Imaging Sensor

    NASA Technical Reports Server (NTRS)

    Koshak, W. J.; Blakeslee, R. J.; Bailey, J. C.

    2000-01-01

    A linear algebraic solution is provided for the problem of retrieving the location and time of occurrence of lightning ground strikes from an Advanced Lightning Direction Finder (ALDF) network. The ALDF network measures field strength, magnetic bearing, and arrival time of lightning radio emissions. Solutions for the plane (i.e., no earth curvature) are provided that implement all of these measurements. The accuracy of the retrieval method is tested using computer-simulated datasets, and the relative influence of bearing and arrival time data on the outcome of the final solution is formally demonstrated. The algorithm is sufficiently accurate to validate NASA's Optical Transient Detector and Lightning Imaging Sensor. A quadratic planar solution that is useful when only three arrival time measurements are available is also introduced. The algebra of the quadratic root results is examined in detail to clarify what portions of the analysis region lead to fundamental ambiguities in source location. Complex root results are shown to be associated with the presence of measurement errors when the lightning source lies near an outer sensor baseline of the ALDF network. For arbitrary noncollinear network geometries and in the absence of measurement errors, it is shown that the two quadratic roots are equivalent (no source location ambiguity) on the outer sensor baselines. The accuracy of the quadratic planar method is tested with computer-generated datasets, and the results are generally better than those obtained from the three-station linear planar method when bearing errors are about 2 deg.

  6. Improving automatic earthquake locations in subduction zones: a case study for GEOFON catalog of Tonga-Fiji region

    NASA Astrophysics Data System (ADS)

    Nooshiri, Nima; Heimann, Sebastian; Saul, Joachim; Tilmann, Frederik; Dahm, Torsten

    2015-04-01

    Automatic earthquake locations are sometimes associated with very large residuals, up to 10 s even for clear arrivals, especially at regional stations in subduction zones because of the strongly heterogeneous velocity structure of such zones. Although these residuals are most likely related not to measurement errors but to unmodelled velocity heterogeneity, these stations are usually removed from, or down-weighted in, the location procedure. While this is possible for large events, it may not be useful if the earthquake is weak. In this case, implementation of travel-time station corrections may significantly improve the automatic locations. Here, the shrinking-box source-specific station term (SSST) method [Lin and Shearer, 2005] has been applied to improve the relative location accuracy of 1678 events that occurred in the Tonga subduction zone between 2010 and mid-2014. Picks were obtained from the GEOFON earthquake bulletin for all available station networks. We calculated a set of timing corrections for each station which vary as a function of source position. A separate time correction was computed for each source-receiver path at the given station by smoothing the residual field over nearby events. We begin with a very large smoothing radius essentially encompassing the whole event set and iterate by progressively shrinking the smoothing radius. In this way, we attempt to correct for the systematic errors that are introduced into the locations by inaccuracies in the assumed velocity structure, without solving for a new velocity model itself. One of the advantages of the SSST technique is that the event-location part of the calculation is separate from the station-term calculation and can be performed using any single-event location method. In this study, we applied a non-linear, probabilistic, global-search earthquake location method using the software package NonLinLoc [Lomax et al., 2000]. The non-linear location algorithm implemented in NonLinLoc is less sensitive to the problem of local misfit minima in the model space. Moreover, the spatial errors estimated by NonLinLoc are much more reliable than those derived by linearized algorithms. According to the obtained results, the root-mean-square (RMS) residual decreased from 1.37 s for the original GEOFON catalog (using a global 1-D velocity model without station-specific corrections) to 0.90 s for our SSST catalog. Our results show a 45-70% reduction of the median absolute deviation (MAD) of the travel-time residuals at regional stations. Additionally, our locations exhibit less scatter in depth and a sharper image of the seismicity associated with the subducting slab compared to the initial locations.
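
    The shrinking-radius smoothing at the heart of the station-term calculation can be sketched for a single station as below; the relocation and residual update between passes are omitted, and the radii are illustrative values.

```python
import numpy as np

def ssst_terms(ev_xyz, residuals, radii=(300.0, 150.0, 75.0, 30.0)):
    """ev_xyz: (n_events, 3) source positions (km); residuals: travel-time
    residuals (s) at one station. Each event's station term is the mean
    residual of events within the current radius; the radius then shrinks
    so the terms become increasingly source-specific."""
    terms = np.zeros(len(residuals))
    for r in radii:
        for i, x in enumerate(ev_xyz):
            near = np.linalg.norm(ev_xyz - x, axis=1) < r
            terms[i] = residuals[near].mean()   # self is always included
        # ...relocate events with 'terms' applied and update residuals...
    return terms
```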

  7. BDW-1

    EPA Pesticide Factsheets

    This data set is associated with the results found in the journal article: Perry et al., 2016. Characterization of pollutant dispersion near elongated buildings based on wind tunnel simulations, Atmospheric Environment, 142, 286-295. The paper presents a wind tunnel study of the effects of elongated rectangular buildings on the dispersion of pollutants from nearby stacks. The study examines the influence of source location, building aspect ratio, and wind direction on pollutant dispersion with the goal of developing improved algorithms within dispersion models. The paper also examines the current AERMOD/PRIME modeling capabilities compared to wind tunnel observations. Differences in the amount of plume material entrained in the wake region downwind of a building for various source locations and source heights are illustrated with vertical and lateral concentration profiles. These profiles were parameterized using the Gaussian equation and show the influence of building/source configurations on those parameters. When the building is oriented at 45° to the approach flow, for example, the effective plume height descends more rapidly than it does for a perpendicular building, enhancing the resulting surface concentrations in the wake region. Buildings at angles to the wind cause a cross-wind shift in the location of the plume resulting from a lateral mean flow established in the building wake. These and other effects are not well represented in many dispersion models.

  8. Accuracy of algorithms to predict accessory pathway location in children with Wolff-Parkinson-White syndrome.

    PubMed

    Wren, Christopher; Vogel, Melanie; Lord, Stephen; Abrams, Dominic; Bourke, John; Rees, Philip; Rosenthal, Eric

    2012-02-01

    The aim of this study was to examine the accuracy in predicting pathway location in children with Wolff-Parkinson-White syndrome for each of seven published algorithms. ECGs from 100 consecutive children with Wolff-Parkinson-White syndrome undergoing electrophysiological study were analysed by six investigators using seven published algorithms, six of which had been developed in adult patients. Accuracy and concordance of predictions were adjusted for the number of pathway locations. Accessory pathways were left-sided in 49, septal in 20 and right-sided in 31 children. Overall accuracy of prediction was 30-49% for the exact location and 61-68% including adjacent locations. Concordance between investigators varied between 41% and 86%. No algorithm was better at predicting septal pathways (accuracy 5-35%, improving to 40-78% including adjacent locations), but one was significantly worse. Predictive accuracy was 24-53% for the exact location of right-sided pathways (50-71% including adjacent locations) and 32-55% for the exact location of left-sided pathways (58-73% including adjacent locations). All algorithms were less accurate in our hands than in other authors' own assessment. None performed well in identifying midseptal or right anteroseptal accessory pathway locations.

  9. Hailstorms over Switzerland: Verification of Crowd-sourced Data

    NASA Astrophysics Data System (ADS)

    Noti, Pascal-Andreas; Martynov, Andrey; Hering, Alessandro; Martius, Olivia

    2016-04-01

    The reports of smartphone users witnessing hailstorms can be used as a source of independent, ground-based observation data on ground-reaching hailstorms with high temporal and spatial resolution. The presented work focuses on the verification of crowd-sourced data collected over Switzerland with the help of a smartphone application recently developed by MeteoSwiss. The precise location and time of hail precipitation and the hailstone size are included in the crowd-sourced data, assessed on the basis of the weather radar data of MeteoSwiss. Two radar-based hail detection algorithms, POH (Probability of Hail) and MESHS (Maximum Expected Severe Hail Size), in use at MeteoSwiss, are confronted with the crowd-sourced data. The available data and the investigation period span June to August 2015. Filter criteria have been applied in order to remove false reports from the crowd-sourced data. Neighborhood methods have been introduced to reduce the uncertainties which result from spatial and temporal biases. The crowd-sourced and radar data are converted into binary sequences according to previously set thresholds, allowing for a categorical verification. Verification scores (e.g. hit rate) are then calculated from a 2x2 contingency table. The hail reporting activity and the patterns corresponding to "hail" and "no hail" reports sent from smartphones have been analyzed. The relationship between the reported hailstone sizes and both radar-based hail detection algorithms has been investigated.
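
    The scoring step reduces to a 2x2 contingency table over the binary radar/report sequences; a sketch with the hit rate and false alarm ratio (the filtering and neighborhood matching happen upstream):

```python
import numpy as np

def verification_scores(radar_hail, crowd_hail):
    """radar_hail, crowd_hail: boolean arrays over matched space-time cells.
    Build the 2x2 contingency table and derive standard categorical scores."""
    hits = np.sum(radar_hail & crowd_hail)
    misses = np.sum(~radar_hail & crowd_hail)
    false_alarms = np.sum(radar_hail & ~crowd_hail)
    correct_negatives = np.sum(~radar_hail & ~crowd_hail)
    pod = hits / (hits + misses)                 # hit rate
    far = false_alarms / (hits + false_alarms)   # false alarm ratio
    return pod, far, (hits, misses, false_alarms, correct_negatives)
```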

  10. Fast and accurate detection of spread source in large complex networks.

    PubMed

    Paluch, Robert; Lu, Xiaoyan; Suchecki, Krzysztof; Szymański, Bolesław K; Hołyst, Janusz A

    2018-02-06

    Spread over complex networks is a ubiquitous process with increasingly wide applications. Locating spread sources is often important, e.g., finding patient zero in an epidemic, or the source of a rumor spreading in a social network. Pinto, Thiran and Vetterli introduced an algorithm (PTVA) to solve the important case of this problem in which a limited set of nodes act as observers and report the times at which the spread reached them. PTVA uses all observers to find a solution. Here we propose a new approach in which observers with low-quality information (i.e., with large spread encounter times) are ignored and potential sources are selected based on the likelihood gradient from high-quality observers. The original complexity of PTVA is O(N^α), where α ∈ (3,4) depends on the network topology and the number of observers (N denotes the number of nodes in the network). Our Gradient Maximum Likelihood Algorithm (GMLA) reduces this complexity to O(N^2 log N). Extensive numerical tests performed on synthetic networks and a real Gnutella network, under the limitation that the identities of spreaders are unknown to observers, demonstrate that for scale-free networks with such a limitation GMLA yields higher-quality localization results than PTVA does.

  11. Self characterization of a coded aperture array for neutron source imaging

    DOE PAGES

    Volegov, P. L.; Danly, C. R.; Fittinghoff, D. N.; ...

    2014-12-15

    The neutron imaging system at the National Ignition Facility (NIF) is an important diagnostic tool for measuring the two-dimensional size and shape of the neutrons produced in the burning DT plasma during the stagnation stage of ICF implosions. Since the neutron source is small (~100 μm) and neutrons are deeply penetrating (>3 cm) in all materials, the apertures used to achieve the desired 10-μm resolution are 20-cm long, triangular tapers machined in gold foils. These gold foils are stacked to form an array of 20 apertures for pinhole imaging and three apertures for penumbral imaging. These apertures must be precisely aligned to accurately place the field of view of each aperture at the design location, or the location of the field of view for each aperture must be measured. In this paper we present a new technique that has been developed for the measurement and characterization of the precise location of each aperture in the array. We present the detailed algorithms used for this characterization and the results of reconstructed sources from inertial confinement fusion implosion experiments at NIF.

  12. Impact source localisation in aerospace composite structures

    NASA Astrophysics Data System (ADS)

    De Simone, Mario Emanuele; Ciampa, Francesco; Boccardi, Salvatore; Meo, Michele

    2017-12-01

    The most commonly encountered type of damage in aircraft composite structures is caused by low-velocity impacts due to foreign objects such as hail stones, tool drops and bird strikes. Often these events can cause severe internal material damage that is difficult to detect and may lead to a significant reduction of the structure’s strength and fatigue life. For this reason there is an urgent need to develop structural health monitoring systems able to localise low-velocity impacts in both metallic and composite components as they occur. This article proposes a novel monitoring system for impact localisation in aluminium and composite structures, which is able to determine the impact location in real time without a priori knowledge of the mechanical properties of the material. This method relies on an optimal configuration of receiving sensors, which allows linearization of well-known nonlinear systems of equations for the estimation of the impact location. The proposed algorithm is based on the time-of-arrival identification of the elastic waves generated by the impact source using the Akaike Information Criterion. The proposed approach was demonstrated successfully on both isotropic and orthotropic materials by using a network of closely spaced surface-bonded piezoelectric transducers. The results obtained show the validity of the proposed algorithm, since the impact sources were detected with a high level of accuracy. The proposed impact detection system overcomes current limitations of other methods and can be retrofitted easily on existing aerospace structures, allowing timely detection of an impact event.
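
    The time-of-arrival step can be sketched with the standard AIC onset picker, which the article uses to time the impact-generated waves; this is the textbook formulation, not the authors' exact implementation.

```python
import numpy as np

def aic_pick(x):
    """Akaike Information Criterion onset picker:
    AIC(k) = k*log(var(x[:k])) + (N-k-1)*log(var(x[k:])),
    minimized at the transition from noise to signal."""
    n = len(x)
    aic = np.full(n, np.inf)
    for k in range(2, n - 2):
        v1, v2 = np.var(x[:k]), np.var(x[k:])
        if v1 > 0 and v2 > 0:
            aic[k] = k * np.log(v1) + (n - k - 1) * np.log(v2)
    return int(np.argmin(aic))   # sample index of the estimated onset
```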

  13. Model-free data analysis for source separation based on Non-Negative Matrix Factorization and k-means clustering (NMFk)

    NASA Astrophysics Data System (ADS)

    Vesselinov, V. V.; Alexandrov, B.

    2014-12-01

    The identification of the physical sources causing spatial and temporal fluctuations of state variables such as river stage levels and aquifer hydraulic heads is challenging. The fluctuations can be caused by variations in natural and anthropogenic sources such as precipitation events, infiltration, groundwater pumping, barometric pressures, etc. The source identification and separation can be crucial for conceptualization of the hydrological conditions and characterization of system properties. If the original signals that cause the observed state-variable transients can be successfully "unmixed", decoupled physics models may then be applied to analyze the propagation of each signal independently. We propose a new model-free inverse analysis of transient data based on the Non-negative Matrix Factorization (NMF) method for Blind Source Separation (BSS) coupled with the k-means clustering algorithm, which we call NMFk. NMFk is capable of identifying a set of unique sources from a set of experimentally measured mixed signals, without any information about the sources, their transients, and the physical mechanisms and properties controlling the signal propagation through the system. A classical BSS conundrum is the so-called "cocktail-party" problem, where several microphones are recording the sounds in a ballroom (music, conversations, noise, etc.). Each of the microphones records a mixture of the sounds. The goal of BSS is to "unmix" and reconstruct the original sounds from the microphone records. Similarly to the "cocktail-party" problem, our model-free analysis only requires information about state-variable transients at a number of observation points, m, where m > r, and r is the number of unknown unique sources causing the observed fluctuations. We apply the analysis to a dataset from the Los Alamos National Laboratory (LANL) site. We identify the sources as barometric pressure and water-supply pumping effects and estimate their impacts. We also estimate the location of the water-supply pumping wells based on the available data. The possible applications of the NMFk algorithm are not limited to hydrology problems; NMFk can be applied to any problem where temporal system behavior is observed at multiple locations and an unknown number of physical sources are causing these fluctuations.
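
    A minimal sketch of the NMF-plus-clustering idea using scikit-learn; the LANL NMFk additionally selects the number of sources from cluster quality measures, which is omitted here.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import NMF

def nmfk(X, r, n_runs=20, seed=0):
    """X: nonnegative (n_locations, n_times) observation matrix; r: assumed
    number of sources. Pool normalized source signatures from many random
    NMF restarts and cluster them with k-means; tight, reproducible
    clusters indicate robust sources."""
    pool = []
    for run in range(n_runs):
        model = NMF(n_components=r, init="random",
                    random_state=seed + run, max_iter=1000)
        model.fit(X)
        H = model.components_                        # (r, n_times) signatures
        pool.append(H / np.linalg.norm(H, axis=1, keepdims=True))
    km = KMeans(n_clusters=r, n_init=10, random_state=seed)
    km.fit(np.vstack(pool))
    return km.cluster_centers_    # representative 'unmixed' source signals
```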

  14. 3-component beamforming analysis of ambient seismic noise field for Love and Rayleigh wave source directions

    NASA Astrophysics Data System (ADS)

    Juretzek, Carina; Hadziioannou, Céline

    2014-05-01

    Our knowledge about the common and different origins of Love and Rayleigh waves observed in the microseism band of the ambient seismic noise field is still limited, including the understanding of source locations and source mechanisms. Multi-component array methods are suitable to address this issue. In this work we use a 3-component beamforming algorithm to obtain source directions and polarization states of the ambient seismic noise field within the primary and secondary microseism bands recorded at the Gräfenberg array in southern Germany. The method makes it possible to distinguish between the differently polarized waves present in the seismic noise field and estimates Love and Rayleigh wave source directions and their seasonal variations using one year of array data. We find mainly coinciding directions for the strongest acting sources of both wave types in the primary microseism band and different source directions in the secondary microseism band.

  15. Adaptive Waveform Correlation Detectors for Arrays: Algorithms for Autonomous Calibration

    DTIC Science & Technology

    2007-09-01

    March 17, 2005. The seismic signals from both master and detected events are followed by infrasound arrivals. Note the long duration of the...correlation coefficient traces with a significant array-gain. A detected event that is co-located with the master event will record the same time-difference...estimating the detection threshold reduction for a range of highly repeating seismic sources using arrays of different configurations and at different

  16. Collaborative Localization and Location Verification in WSNs

    PubMed Central

    Miao, Chunyu; Dai, Guoyong; Ying, Kezhen; Chen, Qingzhang

    2015-01-01

    Localization is one of the most important technologies in wireless sensor networks. A lightweight distributed node localization scheme is proposed by considering the limited computational capacity of WSNs. The proposed scheme introduces the virtual force model to determine the location by incremental refinement. Aiming at solving the drifting problem and the malicious anchor problem, a location verification algorithm based on the virtual force model is presented. In addition, an anchor promotion algorithm using the localization reliability model is proposed to re-locate the drifted nodes. Extended simulation experiments indicate that the localization algorithm has relatively high precision and the location verification algorithm has relatively high accuracy. The communication overhead of these algorithms is relatively low, and the whole set of reliable localization methods is practical as well as comprehensive. PMID:25954948

  17. Power system monitoring and source control of the Space Station Freedom DC power system testbed

    NASA Technical Reports Server (NTRS)

    Kimnach, Greg L.; Baez, Anastacio N.

    1992-01-01

    Unlike a terrestrial electric utility which can purchase power from a neighboring utility, the Space Station Freedom (SSF) has strictly limited energy resources; as a result, source control, system monitoring, system protection, and load management are essential to the safe and efficient operation of the SSF Electric Power System (EPS). These functions are being evaluated in the DC Power Management and Distribution (PMAD) Testbed which NASA LeRC has developed at the Power System Facility (PSF) located in Cleveland, Ohio. The testbed is an ideal platform to develop, integrate, and verify power system monitoring and control algorithms. State Estimation (SE) is a monitoring tool used extensively in terrestrial electric utilities to ensure safe power system operation. It uses redundant system information to calculate the actual state of the EPS, to isolate faulty sensors, to determine source operating points, to verify faults detected by subsidiary controllers, and to identify high impedance faults. Source control and monitoring safeguard the power generation and storage subsystems and ensure that the power system operates within safe limits while satisfying user demands with minimal interruptions. System monitoring functions, in coordination with hardware implemented schemes, provide for a complete fault protection system. The objective of this paper is to overview the development and integration of the state estimator and the source control algorithms.
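
    In its linearized form, the state estimator reduces to a weighted least-squares fit of redundant measurements; a generic sketch (not the testbed implementation), where large normalized residuals would flag faulty sensors:

```python
import numpy as np

def wls_state_estimate(H, z, sigma):
    """Weighted least-squares state estimation: redundant measurements z
    with standard deviations sigma and a (linearized) measurement matrix H
    yield x = (H^T W H)^{-1} H^T W z; the residuals support bad-data
    (faulty sensor) detection."""
    W = np.diag(1.0 / np.asarray(sigma, dtype=float) ** 2)
    G = H.T @ W @ H                       # gain matrix
    x = np.linalg.solve(G, H.T @ W @ z)   # estimated system state
    residuals = z - H @ x                 # inputs to bad-data checks
    return x, residuals
```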

  19. Anatomically constrained dipole adjustment (ANACONDA) for accurate MEG/EEG focal source localizations

    NASA Astrophysics Data System (ADS)

    Im, Chang-Hwan; Jung, Hyun-Kyo; Fujimaki, Norio

    2005-10-01

    This paper proposes an alternative approach to enhance localization accuracy of MEG and EEG focal sources. The proposed approach assumes anatomically constrained spatio-temporal dipoles, initial positions of which are estimated from local peak positions of distributed sources obtained from a pre-execution of distributed source reconstruction. The positions of the dipoles are then adjusted on the cortical surface using a novel updating scheme named cortical surface scanning. The proposed approach has many advantages over the conventional ones: (1) as the cortical surface scanning algorithm uses spatio-temporal dipoles, it is robust with respect to noise; (2) it requires no a priori information on the numbers and initial locations of the activations; (3) as the locations of dipoles are restricted only on a tessellated cortical surface, it is physiologically more plausible than the conventional ECD model. To verify the proposed approach, it was applied to several realistic MEG/EEG simulations and practical experiments. From the several case studies, it is concluded that the anatomically constrained dipole adjustment (ANACONDA) approach will be a very promising technique to enhance accuracy of focal source localization which is essential in many clinical and neurological applications of MEG and EEG.

  20. Enhancing weak transient signals in SEVIRI false color imagery: Application to dust source detection in southern Africa

    NASA Astrophysics Data System (ADS)

    Murray, J. E.; Brindley, H. E.; Bryant, R. G.; Russell, J. E.; Jenkins, K. F.; Washington, R.

    2016-09-01

    A method is described to significantly enhance the signature of dust events using observations from the Spinning Enhanced Visible and InfraRed Imager (SEVIRI). The approach involves the derivation of a composite clear-sky signal for selected channels on an individual time step and pixel basis. These composite signals are subtracted from each observation in the relevant channels to enhance weak transient signals associated with either (a) low levels of dust emission or (b) dust emissions with high salt or low quartz content. Different channel combinations, of the differenced data from the steps above, are then rendered in false color imagery for the purpose of improved identification of dust source locations and activity. We have applied this clear-sky difference (CSD) algorithm over three (globally significant) source regions in southern Africa: the Makgadikgadi Basin, Etosha Pan, and the Namibian and western South African coast. Case study analyses indicate three notable advantages associated with the CSD approach over established image rendering methods: (i) an improved ability to detect dust plumes, (ii) the observation of source activation earlier in the diurnal cycle, and (iii) an improved ability to resolve and pinpoint dust plume source locations.
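
    The compositing-and-differencing core of the CSD approach can be sketched per channel as below, with the median over previous days at the same time slot as an assumed clear-sky estimator.

```python
import numpy as np

def clear_sky_difference(images):
    """images: (n_days, n_slots, ny, nx) brightness temperatures for one
    SEVIRI channel. Build a per-pixel, per-time-slot clear-sky composite
    from the earlier days and subtract it from the latest day so that weak
    dust signatures stand out; differenced channels are then combined into
    false-color imagery."""
    composite = np.nanmedian(images[:-1], axis=0)   # (n_slots, ny, nx)
    return images[-1] - composite                   # enhanced current day
```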

  1. FInd Gas Source

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Travis, Bryan; Sauer, Jeremy; Dubey, Manvendra

    2017-02-24

    FIGS is neural-network software that ingests real-time, synchronized field data on environmental flow fields, turbulence and gas concentration variations at high frequency and uses an error-minimization algorithm to locate the gas source and quantify its strength. The software can be interfaced with atmospheric, oceanic and subsurface instruments on a variety of platforms, stationary or mobile (e.g. cars, UAVs, submersible vehicles or boreholes), and used to find gas sources by smart use of data and phenomenology. FIGS can be trained by a phenomenological model of the flow fields in the environment of interest and/or be calibrated by controlled release. After initial deployment, the FIGS learning will grow with time as it accumulates data on source quantification. FIGS can be installed on any computer, from small BeagleBones for field deployment/end use to PCs/Macs/mainframes for training/analysis. FIGS has been trained (using LANL's high-resolution atmospheric simulations) and calibrated, tested and evaluated in the field, and shown to perform well in finding and quantifying methane leaks at 10-100 m scales at well pads by ingesting atmospheric measurements. The code is applicable to gas and particle source location at large scales.

  2. Continuous wavelet transform and Euler deconvolution method and their application to magnetic field data of Jharia coalfield, India

    NASA Astrophysics Data System (ADS)

    Singh, Arvind; Singh, Upendra Kumar

    2017-02-01

    This paper deals with the application of continuous wavelet transform (CWT) and Euler deconvolution methods to estimate source depth using magnetic anomalies. These methods are utilized mainly to focus on the fundamental issues of mapping the major coal seam and locating tectonic lineaments. The main aim of the study is to locate and characterize the source of the magnetic field by transferring the data into an auxiliary space via the CWT. The method has been tested on several synthetic source anomalies and finally applied to magnetic field data from the Jharia coalfield, India. Using the magnetic field data, the mean depths of the causative sources point to differing lithospheric depths over the study region. It is also inferred that there are two faults, namely the northern boundary fault and the southern boundary fault, oriented in the northeastern and southeastern directions, respectively. Moreover, the central part of the region is more faulted and folded than the other parts and has a sediment thickness of about 2.4 km. The methods give the mean depth of the causative sources without any a priori information, which can be used as an initial model in any inversion algorithm.

  3. Optimization of light source parameters in the photodynamic therapy of heterogeneous prostate

    NASA Astrophysics Data System (ADS)

    Li, Jun; Altschuler, Martin D.; Hahn, Stephen M.; Zhu, Timothy C.

    2008-08-01

    The three-dimensional (3D) heterogeneous distributions of optical properties in a patient prostate can now be measured in vivo. Such data can be used to obtain a more accurate light-fluence kernel. (For specified sources and points, the kernel gives the fluence delivered to a point by a source of unit strength.) In turn, the kernel can be used to solve the inverse problem that determines the source strengths needed to deliver a prescribed photodynamic therapy (PDT) dose (or light-fluence) distribution within the prostate (assuming uniform drug concentration). We have developed and tested computational procedures to use the new heterogeneous data to optimize delivered light-fluence. New problems arise, however, in quickly obtaining an accurate kernel following the insertion of interstitial light sources and data acquisition. (1) The light-fluence kernel must be calculated in 3D and separately for each light source, which increases kernel size. (2) An accurate kernel for light scattering in a heterogeneous medium requires ray tracing and volume partitioning, thus significant calculation time. To address these problems, two different kernels were examined and compared for speed of creation and accuracy of dose. Kernels derived more quickly involve simpler algorithms. Our goal is to achieve optimal dose planning with patient-specific heterogeneous optical data applied through accurate kernels, all within clinical times. The optimization process is restricted to accepting the given (interstitially inserted) sources, and determining the best source strengths with which to obtain a prescribed dose. The Cimmino feasibility algorithm is used for this purpose. The dose distribution and source weights obtained for each kernel are analyzed. In clinical use, optimization will also be performed prior to source insertion to obtain initial source positions, source lengths and source weights, but with the assumption of homogeneous optical properties. For this reason, we compare the results from heterogeneous optical data with those obtained from average homogeneous optical properties. The optimized treatment plans are also compared with the reference clinical plan, defined as the plan with sources of equal strength, distributed regularly in space, which delivers a mean value of prescribed fluence at detector locations within the treatment region. The study suggests that comprehensive optimization of source parameters (i.e. strengths, lengths and locations) is feasible, thus allowing acceptable dose coverage in a heterogeneous prostate PDT within the time constraints of the PDT procedure.
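
    The Cimmino feasibility step itself is compact: simultaneously project the current source-strength vector onto every violated dose constraint and move to the average of the projections. A sketch for linear constraints A x >= b with nonnegative strengths, as a simplification of the planning problem described above:

```python
import numpy as np

def cimmino(A, b, x0, n_iter=500, relax=1.0):
    """Cimmino feasibility iteration for dose constraints A x >= b, where
    x holds source strengths (kept nonnegative). Each pass averages the
    projections onto the violated half-spaces."""
    x = np.asarray(x0, dtype=float).copy()
    row_norm2 = np.sum(A * A, axis=1)
    for _ in range(n_iter):
        resid = b - A @ x                       # positive where violated
        viol = resid > 0
        if not np.any(viol):
            break                               # all constraints satisfied
        steps = (resid[viol] / row_norm2[viol])[:, None] * A[viol]
        x += relax * steps.mean(axis=0)         # average of projections
        x = np.maximum(x, 0.0)                  # nonnegative strengths
    return x
```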

  4. Small hydropower spot prediction using SWAT and a diversion algorithm, case study: Upper Citarum Basin

    NASA Astrophysics Data System (ADS)

    Kardhana, Hadi; Arya, Doni Khaira; Hadihardaja, Iwan K.; Widyaningtyas, Riawan, Edi; Lubis, Atika

    2017-11-01

    Small-scale hydropower (SHP) has been an important electric energy source in Indonesia. Indonesia is a vast country consisting of more than 17,000 islands. It has a large freshwater resource, with about 3 m of rainfall and 2 m of runoff. Much of its topography is mountainous and remote, but abundant in potential energy. Millions of people do not have sufficient access to electricity, and some live in remote places. Recently, SHP development has been encouraged to supply energy to such places. The development of global hydrology data provides an opportunity to predict the distribution of hydropower potential. In this paper, we demonstrate a run-of-river SHP spot prediction tool using SWAT and a river diversion algorithm. The Soil and Water Assessment Tool (SWAT), with input from a 10-year period of CFSR (Climate Forecast System Reanalysis) data, was used to predict the spatially distributed flow cumulative distribution function (CDF). A simple algorithm was applied to maximize the potential head at a location through a river diversion representing the head race and penstock. The firm flow and power of the SHP were estimated from the CDF and the algorithm. The tool was applied to the Upper Citarum River Basin, and three out of four existing hydropower locations were well predicted. The result implies that this tool is able to support the acceleration of SHP development at an early phase.
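
    Once SWAT provides a flow series (and hence a CDF) and the diversion algorithm a head, the firm flow and power follow directly; a sketch with assumed exceedance and efficiency values:

```python
import numpy as np

def firm_power(daily_flow_m3s, head_m, exceedance=0.9, eff=0.8,
               rho=1000.0, g=9.81):
    """Firm flow is the discharge exceeded a given fraction of the time,
    read off the flow CDF; power follows from P = rho*g*Q*H*eff.
    The 90% exceedance level and 0.8 efficiency are illustrative."""
    q_firm = np.quantile(daily_flow_m3s, 1.0 - exceedance)
    p_watts = rho * g * q_firm * head_m * eff
    return q_firm, p_watts
```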

  5. Control algorithms for dynamic windows for residential buildings

    DOE PAGES

    Firlag, Szymon; Yazdanian, Mehrangiz; Curcija, Charlie; ...

    2015-09-30

    This study analyzes the influence of control algorithms for dynamic windows on energy consumption, the number of daylight hours with retracted shades, and shade operations. Five different control algorithms (heating/cooling, simple rules, perfect citizen, heat flow and predictive weather) were developed and compared. The performance of a typical residential building was modeled with EnergyPlus. The program WINDOW was used to generate a Bi-directional Scattering Distribution Function (BSDF) for two window configurations. The BSDF was exported to EnergyPlus using the IDF file format. The EMS feature in EnergyPlus was used to develop custom control algorithms. The calculations were made for four locations with diverse climates. The results showed that: (a) use of automated shading with the proposed control algorithms can reduce site energy by 11.6-13.0%, and source (primary) energy by 20.1-21.6%; (b) the differences between algorithms in energy savings are not large; (c) the differences between algorithms in the number of hours of retracted shades are visible; (d) the control algorithms have a strong influence on shade operation, and oscillation of the shade can occur; and (e) the additional energy consumption caused by the motor, sensors and a small microprocessor is, in the analyzed case, very small. A sketch of one such rule appears below.
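
    A minimal sketch of what a "heating/cooling" style rule might look like (the deployment threshold and exact logic are assumptions, not taken from the paper):

      def shade_position(hvac_mode, incident_solar_wm2, threshold_wm2=150.0):
          # Deploy the shade to block solar gain while the building is in
          # cooling mode; retract it to admit free gains while heating.
          if hvac_mode == "cooling" and incident_solar_wm2 > threshold_wm2:
              return "deployed"
          return "retracted"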

  6. Development of an Automatic Detection Program of Halo CMEs

    NASA Astrophysics Data System (ADS)

    Choi, K.; Park, M. Y.; Kim, J.

    2017-12-01

    Front-side halo CMEs are a major cause of large geomagnetic storms. Halo CMEs can damage satellites, communications, electrical transmission lines and power systems. Automated techniques for detecting and analysing halo CMEs from coronagraph data are therefore of ever-increasing importance for space weather monitoring and forecasting. In this study, we developed an algorithm that can automatically detect and process images of halo CMEs from the LASCO C3 coronagraph on board the SOHO spacecraft. With the detection algorithm, we derived the geometric and kinematic parameters of halo CMEs, such as source location, width, actual CME speed and arrival time at 21.5 solar radii.

  7. Application of genetic algorithms to focal mechanism determination

    NASA Astrophysics Data System (ADS)

    Kobayashi, Reiji; Nakanishi, Ichiro

    1994-04-01

    Genetic algorithms are a new class of methods for global optimization. They resemble Monte Carlo techniques, but search for solutions more efficiently than uniform Monte Carlo sampling. In the field of geophysics, genetic algorithms have recently been used to solve some non-linear inverse problems (e.g., earthquake location, waveform inversion, migration velocity estimation). We present an application of genetic algorithms to focal mechanism determination from first-motion polarities of P-waves and apply our method to two recent large events, the Kushiro-oki earthquake of January 15, 1993 and the SW Hokkaido (Japan Sea) earthquake of July 12, 1993. Unlike gradient methods, our approach requires neither an initial solution nor curvature information about the objective function. Moreover, globally optimal solutions can be obtained efficiently. Calculation of polarities based on double-couple models is the most time-consuming part of the source mechanism determination. The amount of calculation required by the method designed in this study is much less than that of previous grid-search methods.
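
    A minimal sketch of such a search over (strike, dip, rake), assuming a user-supplied fitness function (e.g. the fraction of observed first-motion polarities matched by a trial double-couple mechanism; that radiation-pattern calculation is not implemented here):

      import numpy as np

      rng = np.random.default_rng(0)

      def ga_mechanism(fitness, pop=60, gens=100, sigma=10.0):
          # Angles in degrees: strike [0, 360), dip [0, 90], rake [-180, 180].
          lo = np.array([0.0, 0.0, -180.0]); hi = np.array([360.0, 90.0, 180.0])
          X = rng.uniform(lo, hi, size=(pop, 3))
          for _ in range(gens):
              f = np.array([fitness(x) for x in X])
              best = X[np.argmax(f)].copy()
              # Tournament selection: the better of two random members survives.
              a, b = rng.integers(0, pop, (2, pop))
              parents = np.where((f[a] > f[b])[:, None], X[a], X[b])
              # Uniform crossover between shuffled parent pairs.
              mates = parents[rng.permutation(pop)]
              X = np.where(rng.random((pop, 3)) < 0.5, parents, mates)
              X += rng.normal(0.0, sigma, X.shape)   # Gaussian mutation
              X = np.clip(X, lo, hi)
              X[0] = best                            # elitism: keep best so far
          f = np.array([fitness(x) for x in X])
          return X[np.argmax(f)]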

  8. Neurient: An Algorithm for Automatic Tracing of Confluent Neuronal Images to Determine Alignment

    PubMed Central

    Mitchel, J.A.; Martin, I.S.

    2013-01-01

    A goal of neural tissue engineering is the development and evaluation of materials that guide neuronal growth and alignment. However, the methods available to quantitatively evaluate the response of neurons to guidance materials are limited and/or expensive, and may require manual tracing to be performed by the researcher. We have developed an open source, automated Matlab-based algorithm, building on previously published methods, to trace and quantify alignment of fluorescent images of neurons in culture. The algorithm is divided into three phases, including computation of a lookup table which contains directional information for each image, location of a set of seed points which may lie along neurite centerlines, and tracing neurites starting with each seed point and indexing into the lookup table. This method was used to obtain quantitative alignment data for complex images of densely cultured neurons. Complete automation of tracing allows for unsupervised processing of large numbers of images. Following image processing with our algorithm, available metrics to quantify neurite alignment include angular histograms, percent of neurite segments in a given direction, and mean neurite angle. The alignment information obtained from traced images can be used to compare the response of neurons to a range of conditions. This tracing algorithm is freely available to the scientific community under the name Neurient, and its implementation in Matlab allows a wide range of researchers to use a standardized, open source method to quantitatively evaluate the alignment of dense neuronal cultures. PMID:23384629

  9. Essays on Infrastructure Design and Planning for Clean Energy Systems

    NASA Astrophysics Data System (ADS)

    Kocaman, Ayse Selin

    The International Energy Agency estimates that nearly 1.3 billion people do not have access to electricity, and a billion more have only unreliable and intermittent supply. Moreover, current electricity generation mostly relies on fossil fuels, which are finite and among the greatest threats to the environment. Rising population growth rates, depleting fuel sources, environmental issues and economic developments have increased the need for mathematical optimization to provide a formal framework for systematic and clear decision-making in energy operations. Through its methodologies and algorithms, this thesis provides tools for energy generation, transmission and distribution system design, and helps policy makers make cost assessments in energy infrastructure planning rapidly and accurately. In Chapter 2, we focus on local-level power distribution system planning for rural electrification using techniques from combinatorial optimization. We describe a heuristic algorithm that provides a quick solution for the partial electrification problem, in which the distribution network can connect only a pre-specified number of households with low-voltage lines. The algorithm demonstrates the effect of household settlement patterns on electrification cost. We also describe the first heuristic algorithm that selects the locations and service areas of transformers without requiring candidate solutions and simultaneously builds a two-level grid network in a green-field setting. The algorithms are applied to real-world rural settings in Africa, where household locations digitized from satellite imagery are prescribed. In Chapters 3 and 4, we focus on power generation and transmission using clean energy sources. Here, we imagine a future country in which hydro and solar are the dominant sources and fossil fuels are available only in minimal form. We discuss the problem of modeling hydro and solar energy production and allocation, including long-term investments and storage, capturing the stochastic nature of hourly supply and demand data. We mathematically model two hybrid energy generation and allocation systems in which the time variability of energy sources and demand is balanced using the water stored in reservoirs. In Chapter 3, we use conventional hydropower stations (incoming stream flows are stored in large dams and water release is deferred until it is needed), and in Chapter 4, we use pumped hydro stations (water is pumped from a lower reservoir to an upper reservoir during periods of low demand, to be released for generation when demand is high). The aim of the models is to determine the optimal sizing of the infrastructure needed to match demand and supply in the most reliable and cost-effective way. An innovative contribution of this work is a new perspective on energy modeling that includes fine-grained sources of uncertainty, such as stream flow and solar radiation at an hourly level, as well as the spatial location of supply and demand and the transmission network at a national level. In addition, we compare the conventional and pumped hydropower systems in terms of reliability and cost efficiency, and quantitatively show the improvement provided by including pumped hydro storage. The models are presented with a case study of India and help to answer whether solar energy, in addition to the hydropower potential of the Himalayas, would be enough to meet growing electricity demand if fossil fuels were almost completely phased out of electricity generation.

  10. Martian methane plume models for defining Mars rover methane source search strategies

    NASA Astrophysics Data System (ADS)

    Nicol, Christopher; Ellery, Alex; Lynch, Brian; Cloutis, Ed

    2018-07-01

    The detection of atmospheric methane on Mars implies an active methane source. This introduces the possibility of a biotic source with the implied need to determine whether the methane is indeed biotic in nature or geologically generated. There is a clear need for robotic algorithms which are capable of manoeuvring a rover through a methane plume on Mars to locate its source. We explore aspects of Mars methane plume modelling to reveal complex dynamics characterized by advection and diffusion. A statistical analysis of the plume model has been performed and compared to analyses of terrestrial plume models. Finally, we consider a robotic search strategy to find a methane plume source. We find that gradient-based techniques are ineffective, but that more sophisticated model-based search strategies are unlikely to be available in near-term rover missions.

  11. BLAYER User Guide

    NASA Technical Reports Server (NTRS)

    Saunders, David A.; Prabhu, Dinesh K.

    2018-01-01

    A software utility employed for post-processing computational fluid dynamics solutions about atmospheric entry vehicles is described as a supplement to the documentation within the source code. This BLAYER application and its ancillary utilities are in the public domain at https://sourceforge.net/projects/cfdutilities/. BLAYER was developed at NASA Ames Research Center in support of the DPLR (Data Parallel Line Relaxation) flow solver. Its underlying algorithm has since been incorporated by others into the LAURA and US3D flow solvers at NASA Langley Research Center and the University of Minnesota, respectively. The essence of the algorithm is to locate the boundary-layer edge by seeking the peak curvature in a total-enthalpy profile. Turning that insight into a practical tool suited to a wide range of possible profiles has led to a hybrid two-stage method. The traditional method, locating (say) 99.5% of the free-stream total enthalpy, remains an option, though it may be less robust. Details are provided and multiple examples are presented.
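
    A minimal sketch of the curvature-peak idea on a single profile (BLAYER's actual hybrid two-stage method is more robust than this):

      import numpy as np

      def boundary_layer_edge(y, h_total):
          # Edge height: where the total-enthalpy profile h(y) has maximum
          # curvature, kappa = |h''| / (1 + h'^2)^(3/2).
          dh = np.gradient(h_total, y)
          d2h = np.gradient(dh, y)
          kappa = np.abs(d2h) / (1.0 + dh ** 2) ** 1.5
          return y[np.argmax(kappa)]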

  12. Seismo-volcano source localization with triaxial broad-band seismic array

    NASA Astrophysics Data System (ADS)

    Inza, L. A.; Mars, J. I.; Métaxian, J. P.; O'Brien, G. S.; Macedo, O.

    2011-10-01

    Seismo-volcano source localization is essential to improve our understanding of eruptive dynamics and of magmatic systems. The lack of clear seismic wave phases prohibits the use of classical location methods. Seismic antennas composed of one-component (1C) seismometers provide a good estimate of the backazimuth of the wavefield. The depth, on the other hand, is difficult or impossible to determine. As in classical seismology, the use of three-component (3C) seismometers is now common in volcano studies. To determine the source location parameters (backazimuth and depth), we extend the 1C seismic antenna approach to 3Cs. This paper discusses a high-resolution location method using a 3C array survey (3C-MUSIC algorithm) with data from two seismic antennas installed on an andesitic volcano in Peru (Ubinas volcano). One of the main scientific questions related to the eruptive process of Ubinas volcano is the relationship between the magmatic explosions and long-period (LP) swarms. After introducing the 3C array theory, we evaluate the robustness of the location method on a full wavefield 3-D synthetic data set generated using a digital elevation model of Ubinas volcano and a homogeneous velocity model. Results show that the backazimuth determined using the 3C array has a smaller error than that from a 1C array. Only the 3C method allows the recovery of the source depths. Finally, we applied the 3C approach to two seismic events recorded in 2009. Crossing the estimated backazimuth and incidence angles, we find sources located 1000 ± 660 m and 3000 ± 730 m below the bottom of the active crater for the explosion and the LP event, respectively. Therefore, extending 1C arrays to 3C arrays in volcano monitoring allows a more accurate determination of the source epicentre and now also an estimate of the depth.

  13. Leak localization and quantification with a small unmanned aerial system

    NASA Astrophysics Data System (ADS)

    Golston, L.; Zondlo, M. A.; Frish, M. B.; Aubut, N. F.; Yang, S.; Talbot, R. W.

    2017-12-01

    Methane emissions from oil and gas facilities are a recognized source of greenhouse gas emissions, requiring cost-effective and reliable monitoring systems to support leak detection and repair programs. We describe a set of methods for locating and quantifying natural gas leaks using a small unmanned aerial system (sUAS) equipped with a path-integrated methane sensor along with ground-based wind measurements. The algorithms are developed as part of a system for continuous well pad scale (100 m2 area) monitoring, supported by a series of over 200 methane release trials covering multiple release locations and flow rates. Test measurements include data obtained on a rotating boom platform as well as flight tests on a sUAS. The system is found throughout the trials to reliably distinguish between cases with and without a methane release down to 6 scfh (0.032 g/s). Among several methods evaluated for horizontal localization, the location corresponding to the maximum integrated methane reading has performed best, with a median error of ± 1 m if two or more flights are averaged, or ± 1.2 m for individual flights. Additionally, a method of rotating the data around the estimated leak location is developed, with the leak magnitude calculated as the average crosswind integrated flux in the region near the source location. Validation of these methods will be presented, including blind test results. Sources of error, including GPS uncertainty, meteorological variables, and flight pattern coverage, will be discussed.

  14. Acoustic 3D modeling by the method of integral equations

    NASA Astrophysics Data System (ADS)

    Malovichko, M.; Khokhlov, N.; Yavich, N.; Zhdanov, M.

    2018-02-01

    This paper presents a parallel algorithm for frequency-domain acoustic modeling by the method of integral equations (IE). The algorithm is applied to seismic simulation. The IE method reduces the size of the problem but leads to a dense system matrix. A tolerable memory consumption and numerical complexity were achieved by applying an iterative solver, accompanied by an effective matrix-vector multiplication operation based on the fast Fourier transform (FFT). We demonstrate that the IE system matrix is better conditioned than that of the finite-difference (FD) method, and discuss its relation to a specially preconditioned FD matrix. We considered several methods of matrix-vector multiplication for the free-space and layered host models. The developed algorithm and computer code were benchmarked against the FD time-domain solution. It was demonstrated that the method can accurately calculate the seismic field for models with sharp material boundaries and a point source and receiver located close to the free surface. We used OpenMP to speed up the matrix-vector multiplication, while MPI was used to speed up the solution of the system of equations, and also for parallelizing across multiple sources. Practical examples and efficiency tests are presented as well.
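
    The fast matrix-vector product that makes the iterative IE solver tractable exploits the translation-invariant (block-Toeplitz) structure of the free-space kernel. A minimal 1-D sketch of the idea, via circulant embedding and the FFT (the 3-D layered-host case treated in the paper is considerably more involved):

      import numpy as np

      def toeplitz_matvec_fft(t, x):
          # y = T @ x for a symmetric Toeplitz matrix T with first column t,
          # in O(n log n): embed T in a circulant matrix of size 2n-1, whose
          # action is diagonalized by the FFT.
          n = x.size
          c = np.concatenate([t, t[:0:-1]])        # circulant first column
          y = np.fft.ifft(np.fft.fft(c) * np.fft.fft(x, c.size))
          return y[:n]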

  15. Spectral spatiotemporal imaging of cortical oscillations and interactions in the human brain

    PubMed Central

    Lin, Fa-Hsuan; Witzel, Thomas; Hämäläinen, Matti S.; Dale, Anders M.; Belliveau, John W.; Stufflebeam, Steven M.

    2010-01-01

    This paper presents a computationally efficient source estimation algorithm that localizes cortical oscillations and their phase relationships. The proposed method employs wavelet-transformed magnetoencephalography (MEG) data and uses anatomical MRI to constrain the current locations to the cortical mantle. In addition, the locations of the sources can be further confined with the help of functional MRI (fMRI) data. As a result, we obtain spatiotemporal maps of spectral power and phase relationships. As an example, we show how the phase locking value (PLV), that is, the trial-by-trial phase relationship between the stimulus and response, can be imaged on the cortex. We apply the method to spontaneous, evoked, and driven cortical oscillations measured with MEG. We test the method of combining MEG, structural MRI, and fMRI using simulated cortical oscillations along Heschl’s gyrus (HG). We also analyze sustained auditory gamma-band neuromagnetic fields from MEG and fMRI measurements. Our results show that combining the MEG recording with fMRI improves source localization for the non-noise-normalized wavelet power. In contrast, noise-normalized spectral power or PLV localization may not benefit from the fMRI constraint. We show that if the thresholds are not properly chosen, noise-normalized spectral power or PLV estimates may contain false (phantom) sources, independent of the inclusion of the fMRI prior information. The proposed algorithm can be used for evoked MEG/EEG and block-designed or event-related fMRI paradigms, or for spontaneous MEG data sets. Spectral spatiotemporal imaging of cortical oscillations and interactions in the human brain can provide further understanding of large-scale neural activity and communication between different brain regions. PMID:15488408
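
    The PLV itself is compact to state. A minimal sketch (array shapes are assumptions; in the paper the phases come from wavelet-transformed MEG source estimates):

      import numpy as np

      def phase_locking_value(phase_a, phase_b):
          # phase_a, phase_b: (n_trials, n_times) instantaneous phases,
          # e.g. angles of wavelet coefficients. PLV is the length of the
          # mean unit phasor of the phase difference across trials:
          # 1 = perfect trial-by-trial locking, ~0 = random phase.
          return np.abs(np.mean(np.exp(1j * (phase_a - phase_b)), axis=0))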

  16. Multi-channel photon counting DOT system based on digital lock-in detection technique

    NASA Astrophysics Data System (ADS)

    Wang, Tingting; Zhao, Huijuan; Wang, Zhichao; Hou, Shaohua; Gao, Feng

    2011-02-01

    Relying on the deep penetration of light in tissue, Diffuse Optical Tomography (DOT) achieves organ-level tomographic diagnosis, providing information on anatomical and physiological features. DOT has been widely used in imaging of the breast, neonatal cerebral oxygen status and blood oxygen kinetics, owing to its non-invasiveness, safety and other advantages. Continuous-wave DOT image reconstruction algorithms need measurements of the surface distribution of the output photon flow excited by more than one driving source, which means that source coding is necessary. The source coding most commonly used in DOT is time-division multiplexing (TDM), which utilizes an optical switch to direct light into optical fibers at different locations. However, when there are many source locations or multiple wavelengths are used, the measurement time with TDM, and the measurement interval between different locations within the same measurement period, become too long to capture dynamic changes in real time. In this paper, a frequency-division multiplexing source coding technology is developed, in which light sources modulated by sine waves of different frequencies illuminate the imaging chamber simultaneously. The signal corresponding to an individual source is obtained from the mixed output light using digital lock-in detection at the detection end. A digital lock-in detection circuit for the photon counting measurement system is implemented on an FPGA development platform. A dual-channel DOT photon counting experimental system is preliminarily established, including two continuous lasers, photon counting detectors, the digital lock-in detection control circuit, and code to control the hardware and display the results. A series of experimental measurements validates the feasibility of the system. The method developed in this paper greatly accelerates DOT system measurement and can also obtain multiple measurements at different source-detector locations.
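
    A minimal sketch of the digital lock-in demodulation step (sampling rate and averaging window are assumptions; the paper implements this on an FPGA against photon-count data):

      import numpy as np

      def lockin_amplitude(counts, fs, f_ref):
          # Mix the photon-count stream with quadrature references at one
          # source's modulation frequency, then low-pass by averaging; the
          # other sources' frequencies average toward zero, separating the
          # channels of the frequency-division multiplex.
          t = np.arange(counts.size) / fs
          i = np.mean(counts * np.cos(2 * np.pi * f_ref * t))   # in-phase
          q = np.mean(counts * np.sin(2 * np.pi * f_ref * t))   # quadrature
          return 2.0 * np.hypot(i, q)   # amplitude of that source's component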

  17. Comparison of the accuracy of three algorithms in predicting accessory pathways among adult Wolff-Parkinson-White syndrome patients.

    PubMed

    Maden, Orhan; Balci, Kevser Gülcihan; Selcuk, Mehmet Timur; Balci, Mustafa Mücahit; Açar, Burak; Unal, Sefa; Kara, Meryem; Selcuk, Hatice

    2015-12-01

    The aim of this study was to investigate the accuracy of three algorithms in predicting accessory pathway locations in adult patients with Wolff-Parkinson-White syndrome in a Turkish population. A total of 207 adult patients with Wolff-Parkinson-White syndrome were retrospectively analyzed. The most preexcited 12-lead electrocardiogram in sinus rhythm was used for analysis. Two investigators blinded to the patient data used three algorithms to predict accessory pathway location. Among all locations, 48.5% were left-sided, 44% were right-sided, and 7.5% were located in the midseptum or anteroseptum. When only exact locations were accepted as a match, the predictive accuracy was 71.5% for Chiang, 72.4% for d'Avila, and 71.5% for Arruda. The predictive accuracies did not differ between the algorithms (p = 1.000; p = 0.875; p = 0.885, respectively). The best algorithm for predicting right-sided, left-sided, and anteroseptal and midseptal accessory pathways was Arruda (p < 0.001). Arruda was significantly better than d'Avila in predicting adjacent sites (p = 0.035), and the percentage of contralateral-site predictions was higher with d'Avila than with Arruda (p = 0.013). All algorithms were similar in predicting accessory pathway location, and the predictive accuracy was lower than previously reported by their authors. However, according to accessory pathway site, the algorithm designed by Arruda et al. gave better predictions than the other algorithms, and using it may provide advantages before a planned ablation.

  18. Clustering and classification of infrasonic events at Mount Etna using pattern recognition techniques

    NASA Astrophysics Data System (ADS)

    Cannata, A.; Montalto, P.; Aliotta, M.; Cassisi, C.; Pulvirenti, A.; Privitera, E.; Patanè, D.

    2011-04-01

    Active volcanoes generate sonic and infrasonic signals, whose investigation provides useful information for both monitoring purposes and the study of the dynamics of explosive phenomena. At Mt. Etna volcano (Italy), a pattern recognition system based on infrasonic waveform features has been developed. First, by a parametric power spectrum method, the features describing and characterizing the infrasound events were extracted: peak frequency and quality factor. Together with the peak-to-peak amplitude, these features constituted a 3-D ‘feature space’, within which three clusters were recognized by the Density-Based Spatial Clustering of Applications with Noise (DBSCAN) algorithm. After the clustering process, using a common location method (the semblance method) and additional volcanological information concerning the intensity of the explosive activity, we were able to associate each cluster with a particular source vent and/or kind of volcanic activity. Finally, for automatic event location, the clusters were used to train a model based on Support Vector Machines, calculating optimal hyperplanes that maximize the margins of separation among the clusters. After the training phase, this system automatically recognizes the active vent with no location algorithm and using only a single station.
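
    A minimal sketch of the two-stage scheme with scikit-learn (the parameter values are placeholders, not the paper's settings):

      import numpy as np
      from sklearn.cluster import DBSCAN
      from sklearn.svm import SVC

      def classify_infrasound(features, new_events, eps=0.5, min_samples=10):
          # features: (n_events, 3) array of peak frequency, quality factor
          # and peak-to-peak amplitude. Cluster, drop DBSCAN noise (-1),
          # then train an SVM on the cluster labels to classify new events.
          labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(features)
          keep = labels != -1
          svm = SVC(kernel="rbf").fit(features[keep], labels[keep])
          return svm.predict(new_events)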

  19. New methods for interpretation of magnetic vector and gradient tensor data I: eigenvector analysis and the normalised source strength

    NASA Astrophysics Data System (ADS)

    Clark, David A.

    2012-09-01

    Acquisition of magnetic gradient tensor data is likely to become routine in the near future. New methods for inverting gradient tensor surveys to obtain source parameters have been developed for several elementary, but useful, models. These include point dipole (sphere), vertical line of dipoles (narrow vertical pipe), line of dipoles (horizontal cylinder), thin dipping sheet, and contact models. A key simplification is the use of eigenvalues and associated eigenvectors of the tensor. The normalised source strength (NSS), calculated from the eigenvalues, is a particularly useful rotational invariant that peaks directly over 3D compact sources, 2D compact sources, thin sheets and contacts, and is independent of magnetisation direction. In combination the NSS and its vector gradient determine source locations uniquely. NSS analysis can be extended to other useful models, such as vertical pipes, by calculating eigenvalues of the vertical derivative of the gradient tensor. Inversion based on the vector gradient of the NSS over the Tallawang magnetite deposit obtained good agreement between the inferred geometry of the tabular magnetite skarn body and drill hole intersections. Besides the geological applications, the algorithms for the dipole model are readily applicable to the detection, location and characterisation (DLC) of magnetic objects, such as naval mines, unexploded ordnance, shipwrecks, archaeological artefacts, and buried drums.
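
    A minimal sketch of the NSS computation. The formula below, NSS = sqrt(-l2^2 - l1*l3) for ordered eigenvalues l1 >= l2 >= l3 of the traceless gradient tensor, is one commonly quoted form and is an assumption here, not copied from the paper:

      import numpy as np

      def normalised_source_strength(G):
          # G: 3x3 symmetric (traceless) magnetic gradient tensor.
          # eigvalsh returns ascending eigenvalues; relabel so l1 >= l2 >= l3.
          l3, l2, l1 = np.linalg.eigvalsh(G)
          return np.sqrt(max(-l2 * l2 - l1 * l3, 0.0))

    Because the NSS is a rotational invariant that is independent of magnetisation direction, a map of this quantity peaks directly over compact sources, which is what makes it useful for the detection, location and characterisation applications mentioned above.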

  20. Applicability of the single equivalent point dipole model to represent a spatially distributed bio-electrical source

    NASA Technical Reports Server (NTRS)

    Armoundas, A. A.; Feldman, A. B.; Sherman, D. A.; Cohen, R. J.

    2001-01-01

    Although the single equivalent point dipole model has been used to represent well-localised bio-electrical sources, in realistic situations the source is distributed. Consequently, position estimates of point dipoles determined by inverse algorithms suffer from systematic error due to the non-exact applicability of the inverse model. In realistic situations, this systematic error cannot be avoided, a limitation that is independent of the complexity of the torso model used. This study quantitatively investigates the intrinsic limitations in the assignment of a location to the equivalent dipole arising from the distributed nature of the electrical source. To simulate arrhythmic activity in the heart, a model of a wave of depolarisation spreading from a focal source over the surface of a spherical shell is used. The activity is represented by a sequence of concentric belt sources (obtained by slicing the shell with a sequence of parallel plane pairs), with constant dipole moment per unit length (circumferentially) directed parallel to the propagation direction. The distributed source is represented by N dipoles at equal arc lengths along the belt. The sum of the dipole potentials is calculated at predefined electrode locations. The inverse problem involves finding a single equivalent point dipole that best reproduces the electrode potentials due to the distributed source. The inverse problem is implemented by minimising the chi-squared per degree of freedom. It is found that the trajectory traced by the equivalent dipole is sensitive to the location of the spherical shell relative to the fixed electrodes. It is shown that this trajectory does not coincide with the sequence of geometrical centres of the consecutive belt sources. For distributed sources within a bounded spherical medium, displaced from the sphere's centre by 40% of the sphere's radius, it is found that the error in the equivalent dipole location varies from 3 to 20% for sources with size between 5 and 50% of the sphere's radius. Finally, a method is devised to obtain the size of the distributed source during the cardiac cycle.

  1. The Optimal Location of GEODSS Sensors in Canada

    DTIC Science & Technology

    1991-02-01

    Interactive procedures for solving multiobjective transportation problems. A transportation problem is a classical linear programming problem where a... product must be transported from each of m sources to any of n destinations such that one or more objectives are optimized (36:96). The first algorithm... k = 1, ..., L, where z_f is the fth element of z_k. The function z'(x) can now be optimized using any efficient, single-objective transportation...

  2. Mining Co-Location Patterns with Clustering Items from Spatial Data Sets

    NASA Astrophysics Data System (ADS)

    Zhou, G.; Li, Q.; Deng, G.; Yue, T.; Zhou, X.

    2018-05-01

    The explosive growth of spatial data and the widespread use of spatial databases emphasize the need for spatial data mining. Co-location pattern discovery is an important branch of spatial data mining. Spatial co-locations represent subsets of features that are frequently located together in geographic space. However, the appearance of a spatial feature C is often determined not by a single spatial feature A or B but by the two features together; that is to say, where A and B appear together, C often appears. We note that this co-location pattern differs from the traditional co-location pattern. Thus, this paper presents a new concept called clustering items, and the resulting pattern is called a co-location pattern with clustering items. Since traditional algorithms cannot mine this co-location pattern, we introduce the related concepts in detail and propose a novel algorithm, which extends the join-based approach proposed by Huang. Finally, we evaluate the performance of this algorithm.

  3. Acoustic emission localization based on FBG sensing network and SVR algorithm

    NASA Astrophysics Data System (ADS)

    Sai, Yaozhang; Zhao, Xiuxia; Hou, Dianli; Jiang, Mingshun

    2017-03-01

    In practical applications, carbon fiber reinforced plastic (CFRP) structures are prone to all sorts of invisible damage, so such damage should be located and detected promptly for the safety of CFRP structures. In this paper, an acoustic emission (AE) localization system based on a fiber Bragg grating (FBG) sensing network and support vector regression (SVR) is proposed for damage localization. AE signals caused by damage are acquired by high-speed FBG interrogation. Using the Shannon wavelet transform, time differences between AE signals are extracted for the SVR-based localization algorithm. With the SVR model, the coordinates of an AE source can be accurately predicted without knowing the wave velocity. The FBG system and localization algorithm were verified on a 500 mm×500 mm×2 mm CFRP plate. The experimental results show that the average error of the localization system is 2.8 mm and the training time is 0.07 s.

  4. Positioning performance analysis of the time sum of arrival algorithm with error features

    NASA Astrophysics Data System (ADS)

    Gong, Feng-xun; Ma, Yan-qiu

    2018-03-01

    The theoretical positioning accuracy of multilateration (MLAT) with the time difference of arrival (TDOA) algorithm is very high. However, there are some problems in practical applications. Here we analyze the location performance of the time sum of arrival (TSOA) algorithm in terms of the root mean square error (RMSE) and geometric dilution of precision (GDOP) in an additive white Gaussian noise (AWGN) environment. The TSOA localization model is constructed, and the distribution of the location ambiguity region is presented for four base stations. The location performance is then analyzed for the four-base-station case by calculating the variation of RMSE and GDOP. Subsequently, as the location parameters are changed (number of base stations, base station layout and so on), the changing performance patterns of the TSOA location algorithm are shown, revealing the TSOA location characteristics and performance. The trends of the RMSE and GDOP demonstrate the anti-noise performance and robustness of the TSOA localization algorithm. The TSOA anti-noise performance can be used to reduce the blind zone and the false location rate of MLAT systems.

  5. Automated classification and quantitative analysis of arterial and venous vessels in fundus images

    NASA Astrophysics Data System (ADS)

    Alam, Minhaj; Son, Taeyoon; Toslak, Devrim; Lim, Jennifer I.; Yao, Xincheng

    2018-02-01

    It is known that retinopathies may affect arteries and veins differently. Therefore, reliable differentiation of arteries and veins is essential for computer-aided analysis of fundus images. The purpose of this study is to validate an automated method for robust classification of arteries and veins (A-V) in digital fundus images. We combine optical density ratio (ODR) analysis and a blood vessel tracking algorithm to classify arteries and veins. A matched filtering method is used to enhance retinal blood vessels. Bottom-hat filtering and global thresholding are used to segment the vessels and skeletonize individual blood vessels. The vessel tracking algorithm is used to locate the optic disk and to identify the source nodes of blood vessels in the optic disk area. Each node can be identified as vein or artery using ODR information. Using the source nodes as starting points, the whole vessel trace is then tracked and classified as vein or artery using vessel curvature and angle information. 50 color fundus images from diabetic retinopathy patients were used to test the algorithm. Sensitivity, specificity, and accuracy metrics were measured to assess the validity of the proposed classification method compared to ground truths created by two independent observers. The algorithm demonstrated 97.52% accuracy in identifying blood vessels as vein or artery. A quantitative analysis of the A-V classification showed that the average A-V width ratio for NPDR subjects with hypertension decreased significantly (43.13%).

  6. Data-Fusion for a Vision-Aided Radiological Detection System: Sensor dependence and Source Tracking

    NASA Astrophysics Data System (ADS)

    Stadnikia, Kelsey; Martin, Allan; Henderson, Kristofer; Koppal, Sanjeev; Enqvist, Andreas

    2018-01-01

    The University of Florida is taking a multidisciplinary approach to fusing data between 3D vision sensors and radiological sensors, in hopes of creating a system capable of not only detecting the presence of a radiological threat but also tracking it. The key to developing such a vision-aided radiological detection system lies in the count rate being inversely dependent on the square of the distance. Presented in this paper are the results of the calibration algorithm used to predict the location of the radiological detectors based on the 3D distance from the source to the detector (vision data) and the detector's count rate (radiological data). Also presented are the results of two correlation methods used to explore source tracking.
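
    A minimal sketch of the inverse-square relationship underlying the calibration (the background term and the fitting approach are assumptions for illustration):

      import numpy as np

      def fit_inverse_square(distances_m, count_rates):
          # Model: rate = A / d**2 + b  (b = background). Linear least
          # squares on the 1/d**2 regressor estimates A and b; A then links
          # a vision-measured distance to the expected detector count rate.
          X = np.column_stack([1.0 / distances_m ** 2,
                               np.ones_like(distances_m)])
          (A, b), *_ = np.linalg.lstsq(X, count_rates, rcond=None)
          return A, b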

  7. The Improved Locating Algorithm of Particle Filter Based on ROS Robot

    NASA Astrophysics Data System (ADS)

    Fang, Xun; Fu, Xiaoyang; Sun, Ming

    2018-03-01

    This paper analyzes the basic theory and primary algorithms of a real-time locating system and SLAM technology based on a ROS robot. It proposes an improved particle-filter locating algorithm that effectively reduces the time needed to match laser radar scans to the map; additional ultra-wideband technology directly improves the overall efficiency of the FastSLAM algorithm, which no longer needs to search the global map. Meanwhile, re-sampling is reduced by about 5/6, which directly eliminates the corresponding matching step in the robot's algorithm.

  8. Locating and decoding barcodes in fuzzy images captured by smart phones

    NASA Astrophysics Data System (ADS)

    Deng, Wupeng; Hu, Jiwei; Liu, Quan; Lou, Ping

    2017-07-01

    With the development of barcodes for commercial use, the need to detect barcodes with smartphones has become increasingly pressing. The low quality of barcode images captured by mobile phones often degrades decoding and recognition rates. This paper focuses on locating and decoding EAN-13 barcodes in fuzzy images. We present a more accurate locating algorithm based on segment length, together with a highly fault-tolerant algorithm for decoding barcodes. Unlike existing approaches, the location algorithm is based on the edge segment length of EAN-13 barcodes, while our decoding algorithm tolerates fuzzy regions in the barcode image. Experiments performed on damaged, contaminated and scratched digital images provide quite promising results for EAN-13 barcode location and decoding.

  9. Microseismic monitoring of soft-rock landslide: contribution of a 3D velocity model for the location of seismic sources.

    NASA Astrophysics Data System (ADS)

    Floriane, Provost; Jean-Philippe, Malet; Cécile, Doubre; Julien, Gance; Alessia, Maggi; Agnès, Helmstetter

    2015-04-01

    Characterizing the micro-seismic activity of landslides is an important parameter for a better understanding of the physical processes controlling landslide behaviour. However, the location of the seismic sources on landslides is a challenging task, mostly because of (a) the recording system geometry, (b) the lack of clear P-wave arrivals and clear wave differentiation, and (c) the heterogeneous velocities of the ground. The objective of this work is therefore to test whether the integration of a 3D velocity model in probabilistic seismic source location codes improves the quality of the determination, especially in depth. We studied the clay-rich landslide of Super-Sauze (French Alps). Most of the seismic events (rockfalls, slidequakes, tremors...) are generated in the upper part of the landslide near the main scarp. The seismic recording system is composed of two antennas with four vertical seismometers each, located on the east and west sides of the seismically active part of the landslide. A refraction seismic campaign was conducted in August 2014 and a 3D P-wave model has been estimated using the Quasi-Newton tomography inversion algorithm. The shots of the seismic campaign are used as calibration shots to test the performance of the different location methods and to further update the 3D velocity model. Natural seismic events are detected with a semi-automatic technique using a frequency threshold. The first arrivals are picked using a kurtosis-based method and compared to the manual picking. Several location methods were finally tested. We compared a non-linear probabilistic method coupled with the 3D P-wave model and a beam-forming method inverted for an apparent velocity. We found that the Quasi-Newton tomography inversion algorithm provides results coherent with the original underlying topography. The velocity ranges from 500 m.s-1 at the surface to 3000 m.s-1 in the bedrock. For the majority of the calibration shots, the use of a 3D velocity model significantly improves the results of the location procedure using P-wave arrivals. All the shots were made 50 centimeters below the surface, and hence the vertical error could not be determined with the seismic campaign. We further discriminate the rockfalls and the slidequakes occurring on the landslide using the depths computed with the 3D velocity model. This could be an additional criterion to automatically classify the events.

  10. Bayesian inverse modeling and source location of an unintended 131I release in Europe in the fall of 2011

    NASA Astrophysics Data System (ADS)

    Tichý, Ondřej; Šmídl, Václav; Hofman, Radek; Šindelářová, Kateřina; Hýža, Miroslav; Stohl, Andreas

    2017-10-01

    In the fall of 2011, iodine-131 (131I) was detected at several radionuclide monitoring stations in central Europe. After investigation, the International Atomic Energy Agency (IAEA) was informed by Hungarian authorities that 131I was released from the Institute of Isotopes Ltd. in Budapest, Hungary. It was reported that a total activity of 342 GBq of 131I was emitted between 8 September and 16 November 2011. In this study, we use the ambient concentration measurements of 131I to determine the location of the release as well as its magnitude and temporal variation. As the location of the release and an estimate of the source strength became eventually known, this accident represents a realistic test case for inversion models. For our source reconstruction, we use no prior knowledge. Instead, we estimate the source location and emission variation using only the available 131I measurements. Subsequently, we use the partial information about the source term available from the Hungarian authorities for validation of our results. For the source determination, we first perform backward runs of atmospheric transport models and obtain source-receptor sensitivity (SRS) matrices for each grid cell of our study domain. We use two dispersion models, FLEXPART and Hysplit, driven with meteorological analysis data from the global forecast system (GFS) and from European Centre for Medium-range Weather Forecasts (ECMWF) weather forecast models. Second, we use a recently developed inverse method, least-squares with adaptive prior covariance (LS-APC), to determine the 131I emissions and their temporal variation from the measurements and computed SRS matrices. For each grid cell of our simulation domain, we evaluate the probability that the release was generated in that cell using Bayesian model selection. The model selection procedure also provides information about the most suitable dispersion model for the source term reconstruction. Third, we select the most probable location of the release with its associated source term and perform a forward model simulation to study the consequences of the iodine release. Results of these procedures are compared with the known release location and reported information about its time variation. We find that our algorithm could successfully locate the actual release site. The estimated release period is also in agreement with the values reported by IAEA and the reported total released activity of 342 GBq is within the 99 % confidence interval of the posterior distribution of our most likely model.

  11. Convex relaxations of spectral sparsity for robust super-resolution and line spectrum estimation

    NASA Astrophysics Data System (ADS)

    Chi, Yuejie

    2017-08-01

    We consider recovering the amplitudes and locations of spikes in a point source signal from its low-pass spectrum that may suffer from missing data and arbitrary outliers. We first review and provide a unified view of several recently proposed convex relaxations that characterize and capitalize the spectral sparsity of the point source signal without discretization under the framework of atomic norms. Next we propose a new algorithm when the spikes are known a priori to be positive, motivated by applications such as neural spike sorting and fluorescence microscopy imaging. Numerical experiments are provided to demonstrate the effectiveness of the proposed approach.

  12. Investigation of Finite Sources through Time Reversal

    NASA Astrophysics Data System (ADS)

    Kremers, Simon; Brietzke, Gilbert; Igel, Heiner; Larmat, Carene; Fichtner, Andreas; Johnson, Paul A.; Huang, Lianjie

    2010-05-01

    Under certain conditions, time reversal is a promising method to determine earthquake source characteristics without any a priori information (except the earth model and the data). It consists of injecting flipped-in-time records from seismic stations within the model to create an approximate reverse movie of wave propagation, from which the location of the hypocenter and other information might be inferred. In this study, the backward propagation is performed numerically using a parallel Cartesian spectral element code. Initial tests using point source moment tensors serve as a control for the suitability of the wave propagation algorithm used. We then investigated the potential of time reversal to recover finite source characteristics (e.g., size of the ruptured area, rupture velocity). We used synthetic data from the SPICE kinematic source inversion blind test initiated to investigate the performance of current kinematic source inversion approaches (http://www.spice-rtn.org/library/valid). The synthetic data set attempts to reproduce the 2000 Tottori earthquake with 33 records close to the fault. We discuss the influence on the results of the time reversal process of various assumptions made on the source (e.g., origin time, hypocenter, fault location), on adjoint source weighting (e.g., correcting for epicentral distance) and on structure (uncertainty in the velocity model). We give an overview of the quality of focusing of the different wavefield properties (i.e., displacements, strains, rotations, energies). Additionally, the potential to recover source properties of multiple point sources at the same time is discussed.

  13. Combining multiple earthquake models in real time for earthquake early warning

    USGS Publications Warehouse

    Minson, Sarah E.; Wu, Stephen; Beck, James L; Heaton, Thomas H.

    2017-01-01

    The ultimate goal of earthquake early warning (EEW) is to provide local shaking information to users before the strong shaking from an earthquake reaches their location. This is accomplished by operating one or more real‐time analyses that attempt to predict shaking intensity, often by estimating the earthquake’s location and magnitude and then predicting the ground motion from that point source. Other EEW algorithms use finite rupture models or may directly estimate ground motion without first solving for an earthquake source. EEW performance could be improved if the information from these diverse and independent prediction models could be combined into one unified, ground‐motion prediction. In this article, we set the forecast shaking at each location as the common ground to combine all these predictions and introduce a Bayesian approach to creating better ground‐motion predictions. We also describe how this methodology could be used to build a new generation of EEW systems that provide optimal decisions customized for each user based on the user’s individual false‐alarm tolerance and the time necessary for that user to react.

  14. Privacy-Preserving Location-Based Query Using Location Indexes and Parallel Searching in Distributed Networks

    PubMed Central

    Liu, Lei; Zhao, Jing

    2014-01-01

    An efficient location-based query algorithm that protects user privacy in distributed networks is presented. The algorithm uses the users' location indexes and multiple parallel threads to quickly search for and select all candidate anonymous sets that contain more users and more uniformly distributed location information, accelerating the execution of the temporal-spatial anonymity operations, and it allows users to configure custom privacy-preserving location query requests. Simulation results show that the proposed algorithm can offer location query services to more users simultaneously, improve the performance of the anonymity server, and satisfy users' anonymous location requests. PMID:24790579

  15. Privacy-preserving location-based query using location indexes and parallel searching in distributed networks.

    PubMed

    Zhong, Cheng; Liu, Lei; Zhao, Jing

    2014-01-01

    An efficient location-based query algorithm that protects user privacy in distributed networks is presented. The algorithm uses the users' location indexes and multiple parallel threads to quickly search for and select all candidate anonymous sets that contain more users and more uniformly distributed location information, accelerating the execution of the temporal-spatial anonymity operations, and it allows users to configure custom privacy-preserving location query requests. Simulation results show that the proposed algorithm can offer location query services to more users simultaneously, improve the performance of the anonymity server, and satisfy users' anonymous location requests.

  16. Automated novelty detection in the WISE survey with one-class support vector machines

    NASA Astrophysics Data System (ADS)

    Solarz, A.; Bilicki, M.; Gromadzki, M.; Pollo, A.; Durkalec, A.; Wypych, M.

    2017-10-01

    Wide-angle photometric surveys of previously uncharted sky areas or wavelength regimes will always bring in unexpected sources (novelties or even anomalies) whose existence and properties cannot be easily predicted from earlier observations. Such objects can be efficiently located with novelty detection algorithms. Here we present an application of such a method, called one-class support vector machines (OCSVM), to search for anomalous patterns among sources preselected from the mid-infrared AllWISE catalogue covering the whole sky. To create a model of expected data, we train the algorithm on a set of objects with spectroscopic identifications from the SDSS DR13 database, also present in AllWISE. The OCSVM method detects as anomalous those sources whose patterns (WISE photometric measurements in this case) are inconsistent with the model. Among the detected anomalies we find artefacts, such as objects with spurious photometry due to blending, but more importantly also real sources of genuine astrophysical interest. Among the latter, OCSVM has identified a sample of heavily reddened AGN/quasar candidates distributed uniformly over the sky and in large part absent from other WISE-based AGN catalogues. It has also allowed us to find a specific group of sources of mixed types, mostly stars and compact galaxies. By combining the semi-supervised OCSVM algorithm with standard classification methods, it will be possible to improve the latter by accounting for sources which are not present in the training sample but are otherwise well represented in the target set. Anomaly detection adds flexibility to automated source separation procedures and helps verify the reliability and representativeness of the training samples. It should thus be considered an essential step in supervised classification schemes to ensure the completeness and purity of the produced catalogues. The catalogues of outlier data are only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (http://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/606/A39
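
    A minimal sketch of the training-and-flagging step with scikit-learn (the feature matrices and the nu value are placeholders, not the paper's settings):

      import numpy as np
      from sklearn.svm import OneClassSVM

      rng = np.random.default_rng(1)
      train = rng.normal(size=(1000, 4))   # stand-in for SDSS-matched WISE photometry
      target = rng.normal(size=(200, 4))   # stand-in for the full target sample

      # Learn the support of the "expected" data, then flag target sources
      # whose photometric patterns fall outside it (predict == -1).
      ocsvm = OneClassSVM(kernel="rbf", nu=0.05, gamma="scale").fit(train)
      anomalies = target[ocsvm.predict(target) == -1]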

  17. Seamless lesion insertion in digital mammography: methodology and reader study

    NASA Astrophysics Data System (ADS)

    Pezeshk, Aria; Petrick, Nicholas; Sahiner, Berkman

    2016-03-01

    Collection of large repositories of clinical images containing verified cancer locations is costly and time consuming due to difficulties associated with both the accumulation of data and establishment of the ground truth. This problem poses a significant challenge to the development of machine learning algorithms that require large amounts of data to properly train and avoid overfitting. In this paper we expand the methods in our previous publications by making several modifications that significantly increase the speed of our insertion algorithms, thereby allowing them to be used for inserting lesions that are much larger in size. These algorithms have been incorporated into an image composition tool that we have made publicly available. This tool allows users to modify or supplement existing datasets by seamlessly inserting a real breast mass or micro-calcification cluster extracted from a source digital mammogram into a different location on another mammogram. We demonstrate examples of the performance of this tool on clinical cases taken from the University of South Florida Digital Database for Screening Mammography (DDSM). Finally, we report the results of a reader study evaluating the realism of inserted lesions compared to clinical lesions. Analysis of the radiologist scores in the study using receiver operating characteristic (ROC) methodology indicates that inserted lesions cannot be reliably distinguished from clinical lesions.

  18. Detecting the red tide based on remote sensing data in optically complex East China Sea

    NASA Astrophysics Data System (ADS)

    Xu, Xiaohui; Pan, Delu; Mao, Zhihua; Tao, Bangyi; Liu, Qiong

    2012-09-01

    Red tide not only damages marine fishery production, degrades the marine environment and affects the coastal tourist industry, but can also poison, and even kill, people who eat seafood contaminated by red tide organisms. Remote sensing offers large-scale, synchronized and rapid monitoring, making it one of the most important and effective means of red tide monitoring. This paper takes the high-frequency red tide areas of the East China Sea as the study area and MODIS/Aqua L2 data as the data source, and analyzes and compares the spectral differences between red tide and non-red tide water bodies in many historical events. Based on these spectral differences, the paper develops the ratio algorithm Rrs555/Rrs488 > 1.5 to extract red tide information. Applying the algorithm to a red tide event that occurred in the East China Sea on May 28, 2009, we found that the method can effectively determine the location of the red tide occurrence; the extracted red tide area corresponds well with the chlorophyll-a concentration retrieved by remote sensing, showing that the algorithm can effectively locate red tides and extract red tide information.
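
    A minimal sketch of the band-ratio test (the band names follow the abstract; the handling of invalid reflectances is an assumption):

      import numpy as np

      def red_tide_mask(rrs555, rrs488, threshold=1.5):
          # Flag pixels where Rrs(555)/Rrs(488) > 1.5, guarding against
          # division by zero or negative reflectances.
          ratio = np.divide(rrs555, rrs488,
                            out=np.zeros_like(rrs555, dtype=float),
                            where=rrs488 > 0)
          return ratio > threshold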

  19. An efficient biological pathway layout algorithm combining grid-layout and spring embedder for complicated cellular location information

    PubMed Central

    2010-01-01

    Background Graph drawing is one of the important techniques for understanding biological regulations in a cell or among cells at the pathway level. Among many available layout algorithms, the spring embedder algorithm is widely used not only for pathway drawing but also for circuit placement and www visualization and so on because of the harmonized appearance of its results. For pathway drawing, location information is essential for its comprehension. However, complex shapes need to be taken into account when torus-shaped location information such as nuclear inner membrane, nuclear outer membrane, and plasma membrane is considered. Unfortunately, the spring embedder algorithm cannot easily handle such information. In addition, crossings between edges and nodes are usually not considered explicitly. Results We proposed a new grid-layout algorithm based on the spring embedder algorithm that can handle location information and provide layouts with harmonized appearance. In grid-layout algorithms, the mapping of nodes to grid points that minimizes a cost function is sought. By imposing positional constraints on grid points, location information including complex shapes can be easily considered. Our layout algorithm includes the spring embedder cost as a component of the cost function. We further extend the layout algorithm to enable dynamic update of the positions and sizes of compartments at each step. Conclusions The new spring embedder-based grid-layout algorithm and a spring embedder algorithm are applied to three biological pathways; endothelial cell model, Fas-induced apoptosis model, and C. elegans cell fate simulation model. From the positional constraints, all the results of our algorithm satisfy location information, and hence, more comprehensible layouts are obtained as compared to the spring embedder algorithm. From the comparison of the number of crossings, the results of the grid-layout-based algorithm tend to contain more crossings than those of the spring embedder algorithm due to the positional constraints. For a fair comparison, we also apply our proposed method without positional constraints. This comparison shows that these results contain fewer crossings than those of the spring embedder algorithm. We also compared layouts of the proposed algorithm with and without compartment update and verified that the latter can reach better local optima. PMID:20565884

  20. Hydro-Piezoelectricity: A Renewable Energy Source for Autonomous Underwater Vehicles

    DTIC Science & Technology

    1999-09-30

    having capacities of a few watts to hundreds of kW. Based on a unique Wave Energy Converter (WEC) buoy and intelligent power take-off algorithms, the... environmental monitoring. In addition, there will be significant dual use in the commercial sector for power generation in remote locations where the...2.5 meter by 6.5 meter long WEC at the LEO 15 site of Rutgers University. b. Multiple sensor outputs and performance data were reliably

  1. Development of an Automated Modality-Independent Elastographic Image Analysis System for Tumor Screening

    DTIC Science & Technology

    2007-02-01

    determined by its neighbors’ correspondence. Thus, the algorithm consists of four main steps: ICP registration of the base and nipple regions of the...the nipple and the base of the breast, as a location for accurately determining initial correspondence. However, due to the compression, the nipple of...cloud) is translated and lies at a different angle than the nipple of the pendant breast (the source point cloud). By minimizing the average distance

  2. Truncated RAP-MUSIC (TRAP-MUSIC) for MEG and EEG source localization.

    PubMed

    Mäkelä, Niko; Stenroos, Matti; Sarvas, Jukka; Ilmoniemi, Risto J

    2018-02-15

    Electrically active brain regions can be located by applying MUltiple SIgnal Classification (MUSIC) to magneto- or electroencephalographic (MEG; EEG) data. We introduce a new MUSIC method, called truncated recursively-applied-and-projected MUSIC (TRAP-MUSIC). It corrects a hidden deficiency of the conventional RAP-MUSIC algorithm, which prevents accurate estimation of the true number of brain-signal sources. The correction is done by applying a sequential dimension reduction to the signal-subspace projection. We show that TRAP-MUSIC significantly improves the performance of MUSIC-type localization; in particular, it successfully and robustly locates active brain regions and estimates their number. We compare TRAP-MUSIC and RAP-MUSIC in simulations with varying key parameters, e.g., signal-to-noise ratio, correlation between source time-courses, and the initial estimate for the dimension of the signal space. In addition, we validate TRAP-MUSIC with measured MEG data. We suggest that with the proposed TRAP-MUSIC method, MUSIC-type localization could become more reliable and suitable for various online and offline MEG and EEG applications.
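
    The following is a rough numpy sketch of a MUSIC scan with RAP-style out-projection and the TRAP truncation step described in the abstract (dropping one signal-subspace dimension per iteration). It is a reading of the abstract, not the authors' code; all names, shapes, and tolerances are assumptions:

```python
import numpy as np

def trap_music(L, cov, n_iter, k):
    """Toy TRAP-MUSIC scan. L: (n_sensors, n_cand) candidate source topographies,
    cov: (n_sensors, n_sensors) data covariance, k: assumed signal-space dimension.
    Assumes n_iter <= k. Returns indices of located candidate sources."""
    w, V = np.linalg.eigh(cov)
    Us = V[:, np.argsort(w)[::-1][:k]]        # signal subspace (k leading eigenvectors)
    found, P = [], np.eye(L.shape[0])
    for it in range(n_iter):
        Lp = P @ L                            # out-projected topographies
        U, _, _ = np.linalg.svd(P @ Us, full_matrices=False)
        Ut = U[:, :k - it]                    # TRAP step: truncate projected subspace
        num = np.linalg.norm(Ut.T @ Lp, axis=0)
        den = np.linalg.norm(Lp, axis=0)
        corr = np.where(den > 1e-12, num / den, 0.0)  # subspace correlations
        best = int(np.argmax(corr))
        found.append(best)
        a = P @ L[:, best:best + 1]           # out-project the found topography
        P = P @ (np.eye(L.shape[0]) - a @ np.linalg.pinv(a))
    return found
```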

  3. An IR Navigation System for Pleural PDT

    NASA Astrophysics Data System (ADS)

    Zhu, Timothy; Liang, Xing; Kim, Michele; Finlay, Jarod; Dimofte, Andreea; Rodriguez, Carmen; Simone, Charles; Friedberg, Joseph; Cengel, Keith

    2015-03-01

    Pleural photodynamic therapy (PDT) has been used as an adjuvant treatment with lung-sparing surgical treatment for malignant pleural mesothelioma (MPM). In the current pleural PDT protocol, a moving fiber-based point source is used to deliver the light. The light fluences at multiple locations are monitored by several isotropic detectors placed in the pleural cavity. To improve the delivery of light fluence uniformity, an infrared (IR) navigation system is used to track the motion of the light source in real-time at a rate of 20 - 60 Hz. A treatment planning system uses the laser source positions obtained from the IR camera to calculate light fluence distribution to monitor the light dose uniformity on the surface of the pleural cavity. A novel reconstruction algorithm is used to determine the pleural cavity surface contour. A dual-correction method is used to match the calculated fluences at detector locations to the detector readings. Preliminary data from a phantom shows superior light uniformity using this method. Light fluence uniformity from patient treatments is also shown with and without the correction method.

  4. BiPACE 2D--graph-based multiple alignment for comprehensive 2D gas chromatography-mass spectrometry.

    PubMed

    Hoffmann, Nils; Wilhelm, Mathias; Doebbe, Anja; Niehaus, Karsten; Stoye, Jens

    2014-04-01

    Comprehensive 2D gas chromatography-mass spectrometry is an established method for the analysis of complex mixtures in analytical chemistry and metabolomics. It produces large amounts of data that require semiautomatic, but preferably automatic handling. This involves the location of significant signals (peaks) and their matching and alignment across different measurements. To date, there exist only a few openly available algorithms for the retention time alignment of peaks originating from such experiments that scale well with increasing sample and peak numbers, while providing reliable alignment results. We describe BiPACE 2D, an automated algorithm for retention time alignment of peaks from 2D gas chromatography-mass spectrometry experiments and evaluate it on three previously published datasets against the mSPA, SWPA and Guineu algorithms. We also provide a fourth dataset from an experiment studying the H2 production of two different strains of Chlamydomonas reinhardtii that is available from the MetaboLights database together with the experimental protocol, peak-detection results and manually curated multiple peak alignment for future comparability with newly developed algorithms. BiPACE 2D is contained in the freely available Maltcms framework, version 1.3, hosted at http://maltcms.sf.net, under the terms of the L-GPL v3 or Eclipse Open Source licenses. The software used for the evaluation along with the underlying datasets is available at the same location. The C.reinhardtii dataset is freely available at http://www.ebi.ac.uk/metabolights/MTBLS37.

  5. Revision of earthquake hypocentre locations in global bulletin data sets using source-specific station terms

    NASA Astrophysics Data System (ADS)

    Nooshiri, Nima; Saul, Joachim; Heimann, Sebastian; Tilmann, Frederik; Dahm, Torsten

    2017-02-01

    Global earthquake locations are often associated with very large systematic travel-time residuals, even for clear arrivals, especially at regional and near-regional stations in subduction zones because of their strongly heterogeneous velocity structure. Travel-time corrections can drastically reduce travel-time residuals at regional stations and, in consequence, improve the relative location accuracy. We have extended the shrinking-box source-specific station terms technique to regional and teleseismic distances and adapted the algorithm for probabilistic, nonlinear, global-search location. We evaluated the potential of the method to compute precise relative hypocentre locations on a global scale. The method has been applied to two specific test regions using existing P- and pP-phase picks. The first data set consists of 3103 events along the Chilean margin and the second one comprises 1680 earthquakes in the Tonga-Fiji subduction zone. Pick data were obtained from the GEOFON earthquake bulletin, produced using data from all available, global station networks. A set of timing corrections varying as a function of source position was calculated for each seismic station. In this way, we could correct the systematic errors introduced into the locations by the inaccuracies in the assumed velocity structure without explicitly solving for a velocity model. Residual statistics show that the median absolute deviation of the travel-time residuals is reduced by 40-60 per cent at regional distances, where the velocity anomalies are strong. Moreover, the spread of the travel-time residuals decreased by ~20 per cent at teleseismic distances (>28°). Furthermore, strong variations in initial residuals as a function of recording distance are smoothed out in the final residuals. The relocated catalogues exhibit less scattered locations in depth and sharper images of the seismicity associated with the subducting slabs. Comparison with a high-resolution local catalogue reveals that our relocation process significantly improves the hypocentre locations compared to standard locations.
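
    A simplified sketch of the source-specific station terms idea: each pick's timing correction is the median residual of the same station over events in a shrinking spatial neighbourhood. In the real method the events are relocated with the updated corrections after every pass; that step is omitted here, and all names and radii are illustrative:

```python
import numpy as np

def ssst_corrections(residuals, station_ids, src_xyz, radii=(500.0, 250.0, 100.0, 50.0)):
    """Toy shrinking-box SSST. residuals: (n_picks,) travel-time residuals;
    station_ids: (n_picks,) station index per pick; src_xyz: (n_picks, 3)
    source coordinates (km) of the event each pick belongs to."""
    corr = np.zeros_like(residuals, dtype=float)
    for r in radii:                          # shrink the neighbourhood each pass
        adj = residuals - corr               # residuals after current corrections
        new = np.empty_like(corr)
        for p in range(len(residuals)):
            sel = (station_ids == station_ids[p]) & \
                  (np.linalg.norm(src_xyz - src_xyz[p], axis=1) < r)
            new[p] = corr[p] + np.median(adj[sel])  # sel always contains pick p
        corr = new
    return corr                              # per-pick station term corrections
```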

  6. Independent component analysis of EEG dipole source localization in resting and action state of brain

    NASA Astrophysics Data System (ADS)

    Almurshedi, Ahmed; Ismail, Abd Khamim

    2015-04-01

    EEG source localization was studied in order to determine the locations of the brain sources responsible for the potentials measured at the scalp electrodes, using EEGLAB with the Independent Component Analysis (ICA) algorithm. Neural sources generate current dipoles in different states of the brain that account for the measured potentials. Current dipole source locations are estimated by fitting an equivalent current dipole model using a non-linear optimization technique with a standardized boundary element head model. To fit dipole models to ICA components in an EEGLAB dataset, ICA decomposition is performed and appropriate components to be fitted are selected. The topographical scalp distributions of delta, theta, alpha, and beta power spectra and the cross-coherence of EEG signals are observed. In the closed-eyes condition, during both the resting and action states of the brain, the alpha band was activated over the occipital (O1, O2) and parietal (P3, P4) areas. Therefore, the parieto-occipital area of the brain is active in both the resting and action states. However, cross-coherence indicates more coherence between the right and left hemispheres in the action state of the brain than in the resting state. These preliminary results indicate that these potentials arise from the same generators in the brain.

  7. The Approximate Bayesian Computation methods in the localization of the atmospheric contamination source

    NASA Astrophysics Data System (ADS)

    Kopka, P.; Wawrzynczak, A.; Borysiewicz, M.

    2015-09-01

    In many areas of application, a central problem is the solution of an inverse problem, especially the estimation of unknown model parameters so as to model the underlying dynamics of a physical system precisely. In this situation, Bayesian inference is a powerful tool for combining observed data with prior knowledge to obtain the probability distribution of the searched-for parameters. We have applied the modern methodology named Sequential Approximate Bayesian Computation (S-ABC) to the problem of tracing an atmospheric contaminant source. ABC is a technique commonly used in the Bayesian analysis of complex models and dynamical systems; sequential methods can significantly increase the efficiency of the ABC. In the presented algorithm, the input data are the on-line arriving concentrations of the released substance registered by the distributed sensor network from the OVER-LAND ATMOSPHERIC DISPERSION (OLAD) experiment. The algorithm outputs are the probability distributions of the contamination source parameters, i.e., its location, release rate, speed and direction of movement, start time, and duration. The stochastic approach presented in this paper is completely general and can be used in other fields where the parameters of a model best fitted to the observable data must be found.
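
    For intuition, here is a plain rejection-ABC sketch for source-parameter estimation. The paper uses a sequential ABC variant and a real dispersion model; the toy plume, priors, and tolerance below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def toy_plume(src_xy, q, sensors, u=(1.0, 0.0)):
    """Crude stand-in dispersion model: Gaussian decay around the advected source.
    Purely illustrative; the paper uses a real dispersion model with OLAD data."""
    d = np.linalg.norm(sensors - (np.asarray(src_xy) + np.asarray(u)), axis=1)
    return q * np.exp(-0.5 * d ** 2)

def abc_rejection(obs, sensors, n=100_000, eps=0.5):
    """Keep prior draws whose simulated concentrations fall within eps of the data."""
    kept = []
    for _ in range(n):
        theta = (rng.uniform(-10, 10),   # source x (prior)
                 rng.uniform(-10, 10),   # source y (prior)
                 rng.uniform(0, 5))      # release rate (prior)
        sim = toy_plume(theta[:2], theta[2], sensors)
        if np.linalg.norm(sim - obs) < eps:   # ABC distance criterion
            kept.append(theta)
    return np.array(kept)  # empirical approximation of the posterior
```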

  8. Three dimensional time reversal optical tomography

    NASA Astrophysics Data System (ADS)

    Wu, Binlin; Cai, W.; Alrubaiee, M.; Xu, M.; Gayen, S. K.

    2011-03-01

    The time reversal optical tomography (TROT) approach is used to detect and locate absorptive targets embedded in a highly scattering turbid medium, to assess its potential in breast cancer detection. The TROT experimental arrangement uses multi-source probing, multi-detector signal acquisition, and the Multiple-Signal-Classification (MUSIC) algorithm for target location retrieval. Light transport from the multiple sources through the intervening medium with embedded targets to the detectors is represented by a response matrix constructed from experimental data. A TR matrix is formed by multiplying the response matrix by its transpose. The eigenvectors with leading non-zero eigenvalues of the TR matrix correspond to the embedded objects. The approach was used to: (a) obtain the location and spatial resolution of an absorptive target as a function of its axial position between the source and detector planes; and (b) study the variation in spatial resolution of two targets at the same axial position but different lateral positions. The target(s) were glass sphere(s) of diameter ~9 mm filled with ink (absorber) embedded in a 60 mm-thick slab of Intralipid-20% suspension in water with an absorption coefficient μa ~ 0.003 mm-1 and a transport mean free path lt ~ 1 mm at 790 nm, which emulate the average values of those parameters for human breast tissue. The spatial resolution and accuracy of target location depended on the axial position and on target contrast relative to the background. Both targets could be resolved and located even when they were only 4 mm apart. The TROT approach is fast, accurate, and has the potential to be useful in breast cancer detection and localization.
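
    A compact sketch of the time-reversal step described above, assuming a measured response matrix K; the eigenvectors of T = KᵀK with the leading eigenvalues are associated with the embedded targets, and the remainder span the noise subspace used for a MUSIC scan. Names and shapes are illustrative:

```python
import numpy as np

def tr_music_subspaces(K, n_targets):
    """K: (n_detectors, n_sources) response matrix built from measurements.
    Returns sorted eigenvalues plus the target-related and noise subspaces
    of the time-reversal matrix T = K^T K."""
    T = K.T @ K                          # time-reversal matrix
    w, V = np.linalg.eigh(T)
    order = np.argsort(w)[::-1]
    signal = V[:, order[:n_targets]]     # eigenvectors tied to embedded targets
    noise = V[:, order[n_targets:]]      # noise subspace for a MUSIC pseudospectrum
    return w[order], signal, noise
```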

  9. Coupling a Neural Network with Atmospheric Flow Simulations to Locate and Quantify CH4 Emissions at Well Pads

    NASA Astrophysics Data System (ADS)

    Travis, B. J.; Sauer, J.; Dubey, M. K.

    2017-12-01

    Methane (CH4) leaks from oil and gas production fields are a potentially significant source of atmospheric methane. US DOE's ARPA-E office is supporting research to locate methane emissions at 10 m scale well pads to within 1 m. A team led by Aeris Technologies, which includes LANL, the Planetary Science Institute, and Rice University, has developed an autonomous leak detection system (LDS) employing a compact laser absorption methane sensor, a sonic anemometer, and multiport sampling. The LDS analyzes monitoring data using a convolutional neural network (cNN) to locate and quantify CH4 emissions. The cNN was trained using three sources: (1) ultra-high-resolution simulations of methane transport provided by LANL's coupled atmospheric transport model HIGRAD, for numerous controlled methane release scenarios and methane sampling configurations under variable atmospheric conditions; (2) field tests at the METEC site in Ft. Collins, CO; and (3) field data from other sites where point-source surface methane releases were monitored downwind. A cNN learning algorithm is well suited to problems in which the training and observed data are noisy, or correspond to complex sensor data, as is typical of meteorological and sensor data over a well pad. Recent studies with our cNN emphasize the importance of tracking wind speeds and directions at fine resolution (1 second) and accounting for variations in background CH4 levels. A few cases illustrate the importance of sufficiently long monitoring; short monitoring may not provide enough information to determine a leak location or strength accurately, mainly because of short-term unfavorable wind directions and the choice of sampling configuration. The length of multiport duty-cycle sampling and sample-line flush time, as well as the number and placement of monitoring sensors, can significantly impact the ability to locate and quantify leaks. Keeping the source location error below 10% requires about 30 or more training cases.

  10. Genetic algorithm enhanced by machine learning in dynamic aperture optimization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Yongjun; Cheng, Weixing; Yu, Li Hua

    With the aid of machine learning techniques, the genetic algorithm has been enhanced and applied to the multi-objective optimization problem presented by the dynamic aperture of the National Synchrotron Light Source II (NSLS-II) Storage Ring. During the evolution processes employed by the genetic algorithm, the population is classified into different clusters in the search space. The clusters with top average fitness are given “elite” status. Intervention on the population is implemented by repopulating some potentially competitive candidates based on the experience learned from the accumulated data. These candidates replace randomly selected candidates among the original data pool. The average fitness of the population is therefore improved while diversity is not lost. Maintaining diversity ensures that the optimization is global rather than local. The quality of the population increases and produces more competitive descendants accelerating the evolution process significantly. When identifying the distribution of optimal candidates, they appear to be located in isolated islands within the search space. Some of these optimal candidates have been experimentally confirmed at the NSLS-II storage ring. Furthermore, the machine learning techniques that exploit the genetic algorithm can also be used in other population-based optimization problems such as particle swarm algorithm.

  11. Genetic algorithm enhanced by machine learning in dynamic aperture optimization

    NASA Astrophysics Data System (ADS)

    Li, Yongjun; Cheng, Weixing; Yu, Li Hua; Rainer, Robert

    2018-05-01

    With the aid of machine learning techniques, the genetic algorithm has been enhanced and applied to the multi-objective optimization problem presented by the dynamic aperture of the National Synchrotron Light Source II (NSLS-II) Storage Ring. During the evolution processes employed by the genetic algorithm, the population is classified into different clusters in the search space. The clusters with top average fitness are given "elite" status. Intervention on the population is implemented by repopulating some potentially competitive candidates based on the experience learned from the accumulated data. These candidates replace randomly selected candidates among the original data pool. The average fitness of the population is therefore improved while diversity is not lost. Maintaining diversity ensures that the optimization is global rather than local. The quality of the population increases and produces more competitive descendants accelerating the evolution process significantly. When identifying the distribution of optimal candidates, they appear to be located in isolated islands within the search space. Some of these optimal candidates have been experimentally confirmed at the NSLS-II storage ring. The machine learning techniques that exploit the genetic algorithm can also be used in other population-based optimization problems such as particle swarm algorithm.
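
    A schematic numpy sketch of the elite-cluster intervention described in this abstract: cluster the population, rank clusters by average fitness, and repopulate randomly chosen candidates near the elite clusters' centers. Plain k-means stands in for whatever clustering the authors actually used; all parameters are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

def repopulate_with_elite_clusters(pop, fitness, k=8, frac=0.2):
    """pop: (n, d) float array of candidates; fitness: (n,) array (higher is better).
    Replaces a fraction of randomly selected candidates with draws near the
    centers of the clusters with the top average fitness."""
    n, d = pop.shape
    centers = pop[rng.choice(n, k, replace=False)]
    for _ in range(10):                                   # plain k-means iterations
        labels = np.argmin(((pop[:, None] - centers) ** 2).sum(-1), axis=1)
        for c in range(k):
            if (labels == c).any():
                centers[c] = pop[labels == c].mean(axis=0)
    mean_fit = np.array([fitness[labels == c].mean() if (labels == c).any() else -np.inf
                         for c in range(k)])
    elite = np.argsort(mean_fit)[::-1][:max(1, k // 4)]   # clusters with top avg fitness
    victims = rng.choice(n, int(frac * n), replace=False) # randomly selected candidates
    for v in victims:
        c = rng.choice(elite)
        spread = pop[labels == c].std(axis=0) + 1e-9
        pop[v] = centers[c] + rng.normal(0, spread)       # sample near an elite center
    return pop
```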

  12. Genetic algorithm enhanced by machine learning in dynamic aperture optimization

    DOE PAGES

    Li, Yongjun; Cheng, Weixing; Yu, Li Hua; ...

    2018-05-29

    With the aid of machine learning techniques, the genetic algorithm has been enhanced and applied to the multi-objective optimization problem presented by the dynamic aperture of the National Synchrotron Light Source II (NSLS-II) Storage Ring. During the evolution processes employed by the genetic algorithm, the population is classified into different clusters in the search space. The clusters with top average fitness are given “elite” status. Intervention on the population is implemented by repopulating some potentially competitive candidates based on the experience learned from the accumulated data. These candidates replace randomly selected candidates among the original data pool. The average fitness of the population is therefore improved while diversity is not lost. Maintaining diversity ensures that the optimization is global rather than local. The quality of the population increases and produces more competitive descendants accelerating the evolution process significantly. When identifying the distribution of optimal candidates, they appear to be located in isolated islands within the search space. Some of these optimal candidates have been experimentally confirmed at the NSLS-II storage ring. Furthermore, the machine learning techniques that exploit the genetic algorithm can also be used in other population-based optimization problems such as particle swarm algorithm.

  13. Methane Leak Detection and Emissions Quantification with UAVs

    NASA Astrophysics Data System (ADS)

    Barchyn, T.; Fox, T. A.; Hugenholtz, C.

    2016-12-01

    Robust leak detection and emissions quantification algorithms are required to accurately monitor greenhouse gas emissions. Unmanned aerial vehicles (UAVs, 'drones') could both reduce the cost and increase the accuracy of monitoring programs. However, aspects of the platform create unique challenges. UAVs typically collect large volumes of data that are close to source (due to limited range) and often lower quality (due to weight restrictions on sensors). Here we discuss algorithm development for (i) finding sources of unknown position ('leak detection') and (ii) quantifying emissions from a source of known position. We use data from a simulated leak and field study in Alberta, Canada. First, we detail a method for localizing a leak of unknown spatial location using iterative fits against a forward Gaussian plume model. We explore sources of uncertainty, both inherent to the method and operational. Results suggest this method is primarily constrained by accurate wind direction data, distance downwind from the source, and the non-Gaussian shape of close-range plumes. Second, we examine sources of uncertainty in quantifying emissions with the mass balance method. Results suggest precision is constrained by flux plane interpolation errors and time offsets between spatially adjacent measurements. Drones can provide data closer to the ground than piloted aircraft, but large portions of the plume are still unquantified. Together, we find that despite larger volumes of data, working with close-range plumes as measured with UAVs is inherently difficult. We describe future efforts to mitigate these challenges and work towards more robust benchmarking for application in industrial and regulatory settings.
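
    A minimal sketch of localizing a leak by fitting a forward Gaussian plume model, in the spirit of the iterative fits described above. The constant dispersion parameters and the grid search are simplifying assumptions; a real implementation would use stability-dependent sigma curves and measured winds:

```python
import numpy as np

def gaussian_plume(q, src, sensors, u=2.0, sy=5.0, sz=3.0, h=2.0):
    """Minimal ground-level Gaussian plume: wind along +x, source at src=(x0, y0),
    release rate q. Constant sigmas stand in for stability-class curves."""
    x = sensors[:, 0] - src[0]
    y = sensors[:, 1] - src[1]
    c = np.zeros(len(sensors))
    down = x > 0                                  # only downwind receptors see the plume
    c[down] = (q / (2 * np.pi * u * sy * sz)
               * np.exp(-0.5 * (y[down] / sy) ** 2)
               * 2 * np.exp(-0.5 * (h / sz) ** 2))  # ground reflection term
    return c

def locate_leak(obs, sensors, grid, rates):
    """Pick the (source position, rate) whose forward plume best matches the
    observed concentrations in a least-squares sense."""
    best, best_err = None, np.inf
    for src in grid:
        for q in rates:
            err = np.sum((gaussian_plume(q, src, sensors) - obs) ** 2)
            if err < best_err:
                best, best_err = (src, q), err
    return best
```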

  14. The Chandra Source Catalog: Algorithms

    NASA Astrophysics Data System (ADS)

    McDowell, Jonathan; Evans, I. N.; Primini, F. A.; Glotfelty, K. J.; McCollough, M. L.; Houck, J. C.; Nowak, M. A.; Karovska, M.; Davis, J. E.; Rots, A. H.; Siemiginowska, A. L.; Hain, R.; Evans, J. D.; Anderson, C. S.; Bonaventura, N. R.; Chen, J. C.; Doe, S. M.; Fabbiano, G.; Galle, E. C.; Gibbs, D. G., II; Grier, J. D.; Hall, D. M.; Harbo, P. N.; He, X.; Lauer, J.; Miller, J. B.; Mitschang, A. W.; Morgan, D. L.; Nichols, J. S.; Plummer, D. A.; Refsdal, B. L.; Sundheim, B. A.; Tibbetts, M. S.; van Stone, D. W.; Winkelman, S. L.; Zografou, P.

    2009-09-01

    Creation of the Chandra Source Catalog (CSC) required adjustment of existing pipeline processing, adaptation of existing interactive analysis software for automated use, and development of entirely new algorithms. Data calibration was based on the existing pipeline, but more rigorous data cleaning was applied and the latest calibration data products were used. For source detection, a local background map was created including the effects of ACIS source readout streaks. The existing wavelet source detection algorithm was modified and a set of post-processing scripts used to correct the results. To analyse the source properties we ran the SAOTrace ray-trace code for each source to generate a model point spread function, allowing us to find encircled energy correction factors and estimate source extent. Further algorithms were developed to characterize the spectral, spatial and temporal properties of the sources and to estimate the confidence intervals on count rates and fluxes. Finally, sources detected in multiple observations were matched, and best estimates of their merged properties derived. In this paper we present an overview of the algorithms used, with more detailed treatment of some of the newly developed algorithms presented in companion papers.

  15. A Benders based rolling horizon algorithm for a dynamic facility location problem

    DOE PAGES

    Marufuzzaman, Mohammad; Gedik, Ridvan; Roni, Mohammad S.

    2016-06-28

    This study presents a well-known capacitated dynamic facility location problem (DFLP) that satisfies the customer demand at a minimum cost by determining the time period for opening, closing, or retaining an existing facility in a given location. To solve this challenging NP-hard problem, this paper develops a unique hybrid solution algorithm that combines a rolling horizon algorithm with an accelerated Benders decomposition algorithm. Extensive computational experiments are performed on benchmark test instances to evaluate the hybrid algorithm’s efficiency and robustness in solving the DFLP problem. Computational results indicate that the hybrid Benders based rolling horizon algorithm consistently offers high quality feasible solutions in a much shorter computational time period than the standalone rolling horizon and accelerated Benders decomposition algorithms in the experimental range.

  16. Identifying Patterns in the Weather of Europe for Source Term Estimation

    NASA Astrophysics Data System (ADS)

    Klampanos, Iraklis; Pappas, Charalambos; Andronopoulos, Spyros; Davvetas, Athanasios; Ikonomopoulos, Andreas; Karkaletsis, Vangelis

    2017-04-01

    During emergencies that involve the release of hazardous substances into the atmosphere, the potential health effects on the human population and the environment are of primary concern. Such events have occurred in the past, most notably involving radioactive and toxic substances. Examples of radioactive release events include the Chernobyl accident in 1986, as well as the more recent Fukushima Daiichi accident in 2011. Often, the release of dangerous substances into the atmosphere is detected at locations different from the release origin. The objective of this work is the rapid estimation of such unknown sources shortly after the detection of dangerous substances in the atmosphere, with an initial focus on nuclear or radiological releases. Typically, after the detection of a radioactive substance in the atmosphere indicating the occurrence of an unknown release, the source location is estimated via inverse modelling. However, depending on factors such as the spatial resolution desired, traditional inverse modelling can be computationally time-consuming. This is especially true for cases where complex topography and weather conditions are involved, and it can therefore be problematic when timing is critical. Making use of machine learning techniques and the Big Data Europe platform, our approach moves the bulk of the computation to before any such event takes place, therefore allowing rapid initial, albeit rougher, estimations of the source location. Our proposed approach is based on the automatic identification of weather patterns within the European continent. Identifying weather patterns has long been an active research field; our case is differentiated by the fact that it focuses on plume dispersion patterns and those meteorological variables that affect dispersion the most. For a small set of recurrent weather patterns, we simulate hypothetical radioactive releases from a pre-known set of nuclear reactor locations and for different substance and temporal parameters, using the Java flavour of the Euratom-supported RODOS (Real-time On-line DecisiOn Support) system for off-site emergency management after nuclear accidents. Once dispersions have been pre-computed, and immediately after a detected release, the currently observed weather can be matched to the derived weather classes. Since each weather class corresponds to a different plume dispersion pattern, the classes closest to an unseen weather sample, say the current weather, are the most likely to lead us to the release origin. In addressing the above problem, we make use of multiple years of weather reanalysis data from NCAR's version of ECMWF's ERA-Interim. To derive useful weather classes, we evaluate several algorithms, ranging from straightforward unsupervised clustering to more complex methods, including relevant neural-network algorithms, on multiple variables. Variables and feature sets, clustering algorithms, and evaluation approaches are all dealt with and presented experimentally. The Big Data Europe platform allows for the implementation and execution of the above tasks in the cloud, in a scalable, robust and efficient way.
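
    As a sketch of the "straightforward unsupervised clustering" end of the spectrum the authors evaluate: plain k-means over flattened reanalysis snapshots, with the current weather matched to the nearest derived classes. All shapes and parameters are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)

def fit_weather_classes(fields, k=10, iters=50):
    """fields: (n_snapshots, ny, nx) reanalysis grids (e.g. a dispersion-relevant
    variable). Flatten each snapshot and run plain k-means to derive classes."""
    X = fields.reshape(len(fields), -1)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers) ** 2).sum(-1), axis=1)
        for c in range(k):
            if (labels == c).any():
                centers[c] = X[labels == c].mean(axis=0)
    return centers

def match_current_weather(snapshot, centers):
    """Rank classes by closeness to the observed weather; the closest classes
    index the pre-computed dispersion simulations."""
    d = np.linalg.norm(centers - snapshot.ravel(), axis=1)
    return np.argsort(d)
```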

  17. Slope Estimation in Noisy Piecewise Linear Functions

    PubMed Central

    Ingle, Atul; Bucklew, James; Sethares, William; Varghese, Tomy

    2014-01-01

    This paper discusses the development of a slope estimation algorithm called MAPSlope for piecewise linear data that is corrupted by Gaussian noise. The number and locations of slope change points (also known as breakpoints) are assumed to be unknown a priori though it is assumed that the possible range of slope values lies within known bounds. A stochastic hidden Markov model that is general enough to encompass real world sources of piecewise linear data is used to model the transitions between slope values and the problem of slope estimation is addressed using a Bayesian maximum a posteriori approach. The set of possible slope values is discretized, enabling the design of a dynamic programming algorithm for posterior density maximization. Numerical simulations are used to justify choice of a reasonable number of quantization levels and also to analyze mean squared error performance of the proposed algorithm. An alternating maximization algorithm is proposed for estimation of unknown model parameters and a convergence result for the method is provided. Finally, results using data from political science, finance and medical imaging applications are presented to demonstrate the practical utility of this procedure. PMID:25419020

  18. Slope Estimation in Noisy Piecewise Linear Functions.

    PubMed

    Ingle, Atul; Bucklew, James; Sethares, William; Varghese, Tomy

    2015-03-01

    This paper discusses the development of a slope estimation algorithm called MAPSlope for piecewise linear data that is corrupted by Gaussian noise. The number and locations of slope change points (also known as breakpoints) are assumed to be unknown a priori though it is assumed that the possible range of slope values lies within known bounds. A stochastic hidden Markov model that is general enough to encompass real world sources of piecewise linear data is used to model the transitions between slope values and the problem of slope estimation is addressed using a Bayesian maximum a posteriori approach. The set of possible slope values is discretized, enabling the design of a dynamic programming algorithm for posterior density maximization. Numerical simulations are used to justify choice of a reasonable number of quantization levels and also to analyze mean squared error performance of the proposed algorithm. An alternating maximization algorithm is proposed for estimation of unknown model parameters and a convergence result for the method is provided. Finally, results using data from political science, finance and medical imaging applications are presented to demonstrate the practical utility of this procedure.
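
    A toy dynamic-programming (Viterbi-style) version of the MAP slope estimation described in the two entries above: discretized slope states, a Gaussian likelihood on data increments, and a transition penalty that models breakpoints. Parameter values and the increment-based likelihood are illustrative simplifications, not the MAPSlope implementation:

```python
import numpy as np

def map_slope(y, dx=1.0, slopes=np.linspace(-2, 2, 41), sigma=0.1, jump_penalty=4.0):
    """Return the MAP slope sequence for noisy piecewise-linear samples y."""
    dy = np.diff(y)
    n, m = len(dy), len(slopes)
    # negative log-likelihood of each increment under each discretized slope state
    nll = 0.5 * ((dy[:, None] - slopes[None, :] * dx) / sigma) ** 2
    cost = nll[0].copy()
    back = np.zeros((n, m), dtype=int)
    trans = jump_penalty * (1 - np.eye(m))      # cost of switching slope (a breakpoint)
    for t in range(1, n):
        total = cost[:, None] + trans           # total[i, j]: come from state i into j
        back[t] = np.argmin(total, axis=0)      # best predecessor for each state
        cost = total[back[t], np.arange(m)] + nll[t]
    path = [int(np.argmin(cost))]               # backtrack the optimal state sequence
    for t in range(n - 1, 0, -1):
        path.append(back[t][path[-1]])
    return slopes[np.array(path[::-1])]         # MAP slope at each increment
```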

  19. Voronoi Diagram Based Optimization of Dynamic Reactive Power Sources

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang, Weihong; Sun, Kai; Qi, Junjian

    2015-01-01

    Dynamic var sources can effectively mitigate fault-induced delayed voltage recovery (FIDVR) issues or even voltage collapse. This paper proposes a new approach to optimization of the sizes of dynamic var sources at candidate locations by a Voronoi diagram based algorithm. It first disperses sample points of potential solutions in a searching space, evaluates a cost function at each point by barycentric interpolation for the subspaces around the point, and then constructs a Voronoi diagram about cost function values over the entire space. Accordingly, the final optimal solution can be obtained. Case studies on the WSCC 9-bus system and NPCC 140-bus system have validated that the new approach can quickly identify the boundary of feasible solutions in searching space and converge to the global optimal solution.

  20. Monochromatic body waves excited by great subduction zone earthquakes

    NASA Astrophysics Data System (ADS)

    Ihmlé, Pierre F.; Madariaga, Raúl

    Large quasi-monochromatic body waves were excited by the 1995 Chile Mw=8.1 and by the 1994 Kurile Mw=8.3 events. They are observed on vertical/radial component seismograms following the direct P and Pdiff arrivals, at all azimuths. We devise a slant stack algorithm to characterize the source of the oscillations. This technique aims at locating near-source isotropic scatterers using broadband data from global networks. For both events, we find that the oscillations emanate from the trench. We show that these monochromatic waves are due to localized oscillations of the water column. Their period corresponds to the gravest 1D mode of a water layer for vertically traveling compressional waves. We suggest that these monochromatic body waves may yield additional constraints on the source process of great subduction zone earthquakes.
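
    For reference, the gravest 1D mode of a water layer for vertically traveling compressional waves follows the quarter-wavelength relation below; the layer thickness and sound-speed values are only illustrative of trench-like settings, not taken from the paper:

```latex
% Quarter-wavelength resonance of a water column of thickness H for
% vertically travelling P waves of speed \alpha_w (illustrative values).
T_0 = \frac{4H}{\alpha_w},
\qquad
f_0 = \frac{\alpha_w}{4H}
\approx \frac{1.5\ \mathrm{km\,s^{-1}}}{4 \times 6\ \mathrm{km}}
\approx 0.06\ \mathrm{Hz}
\quad (T_0 \approx 16\ \mathrm{s}).
```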

  1. Dust Storm over the Middle East: Retrieval Approach, Source Identification, and Trend Analysis

    NASA Astrophysics Data System (ADS)

    Moridnejad, A.; Karimi, N.; Ariya, P. A.

    2014-12-01

    The Middle East region has been considered responsible for approximately 25% of the Earth's global emissions of dust particles. By developing the Middle East Dust Index (MEDI) and applying it to 70 dust storms characterized on MODIS images during the period between 2001 and 2012, we herein present a new high-resolution mapping of the major atmospheric dust source points in this region. To assist environmental managers and decision makers in taking proper and prioritized measures, we then categorize the identified sources in terms of intensity, based on indices extracted from the Deep Blue algorithm, and utilize a frequency-of-occurrence approach to find the sensitive sources. In the next step, by implementing spectral mixture analysis on Landsat TM images (1984 and 2012), a novel desertification map is presented. The aim is to understand how human perturbations and land-use change have influenced the dust storm points in the region. Preliminary results of this study indicate for the first time that ca. 39% of all detected source points are located in this newly anthropogenically desertified area. A large number of low-frequency sources are located within or close to the newly desertified areas. These severely desertified regions require immediate concern at a global scale. During the next six months, further research will be performed to confirm these preliminary results.

  2. Earthquake classification, location, and error analysis in a volcanic environment: implications for the magmatic system of the 1989-1990 eruptions at redoubt volcano, Alaska

    USGS Publications Warehouse

    Lahr, J.C.; Chouet, B.A.; Stephens, C.D.; Power, J.A.; Page, R.A.

    1994-01-01

    Determination of the precise locations of seismic events associated with the 1989-1990 eruptions of Redoubt Volcano posed a number of problems, including poorly known crustal velocities, a sparse station distribution, and an abundance of events with emergent phase onsets. In addition, the high relief of the volcano could not be incorporated into the hypoellipse earthquake location algorithm. This algorithm was modified to allow hypocenters to be located above the elevation of the seismic stations. The velocity model was calibrated on the basis of a posteruptive seismic survey, in which four chemical explosions were recorded by eight stations of the permanent network supplemented with 20 temporary seismographs deployed on and around the volcanic edifice. The model consists of a stack of homogeneous horizontal layers; setting the top of the model at the summit allows events to be located anywhere within the volcanic edifice. Detailed analysis of hypocentral errors shows that the long-period (LP) events constituting the vigorous 23-hour swarm that preceded the initial eruption on December 14 could have originated from a point 1.4 km below the crater floor. A similar analysis of LP events in the swarm preceding the major eruption on January 2 shows they also could have originated from a point, the location of which is shifted 0.8 km northwest and 0.7 km deeper than the source of the initial swarm. We suggest this shift in LP activity reflects a northward jump in the pathway for magmatic gases caused by the sealing of the initial pathway by magma extrusion during the last half of December. Volcano-tectonic (VT) earthquakes did not occur until after the initial 23-hour-long swarm. They began slowly just below the LP source and their rate of occurrence increased after the eruption of 01:52 AST on December 15, when they shifted to depths of 6 to 10 km. After January 2 the VT activity migrated gradually northward; this migration suggests northward propagating withdrawal of magma from a plexus of dikes and/or sills located in the 6 to 10 km depth range. Precise relocations of selected events prior to January 2 clearly resolve a narrow, steeply dipping, pencil-shaped concentration of activity in the depth range of 1-7 km, which illuminates the conduit along which magma was transported to the surface. A third event type, named hybrid, which blends the characteristics of both VT and LP events, originates just below the LP source, and may reflect brittle failure along a zone intersecting a fluid-filled crack. The distribution of hybrid events is elongated 0.2-0.4 km in an east-west direction. This distribution may offer constraints on the orientation and size of the fluid-filled crack inferred to be the source of the LP events.

  3. Spectral mapping tools from the earth sciences applied to spectral microscopy data.

    PubMed

    Harris, A Thomas

    2006-08-01

    Spectral imaging, originating from the field of earth remote sensing, is a powerful tool that is being increasingly used in a wide variety of applications for material identification. Several workers have used techniques like linear spectral unmixing (LSU) to discriminate materials in images derived from spectral microscopy. However, many spectral analysis algorithms rely on assumptions that are often violated in microscopy applications. This study explores algorithms originally developed as improvements on early earth imaging techniques that can be easily translated for use with spectral microscopy. To best demonstrate the application of earth remote sensing spectral analysis tools to spectral microscopy data, earth imaging software was used to analyze data acquired with a Leica confocal microscope with mechanical spectral scanning. For this study, spectral training signatures (often referred to as endmembers) were selected with the ENVI (ITT Visual Information Solutions, Boulder, CO) "spectral hourglass" processing flow, a series of tools that use the spectrally over-determined nature of hyperspectral data to find the most spectrally pure (or spectrally unique) pixels within the data set. This set of endmember signatures was then used in the full range of mapping algorithms available in ENVI to determine locations, and in some cases subpixel abundances, of endmembers. Mapping and abundance images showed a broad agreement between the spectral analysis algorithms, supported through visual assessment of output classification images and through statistical analysis of the distribution of pixels within each endmember class. The powerful spectral analysis algorithms available in COTS software, the result of decades of research in earth imaging, are easily translated to new sources of spectral data. Although the scale between earth imagery and spectral microscopy is radically different, the problem is the same: mapping material locations and abundances based on unique spectral signatures.
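
    A minimal numpy sketch of linear spectral unmixing (LSU) as referenced above: solve for endmember abundances by least squares, then approximate the usual positivity and sum-to-one constraints by clipping and renormalizing. This is a stand-in for the idea, not ENVI's constrained implementation:

```python
import numpy as np

def linear_unmix(pixels, endmembers):
    """pixels: (n_pix, n_bands) spectra; endmembers: (n_end, n_bands) signatures.
    Returns (n_pix, n_end) approximate abundances per pixel."""
    E = endmembers.T                                   # (n_bands, n_end) mixing matrix
    a, *_ = np.linalg.lstsq(E, pixels.T, rcond=None)   # unconstrained least squares
    a = np.clip(a.T, 0, None)                          # enforce non-negativity
    s = a.sum(axis=1, keepdims=True)
    return np.divide(a, s, out=np.zeros_like(a), where=s > 0)  # sum-to-one
```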

  4. Improving signal-to-noise in the direct imaging of exoplanets and circumstellar disks with MLOCI

    NASA Astrophysics Data System (ADS)

    Wahhaj, Zahed; Cieza, Lucas A.; Mawet, Dimitri; Yang, Bin; Canovas, Hector; de Boer, Jozua; Casassus, Simon; Ménard, François; Schreiber, Matthias R.; Liu, Michael C.; Biller, Beth A.; Nielsen, Eric L.; Hayward, Thomas L.

    2015-09-01

    We present a new algorithm designed to improve the signal-to-noise ratio (S/N) of point and extended source detections around bright stars in direct imaging data. One of our innovations is that we insert simulated point sources into the science images, which we then try to recover with maximum S/N. This improves the S/N of real point sources elsewhere in the field. The algorithm, based on the locally optimized combination of images (LOCI) method, is called Matched LOCI or MLOCI. We show with Gemini Planet Imager (GPI) data on HD 135344 B and Near-Infrared Coronagraphic Imager (NICI) data on several stars that the new algorithm can improve the S/N of point source detections by 30-400% over past methods. We also find no increase in false detection rates. No prior knowledge of candidate companion locations is required to use MLOCI. On the other hand, while non-blind applications may yield linear combinations of science images that seem to increase the S/N of true sources by a factor >2, they can also yield false detections at high rates. This is a potential pitfall when trying to confirm marginal detections or to redetect point sources found in previous epochs. These findings are relevant to any method where the coefficients of the linear combination are considered tunable, e.g., LOCI and principal component analysis (PCA). Thus we recommend that false detection rates be analyzed when using these techniques.

  5. Spatiotemporal patterns of ERP based on combined ICA-LORETA analysis

    NASA Astrophysics Data System (ADS)

    Zhang, Jiacai; Guo, Taomei; Xu, Yaqin; Zhao, Xiaojie; Yao, Li

    2007-03-01

    In contrast to the fMRI methods widely used to date, this method tries to map accurately the spatiotemporal patterns of activity of large neuronal populations in the human brain from ERP data recorded on the scalp, and thereby to understand more profoundly how brain systems work during a sentence-processing task. In this study, an event-related brain potential (ERP) paradigm recording the on-line responses to the processing of sentences is chosen as an example. In order to exploit the millisecond temporal resolution of ERPs while overcoming the insensitivity of ERP source localization to cerebral location, we separate these sources in space and time using a combined method of independent component analysis (ICA) and low-resolution tomography (LORETA) algorithms. ICA blindly separates the input ERP data into a sum of temporally independent and spatially fixed components arising from distinct or overlapping brain or extra-brain sources. The spatial map associated with each ICA component is then analyzed with LORETA to locate its cerebral sources throughout the full brain, under the assumption that neighboring neurons are simultaneously and synchronously activated. Our results show that the cerebral computation mechanism underlying content-word reading is mediated by the orchestrated activity of several spatially distributed brain sources located in the temporal, frontal, and parietal areas, which activate at distinct time intervals and group into different statistically independent components. Thus, combined ICA-LORETA analysis provides an encouraging and effective method for studying brain dynamics from ERP.

  6. Statistical properties of interval mapping methods on quantitative trait loci location: impact on QTL/eQTL analyses

    PubMed Central

    2012-01-01

    Background: Quantitative trait loci (QTL) detection on a huge number of phenotypes, as in eQTL detection on transcriptomic data, can be dramatically impaired by the statistical properties of interval mapping methods. One of the major outcomes is the high number of QTL detected at marker locations. The present study aims at identifying and specifying the sources of this bias, in particular in the case of analysis of data issued from outbred populations. Analytical developments were carried out in a backcross situation in order to specify the bias and to propose an algorithm to control it. The outbred population context was studied through simulated data sets in a wide range of situations. The likelihood ratio test was first analyzed under the "one QTL" hypothesis in a backcross population. Designs of sib families were then simulated and analyzed using the QTL Map software. On the basis of the theoretical results in backcross, parameters such as the population size, the density of the genetic map, the QTL effect and the true location of the QTL were taken into account under the "no QTL" and the "one QTL" hypotheses. A combination of two non-parametric tests - the Kolmogorov-Smirnov test and the Mann-Whitney-Wilcoxon test - was used in order to identify the parameters that affected the bias and to specify how much they influenced the estimation of QTL location. Results: A theoretical expression of the bias of the estimated QTL location was obtained for a backcross-type population. We demonstrated a common source of bias under the "no QTL" and the "one QTL" hypotheses and qualified the possible influence of several parameters. Simulation studies confirmed that the bias exists in outbred populations under both the "no QTL" and "one QTL" hypotheses on a linkage group. The QTL location was systematically closer to marker locations than expected, particularly in the case of low QTL effect, small population size or low density of markers, i.e. designs with low power. Practical recommendations for experimental designs for QTL detection in outbred populations are given on the basis of this bias quantification. Furthermore, an original algorithm is proposed to adjust the location of a QTL, obtained with interval mapping, that co-locates with a marker. Conclusions: Therefore, one should be attentive when a QTL is mapped at the location of a marker, especially under low power conditions. PMID:22520935

  7. Multidimensional incremental parsing for universal source coding.

    PubMed

    Bae, Soo Hyun; Juang, Biing-Hwang

    2008-10-01

    A multidimensional incremental parsing algorithm (MDIP) for multidimensional discrete sources, as a generalization of the Lempel-Ziv coding algorithm, is investigated. It consists of three essential component schemes: maximum decimation matching, a hierarchical structure for multidimensional source coding, and dictionary augmentation. As a counterpart of the longest-match search in the Lempel-Ziv algorithm, two classes of maximum decimation matching are studied. Also, the underlying behavior of the dictionary augmentation scheme for estimating the source statistics is examined. For an m-dimensional source, m augmentative patches are appended to the dictionary at each coding epoch, thus requiring the transmission of a substantial amount of information to the decoder. The hierarchical structure of the source coding algorithm resolves this issue by successively incorporating lower-dimensional coding procedures in the scheme. In regard to universal lossy source coders, we propose two distortion functions: the local average distortion and the local minimax distortion with a set of threshold levels for each source symbol. For performance evaluation, we implemented three image compression algorithms based upon the MDIP; one is lossless and the others are lossy. The lossless image compression algorithm does not perform better than Lempel-Ziv-Welch coding, but experimentally shows efficiency in capturing the source structure. The two lossy image compression algorithms are implemented using the two distortion functions, respectively. The algorithm based on the local average distortion is efficient at minimizing the signal distortion, but the images produced with the local minimax distortion have better perceptual fidelity than those of the other compression algorithms. Our insights inspire future research on feature extraction of multidimensional discrete sources.

  8. Towards the accounting of ocean wave noise in the real-time characterization of explosion detection capability

    NASA Astrophysics Data System (ADS)

    Walker, K. T.

    2013-12-01

    The detection of nuclear explosions depends on the signal-to-noise ratio of recorded signals. Cross-correlation-based array processing algorithms, such as that used by the International Data Center for nuclear monitoring, lock onto the dominant signal, masking weaker signals from potential sources of interest along different back azimuths. Microbaroms and microseisms are continuous sources of acoustic and seismic noise in the ~0.1 to 0.5 Hz range radiated by areas in the ocean where two opposing wave sets with the same period coexist. These sources of energy travel tens of thousands of kilometers and routinely dominate the recorded spectra. At any given time, this noise may render useless several arrays that are otherwise in a good position to detect an explosion. It would therefore be useful to know in real-time where such noise is expected to cause problems in event detection and location. In this presentation, I show that there is a potential to use the NOAA Wave Watch 3 (NWW3) modeling program to routinely output in real-time a prediction of the global distribution of microbarom and microseism sources. I do this by presenting a study of the detailed analysis of 12 microphone arrays around the North Pacific that recorded microbaroms during 2010. For this analysis, a time-progressive, frequency-domain beamforming approach is implemented. It is assumed that microbarom sources are illuminated by a high density of intersecting dominant microbarom back azimuths. Common pelagic sources move around the North Pacific during the boreal winter. Summertime North Pacific sources are only observed by western Pacific arrays, presumably a result of weaker microbarom radiation and westward stratospheric winds. A well-defined source is resolved ~2000 km off the coast of California in January 2011 that moves closer to land over several days. The source locations are corrected for deflection by horizontal winds using acoustic ray trace modeling with range-dependent atmospheric specifications provided by publicly available NOAA/NASA models. The observed source locations do not correlate with anomalies in NWW3 model field data. However, application of the opposing-wave, microbarom source model of Waxler and Gilbert (2006) to the NWW3 directional wave height spectra output at buoy locations within 1100 km of the western North America coastline predicts microbarom radiation in locations that correlate with observed microbarom locations. Therefore, the availability of microbarom source strength maps as a real-time product from NWW3 is simply dependent on an additional, minor code revision. Success in such a revision has recently been reported by Drs. Fabrice Ardhuin and Justin Stopa.

  9. Development and Control of the Naval Postgraduate School Planar Autonomous Docking Simulator (NPADS)

    NASA Astrophysics Data System (ADS)

    Porter, Robert D.

    2002-09-01

    The objective of this thesis was to design, construct, and develop the initial autonomous control algorithm for the NPS Planar Autonomous Docking Simulator (NPADS). The effort included hardware design, fabrication, installation, and integration; mass property determination; and the development and testing of control laws utilizing MATLAB and Simulink for modeling and LabView for NPADS control. The NPADS vehicle uses air pads and a granite table to simulate a 2-D, drag-free, zero-g space environment. It is a completely self-contained vehicle equipped with eight cold-gas, bang-bang type thrusters and a reaction wheel for motion control. A 'star sensor' CCD camera locates the vehicle on the table while a color CCD docking camera and two robotic arms will locate and dock with a target vehicle. The on-board computer system leverages PXI technology and a single source, simplifying systems integration. The vehicle is powered by two lead-acid batteries for completely autonomous operation. A graphical user interface and wireless Ethernet enable the user to command and monitor the vehicle from a remote command and data acquisition computer. Two control algorithms were developed; they allow the user to either control the thrusters and reaction wheel manually or simply specify a desired location and rotation angle.

  10. Using the Mean Shift Algorithm to Make Post Hoc Improvements to the Accuracy of Eye Tracking Data Based on Probable Fixation Locations

    DTIC Science & Technology

    2010-08-01

    astigmatism and other sources, and stay constant from time to time (LC Technologies, 2000). Systematic errors can sometimes reach many degrees of visual angle...Taking the average of all disparities would mean treating each as equally important regardless of whether they are from correct or incorrect mappings. In...likely stop somewhere near the centroid because the large hM basically treats every point equally (or nearly equally if using the multivariate

  11. Non-traditional Physics-based Inverse Approaches for Determining a Buried Object’s Location

    DTIC Science & Technology

    2008-09-01

    parameterization of its time-decay curve) in dipole models ( Pasion and Oldenburg, 2001) or the amplitudes of responding magnetic sources in the NSMS...commonly in use. According to the simple dipole model ( Pasion and Oldenburg, 2001), the secondary magnetic field due to the dipole m is 3 0 1 ˆ ˆ(3...Forum, St. Louis, MO. L. R. Pasion and D. W. Oldenburg (2001), “A discrimination algorithm for UXO using time domain electromagnetics.” J. Environ

  12. Transport and dispersion of pollutants in surface impoundments: a finite difference model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yeh, G.T.

    1980-07-01

    A surface impoundment model by finite-difference (SIMFD) has been developed. SIMFD computes the flow rate, velocity field, and the concentration distribution of pollutants in surface impoundments with any number of islands located within the region of interest. Theoretical derivations and numerical algorithm are described in detail. Instructions for the application of SIMFD and listings of the FORTRAN IV source program are provided. Two sample problems are given to illustrate the application and validity of the model.
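
    A one-dimensional sketch of the explicit finite-difference transport step underlying models of this kind (SIMFD itself is two-dimensional, with islands and a computed velocity field); the scheme choice, boundary handling, and parameter values are illustrative assumptions:

```python
import numpy as np

def advect_disperse_1d(c0, u=0.1, D=0.01, dx=1.0, dt=1.0, steps=100):
    """Explicit finite-difference step for 1-D pollutant transport
    (advection-dispersion): upwind advection plus central-difference dispersion.
    Assumes u >= 0 and a stable dt (Courant and diffusion limits)."""
    c = c0.astype(float).copy()
    for _ in range(steps):
        adv = -u * (c[1:-1] - c[:-2]) / dx                  # upwind advection
        dif = D * (c[2:] - 2 * c[1:-1] + c[:-2]) / dx ** 2  # dispersion
        c[1:-1] += dt * (adv + dif)
        c[0], c[-1] = c[1], c[-2]                           # no-flux boundaries
    return c
```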

  13. Data-driven modeling of hydroclimatic trends and soil moisture: Multi-scale data integration and decision support

    NASA Astrophysics Data System (ADS)

    Coopersmith, Evan Joseph

    The techniques and information employed for decision-making vary with the spatial and temporal scope of the assessment required. In modern agriculture, the farm owner or manager makes decisions on a day-to-day or even hour-to-hour basis for dozens of fields scattered over as much as a fifty-mile radius from some central location. Following precipitation events, land begins to dry. Land-owners and managers often trace serpentine paths of 150+ miles every morning to inspect the conditions of their various parcels. His or her objective lies in appropriate resource usage -- is a given tract of land dry enough to be workable at this moment or would he or she be better served waiting patiently? Longer-term, these owners and managers decide upon which seeds will grow most effectively and which crops will make their operations profitable. At even longer temporal scales, decisions are made regarding which fields must be acquired and sold and what types of equipment will be necessary in future operations. This work develops and validates algorithms for these shorter-term decisions, along with models of national climate patterns and climate changes to enable longer-term operational planning. A test site at the University of Illinois South Farms (Urbana, IL, USA) served as the primary location to validate machine learning algorithms, employing public sources of precipitation and potential evapotranspiration to model the wetting/drying process. In expanding such local decision support tools to locations on a national scale, one must recognize the heterogeneity of hydroclimatic and soil characteristics throughout the United States. Machine learning algorithms modeling the wetting/drying process must address this variability, and yet it is wholly impractical to construct a separate algorithm for every conceivable location. For this reason, a national hydrological classification system is presented, allowing clusters of hydroclimatic similarity to emerge naturally from annual regime curve data and facilitate the development of cluster-specific algorithms. Given the desire to enable intelligent decision-making at any location, this classification system is developed in a manner that will allow for classification anywhere in the U.S., even in an ungauged basin. Daily time series data from 428 catchments in the MOPEX database are analyzed to produce an empirical classification tree, partitioning the United States into regions of hydroclimatic similarity. In constructing a classification tree based upon 55 years of data, it is important to recognize the non-stationary nature of climate data. The shifts in climatic regimes will cause certain locations to shift their ultimate position within the classification tree, requiring decision-makers to alter land usage, farming practices, and equipment needs, and algorithms to adjust accordingly. This work adapts the classification model to address the issue of regime shifts over larger temporal scales and suggests how land-usage and farming protocol may vary from hydroclimatic shifts in decades to come. Finally, the generalizability of the hydroclimatic classification system is tested with a physically-based soil moisture model calibrated at several locations throughout the continental United States. The soil moisture model is calibrated at a given site and then applied with the same parameters at other sites within and outside the same hydroclimatic class. 
The model's performance deteriorates minimally if the calibration and validation locations are within the same hydroclimatic class, but deteriorates significantly if the calibration and validation sites are located in different hydroclimatic classes. These soil moisture estimates at the field scale are then further refined by the introduction of LiDAR elevation data, distinguishing faster-drying peaks and ridges from slower-drying valleys. The inclusion of LiDAR enabled multiple locations within the same field to be predicted accurately despite non-identical topography. This cross-application of parametric calibrations and LiDAR-driven disaggregation facilitates decision support at locations without proximally located soil moisture sensors.

  14. Precise locating approach of the beacon based on gray gradient segmentation interpolation in satellite optical communications.

    PubMed

    Wang, Qiang; Liu, Yuefei; Chen, Yiqiang; Ma, Jing; Tan, Liying; Yu, Siyuan

    2017-03-01

    Accurate location computation for a beacon is an important factor in the reliability of satellite optical communications. However, location precision is generally limited by the resolution of the CCD, so improving the location precision of a beacon is an important and urgent issue. In this paper, we present two precise centroid computation methods for locating a beacon in satellite optical communications. First, in view of its characteristics, the beacon is divided into several parts according to the gray gradients. Afterward, different numbers of interpolation points and different interpolation methods are applied in the interpolation area; we calculate the centroid position after interpolation and choose the best strategy. The method is called the gradient segmentation interpolation (GSI) algorithm. To take full advantage of the pixels in the beacon's central portion, we also present an improved segmentation square weighting (SSW) algorithm, whose effectiveness is verified by a simulation experiment. Finally, an experiment was set up to verify the GSI and SSW algorithms. The results indicate that both algorithms improve locating accuracy over the traditional gray centroid method. These approaches help to greatly improve the location precision for a beacon in satellite optical communications.
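
    The gray centroid baseline mentioned above weights each pixel coordinate by its intensity; the following minimal Python sketch (illustrative only, not the authors' GSI code, all names hypothetical) shows that baseline plus a naive interpolate-then-recompute refinement in the spirit of the paper:

      import numpy as np
      from scipy.ndimage import zoom

      def gray_centroid(img):
          # Intensity-weighted centroid of a 2D beacon spot, in pixel units.
          img = np.asarray(img, dtype=float)
          ys, xs = np.indices(img.shape)
          total = img.sum()
          return (xs * img).sum() / total, (ys * img).sum() / total

      def interpolated_centroid(img, factor=4):
          # Upsample by cubic interpolation, recompute the centroid, and map
          # back to original pixel units (approximate; a crude stand-in for
          # the paper's segmentation-guided interpolation).
          fine = zoom(np.asarray(img, dtype=float), factor, order=3)
          cx, cy = gray_centroid(fine)
          return cx / factor, cy / factor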

  15. Ecological criteria, participant preferences and location models: A GIS approach toward ATV trail planning

    Treesearch

    Stephanie A. Snyder; Jay H. Whitmore; Ingrid E. Schneider; Dennis R. Becker

    2008-01-01

    This paper presents a geographic information system (GIS)-based method for recreational trail location for all-terrain vehicles (ATVs) which considers environmental factors, as well as rider preferences for trail attributes. The method utilizes the Least-Cost Path algorithm within a GIS framework to optimize trail location. The trail location algorithm considered trail...

  16. eqMAXEL: A new automatic earthquake location algorithm implementation for Earthworm

    NASA Astrophysics Data System (ADS)

    Lisowski, S.; Friberg, P. A.; Sheen, D. H.

    2017-12-01

    A common problem with automated earthquake location systems for local- to regional-scale seismic networks is false triggering and false locations inside the network caused by larger regional- to teleseismic-distance earthquakes. This false location issue also presents a problem for earthquake early warning systems, where the societal impacts of false alarms can be very expensive. Towards solving this issue, Sheen et al. (2016) implemented a robust maximum-likelihood earthquake location algorithm known as MAXEL. It was shown, with both synthetic and real data for a small number of arrivals, that large regional events were easily identifiable through metrics in the MAXEL algorithm. In the summer of 2017, we collaboratively implemented the MAXEL algorithm as a fully functional Earthworm module and tested it in regions of the USA where false detections and alarms are observed. We show robust improvement in the ability of the Earthworm system to filter out regional and teleseismic events that would have been falsely located inside the network by the traditional Earthworm hypoinverse solution. We also explore different grid sizes in the implementation of the MAXEL algorithm, which was originally designed with South Korea as the target network size.
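
    The abstract does not spell out MAXEL's internals, but a grid-search maximum-likelihood locator of the general kind described can be sketched as follows; the uniform velocity, the L1 misfit, and all names are simplifying assumptions, not the MAXEL formulation:

      import numpy as np

      def grid_locate(stations, picks, nodes, v=6.0):
          # stations: (M, 3) receiver coordinates (km); picks: (M,) arrival times (s)
          # nodes: (N, 3) candidate hypocenters (km); v: assumed uniform velocity (km/s)
          best = (np.inf, None, None)
          for node in nodes:
              tt = np.linalg.norm(stations - node, axis=1) / v  # predicted travel times
              resid = picks - tt
              t0 = np.median(resid)                 # robust origin-time estimate
              misfit = np.abs(resid - t0).sum()     # L1 misfit resists outliers
              if misfit < best[0]:
                  best = (misfit, node, t0)
          return best  # (misfit, hypocenter, origin time)

    A large best-fit misfit, or a solution pinned to the grid edge, is one plausible flag that the triggering event lies outside the network.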

  17. Significance and interest of dense seismic arrays for understanding the mechanics of clayey landslides: a test case of 150 nodes at Super-Sauze landslide

    NASA Astrophysics Data System (ADS)

    Provost, Floriane; Malet, Jean-Philippe; Hibert, Clément; Vergne, Jérôme

    2017-04-01

    Clayey landslides present various seismic sources generated by the slope deformation (rockfalls, slidequakes, tremors, fluid transfers). However, the characterization of the micro-seismicity and the construction of advanced catalogs (classification of the seismic source, time, and location) are complex for such objects because of the variety of recorded signals, the low signal-to-noise ratios, the highly attenuating medium, and the small size of the object, which limits the picking of the P- and S-waves. A full understanding of the seismic sources is hence often difficult because of the small number of seismometers, the large source-to-sensor distances (>50 m), and the lack of a continuous, spatially distributed record of the slope deformation. Recent progress in geophysical instrumentation allowed the deployment of a dense network of 150 ZLand nodes (Tesla Corp.) combined with a Ground-Based InSAR sensor (IDS, IBIS-FM) for a period of ca. 2 months at the Super-Sauze clayey landslide (South French Alps). The ZLand nodes are vertical wireless seismometers with 12 days of autonomy. Three nodes were co-located at 50 locations in the most active part of the landslide and above the main scarp, with a sensor-to-sensor distance of ca. 50 m and a sampling frequency of 400 Hz. The Ground-Based InSAR sensor was installed in front of the landslide at a distance of ca. 800 m and acquired an image every 15 minutes. The seismic events are detected automatically based on their spectrogram content, with a signal-to-noise ratio (SNR) larger than 1.5, and automatically classified using the Random Forest algorithm. The landslide's endogenous sources are then located by optimizing the inter-trace correlation of the first arrivals. This experiment aims to document the deformation of the landslide by combining surface and depth information, and provides new insight into the interpretation of the seismic sources. The spatial distribution of the deformation is compared to the locations of the endogenous seismic events in order to analyze seismic vs. aseismic deformation.
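
    The classification step described (spectrogram-derived features fed to a Random Forest) follows a standard supervised pattern; a minimal scikit-learn sketch, in which the band features, labels, and variable names are placeholders rather than the authors' actual attributes:

      import numpy as np
      from scipy.signal import spectrogram
      from sklearn.ensemble import RandomForestClassifier

      def spectro_features(trace, fs=400.0):
          # Crude per-event features: mean log power in a few frequency bands.
          f, _, Sxx = spectrogram(trace, fs=fs, nperseg=256)
          logp = np.log10(Sxx + 1e-12)
          bands = [(1, 10), (10, 50), (50, 120), (120, 200)]
          return [logp[(f >= lo) & (f < hi)].mean() for lo, hi in bands]

      # training_traces and labels (e.g. "slidequake", "rockfall", "noise")
      # are assumed to come from a hand-labeled catalog.
      X = np.array([spectro_features(tr) for tr in training_traces])
      clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, labels)
      prediction = clf.predict([spectro_features(new_trace)])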

  18. Noninvasive Electromagnetic Source Imaging and Granger Causality Analysis: An Electrophysiological Connectome (eConnectome) Approach

    PubMed Central

    Sohrabpour, Abbas; Ye, Shuai; Worrell, Gregory A.; Zhang, Wenbo

    2016-01-01

    Objective: Combined source imaging techniques and directional connectivity analysis can provide useful information about the underlying brain networks in a non-invasive fashion. Source imaging techniques have previously been used successfully either to determine the source of activity or to extract source time-courses for Granger causality analysis. In this work, we utilize source imaging algorithms both to find the network nodes (regions of interest) and to extract the activation time series for further Granger causality analysis. The aim of this work is to find network nodes objectively from noninvasive electromagnetic signals, extract activation time-courses, and apply Granger analysis on the extracted series to study brain networks under realistic conditions. Methods: Source imaging methods are used to identify network nodes and extract time-courses, and then Granger causality analysis is applied to delineate the directional functional connectivity of the underlying brain networks. Computer simulation studies where the underlying network (nodes and connectivity pattern) is known were performed; additionally, this approach was evaluated in partial epilepsy patients to study epilepsy networks from inter-ictal and ictal signals recorded by EEG and/or MEG. Results: Localization errors of network nodes were less than 5 mm, with normalized connectivity errors of ~20%, in estimating the underlying brain networks in simulation studies. Additionally, two focal epilepsy patients were studied, and the identified nodes driving the epileptic network were concordant with clinical findings from intracranial recordings or surgical resection. Conclusion: Our study indicates that combining source imaging algorithms with Granger causality analysis can identify underlying networks precisely (in terms of both network node location and internodal connectivity). Significance: The combined source imaging and Granger analysis technique is an effective tool for studying normal or pathological brain conditions. PMID:27740473

  19. Noninvasive Electromagnetic Source Imaging and Granger Causality Analysis: An Electrophysiological Connectome (eConnectome) Approach.

    PubMed

    Sohrabpour, Abbas; Ye, Shuai; Worrell, Gregory A; Zhang, Wenbo; He, Bin

    2016-12-01

    Combined source-imaging techniques and directional connectivity analysis can provide useful information about the underlying brain networks in a noninvasive fashion. Source-imaging techniques have previously been used successfully either to determine the source of activity or to extract source time-courses for Granger causality analysis. In this work, we utilize source-imaging algorithms both to find the network nodes [regions of interest (ROI)] and to extract the activation time series for further Granger causality analysis. The aim of this work is to find network nodes objectively from noninvasive electromagnetic signals, extract activation time-courses, and apply Granger analysis on the extracted series to study brain networks under realistic conditions. Source-imaging methods are used to identify network nodes and extract time-courses, and then Granger causality analysis is applied to delineate the directional functional connectivity of the underlying brain networks. Computer simulation studies where the underlying network (nodes and connectivity pattern) is known were performed; additionally, this approach was evaluated in partial epilepsy patients to study epilepsy networks from interictal and ictal signals recorded by EEG and/or magnetoencephalography (MEG). Localization errors of network nodes were less than 5 mm, with normalized connectivity errors of ∼20%, in estimating the underlying brain networks in simulation studies. Additionally, two focal epilepsy patients were studied, and the identified nodes driving the epileptic network were concordant with clinical findings from intracranial recordings or surgical resection. Our study indicates that combining source-imaging algorithms with Granger causality analysis can identify underlying networks precisely (in terms of both network node location and internodal connectivity). The combined source imaging and Granger analysis technique is an effective tool for studying normal or pathological brain conditions.

  20. Development, Evaluation, and Application of a Primary Aerosol Model.

    PubMed

    Wang, I T; Chico, T; Huang, Y H; Farber, R J

    1999-09-01

    The Segmented-Plume Primary Aerosol Model (SPPAM) has been developed over the past several years. The earlier model development goals were simply to generalize the widely used Industrial Source Complex Short-Term (ISCST) model to simulate plume transport and dispersion under light wind conditions and to handle a large number of roadway or line sources. The goals have since been expanded to include the development of an improved algorithm for effective plume transport velocity; more accurate and efficient line and area source dispersion algorithms; and, recently, a more realistic and computationally efficient algorithm for plume depletion due to particle dry deposition. A performance evaluation of SPPAM has been carried out using the 1983 PNL dual-tracer experimental data. The results show the model predictions to be in good agreement with observations in both plume advection-dispersion and particulate matter (PM) depletion by dry deposition. For PM2.5 impact analysis, SPPAM has been applied to the Rubidoux area of California. Emission sources included in the modeling analysis are paved road dust, diesel vehicular exhaust, gasoline vehicular exhaust, and tire-wear particles from a large number of roadways in Rubidoux and surrounding areas. For the selected modeling periods, the predicted primary PM2.5 to primary PM10 concentration ratios for the Rubidoux sampling station are in the range of 0.39-0.46. The organic fractions of the primary PM2.5 impacts are estimated to be at least 34-41%. Detailed modeling results indicate that the relatively high organic fractions are primarily due to the proximity of heavily traveled roadways north of the sampling station. The predictions are influenced by a number of factors, principal among them the receptor locations relative to major roadways, the volume and composition of traffic on these roadways, and the prevailing meteorological conditions.

  1. Near-field noise prediction for aircraft in cruising flight: Methods manual. [laminar flow control noise effects analysis

    NASA Technical Reports Server (NTRS)

    Tibbetts, J. G.

    1979-01-01

    Methods for predicting noise at any point on an aircraft while the aircraft is in a cruise flight regime are presented. Developed for use in laminar flow control (LFC) noise effects analyses, they can be used in any case where aircraft-generated noise needs to be evaluated at a location on the aircraft under high-altitude, high-speed conditions. For each noise source applicable to the LFC problem, a noise computational procedure is given in algorithm format, suitable for computerization. Three categories of noise sources are covered: (1) propulsion system, (2) airframe, and (3) LFC suction system. In addition, procedures are given for noise modifications due to source soundproofing and the shielding effects of the aircraft structure wherever needed. Sample cases for each of the individual noise source procedures are provided to familiarize the user with typical input and computed data.

  2. Parallel goal-oriented adaptive finite element modeling for 3D electromagnetic exploration

    NASA Astrophysics Data System (ADS)

    Zhang, Y.; Key, K.; Ovall, J.; Holst, M.

    2014-12-01

    We present a parallel goal-oriented adaptive finite element method for accurate and efficient electromagnetic (EM) modeling of complex 3D structures. An unstructured tetrahedral mesh allows this approach to accommodate arbitrarily complex 3D conductivity variations and a priori known boundaries. The total electric field is approximated by the lowest-order linear curl-conforming shape functions, and the discretized finite element equations are solved by a sparse LU factorization. Accuracy of the finite element solution is achieved through adaptive mesh refinement that is performed iteratively until the solution converges to the desired accuracy tolerance. Refinement is guided by a goal-oriented error estimator that uses a dual-weighted residual method to optimize the mesh for accurate EM responses at the locations of the EM receivers. As a result, the mesh refinement is highly efficient, since it only targets the elements where the inaccuracy of the solution corrupts the response at the possibly distant locations of the EM receivers. We compare the accuracy and efficiency of two approaches for estimating the primary residual error required at the core of this method: one uses local element and inter-element residuals, and the other relies on solving a global residual system using a hierarchical basis. For computational efficiency, our method follows the Bank-Holst algorithm for parallelization, where solutions are computed in subdomains of the original model. To resolve the load-balancing problem, this approach applies a spectral bisection method to divide the entire model into subdomains that have approximately equal error and the same number of receivers. The finite element solutions are then computed in parallel, with each subdomain carrying out goal-oriented adaptive mesh refinement independently. We validate the newly developed algorithm by comparison with controlled-source EM solutions for 1D layered models and with 2D results from our earlier 2D goal-oriented adaptive refinement code named MARE2DEM. We demonstrate the performance and parallel scaling of this algorithm on a medium-scale computing cluster with a marine controlled-source EM example that includes a 3D array of receivers located over a 3D model with significant seafloor bathymetry variations and a heterogeneous subsurface.

  3. Joint Source Location and Focal Mechanism Inversion: efficiency, accuracy and applications

    NASA Astrophysics Data System (ADS)

    Liang, C.; Yu, Y.

    2017-12-01

    The analysis of induced seismicity has become a common practice to evaluate the results of hydraulic fracturing treatment. Liang et al. (2016) proposed a joint Source Scanning Algorithm (jSSA) to obtain microseismic event locations and focal mechanisms simultaneously. The jSSA is superior to the traditional SSA in many respects, but its computational cost is too high for real-time monitoring. In this study, we have developed several scanning schemes to reduce computation time. A multi-stage scanning scheme is shown to improve efficiency significantly while retaining accuracy. A series of tests has been carried out using both real field data and synthetic data to evaluate the accuracy of the method and its dependence on noise level, source depth, focal mechanism, and other factors. Surface-based arrays provide better constraints on horizontal location errors (<20 m) and angular errors of P axes (within 10 degrees for S/N > 0.5). For sources with varying rakes, dips, strikes, and depths, the errors are mostly controlled by the partition of positive and negative polarities among the quadrants: more evenly partitioned polarities yield better results in both locations and focal mechanisms. Nevertheless, even where some focal mechanisms are poorly resolved, the optimized jSSA method can still improve location accuracy significantly. Based on the much more densely distributed events and focal mechanisms, a gridded stress inversion is conducted to obtain an evenly distributed stress field. The full potential of the jSSA has yet to be explored in different directions, especially in earthquake seismology as seismic arrays become increasingly dense.
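
    The underlying source-scanning idea (without the focal-mechanism extension that defines the jSSA) stacks station amplitudes at the arrival times predicted for each trial source position and origin time, and keeps the brightest point; a minimal sketch under a uniform-velocity assumption, with all names hypothetical:

      import numpy as np

      def ssa_brightness(traces, fs, stations, nodes, t0s, v=3.0):
          # traces: (M, L) waveform envelopes; stations: (M, 3) coordinates (km)
          # nodes: (N, 3) trial sources (km); t0s: trial origin times (s)
          env = np.abs(traces)
          best = (-np.inf, None, None)
          for node in nodes:
              tt = np.linalg.norm(stations - node, axis=1) / v
              for t0 in t0s:
                  idx = np.round((t0 + tt) * fs).astype(int)
                  ok = (idx >= 0) & (idx < env.shape[1])
                  if not ok.any():
                      continue
                  rows = np.flatnonzero(ok)
                  br = env[rows, idx[ok]].mean()  # stack at predicted arrivals
                  if br > best[0]:
                      best = (br, node, t0)
          return best  # (brightness, location, origin time)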

  4. Revision of earthquake hypocenter locations in GEOFON bulletin data using global source-specific station terms technique

    NASA Astrophysics Data System (ADS)

    Nooshiri, N.; Saul, J.; Heimann, S.; Tilmann, F. J.; Dahm, T.

    2015-12-01

    The use of a 1D velocity model for seismic event location is often associated with significant travel-time residuals. Particularly for regional stations in subduction zones, where the velocity structure strongly deviates from the assumed 1D model, residuals of up to ±10 seconds are observed even for clear arrivals, which leads to strongly biased locations. In fact, owing to mostly regional travel-time anomalies, arrival times at regional stations do not match the location obtained with teleseismic picks, and vice versa. If the earthquake is weak and only recorded regionally, or if fast locations based on regional stations are needed, the location may be far off the corresponding teleseismic location. In this case, implementation of travel-time corrections may lead to a reduction of the travel-time residuals at regional stations and, in consequence, significantly improve the relative location accuracy. Here, we have extended the source-specific station terms (SSST) technique to regional and teleseismic distances and adapted the algorithm for probabilistic, non-linear, global-search earthquake location. The method has been applied to specific test regions using P and pP phases from the GEOFON bulletin data for all available station networks. A set of timing corrections has been calculated for each station, varying as a function of source position. In this way, an attempt is made to correct for the systematic errors introduced by limitations and inaccuracies in the assumed velocity structure, without solving for a new Earth model itself. In this presentation, we draw on examples of the application of this global SSST technique to relocate earthquakes from the Tonga-Fiji subduction zone and from the Chilean margin. Our results show a considerable decrease of the root-mean-square (RMS) residual in the final earthquake location catalogs, a major reduction of the median absolute deviation (MAD) of the travel-time residuals at regional stations, and sharper images of the seismicity compared to the initial locations.
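
    The heart of an SSST scheme is that each station's correction varies with source position: for a given event, the correction at a station is an average of that station's residuals over neighboring events. A minimal sketch of one such smoothing pass, assuming residuals from an initial location run (all array layouts hypothetical):

      import numpy as np

      def ssst_pass(ev_xyz, resid, radius=50.0):
          # ev_xyz: (E, 3) event coordinates (km); resid: (E, S) travel-time
          # residuals per station, NaN where a station has no pick.
          corr = np.zeros_like(resid)
          for e in range(ev_xyz.shape[0]):
              d = np.linalg.norm(ev_xyz - ev_xyz[e], axis=1)
              near = d <= radius            # neighborhood incl. the event itself
              corr[e] = np.nanmean(resid[near], axis=0)
          return np.nan_to_num(corr)        # subtract from picks, then relocate

    In practice the correction-relocation cycle is iterated, often with a shrinking neighborhood radius, until the locations stabilize.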

  5. Body wave tomography of Iranian Plateau

    NASA Astrophysics Data System (ADS)

    Alinaghi, A.; Koulakov, I.; Thybo, H.

    2004-12-01

    The inverse teleseismic tomography approach has been adopted to study the P and S velocity structure of the crust and upper mantle across the Iranian Plateau. The method uses phase readings from earthquakes in the study area, as reported by stations at teleseismic and regional distances, to compute the velocity anomalies in the area. This use of source-receiver reciprocity allows tomographic studies of regions with a sparse distribution of seismic stations, provided the region has sufficient seismicity. The input data for the algorithm are the arrival times of events located in Iran, taken from the ISC catalogue (1964-1996). All the sources were relocated using a 1D spherical Earth model taking into account variable Moho depth and topography. The inversion provides relocation of events simultaneously with the calculation of velocity perturbations. With a series of synthetic tests, we demonstrate the power of the algorithm to resolve both idealized and realistic anomalies using the available earthquake sources and introducing measurement errors and outliers. The velocity anomalies show that the crust and upper mantle below the Iranian Plateau comprise a low-velocity domain between the Arabian Plate and the Caspian Block, in agreement with models of the active Iranian plate trapped between the stable Turan plate in the north and the Arabian shield in the south. Our results show clear evidence of subduction at Makran in the southeastern corner of Iran, where the oceanic crust of the Oman Sea subducts underneath the Iranian Plateau, a movement which is mainly aseismic. On the other hand, the subduction and collision of the two plates along the Zagros suture zone is highly seismic and appears less consistent in our images than the Makran region.

  6. SISSY: An efficient and automatic algorithm for the analysis of EEG sources based on structured sparsity.

    PubMed

    Becker, H; Albera, L; Comon, P; Nunes, J-C; Gribonval, R; Fleureau, J; Guillotel, P; Merlet, I

    2017-08-15

    Over the past decades, a multitude of different brain source imaging algorithms have been developed to identify the neural generators underlying surface electroencephalography measurements. While most of these techniques focus on determining the source positions, only a small number of recently developed algorithms provide an indication of the spatial extent of distributed sources. In a recent comparison of brain source imaging approaches, the VB-SCCD algorithm was shown to be one of the most promising of these methods. However, this technique suffers from several problems: it leads to amplitude-biased source estimates, it has difficulties separating close sources, and it has a high computational complexity due to its implementation using second-order cone programming. To overcome these problems, we propose to include an additional regularization term that imposes sparsity in the original source domain and to solve the resulting optimization problem using the alternating direction method of multipliers. Furthermore, we show that the algorithm yields more robust solutions by taking into account the temporal structure of the data. We also propose a new method to automatically threshold the estimated source distribution, which permits delineation of the active brain regions. The new algorithm, called Source Imaging based on Structured Sparsity (SISSY), is analyzed by means of realistic computer simulations and is validated on the clinical data of four patients. Copyright © 2017 Elsevier Inc. All rights reserved.

  7. A Fast and Accurate Sparse Continuous Signal Reconstruction by Homotopy DCD with Non-Convex Regularization

    PubMed Central

    Wang, Tianyun; Lu, Xinfei; Yu, Xiaofei; Xi, Zhendong; Chen, Weidong

    2014-01-01

    In recent years, various applications involving sparse continuous signal recovery, such as source localization, radar imaging, and communication channel estimation, have been addressed from the perspective of compressive sensing (CS) theory. However, two major defects need to be tackled in any practical utilization. The first is the off-grid problem caused by the basis mismatch between arbitrarily located unknowns and the pre-specified dictionary, which makes conventional CS reconstruction methods degrade considerably. The second is the urgent demand for low-complexity algorithms, especially when real-time implementation is required. In this paper, to deal with these two problems, we present three fast and accurate sparse reconstruction algorithms, termed HR-DCD, Hlog-DCD, and Hlp-DCD, which are based on homotopy, dichotomous coordinate descent (DCD) iterations, and non-convex regularizations, combined with a grid refinement technique. Experimental results are provided to demonstrate the effectiveness of the proposed algorithms and the related analysis. PMID:24675758

  8. [GNU Pattern: open source pattern hunter for biological sequences based on SPLASH algorithm].

    PubMed

    Xu, Ying; Li, Yi-xue; Kong, Xiang-yin

    2005-06-01

    The objective was to construct a high-performance open-source software engine based on the IBM SPLASH algorithm for later research on pattern discovery. GNU Pattern (Gpat), which efficiently implements the core of the SPLASH algorithm, was developed using open-source software. The full source code of Gpat is available for other researchers to modify under the GNU license. Gpat is a successful implementation of the SPLASH algorithm and can be used as a basic framework for later research on pattern recognition in biological sequences.

  9. Einstein@Home Finds an Elusive Pulsar

    NASA Astrophysics Data System (ADS)

    Kohler, Susanna

    2015-08-01

    Since the release of the second Fermi-LAT catalog in 2012, astronomers have been hunting for 3FGL J1906.6+0720, a gamma-ray source whose association couldn't be identified. Now, personal-computer time volunteered through the Einstein@Home project has resulted in the discovery of a pulsar that had been hiding from observers for years. A Blind Search: Identifying sources detected by Fermi-LAT can be tricky: the instrument's sky resolution is limited, so the position of a source can be hard to pinpoint. The gamma-ray source 3FGL J1906.6+0720 appeared in both the second and third Fermi-LAT source catalogs, but even after years of searching, no associated radio or X-ray source had been found. A team of researchers, led by Colin Clark of the Max Planck Institute for Gravitational Physics, suspected that the source might be a gamma-ray pulsar. To confirm this, however, they needed to detect pulsed emission -- something inherently difficult given the low photon count and the uncertain position of the source. The team conducted a blind search for pulsations coming from the general direction of the gamma-ray source. Two things were needed for this search: clever data analysis and a lot of computing power. The data analysis algorithm was designed to be adaptive: it searched a 4-dimensional parameter space that included a safety margin, allowing the algorithm to wander if the source was at the edge of the parameter space. The computing power was contributed by tens of thousands of personal computers volunteered by participants in the Einstein@Home project, making much shorter work of a search that would have required dozens of years on a single laptop. (Figure caption: The sky region around the newly discovered pulsar. The dotted ellipse shows the 3FGL catalog 95% confidence region for the source. The data analysis algorithm was designed to search an area 50% larger, given by the dashed ellipse, but it was allowed to "walk away" within the gray shaded region if the source seemed to be at the edge. This adaptive flexibility is what allowed the pulsar's actual location, shown by the solid ellipse in the inset, to be found. [Clark et al., 2015]) Teasing the Signal From Misinformation: This new search method turned out to be exactly what was needed: the team was able to identify pulsed emission coming from a point located significantly outside the source's original confidence region. This source, now named PSR J1906+0722, has been determined to be a young, energetic, isolated pulsar, and the team was able to identify its spin parameters and pulse history over the last six years. So why was this mysterious pulsar so hard to find? Its misidentified position, most likely due to a secondary source that had been lumped with the pulsar into a single source, was the biggest factor. Additionally, the pulsar had an enormous glitch one year into Fermi observations of it, making it even more difficult to pin down its pulse profile. The team's success in spite of these complications indicates the value of their methods for identifying other young, radio-quiet gamma-ray pulsars that are likely hidden in the Fermi-LAT catalog. Citation: C. J. Clark et al. 2015 ApJ, 809, L2. doi:10.1088/2041-8205/809/1/L2

  10. Analysis of the geophysical data using a posteriori algorithms

    NASA Astrophysics Data System (ADS)

    Voskoboynikova, Gyulnara; Khairetdinov, Marat

    2016-04-01

    The monitoring, prediction, and prevention of extraordinary natural and technogenic events are priority problems of our time. Such events include earthquakes, volcanic eruptions, lunar-solar tides, landslides, falling celestial bodies, explosions of utilized stockpiles of ammunition, and the numerous quarry explosions in open coal mines that provoke technogenic earthquakes. Monitoring is based on a number of successive stages, which include remote registration of the event responses and measurement of the main parameters, such as the arrival times of seismic waves or the original waveforms. At the final stage, the inverse problems associated with determining the geographic location and time of the registered event are solved. Improving the accuracy of parameter estimation from the original records in high noise is therefore an important problem. As is known, the main measurement errors arise from the influence of external noise, the difference between the real and model structures of the medium, imprecision in the timing at the event epicenter, and instrumental errors. Therefore, a posteriori algorithms that are more accurate than known algorithms are proposed and investigated. They are based on a combination of a discrete optimization method and a fractal approach for the joint detection and estimation of arrival times in quasi-periodic waveform sequences in geophysical monitoring problems, with improved accuracy. Existing alternative approaches to these problems do not provide the required accuracy. The proposed algorithms are considered for the tasks of vibration sounding of the Earth during lunar and solar tides, and for the problem of monitoring the location of a borehole seismic source in commercial drilling.

  11. Development of Algorithms and Error Analyses for the Short Baseline Lightning Detection and Ranging System

    NASA Technical Reports Server (NTRS)

    Starr, Stanley O.

    1998-01-01

    NASA, at the John F. Kennedy Space Center (KSC), developed and operates a unique high-precision lightning location system to provide lightning-related weather warnings. These warnings are used to stop lightning-sensitive operations such as space vehicle launches and ground operations where equipment and personnel are at risk. The data is provided to Range Weather Operations (45th Weather Squadron, U.S. Air Force), where it is used with other meteorological data to issue weather advisories and warnings for Cape Canaveral Air Station and KSC operations. This system, called Lightning Detection and Ranging (LDAR), provides users with a three-dimensional graphical display of 66-megahertz radio frequency events generated by lightning processes. The locations of these events provide a sound basis for the prediction of lightning hazards. This document provides the basis for the design approach and data analysis for a system of radio frequency receivers that provides azimuth and elevation data for lightning pulses detected simultaneously by the LDAR system. The intent is for this direction-finding system to correct and augment the data provided by LDAR and thereby increase the rate of valid data and correct or discard any invalid data. This document develops the necessary equations and algorithms, identifies sources of systematic errors and means to correct them, and analyzes the algorithms for random error. This data analysis approach is not found in the existing literature and was developed to facilitate the operation of this Short Baseline LDAR (SBLDAR). These algorithms may also be useful for other direction-finding systems using radio pulses or ultrasonic pulse data.
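
    Although the report develops its equations in full, the core geometry of short-baseline direction finding reduces, in the plane-wave limit, to solving b_i . s = c*tau_i for the unit arrival-direction vector s, given the baseline vectors b_i and measured delays tau_i. A minimal least-squares sketch of that step (not the report's actual formulation; names are illustrative):

      import numpy as np

      def direction_from_delays(baselines, delays, c=2.998e8):
          # baselines: (K, 3) receiver-pair separation vectors (m), x east,
          # y north, z up; delays: (K,) measured time differences (s).
          s, *_ = np.linalg.lstsq(baselines, c * np.asarray(delays), rcond=None)
          s /= np.linalg.norm(s)
          azimuth = np.degrees(np.arctan2(s[0], s[1]))
          elevation = np.degrees(np.arcsin(np.clip(s[2], -1.0, 1.0)))
          return azimuth, elevation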

  12. Development of double-pair double difference location algorithm and its application to the regular earthquakes and non-volcanic tremors

    NASA Astrophysics Data System (ADS)

    Guo, H.; Zhang, H.

    2016-12-01

    Relocating earthquakes with high precision is a central task for monitoring seismicity and studying the structure of the Earth's interior. The most popular location method is the event-pair double-difference (DD) relative location method, which uses catalog and/or more accurate waveform cross-correlation (WCC) differential times from event pairs with small inter-event separations to common stations, to reduce the effect of velocity uncertainties outside the source region. Similarly, Zhang et al. [2010] developed a station-pair DD location method, which uses differential times from common events to pairs of stations to reduce the effect of velocity uncertainties near the source region, to relocate the non-volcanic tremors (NVT) beneath the San Andreas Fault (SAF). To combine the advantages of both DD location methods, we have proposed and developed a new double-pair DD location method that uses differential times from pairs of events to pairs of stations. The new method removes the event origin-time and station-correction terms from the inversion system and cancels out the effects of velocity uncertainties near and outside the source region simultaneously. We tested and applied the new method to regular earthquakes in northern California to validate its performance. Among the three DD location methods, the new double-pair DD method determines more accurate relative locations, and the station-pair DD method better improves the absolute locations. We therefore further propose a location strategy combining station-pair and double-pair differential times to determine accurate absolute and relative locations at the same time. For NVTs, it is difficult to pick the first arrivals and derive WCC event-pair differential times, so the general practice is to measure station-pair envelope WCC differential times. However, station-pair tremor locations are scattered due to the low precision of the relative locations. Because double-pair data can be constructed directly from station-pair data, the double-pair DD method can be used to improve NVT locations. We have applied the new method to the NVTs beneath the SAF near Cholame, California. Compared to previous results, the new double-pair DD tremor locations are more concentrated and show more detailed structures.
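
    The double-pair datum couples two events (k, l) and two stations (i, j); written out, it makes the cancellations described above explicit. A minimal sketch with hypothetical arrival-time arrays:

      import numpy as np

      def double_pair_residual(t_obs, t_cal, i, j, k, l):
          # t_obs, t_cal: (n_events, n_stations) observed and predicted times.
          # Differencing stations i and j cancels the event origin times;
          # differencing events k and l then cancels the station terms.
          obs = (t_obs[k, i] - t_obs[k, j]) - (t_obs[l, i] - t_obs[l, j])
          cal = (t_cal[k, i] - t_cal[k, j]) - (t_cal[l, i] - t_cal[l, j])
          return obs - cal  # datum driving the double-pair DD inversion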

  13. Using a fast-neutron spectrometer system to candle luggage for hidden explosives

    NASA Astrophysics Data System (ADS)

    Lefevre, Harlan W.; Rasmussen, R. J.; Chmelik, Michael S.; Schofield, R. M. S.; Sieger, G. E.; Overley, Jack C.

    1997-02-01

    A continuous spectrum of neutron energies up to 8.2 MeV is produced by a 4.2-MeV nanosecond-pulsed deuteron beam slowing down in a thick beryllium target. The spectrum from the locally shielded target is collimated to a horizontal fan beam and delivered to a row of 16 plastic scintillators, 6 cm square, located 4 m from the neutron source. The scintillators are coupled to 12-stage photomultiplier tubes, constant-fraction discriminators, time-to-amplitude converters, analog-to-digital converters, and digital memories. Unattenuated neutron-source spectra and background spectra are recorded. Luggage is stepped through the fan beam by an automated lift located 2 m from the neutron source. Transmission spectra are measured and transferred to a computer while the luggage is advanced one pixel width. As the next set of spectra is being measured, the computer calculates neutron attenuations for the previous set, deconvolutes attenuations into projected elemental number densities, and determines the explosive likelihood for each pixel. With a time-averaged deuteron beam current of 1 μA, a suitcase 60 cm long can be automatically imaged in 1600 s. We suggest that this time can be reduced to 8 s or less with straightforward improvements. The following paper describes the explosives recognition algorithm and presents the results of tests with explosives.

  14. Inaccuracy of Wolff-Parkinson-white accessory pathway localization algorithms in children and patients with congenital heart defects.

    PubMed

    Bar-Cohen, Yaniv; Khairy, Paul; Morwood, James; Alexander, Mark E; Cecchin, Frank; Berul, Charles I

    2006-07-01

    ECG algorithms used to localize accessory pathways (AP) in patients with Wolff-Parkinson-White (WPW) syndrome have been validated in adults, but less is known of their use in children, especially in patients with congenital heart disease (CHD). We hypothesize that these algorithms have low diagnostic accuracy in children and even lower in those with CHD. Pre-excited ECGs in 43 patients with WPW and CHD (median age 5.4 years [0.9-32 years]) were evaluated and compared to 43 consecutive WPW control patients without CHD (median age 14.5 years [1.8-18 years]). Two blinded observers predicted AP location using 2 adult and 1 pediatric WPW algorithms, and a third blinded observer served as a tiebreaker. Predicted locations were compared with ablation-verified AP location to identify (a) exact match for AP location and (b) match for laterality (left-sided vs right-sided AP). In control children, adult algorithms were accurate in only 56% and 60%, while the pediatric algorithm was correct in 77%. In 19 patients with Ebstein's anomaly, diagnostic accuracy was similar to controls with at times an even better ability to predict laterality. In non-Ebstein's CHD, however, the algorithms were markedly worse (29% for the adult algorithms and 42% for the pediatric algorithms). A relatively large degree of interobserver variability was seen (kappa values from 0.30 to 0.58). Adult localization algorithms have poor diagnostic accuracy in young patients with and without CHD. Both adult and pediatric algorithms are particularly misleading in non-Ebstein's CHD patients and should be interpreted with caution.

  15. The Development of Mobile Application to Introduce Historical Monuments in Manado

    NASA Astrophysics Data System (ADS)

    Rupilu, Moshe Markhasi; Suyoto; Santoso, Albertus Joko

    2018-02-01

    Learning the historical value of a monument is important because it preserves cultural and historical values and expands our personal insight. In Indonesia, and particularly in Manado, North Sulawesi, there are many monuments, erected to commemorate history, religion, culture, and past wars; however, this information is not recorded in detail on the monuments themselves. To obtain information on a specific monument, a manual search was previously required, i.e., asking related people or consulting other sources. To address this problem, an application was needed that uses the LBS (Location Based Service) method together with algorithmic methods designed for mobile devices such as smartphones, so that detailed information on every monument in Manado can be displayed using GPS coordinates. The application was developed with the KNN method, the K-means algorithm, and collaborative filtering to recommend monument information to tourists. Tourists receive recommended options filtered by distance, and the same method is used to find the monument closest to the user. The KNN algorithm determines the closest location by comparing distances computed from the longitude and latitude of the monuments a tourist wants to visit. With this application, tourists who want to find information on monuments in Manado can do so easily and quickly, because monument information is recommended directly to the user without manual selection; moreover, tourists can view recommended monument information and search for monuments in Manado in real time.
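
    A distance-ranked nearest-monument lookup of the kind described can be sketched with the haversine formula; the data layout and names here are illustrative only:

      import math

      def haversine_km(lat1, lon1, lat2, lon2):
          # Great-circle distance between two (lat, lon) points, in km.
          R = 6371.0
          p1, p2 = math.radians(lat1), math.radians(lat2)
          dp = math.radians(lat2 - lat1)
          dl = math.radians(lon2 - lon1)
          a = (math.sin(dp / 2) ** 2
               + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2)
          return 2 * R * math.asin(math.sqrt(a))

      def k_nearest_monuments(user, monuments, k=3):
          # user: (lat, lon); monuments: list of (name, lat, lon) records.
          ranked = sorted(monuments,
                          key=lambda m: haversine_km(user[0], user[1], m[1], m[2]))
          return ranked[:k]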

  16. Limited angle C-arm tomosynthesis reconstruction algorithms

    NASA Astrophysics Data System (ADS)

    Malalla, Nuhad A. Y.; Xu, Shiyu; Chen, Ying

    2015-03-01

    In this paper, C-arm tomosynthesis with a digital detector was investigated as a novel three-dimensional (3D) imaging technique. Digital tomosynthesis is an imaging technique that provides 3D information about an object by reconstructing slices passing through it from a series of angular projection views. C-arm tomosynthesis provides two-dimensional (2D) X-ray projection images with rotation (±20° angular range) of both the X-ray source and the detector. Four representative reconstruction algorithms were investigated: point-by-point back projection (BP), filtered back projection (FBP), the simultaneous algebraic reconstruction technique (SART), and maximum likelihood expectation maximization (MLEM). A dataset of 25 projection views of a 3D spherical object located at the center of the C-arm imaging space was simulated from 25 angular locations over a total view angle of 40 degrees. For each reconstruction algorithm, a 3D mesh plot and a 2D line profile of normalized pixel intensities on the in-focus reconstruction plane crossing the center of the object were studied. The results demonstrate the capability to generate 3D information from limited-angle C-arm tomosynthesis. Since C-arm tomosynthesis is relatively compact and portable and can avoid moving patients, it has been investigated for clinical applications ranging from tumor surgery to interventional radiology, and it is therefore important to evaluate it for such applications.
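
    The limited-angle behaviour is easy to reproduce with a parallel-beam stand-in: project a phantom over a 40-degree arc and back-project. The sketch below uses scikit-image's radon/iradon as a generic substitute for the C-arm geometry, not the paper's code:

      import numpy as np
      from skimage.data import shepp_logan_phantom
      from skimage.transform import radon, iradon

      phantom = shepp_logan_phantom()
      theta = np.linspace(-20.0, 20.0, 25)                     # 25 views, 40-degree arc
      sinogram = radon(phantom, theta=theta)                   # forward projections
      bp = iradon(sinogram, theta=theta, filter_name=None)     # unfiltered BP
      fbp = iradon(sinogram, theta=theta, filter_name="ramp")  # filtered BP
      # The out-of-plane blur and streaks in bp/fbp illustrate the artifacts
      # that limited-angle reconstruction algorithms must contend with.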

  17. Application of OMI, SCIAMACHY and GOME-2 Satellite SO2 Retrievals for Detection of Large Emission Sources

    NASA Technical Reports Server (NTRS)

    Fioletov, V.E.; McLinden, C. A.; Krotkov, N.; Yang, K.; Loyola, D. G.; Valks, P.; Theys, N.; Van Roozendael, M.; Nowlan, C. R.; Chance, K.; hide

    2013-01-01

    Retrievals of sulfur dioxide (SO2) from space-based spectrometers are at a relatively early stage of development. Factors such as interference between ozone and SO2 in the retrieval algorithms often lead to errors in the retrieved values. Measurements from the Ozone Monitoring Instrument (OMI), the Scanning Imaging Absorption Spectrometer for Atmospheric Chartography (SCIAMACHY), and the Global Ozone Monitoring Experiment-2 (GOME-2) satellite sensors, averaged over a period of several years, were used to identify locations with elevated SO2 values and to estimate their emission levels. About 30 such locations, detectable by all three sensors and linked to volcanic and anthropogenic sources, were found after applying low- and high-spatial-frequency filtration, designed to reduce noise and bias and to enhance weak signals, to the SO2 data from each instrument. Quantitatively, the mean amounts of SO2 in the vicinity of the sources estimated from the three instruments are in general agreement. However, its better spatial resolution enables OMI to detect smaller sources, and with additional detail, compared to the other two instruments. Over some regions of China, SCIAMACHY and GOME-2 data show mean SO2 values almost 1.5 times higher than those from OMI, but the suggested spatial filtration technique largely reconciles these differences.

  18. An evaluation of talker localization based on direction of arrival estimation and statistical sound source identification

    NASA Astrophysics Data System (ADS)

    Nishiura, Takanobu; Nakamura, Satoshi

    2002-11-01

    It is very important to capture distant-talking speech with high quality for a hands-free speech interface. A microphone array is an ideal candidate for this purpose. However, this approach requires localizing the target talker. Conventional talker localization algorithms in multiple-sound-source environments not only have difficulty localizing the multiple sound sources accurately, but also have difficulty localizing the target talker among the known multiple sound source positions. To cope with these problems, we propose a new talker localization algorithm consisting of two algorithms. One is a DOA (direction of arrival) estimation algorithm for multiple sound source localization based on the CSP (cross-power spectrum phase) coefficient addition method. The other is a statistical sound source identification algorithm based on a GMM (Gaussian mixture model) for localizing the target talker position among the localized multiple sound sources. In this paper, we particularly focus on the talker localization performance of the combination of these two algorithms with a microphone array. We conducted evaluation experiments in real noisy reverberant environments. As a result, we confirmed that multiple sound signals can be identified accurately as "speech" or "non-speech" by the proposed algorithm. [Work supported by ATR and MEXT of Japan.]
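
    The CSP coefficient is the cross-correlation computed from the phase of the cross-power spectrum (elsewhere called GCC-PHAT); a minimal sketch of a pairwise delay estimate, from which the DOA follows as arcsin(c*tau/d) for microphone spacing d (names and defaults are illustrative):

      import numpy as np

      def csp_delay(x, y, fs):
          # Cross-power spectrum phase delay estimate, in seconds.
          n = len(x) + len(y)
          X, Y = np.fft.rfft(x, n), np.fft.rfft(y, n)
          R = X * np.conj(Y)
          R /= np.abs(R) + 1e-15             # keep phase only (PHAT weighting)
          cc = np.fft.irfft(R, n)
          half = n // 2
          cc = np.concatenate((cc[-half:], cc[:half + 1]))  # lags -half..+half
          return (np.argmax(np.abs(cc)) - half) / fs

      def doa_degrees(tau, spacing, c=343.0):
          return np.degrees(np.arcsin(np.clip(c * tau / spacing, -1.0, 1.0)))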

  19. A Large-Scale Multi-Hop Localization Algorithm Based on Regularized Extreme Learning for Wireless Networks.

    PubMed

    Zheng, Wei; Yan, Xiaoyong; Zhao, Wei; Qian, Chengshan

    2017-12-20

    A novel large-scale multi-hop localization algorithm based on regularized extreme learning is proposed in this paper. The large-scale multi-hop localization problem is formulated as a learning problem. Unlike other similar localization algorithms, the proposed algorithm overcomes the shortcoming of traditional algorithms that are applicable only to isotropic networks, and therefore adapts well to complex deployment environments. The proposed algorithm is composed of three stages: data acquisition, modeling, and location estimation. In the data acquisition stage, training information between nodes of the given network is collected. In the modeling stage, a model relating hop counts to the physical distances between nodes is constructed using regularized extreme learning. In the location estimation stage, each node finds its specific location in a distributed manner. Theoretical analysis and several experiments show that the proposed algorithm can adapt to different topological environments with low computational cost. Furthermore, high accuracy can be achieved by this method without setting complex parameters.
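
    Regularized extreme learning trains only the output layer: the hidden weights stay random, and the output weights solve a ridge regression in closed form. A minimal sketch of the hop-count-to-distance mapping described, with placeholder shapes and names:

      import numpy as np

      rng = np.random.default_rng(0)

      def elm_fit(hops, dists, n_hidden=100, lam=1e-2):
          # hops: (n_samples, n_anchors) hop-count vectors; dists: (n_samples,)
          # known physical distances used for training.
          W = rng.standard_normal((hops.shape[1], n_hidden))
          b = rng.standard_normal(n_hidden)
          H = np.tanh(hops @ W + b)                     # random hidden layer
          beta = np.linalg.solve(H.T @ H + lam * np.eye(n_hidden), H.T @ dists)
          return W, b, beta

      def elm_predict(hops, W, b, beta):
          return np.tanh(hops @ W + b) @ beta           # estimated distances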

  20. Searching Information Sources in Networks

    DTIC Science & Technology

    2017-06-14

    During the course of this project, we made significant progress in multiple directions of the information detection...result on information source detection on non-tree networks; (2) the development of information source localization algorithms to detect multiple... information sources. The algorithms have provable performance guarantees and outperform existing algorithms in...

  1. ExoLocator--an online view into genetic makeup of vertebrate proteins.

    PubMed

    Khoo, Aik Aun; Ogrizek-Tomas, Mario; Bulovic, Ana; Korpar, Matija; Gürler, Ece; Slijepcevic, Ivan; Šikic, Mile; Mihalek, Ivana

    2014-01-01

    ExoLocator (http://exolocator.eopsf.org) collects in a single place information needed for comparative analysis of protein-coding exons from vertebrate species. The main source of data--the genomic sequences, and the existing exon and homology annotation--is the ENSEMBL database of completed vertebrate genomes. To these, ExoLocator adds the search for ostensibly missing exons in orthologous protein pairs across species, using an extensive computational pipeline to narrow down the search region for the candidate exons and find a suitable template in the other species, as well as state-of-the-art implementations of pairwise alignment algorithms. The resulting complements of exons are organized in a way currently unique to ExoLocator: multiple sequence alignments, both on the nucleotide and on the peptide levels, clearly indicating the exon boundaries. The alignments can be inspected in the web-embedded viewer, downloaded or used on the spot to produce an estimate of conservation within orthologous sets, or functional divergence across paralogues.

  2. Modelling human mobility patterns using photographic data shared online.

    PubMed

    Barchiesi, Daniele; Preis, Tobias; Bishop, Steven; Moat, Helen Susannah

    2015-08-01

    Humans are inherently mobile creatures. The way we move around our environment has consequences for a wide range of problems, including the design of efficient transportation systems and the planning of urban areas. Here, we gather data about the position in space and time of about 16 000 individuals who uploaded geo-tagged images from locations within the UK to the Flickr photo-sharing website. Inspired by the theory of Lévy flights, which has previously been used to describe the statistical properties of human mobility, we design a machine learning algorithm to infer the probability of finding people in geographical locations and the probability of movement between pairs of locations. Our findings are in general agreement with official figures in the UK and on travel flows between pairs of major cities, suggesting that online data sources may be used to quantify and model large-scale human mobility patterns.

  3. Modelling human mobility patterns using photographic data shared online

    PubMed Central

    Barchiesi, Daniele; Preis, Tobias; Bishop, Steven; Moat, Helen Susannah

    2015-01-01

    Humans are inherently mobile creatures. The way we move around our environment has consequences for a wide range of problems, including the design of efficient transportation systems and the planning of urban areas. Here, we gather data about the position in space and time of about 16 000 individuals who uploaded geo-tagged images from locations within the UK to the Flickr photo-sharing website. Inspired by the theory of Lévy flights, which has previously been used to describe the statistical properties of human mobility, we design a machine learning algorithm to infer the probability of finding people in geographical locations and the probability of movement between pairs of locations. Our findings are in general agreement with official figures in the UK and on travel flows between pairs of major cities, suggesting that online data sources may be used to quantify and model large-scale human mobility patterns. PMID:26361545

  4. Evaluation of airborne thermal infrared imagery for locating mine drainage sites in the Lower Kettle Creek and Cooks Run Basins, Pennsylvania, USA

    USGS Publications Warehouse

    Sams, James I.; Veloski, Garret

    2003-01-01

    High-resolution airborne thermal infrared (TIR) imagery data were collected over 90.6 km2 (35 mi2) of remote and rugged terrain in the Kettle Creek and Cooks Run Basins, tributaries of the West Branch of the Susquehanna River in north-central Pennsylvania. The purpose of this investigation was to evaluate the effectiveness of TIR for identifying sources of acid mine drainage (AMD) associated with abandoned coal mines. Coal mining from the late 1800s resulted in many AMD sources from abandoned mines in the area. However, very little detailed mine information was available, particularly on the source locations of AMD sites. Potential AMD sources were extracted from airborne TIR data employing custom image processing algorithms and GIS data analysis. Based on field reconnaissance of 103 TIR anomalies, 53 sites (51%) were classified as AMD. The AMD sources had low pH (<4) and elevated concentrations of iron and aluminum. Of the 53 sites, approximately 26 sites could be correlated with sites previously documented as AMD. The other 27 mine discharges identified in the TIR data were previously undocumented. This paper presents a summary of the procedures used to process the TIR data and extract potential mine drainage sites, methods used for field reconnaissance and verification of TIR data, and a brief summary of water-quality data.

  5. An EEG blind source separation algorithm based on a weak exclusion principle.

    PubMed

    Lan Ma; Blu, Thierry; Wang, William S-Y

    2016-08-01

    The question of how to separate individual brain and non-brain signals, mixed by volume conduction in electroencephalographic (EEG) and other electrophysiological recordings, is a significant problem in contemporary neuroscience. This study proposes and evaluates a novel EEG Blind Source Separation (BSS) algorithm based on a weak exclusion principle (WEP). The chief point in which it differs from most previous EEG BSS algorithms is that the proposed algorithm is not based upon the hypothesis that the sources are statistically independent. Our first step was to investigate algorithm performance on simulated signals which have ground truth. The purpose of this simulation is to illustrate the proposed algorithm's efficacy. The results show that the proposed algorithm has good separation performance. Then, we used the proposed algorithm to separate real EEG signals from a memory study using a revised version of Sternberg Task. The results show that the proposed algorithm can effectively separate the non-brain and brain sources.

  6. Boundary and object detection in real world images. [by means of algorithms

    NASA Technical Reports Server (NTRS)

    Yakimovsky, Y.

    1974-01-01

    A solution to the problem of automatic location of objects in digital pictures by computer is presented. A self-scaling local edge detector which can be applied in parallel on a picture is described. Clustering algorithms and boundary following algorithms which are sequential in nature process the edge data to locate images of objects.

  7. In vivo quantitative imaging of point-like bioluminescent and fluorescent sources: Validation studies in phantoms and small animals post mortem

    NASA Astrophysics Data System (ADS)

    Comsa, Daria Craita

    2008-10-01

    There is a real need for improved small-animal imaging techniques to enhance the development of therapies in which animal models of disease are used. Optical methods for imaging have been extensively studied in recent years, due to their high sensitivity and specificity. Methods like bioluminescence and fluorescence tomography report promising results for 3D reconstructions of source distributions in vivo. However, no standard methodology exists for optical tomography, and various groups are pursuing different approaches. In a number of studies on small animals, the bioluminescent or fluorescent sources can be reasonably approximated as point or line sources. Examples include images of bone metastases confined to the bone marrow. Starting with this premise, we propose a simpler, faster, and inexpensive technique to quantify optical images of point-like sources. The technique avoids the computational burden of a tomographic method by using planar images and a mathematical model based on diffusion theory. The model employs in situ optical properties estimated from video reflectometry measurements. Modeled and measured images are compared iteratively using a Levenberg-Marquardt algorithm to improve estimates of the depth and strength of the bioluminescent or fluorescent inclusion. The performance of the technique in quantifying bioluminescence images was first evaluated on Monte Carlo simulated data. Simulated data also facilitated a methodical investigation of the effect of errors in tissue optical properties on the retrieved source depth and strength. It was found that, for example, an error of 4% in the effective attenuation coefficient led to a 4% error in the retrieved depth for source depths of up to 12 mm, while the error in the retrieved source strength increased from 5.5% at 2 mm depth to 18% at 12 mm depth. Experiments conducted on images from homogeneous tissue-simulating phantoms showed that depths up to 10 mm could be estimated within 8%, and the relative source strength within 20%. For sources 14 mm deep, the inaccuracy in determining the relative source strength increased to 30%. Measurements on small animals post mortem showed that the use of measured in situ optical properties to characterize heterogeneous tissue resulted in a superior estimation of the source strength and depth compared to when literature optical properties for organs or tissues were used. Moreover, it was found that regardless of the heterogeneity of the implant location or depth, our algorithm consistently showed an advantage over the simple assessment of the source strength based on the signal strength in the emission image. Our bioluminescence algorithm was generally able to predict the source strength within a factor of 2 of the true strength, but the performance varied with the implant location and depth. In fluorescence imaging a more complex technique is required, including knowledge of tissue optical properties at both the excitation and emission wavelengths. A theoretical study using simulated fluorescence data showed that, for example, for a source 5 mm deep in tissue, errors of up to 15% in the optical properties would give rise to errors of ±0.7 mm in the retrieved depth, and the source strength would be over- or under-estimated by a factor ranging from 1.25 to 2. Fluorescent sources implanted in rats post mortem at the same depth were localized with an error just slightly higher than predicted theoretically: a root-mean-square value of 0.8 mm was obtained for all implants 5 mm deep. However, for this source depth, the source strength was assessed within a factor ranging from 1.3 to 4.2 of the value estimated in a controlled medium. Nonetheless, as in the bioluminescence study, the fluorescence quantification algorithm consistently showed an advantage over the simple assessment of the source strength based on the signal strength in the fluorescence image. Few studies have been reported in the literature that reconstruct known sources of bioluminescence or fluorescence in vivo or in heterogeneous phantoms. The few reported results show that the 3D tomographic methods have not yet reached their full potential. In this context, the simplicity of our technique emerges as a strong advantage.
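
    The iterative comparison step described above can be sketched in a few lines. The following is a minimal illustration, not the authors' implementation: it assumes a simplified diffusion-theory forward model for a point source under a planar surface, with illustrative optical coefficients, and uses SciPy's Levenberg-Marquardt solver to refine the source strength and depth against a measured radial profile.

    import numpy as np
    from scipy.optimize import least_squares

    MU_EFF = 0.18    # effective attenuation coefficient [1/mm] (assumed)
    D_COEF = 0.33    # diffusion coefficient [mm] (assumed)

    def planar_image(params, rho):
        """Simplified diffusion-theory image of a point source at depth d."""
        strength, depth = params
        r = np.sqrt(rho**2 + depth**2)
        return strength * np.exp(-MU_EFF * r) / (4.0 * np.pi * D_COEF * r)

    def fit_source(rho, measured, s0=1.0, d0=5.0):
        """Refine source strength and depth by Levenberg-Marquardt."""
        residuals = lambda p: planar_image(p, rho) - measured
        return least_squares(residuals, x0=[s0, d0], method="lm").x

    # Synthetic demonstration: recover a source 8 mm deep.
    rng = np.random.default_rng(0)
    rho = np.linspace(0.0, 20.0, 80)              # radial pixel positions [mm]
    truth = planar_image([50.0, 8.0], rho)
    noisy = truth * (1 + 0.02 * rng.standard_normal(rho.size))
    print(fit_source(rho, noisy))                 # approximately [50, 8]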

  8. Combined ICA-LORETA analysis of mismatch negativity.

    PubMed

    Marco-Pallarés, J; Grau, C; Ruffini, G

    2005-04-01

    A major challenge for neuroscience is to map accurately the spatiotemporal patterns of activity of the large neuronal populations that are believed to underlie computing in the human brain. To study a specific example, we selected the mismatch negativity (MMN) brain wave (an event-related potential, ERP) because it gives an electrophysiological index of a "primitive intelligence" capable of detecting changes, even abstract ones, in a regular auditory pattern. ERPs have a temporal resolution of milliseconds but appear to result from mixed neuronal contributions whose spatial location is not fully understood. Thus, it is important to separate these sources in space and time. To tackle this problem, a two-step approach was designed combining the independent component analysis (ICA) and low-resolution electromagnetic tomography (LORETA) algorithms. Here we implement this approach to analyze the subsecond spatiotemporal dynamics of MMN cerebral sources using trial-by-trial experimental data. We show evidence that a cerebral computation mechanism underlies MMN. This mechanism is mediated by the orchestrated activity of several spatially distributed brain sources located in the temporal, frontal, and parietal areas, which activate at distinct time intervals and are grouped in six main statistically independent components.
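
    The first (ICA) stage of such a two-step pipeline can be sketched with scikit-learn's FastICA; the subsequent LORETA inversion of each component's scalp map is not shown, and the data below are placeholders for real single-trial EEG epochs.

    import numpy as np
    from sklearn.decomposition import FastICA

    # eeg: (n_samples, n_channels) single-trial epochs concatenated in time.
    rng = np.random.default_rng(0)
    eeg = rng.standard_normal((2048, 32))       # placeholder data

    ica = FastICA(n_components=6, random_state=0)
    sources = ica.fit_transform(eeg)            # temporal activations (2048, 6)
    topographies = ica.mixing_                  # channel weights (32, 6)

    # Each column of `topographies` is a scalp map that would be passed to a
    # distributed inverse solver (e.g. LORETA) to localize that component.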

  9. Radio weak lensing shear measurement in the visibility domain - II. Source extraction

    NASA Astrophysics Data System (ADS)

    Rivi, M.; Miller, L.

    2018-05-01

    This paper extends the method introduced in Rivi et al. (2016b) to measure galaxy ellipticities in the visibility domain for radio weak lensing surveys. In that paper, we focused on the development and testing of the method for the simple case of individual galaxies located at the phase centre, and proposed to extend it to the realistic case of many sources in the field of view by isolating the visibilities of each source with a faceting technique. In this second paper, we present a detailed algorithm for source extraction in the visibility domain and show its effectiveness as a function of the source number density by running simulations of SKA1-MID observations in the band 950-1150 MHz and comparing original and measured values of galaxies' ellipticities. Shear measurements from a realistic population of 10^4 galaxies randomly located in a field of view of 1 deg^2 (i.e. the source density expected for the current radio weak lensing survey proposal with SKA1) are also performed. At SNR ≥ 10, the multiplicative bias is only a factor of 1.5 worse than that found when analysing individual sources, and is still comparable to the bias values reported for similar measurement methods at optical wavelengths. The additive bias is unchanged from the case of individual sources, but it is significantly larger than typically found in optical surveys. This bias depends on the shape of the uv coverage, and we suggest that a uv-plane weighting scheme producing a more isotropic shape could reduce and control the additive bias.

  10. Localization of Southern Resident Killer Whales Using Two Star Arrays to Support Marine Renewable Energy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ren, Huiying; Deng, Zhiqun; Carlson, Thomas J.

    2012-10-19

    Tidal power has been identified as one of the most promising commercial-scale renewable energy sources. Puget Sound, Washington, is a potential site to deploy tidal power generating devices. The risk of injury to killer whales needs to be managed before the deployment of these types of devices can be approved by regulating authorities. A passive acoustic system consisting of two star arrays, each with four hydrophones, was designed and implemented for the detection and localization of Southern Resident killer whales. Deployment of the passive acoustic system was conducted at Sequim Bay, Washington. A total of nine test locations were chosen, within a radius of 250 m around the star arrays, to test our localization approach. For the localization algorithm, a least-squares solver was applied to obtain a bearing from each star array. The final source location was determined by the intersection of the bearings given by the two star arrays. Bearing and distance errors were obtained by comparing the calculated locations with the true locations (from the Global Positioning System). The results indicated that bearing errors were within 1.04° for eight of the test locations; one location had bearing errors slightly larger than expected due to the strong background noise at that position. For the distance errors, six of the test locations were within the range of 1.91 to 32.36 m. The other two test locations were near the line connecting the centers of the two star arrays, which were expected to have large errors according to the theoretical sensitivity analysis performed.
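
    The final triangulation step, intersecting one bearing from each star array, reduces to a 2x2 linear solve. A minimal sketch follows; the coordinates and bearings are illustrative, not values from the experiment.

    import numpy as np

    def bearing_intersection(c1, theta1, c2, theta2):
        """Locate a source from two array centers and their bearings
        (radians, measured counter-clockwise from the x-axis)."""
        u1 = np.array([np.cos(theta1), np.sin(theta1)])
        u2 = np.array([np.cos(theta2), np.sin(theta2)])
        # Solve c1 + t1*u1 = c2 + t2*u2 for the range along each bearing.
        A = np.column_stack((u1, -u2))
        t = np.linalg.solve(A, np.asarray(c2) - np.asarray(c1))
        return np.asarray(c1) + t[0] * u1

    # Example: arrays 100 m apart, source at (60, 80).
    src = bearing_intersection((0, 0), np.arctan2(80, 60),
                               (100, 0), np.arctan2(80, -40))
    print(src)   # -> [60. 80.]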

  11. Improving image reconstruction of bioluminescence imaging using a priori information from ultrasound imaging (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Jayet, Baptiste; Ahmad, Junaid; Taylor, Shelley L.; Hill, Philip J.; Dehghani, Hamid; Morgan, Stephen P.

    2017-03-01

    Bioluminescence imaging (BLI) is a commonly used imaging modality in biology to study cancer in vivo in small animals. Images are generated using a camera to map the optical fluence emerging from the studied animal; a numerical reconstruction algorithm is then used to locate the sources and estimate their sizes. However, due to the strong light scattering properties of biological tissues, the resolution is very limited (around a few millimetres), so obtaining accurate information about the pathology is complicated. We propose a combined ultrasound/optics approach to improve the accuracy of these techniques. In addition to the BLI data, an ultrasound probe driven by a scanner is used for two main objectives: first, to obtain a pure acoustic image, which provides structural information about the sample; and second, to alter the light emission of the bioluminescent sources embedded inside the sample, which is monitored using a high-speed optical detector (e.g. a photomultiplier tube). We will show that this last measurement, used in conjunction with the ultrasound data, can provide accurate localisation of the bioluminescent sources. This can be used as a priori information by the numerical reconstruction algorithm, greatly increasing the accuracy of the BLI image reconstruction compared to the image generated using only BLI data.

  12. Application of a sparse representation method using K-SVD to data compression of experimental ambient vibration data for SHM

    NASA Astrophysics Data System (ADS)

    Noh, Hae Young; Kiremidjian, Anne S.

    2011-04-01

    This paper introduces a data compression method using the K-SVD algorithm and its application to experimental ambient vibration data for structural health monitoring purposes. Because many damage diagnosis algorithms that use system identification require vibration measurements at multiple locations, it is necessary to transmit long streams of data. In wireless sensor networks for structural health monitoring, however, data transmission is often a major source of battery consumption. Therefore, reducing the amount of data to transmit can significantly lengthen the battery life and reduce maintenance costs. The K-SVD algorithm was originally developed in information theory for sparse signal representation. This algorithm creates an optimal over-complete set of bases, referred to as a dictionary, using singular value decomposition (SVD) and represents the data as sparse linear combinations of these bases using the orthogonal matching pursuit (OMP) algorithm. Since ambient vibration data are stationary, we can segment them and represent each segment sparsely. Then only the dictionary and the sparse coefficient vectors need to be transmitted wirelessly for restoration of the original data. We applied this method to ambient vibration data measured from a four-story steel moment-resisting frame. The results show that the method can compress the data efficiently and restore the data with very little error.
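
    A hedged sketch of the compression idea: scikit-learn's DictionaryLearning stands in for K-SVD here (both learn an over-complete dictionary and sparse-code segments with OMP), and the signal is synthetic rather than measured vibration data.

    import numpy as np
    from sklearn.decomposition import DictionaryLearning

    # Segment a long stationary record into fixed-length windows.
    rng = np.random.default_rng(1)
    t = np.arange(8192) / 100.0
    signal = (np.sin(2*np.pi*1.2*t) + 0.3*np.sin(2*np.pi*4.7*t)
              + 0.05*rng.standard_normal(t.size))
    segments = signal.reshape(-1, 64)              # (128 segments, 64 samples)

    # Learn an over-complete dictionary; sparse-code each segment with OMP.
    dl = DictionaryLearning(n_components=96, transform_algorithm="omp",
                            transform_n_nonzero_coefs=5, random_state=0)
    codes = dl.fit(segments).transform(segments)   # sparse coefficients
    restored = codes @ dl.components_              # reconstruction

    err = np.linalg.norm(restored - segments) / np.linalg.norm(segments)
    print(f"relative restoration error: {err:.3%}")
    # Only the dictionary (96 x 64) and the few nonzero coefficients per
    # segment would need to be transmitted.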

  13. Discovery of spatio-temporal patterns from location-based social networks

    NASA Astrophysics Data System (ADS)

    Béjar, J.; Álvarez, S.; García, D.; Gómez, I.; Oliva, L.; Tejeda, A.; Vázquez-Salceda, J.

    2016-03-01

    Location-based social networks (LBSNs) such as Twitter or Instagram are a good source of data on users' spatio-temporal behaviour. These networks collect data from users in such a way that they can be seen as a set of collective and distributed sensors of a geographical area. A low-rate sampling of users' location information can be obtained over long intervals of time and used to discover complex patterns, including mobility profiles, points of interest or unusual events. These patterns can serve as the elements of a knowledge base for applications in domains such as mobility route planning, touristic recommendation systems or city planning. The aim of this paper is twofold: first, to analyse the frequent spatio-temporal patterns that users share when living in and visiting a city. This behaviour is studied by means of frequent-itemset algorithms in order to establish associations among visits that can be interpreted as interesting routes or spatio-temporal connections. Second, to analyse how the spatio-temporal behaviour of a large number of users can be segmented into different profiles. These behavioural profiles are obtained by means of clustering algorithms that reveal the different patterns of behaviour of visitors and citizens. The data analysed were obtained from the public data feeds of Twitter and Instagram within an area surrounding the cities of Barcelona and Milan for a period of several months. The analysis of these data shows that these kinds of algorithms can be successfully applied to data from any city (or general area) to discover useful patterns that can be interpreted in terms of singular places and areas and their temporal relationships.

  14. A Collaborative Secure Localization Algorithm Based on Trust Model in Underwater Wireless Sensor Networks

    PubMed Central

    Han, Guangjie; Liu, Li; Jiang, Jinfang; Shu, Lei; Rodrigues, Joel J.P.C.

    2016-01-01

    Localization is one of the hottest research topics in Underwater Wireless Sensor Networks (UWSNs), since many important applications of UWSNs, e.g., event sensing, target tracking and monitoring, require the location information of sensor nodes. A large number of localization algorithms have been proposed for UWSNs, and how to improve localization accuracy has been well studied. However, few of them take location reliability or security into consideration. In this paper, we propose a Collaborative Secure Localization algorithm based on a Trust model (CSLT) for UWSNs to ensure location security. Based on the trust model, the secure localization process can be divided into the following five sub-processes: trust evaluation of anchor nodes, initial localization of unknown nodes, trust evaluation of reference nodes, selection of reference nodes, and secondary localization of unknown nodes. Simulation results demonstrate that the proposed CSLT algorithm performs better than related work in terms of location security, average localization accuracy and localization ratio. PMID:26891300

  15. New approach for point pollution source identification in rivers based on the backward probability method.

    PubMed

    Wang, Jiabiao; Zhao, Jianshi; Lei, Xiaohui; Wang, Hao

    2018-06-13

    Pollution risk from the discharge of industrial waste or accidental spills during transportation poses a considerable threat to the security of rivers. The ability to quickly identify the pollution source is extremely important to enable emergency disposal of pollutants. This study proposes a new approach for point source identification of sudden water pollution in rivers, which aims to determine where (source location), when (release time) and how much pollutant (released mass) was introduced into the river. Based on the backward probability method (BPM) and a linear regression model (LR), the proposed LR-BPM converts the ill-posed problem of source identification into an optimization model, which is solved using a Differential Evolution Algorithm (DEA). The decoupled released-mass parameter does not depend on prior information, which improves the identification efficiency. A hypothetical case study with different numbers of pollution sources was conducted to test the proposed approach, and the largest relative errors for identified location, release time, and released mass in all tests were not greater than 10%. Uncertainty in the LR-BPM is mainly due to model equifinality, but averaging the results of repeated tests greatly reduces errors. Furthermore, increasing the number of gauging sections further improves identification results. A real-world case study examines the applicability of the LR-BPM in practice, where it is demonstrated to be more accurate and time-saving than two existing approaches, Bayesian-MCMC and basic DEA.
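
    A sketch of casting source identification as an optimization problem solved by differential evolution. The forward model below is a textbook 1-D instantaneous-release advection-dispersion solution, not the LR-BPM itself, and the river parameters are assumed values.

    import numpy as np
    from scipy.optimize import differential_evolution

    U, D, AREA = 0.5, 5.0, 20.0   # flow speed [m/s], dispersion [m2/s], cross-section [m2]

    def concentration(mass, x0, t0, x, t):
        """1-D instantaneous point release (advection-dispersion solution)."""
        tau = np.maximum(t - t0, 1e-6)
        return (mass / (AREA * np.sqrt(4*np.pi*D*tau))
                * np.exp(-(x - x0 - U*tau)**2 / (4*D*tau)))

    # Synthetic observations from a true source (mass, location, release time).
    t_obs = np.linspace(600.0, 7200.0, 40)
    x_gauge = 3000.0
    obs = concentration(80.0, 1000.0, 300.0, x_gauge, t_obs)

    def misfit(p):
        mass, x0, t0 = p
        return np.sum((concentration(mass, x0, t0, x_gauge, t_obs) - obs)**2)

    result = differential_evolution(misfit,
                                    bounds=[(1, 500), (0, 2500), (0, 3000)],
                                    seed=0, tol=1e-10)
    print(result.x)   # should be close to [80, 1000, 300]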

  16. AUTOCLASSIFICATION OF THE VARIABLE 3XMM SOURCES USING THE RANDOM FOREST MACHINE LEARNING ALGORITHM

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Farrell, Sean A.; Murphy, Tara; Lo, Kitty K., E-mail: s.farrell@physics.usyd.edu.au

    In the current era of large surveys and massive data sets, autoclassification of astrophysical sources using intelligent algorithms is becoming increasingly important. In this paper we present the catalog of variable sources in the Third XMM-Newton Serendipitous Source catalog (3XMM) autoclassified using the Random Forest machine learning algorithm. We used a sample of manually classified variable sources from the second data release of the XMM-Newton catalogs (2XMMi-DR2) to train the classifier, obtaining an accuracy of ∼92%. We also evaluated the effectiveness of identifying spurious detections using a sample of spurious sources, achieving an accuracy of ∼95%. Manual investigation of a random sample of classified sources confirmed these accuracy levels and showed that the Random Forest machine learning algorithm is highly effective at automatically classifying 3XMM sources. Here we present the catalog of classified 3XMM variable sources. We also present three previously unidentified unusual sources that were flagged as outlier sources by the algorithm: a new candidate supergiant fast X-ray transient, a 400 s X-ray pulsar, and an eclipsing 5 hr binary system coincident with a known Cepheid.
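
    A minimal sketch of the classification step with scikit-learn; the feature table and labels below are placeholders for the hardness-ratio and variability features and manual classes used on the real catalog.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    # Placeholder feature table: rows = variable X-ray sources, columns =
    # features such as hardness ratios and variability measures.
    rng = np.random.default_rng(2)
    X = rng.standard_normal((500, 8))
    y = (X[:, 0] > 0).astype(int) + (X[:, 1] > 0).astype(int)  # 3 stand-in classes

    clf = RandomForestClassifier(n_estimators=300, random_state=0)
    print(cross_val_score(clf, X, y, cv=5).mean())   # classification accuracy

    clf.fit(X, y)
    proba = clf.predict_proba(X[:5])
    # Sources whose maximum class probability is low can be flagged as
    # outliers for manual inspection, analogous to the unusual sources above.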

  17. Computing Maximum Cardinality Matchings in Parallel on Bipartite Graphs via Tree-Grafting

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Azad, Ariful; Buluc, Aydn; Pothen, Alex

    It is difficult to obtain high performance when computing matchings on parallel processors because matching algorithms explicitly or implicitly search for paths in the graph, and when these paths become long, there is little concurrency. In spite of this limitation, we present a new algorithm and its shared-memory parallelization that achieves good performance and scalability in computing maximum cardinality matchings in bipartite graphs. This algorithm searches for augmenting paths via specialized breadth-first searches (BFS) from multiple source vertices, hence creating more parallelism than single source algorithms. Algorithms that employ multiple-source searches cannot discard a search tree once no augmenting path is discovered from the tree, unlike algorithms that rely on single-source searches. We describe a novel tree-grafting method that eliminates most of the redundant edge traversals resulting from this property of multiple-source searches. We also employ the recent direction-optimizing BFS algorithm as a subroutine to discover augmenting paths faster. Our algorithm compares favorably with the current best algorithms in terms of the number of edges traversed, the average augmenting path length, and the number of iterations. Here, we provide a proof of correctness for our algorithm. Our NUMA-aware implementation is scalable to 80 threads of an Intel multiprocessor and to 240 threads on an Intel Knights Corner coprocessor. On average, our parallel algorithm runs an order of magnitude faster than the fastest algorithms available. The performance improvement is more significant on graphs with small matching number.

  18. Computing Maximum Cardinality Matchings in Parallel on Bipartite Graphs via Tree-Grafting

    DOE PAGES

    Azad, Ariful; Buluc, Aydn; Pothen, Alex

    2016-03-24

    It is difficult to obtain high performance when computing matchings on parallel processors because matching algorithms explicitly or implicitly search for paths in the graph, and when these paths become long, there is little concurrency. In spite of this limitation, we present a new algorithm and its shared-memory parallelization that achieves good performance and scalability in computing maximum cardinality matchings in bipartite graphs. This algorithm searches for augmenting paths via specialized breadth-first searches (BFS) from multiple source vertices, hence creating more parallelism than single source algorithms. Algorithms that employ multiple-source searches cannot discard a search tree once no augmenting path is discovered from the tree, unlike algorithms that rely on single-source searches. We describe a novel tree-grafting method that eliminates most of the redundant edge traversals resulting from this property of multiple-source searches. We also employ the recent direction-optimizing BFS algorithm as a subroutine to discover augmenting paths faster. Our algorithm compares favorably with the current best algorithms in terms of the number of edges traversed, the average augmenting path length, and the number of iterations. Here, we provide a proof of correctness for our algorithm. Our NUMA-aware implementation is scalable to 80 threads of an Intel multiprocessor and to 240 threads on an Intel Knights Corner coprocessor. On average, our parallel algorithm runs an order of magnitude faster than the fastest algorithms available. The performance improvement is more significant on graphs with small matching number.
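
    For contrast with the parallel tree-grafting method, the classic serial augmenting-path baseline for maximum cardinality bipartite matching fits in a few lines. This is a textbook sketch (Kuhn's algorithm with DFS), not the authors' algorithm.

    def max_bipartite_matching(adj, n_right):
        """Kuhn's augmenting-path algorithm (serial DFS baseline)."""
        match_right = [-1] * n_right        # right vertex -> matched left vertex

        def try_augment(u, visited):
            for v in adj[u]:
                if not visited[v]:
                    visited[v] = True
                    # v is free, or its partner can be re-matched elsewhere.
                    if match_right[v] == -1 or try_augment(match_right[v], visited):
                        match_right[v] = u
                        return True
            return False

        matched = 0
        for u in range(len(adj)):
            if try_augment(u, [False] * n_right):
                matched += 1
        return matched, match_right

    # Left vertices 0..3 with edges to right vertices 0..3.
    adj = [[0, 1], [0], [1, 2], [2, 3]]
    print(max_bipartite_matching(adj, 4))   # -> (4, [1, 0, 2, 3])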

  19. Adapting an Ant Colony Metaphor for Multi-Robot Chemical Plume Tracing

    PubMed Central

    Meng, Qing-Hao; Yang, Wei-Xing; Wang, Yang; Li, Fei; Zeng, Ming

    2012-01-01

    We consider chemical plume tracing (CPT) in time-varying airflow environments using multiple mobile robots. The purpose of CPT is to approach a gas source with a previously unknown location in a given area. Therefore, the CPT can be considered as a dynamic optimization problem in continuous domains. The traditional ant colony optimization (ACO) algorithm has been successfully used for combinatorial optimization problems in discrete domains. To adapt the ant colony metaphor to the multi-robot CPT problem, the two-dimensional continuous search area is discretized into grids and the virtual pheromone is updated according to both the gas concentration and wind information. To prevent the adapted ACO algorithm from being prematurely trapped in a local optimum, the upwind surge behavior is adopted by the robots with relatively higher gas concentration in order to explore more areas. The spiral surge (SS) algorithm is also examined for comparison. Experimental results using multiple real robots in two indoor naturally ventilated airflow environments show that the proposed CPT method performs better than the SS algorithm. The simulation results for large-scale advection-diffusion plume environments show that the proposed method could also work in outdoor meandering plume environments. PMID:22666056

  20. Adapting an ant colony metaphor for multi-robot chemical plume tracing.

    PubMed

    Meng, Qing-Hao; Yang, Wei-Xing; Wang, Yang; Li, Fei; Zeng, Ming

    2012-01-01

    We consider chemical plume tracing (CPT) in time-varying airflow environments using multiple mobile robots. The purpose of CPT is to approach a gas source with a previously unknown location in a given area. Therefore, the CPT can be considered as a dynamic optimization problem in continuous domains. The traditional ant colony optimization (ACO) algorithm has been successfully used for combinatorial optimization problems in discrete domains. To adapt the ant colony metaphor to the multi-robot CPT problem, the two-dimensional continuous search area is discretized into grids and the virtual pheromone is updated according to both the gas concentration and wind information. To prevent the adapted ACO algorithm from being prematurely trapped in a local optimum, the upwind surge behavior is adopted by the robots with relatively higher gas concentration in order to explore more areas. The spiral surge (SS) algorithm is also examined for comparison. Experimental results using multiple real robots in two indoor naturally ventilated airflow environments show that the proposed CPT method performs better than the SS algorithm. The simulation results for large-scale advection-diffusion plume environments show that the proposed method could also work in outdoor meandering plume environments.
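
    A toy sketch of the adapted ant-colony bookkeeping on a discretized grid: pheromone is deposited from both concentration and wind cues, and a robot moves stochastically toward high-pheromone cells. The weights and grid size are illustrative, not the paper's tuning.

    import numpy as np

    rng = np.random.default_rng(3)
    GRID = (40, 40)
    pheromone = np.ones(GRID)
    EVAP, K_CONC, K_WIND = 0.1, 1.0, 0.5    # illustrative weights

    def update_pheromone(cell, conc, upwind_bonus):
        """Deposit pheromone from both gas concentration and wind information."""
        pheromone[cell] = ((1 - EVAP) * pheromone[cell]
                           + K_CONC * conc + K_WIND * upwind_bonus)

    def choose_next(cell):
        """Move a robot to a neighboring cell with probability ~ pheromone."""
        i, j = cell
        nbrs = [(i+di, j+dj) for di in (-1, 0, 1) for dj in (-1, 0, 1)
                if (di or dj) and 0 <= i+di < GRID[0] and 0 <= j+dj < GRID[1]]
        w = np.array([pheromone[n] for n in nbrs])
        return nbrs[rng.choice(len(nbrs), p=w / w.sum())]

    # One illustrative step for a robot sensing concentration 0.8 while upwind.
    robot = (20, 20)
    update_pheromone(robot, conc=0.8, upwind_bonus=1.0)
    robot = choose_next(robot)
    print(robot)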

  1. Validation of YCAR algorithm over East Asia TCCON sites

    NASA Astrophysics Data System (ADS)

    Kim, W.; Kim, J.; Jung, Y.; Lee, H.; Goo, T. Y.; Cho, C. H.; Lee, S.

    2016-12-01

    In order to reduce the retrieval error of the TANSO-FTS column-averaged CO2 concentration (XCO2) induced by aerosol, we developed the Yonsei university CArbon Retrieval (YCAR) algorithm, which uses aerosol information from the TANSO Cloud and Aerosol Imager (TANSO-CAI), providing simultaneous aerosol optical depth properties for the same geometry and optical path as the FTS. We also validate the retrieved results against ground-based TCCON measurements. In particular, this study is the first to utilize measurements at Anmyeondo, the only TCCON site located in South Korea, which improves the quality of validation in East Asia. After the post-screening process, the YCAR algorithm has 33-85% higher data availability than other operational algorithms (NIES, ACOS, UoL). Although the YCAR algorithm has higher data availability, its regression statistics against TCCON measurements are better than or similar to those of the other algorithms: the regression line of the YCAR algorithm is close to the identity function, with an RMSE of 2.05 ppm and a bias of -0.86 ppm. According to the error analysis, the retrieval error of the YCAR algorithm is 1.394-1.478 ppm in East Asia. In addition, a spatio-temporal sampling error of 0.324-0.358 ppm for each single-sounding retrieval is also analyzed with CarbonTracker-Asia data. These error analysis results reveal the reliability and accuracy of the latest version of our YCAR algorithm. Both the XCO2 values retrieved using the YCAR algorithm on TANSO-FTS and the TCCON measurements show a consistent increasing trend of about 2.3-2.6 ppm per year. Compared with the growth rate of the global background CO2 measured at Mauna Loa, Hawaii (2 ppm per year), the trend in East Asia is about 30% higher, due to the rapid increase of CO2 emissions from this source region.

  2. Research to Operations: From Point Positions, Earthquake and Tsunami Modeling to GNSS-augmented Tsunami Early Warning

    NASA Astrophysics Data System (ADS)

    Stough, T.; Green, D. S.

    2017-12-01

    This collaborative research to operations demonstration brings together the data and algorithms from NASA research, technology, and applications-funded projects to deliver relevant data streams, algorithms, predictive models, and visualization tools to the NOAA National Tsunami Warning Center (NTWC) and Pacific Tsunami Warning Center (PTWC). Using real-time GNSS data and models in an operational environment, we will test and evaluate an augmented capability for tsunami early warning. Each of three research groups collects data from a selected network of real-time GNSS stations, exchanges data consisting of independently processed 1 Hz station displacements, and merges the output into a single, more accurate and reliable set. The resulting merged data stream is delivered from three redundant locations to the TWCs with a latency of 5-10 seconds. Data from a number of seismogeodetic stations with collocated GPS and accelerometer instruments are processed for displacements and seismic velocities and also delivered. Algorithms for locating and determining the magnitude of earthquakes, as well as algorithms that compute the source function of a potential tsunami using this new data stream, are included in the demonstration. The delivered data, algorithms, models and tools are hosted on NOAA-operated machines at both warning centers, and, once tested, the results will be evaluated for utility in improving the speed and accuracy of tsunami warnings. This collaboration has the potential to dramatically improve the speed and accuracy of the TWCs' local tsunami information over current seismometer-only methods. In the first year of this work, we have established and deployed an architecture for data movement and algorithm installation at the TWCs. We are addressing data quality issues and porting algorithms into the TWCs' operating environment. Our initial module deliveries will focus on estimating moment magnitude (Mw) from peak ground displacement (PGD) within 2-3 minutes of the event, and on coseismic displacements converging to static offsets. We will also develop visualizations of module outputs tailored to the operational environment. In the context of this work, we will also discuss this research-to-operations approach and other opportunities within the NASA Applied Science Disaster Program.

  3. Orbital navigation, docking and obstacle avoidance as a form of three dimensional model-based image understanding

    NASA Technical Reports Server (NTRS)

    Beyer, J.; Jacobus, C.; Mitchell, B.

    1987-01-01

    Range imagery from a laser scanner can be used to provide sufficient information for docking and obstacle avoidance procedures to be performed automatically. Three-dimensional model-based computer vision algorithms in development can perform these tasks even with targets which may not be cooperative (that is, objects without special targets or markers to provide unambiguous location points). Roll, pitch and yaw of the vehicle can be taken into account as image scanning takes place, so that these can be corrected when the image is converted from egocentric to world coordinates. Other attributes of the sensor, such as the registered reflectance and texture channels, provide additional data sources for algorithm robustness. Temporal fusion of sensor images can take place in the world coordinate domain, allowing for the building of complex maps in three-dimensional space.

  4. Imaging and detection of mines from acoustic measurements

    NASA Astrophysics Data System (ADS)

    Witten, Alan J.; DiMarzio, Charles A.; Li, Wen; McKnight, Stephen W.

    1999-08-01

    A laboratory-scale acoustic experiment is described in which a buried target, a hockey puck cut in half, is shallowly buried in a sand box. To avoid the need for source and receiver coupling to the host sand, an acoustic wave is generated in the subsurface by a pulsed laser suspended above the air-sand interface. Similarly, an airborne microphone is suspended above this interface and moved in unison with the laser. After some pre-processing of the data, reflections from the target, although weak, could clearly be identified. While the existence and location of the target can be determined by inspection of the data, its unique shape cannot. Since target discrimination is important in mine detection, a 3D imaging algorithm was applied to the acquired acoustic data. This algorithm yielded a reconstructed image in which the shape of the target was resolved.

  5. W phase source inversion for moderate to large earthquakes (1990-2010)

    USGS Publications Warehouse

    Duputel, Zacharie; Rivera, Luis; Kanamori, Hiroo; Hayes, Gavin P.

    2012-01-01

    Rapid characterization of the earthquake source and of its effects is a growing field of interest. Until recently, it still took several hours to determine the first-order attributes of a great earthquake (e.g. Mw ≥ 7.5), even in a well-instrumented region. The main limiting factors were data saturation, the interference of different phases and the time duration and spatial extent of the source rupture. To accelerate centroid moment tensor (CMT) determinations, we have developed a source inversion algorithm based on modelling of the W phase, a very long period phase (100-1000 s) arriving at the same time as the P wave. The purpose of this work is to finely tune and validate the algorithm for large-to-moderate-sized earthquakes using three components of W phase ground motion at teleseismic distances. To that end, the point source parameters of all Mw ≥ 6.5 earthquakes that occurred between 1990 and 2010 (815 events) are determined using Federation of Digital Seismograph Networks, Global Seismographic Network broad-band stations and STS1 global virtual networks of the Incorporated Research Institutions for Seismology Data Management Center. For each event, a preliminary magnitude obtained from W phase amplitudes is used to estimate the initial moment rate function half duration and to define the corner frequencies of the passband filter that will be applied to the waveforms. Starting from these initial parameters, the seismic moment tensor is calculated using a preliminary location as a first approximation of the centroid. A full CMT inversion is then conducted for centroid timing and location determination. Comparisons with Harvard and Global CMT solutions highlight the robustness of W phase CMT solutions at teleseismic distances. The differences in Mw rarely exceed 0.2 and the source mechanisms are very similar to one another. Difficulties arise when a target earthquake is shortly (e.g. within 10 hr) preceded by another large earthquake, which disturbs the waveforms of the target event. To deal with such difficult situations, we remove the perturbation caused by earlier disturbing events by subtracting the corresponding synthetics from the data. The CMT parameters for the disturbed event can then be retrieved using the residual seismograms. We also explore the feasibility of obtaining source parameters for smaller earthquakes of Mw 6.0 or larger.

  6. Enhancing weak transient signals in SEVIRI false colour imagery: application to dust source detection in southern Africa

    NASA Astrophysics Data System (ADS)

    Murray, Jon E.; Brindley, Helen E.; Bryant, Robert G.; Russell, Jacqui E.; Jenkins, Katherine F.

    2013-04-01

    Understanding the processes governing the availability and entrainment of mineral dust into the atmosphere requires dust sources to be identified and the evolution of dust events to be monitored. To achieve this aim a wide range of approaches have been developed utilising observations from a variety of different satellite sensors. Global maps of source regions and their relative strengths have been derived from instruments in low Earth orbit (e.g. Total Ozone Monitoring Spectrometer (TOMS) (Prospero et al., 2002), MODerate resolution Imaging Spectrometer (MODIS) (Ginoux et al., 2012)). Instruments such as MODIS can also be used to improve precise source location (Baddock et al., 2009), but the information available is restricted to the satellite overpass times, which may not be coincident with active dust emission from the source. Hence, at a regional scale, some of the more successful approaches used to characterise the activity of different sources use the high temporal resolution data available from instruments in geostationary orbit. For example, the widely used red-green-blue (RGB) dust scheme developed by Lensky and Rosenfeld (2008) (hereafter LR2008) makes use of observations from selected thermal channels of the Spinning Enhanced Visible and InfraRed Imager (SEVIRI) in a false colour rendering scheme in which dust appears pink. This scheme has provided the basis for numerous studies of north African dust sources and the factors governing their activation (e.g. Schepanski et al., 2007, 2009, 2012). However, the LR2008 imagery can fail to identify dust events due to the effects of atmospheric moisture, variations in dust layer height and optical properties, and surface conditions (Brindley et al., 2012). Here we introduce a new method designed to circumvent some of these issues and enhance the signature of dust events using observations from SEVIRI. The approach involves the derivation of a composite clear-sky signal for selected channels on an individual time-step and pixel basis. These composite signals are subtracted from each observation in the relevant channels to enhance weak transient signals associated with low levels of dust emission. Different channel combinations are then rendered in false colour imagery to better identify dust source locations and activity. We have applied this new clear-sky difference (CSD) algorithm over three key source regions in southern Africa: the Makgadikgadi Basin, Etosha Pan, and the Namibian and western South African coast. Case studies indicate that advantages associated with the CSD approach include an improved ability to detect dust and distinguish multiple sources, the observation of source activation earlier in the diurnal cycle, and an improved ability to pinpoint dust source locations. These advantages are confirmed by a survey of four years of data, comparing the results obtained using the CSD technique with those derived from LR2008 dust imagery. On average the new algorithm more than doubles the number of dust events identified, with the greatest improvement for the Makgadikgadi Basin and coastal regions. We anticipate exploiting this new activation record derived using the CSD approach to better understand the surface and meteorological conditions controlling dust uplift and subsequent atmospheric transport.
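
    The compositing idea can be sketched as follows, assuming a per-pixel, per-time-slot archive of brightness temperatures. Taking a high quantile over preceding days as the clear-sky reference is one plausible choice, not necessarily the paper's exact recipe.

    import numpy as np

    # bt: brightness temperatures, shape (n_days, n_slots, ny, nx), one channel.
    rng = np.random.default_rng(4)
    bt = 280 + 5*rng.standard_normal((28, 96, 50, 50))

    def clear_sky_difference(bt, day, slot, quantile=0.9):
        """Difference from a per-pixel, per-time-slot clear-sky composite.

        The composite is a high quantile of the preceding days at the same
        slot: dust and cloud depress the thermal brightness temperature, so
        near-maximum values approximate clear sky."""
        composite = np.quantile(bt[:day, slot], quantile, axis=0)
        return bt[day, slot] - composite

    csd = clear_sky_difference(bt, day=27, slot=40)
    # Strongly negative csd pixels flag weak transient emission signals that
    # a static false-colour rendering can miss.
    print(csd.min(), csd.max())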

  7. An optimized inverse modelling method for determining the location and strength of a point source releasing airborne material in urban environment

    NASA Astrophysics Data System (ADS)

    Efthimiou, George C.; Kovalets, Ivan V.; Venetsanos, Alexandros; Andronopoulos, Spyros; Argyropoulos, Christos D.; Kakosimos, Konstantinos

    2017-12-01

    An improved inverse modelling method to estimate the location and the emission rate of an unknown stationary point source of passive atmospheric pollutant in a complex urban geometry is incorporated in the Computational Fluid Dynamics code ADREA-HF and presented in this paper. The key improvement over the previous version of the method lies in a two-step segregated approach. First, only the source coordinates are analysed, using a correlation function of measured and calculated concentrations. In the second step the source rate is identified by minimizing a quadratic cost function. The validation of the new algorithm is performed by simulating the MUST wind tunnel experiment. A grid-independent flow field solution is first attained by applying successive refinements of the computational mesh, and the final wind flow is validated against the measurements quantitatively and qualitatively. The old and new versions of the source term estimation method are tested on a coarse and a fine mesh. The new method appeared to be more robust, giving satisfactory estimations of source location and emission rate on both grids. The performance of the old version of the method varied between failure and success and appeared to be sensitive to the selection of the model error magnitude that needs to be inserted in its quadratic cost function. The performance of the method also depends on the number and the placement of sensors constituting the measurement network. Of significant interest for the practical application of the method in urban settings is the number of concentration sensors required to obtain a "satisfactory" determination of the source. The probability of obtaining a satisfactory solution (according to specified criteria) by the new method has been assessed as a function of the number of sensors that constitute the measurement network.

  8. Research on fully distributed optical fiber sensing security system localization algorithm

    NASA Astrophysics Data System (ADS)

    Wu, Xu; Hou, Jiacheng; Liu, Kun; Liu, Tiegen

    2013-12-01

    A new fully distributed optical fiber sensing and location technology based on Mach-Zehnder interferometers is studied. In this security system, a new climbing-point locating algorithm based on the short-time average zero-crossing rate is presented. By calculating the zero-crossing rates of multiple data groups separately, it exploits the advantages of frequency analysis to determine the most effective data group more accurately, while also meeting the requirements of a real-time monitoring system. Supplemented with a short-term energy calculation of the grouped signal, the most effective data group can be quickly picked out. Finally, the accurate location of the climbing point can be effectively achieved through a cross-correlation localization algorithm. The experimental results show that the proposed algorithm can accurately locate the climbing point while the interference noise from non-climbing behavior is effectively filtered out.
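
    A sketch of the two signal-processing ingredients named above: the short-time zero-crossing rate for picking the most event-like data group, and cross-correlation for the time delay. The sample rate and synthetic interferometer outputs are illustrative.

    import numpy as np

    FS = 10_000  # sample rate [Hz] (assumed)

    def short_time_zcr(x, win=256, hop=128):
        """Short-time average zero-crossing rate of one data group."""
        frames = np.lib.stride_tricks.sliding_window_view(x, win)[::hop]
        signs = np.signbit(frames).astype(np.int8)
        return np.mean(np.abs(np.diff(signs, axis=1)), axis=1)

    def delay_by_xcorr(a, b):
        """Lag (in samples) of b relative to a via full cross-correlation."""
        lags = np.arange(-len(b) + 1, len(a))
        return lags[np.argmax(np.correlate(a, b, mode="full"))]

    # Two interferometer outputs with a 37-sample relative delay.
    rng = np.random.default_rng(5)
    event = rng.standard_normal(2048)
    a = np.concatenate([np.zeros(100), event, np.zeros(200)])
    b = np.concatenate([np.zeros(137), event, np.zeros(163)])
    print(delay_by_xcorr(a, b))          # -> -37 (b lags a by 37 samples)
    # ZCR over grouped segments picks the most event-like group before
    # cross-correlating for the climbing-point position.
    print(short_time_zcr(a)[:3])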

  9. The detection error of thermal test low-frequency cable based on M sequence correlation algorithm

    NASA Astrophysics Data System (ADS)

    Wu, Dongliang; Ge, Zheyang; Tong, Xin; Du, Chunlin

    2018-04-01

    The low accuracy and efficiency of off-line detection of faults in thermal test low-frequency cables can be addressed by designing a cable fault detection system that uses an FPGA-generated M-sequence code (linear feedback shift register sequence) as the pulse signal source. The design principle of the SSTDR (spread spectrum time-domain reflectometry) reflection method and the hardware setup for on-line monitoring are discussed in this paper. Test data show that the detection error increases with the fault distance along the thermal test low-frequency cable.
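
    A minimal sketch of the M-sequence correlation principle behind SSTDR: generate a maximal-length code with a linear feedback shift register, inject it, and correlate the received signal against the code so the fault reflection shows up as a secondary peak at the round-trip lag. The tap positions and the simulated reflection are illustrative.

    import numpy as np

    def m_sequence(taps=(7, 6), nbits=7):
        """Maximal-length sequence from a Fibonacci LFSR, values in {-1, +1}."""
        state = [1] * nbits
        out = []
        for _ in range(2**nbits - 1):
            out.append(state[-1])
            fb = state[taps[0] - 1] ^ state[taps[1] - 1]
            state = [fb] + state[:-1]
        return np.array(out) * 2 - 1

    code = m_sequence()                       # 127-chip probe signal
    # Simulated cable response: incident wave plus a -0.5 reflection 40 chips later.
    rx = np.concatenate([code, np.zeros(100)])
    rx[40:40 + code.size] -= 0.5 * code
    rx += 0.05 * np.random.default_rng(6).standard_normal(rx.size)

    corr = np.correlate(rx, code, mode="full")[code.size - 1:]
    print(np.argmax(np.abs(corr[1:])) + 1)    # -> 40, the fault lag in chips
    # Lag 0 holds the incident correlation peak; the next dominant (negative)
    # peak gives the round-trip delay, hence the fault distance.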

  10. MECH: Algorithms and Tools for Automated Assessment of Potential Attack Locations (Software User Guide)

    DTIC Science & Technology

    2015-10-02

    ...The architecture of MECH v0.1 is shown in Figure 1. The Android MECH-App shown on the left side of the figure is for end users to request tactical risk... when or where the next attack will take place. MECH-App runs on a touch-screen-based Android device for end users to access the...

  11. The development of efficient numerical time-domain modeling methods for geophysical wave propagation

    NASA Astrophysics Data System (ADS)

    Zhu, Lieyuan

    This Ph.D. dissertation focuses on the numerical simulation of geophysical wave propagation in the time domain, including elastic waves in solid media, acoustic waves in fluid media, and electromagnetic waves in dielectric media. This thesis shows that a linear system model can accurately describe the physical processes of geophysical wave propagation and can be used as a sound basis for modeling geophysical wave propagation phenomena. The generalized stability condition for numerical modeling of wave propagation is therefore discussed in the context of linear system theory. The efficiency of a series of different time-domain numerical algorithms for modeling geophysical wave propagation is discussed and compared. These algorithms include the finite-difference time-domain (FDTD) method, the pseudospectral time-domain (PSTD) method, and the alternating-direction implicit (ADI) finite-difference time-domain method. The advantages and disadvantages of these numerical methods are discussed and the specific stability condition for each modeling scheme is carefully derived in the context of linear system theory. Based on the review and discussion of these existing approaches, the split-step ADI pseudospectral time-domain (SS-ADI-PSTD) method is developed and tested for several cases. Moreover, the state-of-the-art stretched-coordinate perfectly matched layer (SCPML) has also been implemented in the SS-ADI-PSTD algorithm as the absorbing boundary condition for truncating the computational domain and absorbing the artificial reflections from the domain boundaries. After algorithmic development, a few case studies serve as real-world examples to verify the capacities of the numerical algorithms and to understand the capabilities and limitations of geophysical methods for the detection of subsurface contamination. The first case is a study using ground-penetrating radar (GPR) amplitude variation with offset (AVO) for subsurface non-aqueous-phase liquid (NAPL) contamination. The numerical AVO study reveals that the normalized residual polarization (NRP) variation with offset does not respond to subsurface NAPL when the offset is close to or larger than its critical value (which corresponds to the critical incident angle), because the air and head waves dominate the recorded wave field and severely interfere with reflected waves in the TEz wave field. Thus it can be concluded that the NRP AVO/GPR method is invalid when the source-receiver offset is close to or greater than its critical value, due to incomplete and severely distorted reflection information. In other words, AVO is not a promising technique for detection of subsurface NAPL, as claimed by some researchers. In addition, the robustness of the newly developed numerical algorithms is also verified by the AVO study for randomly arranged layered media. Meanwhile, this case study also demonstrates again that full-wave numerical modeling algorithms are superior to the ray-tracing method. The second case study focuses on the effect of the existence of a near-surface fault on vertically incident P- and S- plane waves. The modeling results show that both the P-wave and S-wave vertical-incidence cases are suitable fault indicators. For the plane S-wave vertical-incidence case, the horizontal location of the upper tip of the fault (on the footwall side) can be identified without much effort, because all the recorded parameters on the surface, including the maximum velocities, the maximum accelerations, and even their H/V ratios, show dramatic changes when crossing the upper tip of the fault. The centers of the transition zones of all the parameter curves are almost directly above the fault tip (roughly the horizontal center of the model). Compared with the case of the vertically incident P-wave source, the S-wave vertical source has been found to be a better indicator of fault location, because the horizontal location of the fault tip cannot be clearly identified with the ratio of the horizontal to vertical velocity in the P-wave incidence case.

  12. Standardized shrinking LORETA-FOCUSS (SSLOFO): a new algorithm for spatio-temporal EEG source reconstruction.

    PubMed

    Liu, Hesheng; Schimpf, Paul H; Dong, Guoya; Gao, Xiaorong; Yang, Fusheng; Gao, Shangkai

    2005-10-01

    This paper presents a new algorithm called Standardized Shrinking LORETA-FOCUSS (SSLOFO) for solving the electroencephalogram (EEG) inverse problem. Multiple techniques are combined in a single procedure to robustly reconstruct the underlying source distribution with high spatial resolution. This algorithm uses a recursive process which takes the smooth estimate of sLORETA as initialization and then employs the re-weighted minimum norm introduced by FOCUSS. An important technique called standardization is involved in the recursive process to enhance the localization ability. The algorithm is further improved by automatically adjusting the source space according to the estimate of the previous step, and by the inclusion of temporal information. Simulation studies are carried out on both spherical and realistic head models. The algorithm achieves very good localization ability on noise-free data. It is capable of recovering complex source configurations with arbitrary shapes and can produce high quality images of extended source distributions. We also characterized the performance with noisy data in a realistic head model. An important feature of this algorithm is that the temporal waveforms are clearly reconstructed, even for closely spaced sources. This provides a convenient way to estimate neural dynamics directly from the cortical sources.
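
    The re-weighted minimum-norm (FOCUSS) recursion at the core of such schemes can be sketched compactly. The standardization step and the sLORETA initialization specific to SSLOFO are omitted here; a plain minimum-norm start is used instead, on an underdetermined toy problem.

    import numpy as np

    def focuss(A, b, n_iter=20, eps=1e-12):
        """Re-weighted minimum-norm (FOCUSS) recursion for A x = b.

        Starts from the ordinary minimum-norm solution and iteratively
        concentrates energy onto a few sources by re-weighting with the
        magnitude of the previous estimate."""
        x = np.linalg.pinv(A) @ b
        for _ in range(n_iter):
            W = np.diag(np.abs(x) + eps)
            x = W @ np.linalg.pinv(A @ W) @ b
        return x

    # Toy problem: 8 sensors, 40 candidate sources, 2 truly active.
    rng = np.random.default_rng(7)
    A = rng.standard_normal((8, 40))
    x_true = np.zeros(40)
    x_true[[5, 22]] = [1.0, -0.7]
    b = A @ x_true

    x_hat = focuss(A, b)
    print(np.flatnonzero(np.abs(x_hat) > 0.1))   # typically recovers [5 22]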

  13. Analysis of source regions and meteorological factors for the variability of spring PM10 concentrations in Seoul, Korea

    NASA Astrophysics Data System (ADS)

    Lee, Jangho; Kim, Kwang-Yul

    2018-02-01

    CSEOF analysis is applied to the springtime (March, April, May) daily PM10 concentrations measured at 23 Ministry of Environment stations in Seoul, Korea for the period 2003-2012. Six meteorological variables at 12 pressure levels are also acquired from the ERA-Interim reanalysis datasets. CSEOF analysis is conducted for each meteorological variable over East Asia. Regression analysis is conducted in CSEOF space between the PM10 concentrations and individual meteorological variables to identify the associated atmospheric conditions for each CSEOF mode. By adding the regressed loading vectors to the mean meteorological fields, the daily atmospheric conditions are obtained for the first five CSEOF modes. Then, the HYSPLIT model is run with the atmospheric conditions for each CSEOF mode in order to back-trace the air parcels and dust reaching Seoul. The K-means clustering algorithm is applied to identify major source regions for each CSEOF mode of the PM10 concentrations in Seoul. The three main source regions identified based on the mean fields are: (1) the northern Taklamakan Desert (NTD), (2) the Gobi Desert (GD), and (3) the East China industrial area (ECI). The main source regions for the mean meteorological fields are consistent with those of a previous study: 41% of the source locations are in GD, followed by ECI (37%) and NTD (21%). Back-trajectory calculations based on the CSEOF analysis of meteorological variables identify distinct source characteristics associated with each CSEOF mode and greatly facilitate the interpretation of the PM10 variability in Seoul in terms of transport routes and meteorological conditions in the source areas.

  14. The Chandra Source Catalog 2.0: the Galactic center region

    NASA Astrophysics Data System (ADS)

    Civano, Francesca Maria; Allen, Christopher E.; Anderson, Craig S.; Budynkiewicz, Jamie A.; Burke, Douglas; Chen, Judy C.; D'Abrusco, Raffaele; Doe, Stephen M.; Evans, Ian N.; Evans, Janet D.; Fabbiano, Giuseppina; Gibbs, Danny G., II; Glotfelty, Kenny J.; Graessle, Dale E.; Grier, John D.; Hain, Roger; Hall, Diane M.; Harbo, Peter N.; Houck, John C.; Lauer, Jennifer L.; Laurino, Omar; Lee, Nicholas P.; Martínez-Galarza, Juan Rafael; McCollough, Michael L.; McDowell, Jonathan C.; Miller, Joseph; McLaughlin, Warren; Morgan, Douglas L.; Mossman, Amy E.; Nguyen, Dan T.; Nichols, Joy S.; Nowak, Michael A.; Paxson, Charles; Plummer, David A.; Primini, Francis Anthony; Rots, Arnold H.; Siemiginowska, Aneta; Sundheim, Beth A.; Tibbetts, Michael; Van Stone, David W.; Zografou, Panagoula

    2018-01-01

    The second release of the Chandra Source Catalog (CSC 2.0) comprises all 10,382 ACIS and HRC-I imaging observations taken by Chandra and released publicly through the end of 2014. Among these, 534 individual observations surrounding the Galactic center are included, covering a total area of ~19 deg^2 and a total exposure time of ~9 Ms. The 534 individual observations were merged into 379 stacks (overlapping observations with aim-points within 60") to increase the flux limit for source detection purposes. Thanks to the combination of the point source detection algorithm with the maximum likelihood technique used to assess the source significance, ~21,000 detections are listed in CSC 2.0 for this field alone, 80% of which are unique sources. The central region of this field, around the Sgr A* location, has the deepest exposure of 2.2 Ms and the highest source density, with ~5000 sources. In this poster, we present details about this region, including source distribution and density, coverage, and exposure. This work has been supported by NASA under contract NAS 8-03060 to the Smithsonian Astrophysical Observatory for operation of the Chandra X-ray Center.

  15. Geographically Modified PageRank Algorithms: Identifying the Spatial Concentration of Human Movement in a Geospatial Network.

    PubMed

    Chin, Wei-Chien-Benny; Wen, Tzai-Hung

    2015-01-01

    A network approach, which simplifies geographic settings as a form of nodes and links, emphasizes the connectivity and relationships of spatial features. Topological networks of spatial features are used to explore geographical connectivity and structures. The PageRank algorithm, a network metric, is often used in the geographical literature to help identify important locations where people or automobiles concentrate. However, geographic considerations, including proximity and location attractiveness, are ignored in most network metrics. The objective of the present study is to propose two geographically modified PageRank algorithms, Distance-Decay PageRank (DDPR) and Geographical PageRank (GPR), that incorporate geographic considerations into PageRank algorithms to identify the spatial concentration of human movement in a geospatial network. Our findings indicate that in both intercity and within-city settings the proposed algorithms more effectively capture the spatial locations where people reside than traditional, commonly used network metrics. In comparing location attractiveness and distance decay, we conclude that the concentration of human movement is largely determined by distance decay. This implies that geographic proximity remains a key factor in human mobility.
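
    One plausible way to fold distance decay into PageRank, in the spirit of DDPR though not necessarily the paper's exact weighting, is to scale each edge by a power-law decay of its length before row-normalizing the transition matrix:

    import numpy as np

    def distance_decay_pagerank(adj, coords, beta=1.0, d=0.85, n_iter=100):
        """PageRank with transitions weighted by power-law distance decay.

        adj: adjacency matrix (i -> j); coords: node positions. Each edge
        i -> j is weighted by dist(i, j)**(-beta) before row normalization,
        so nearby neighbors attract more of a node's rank."""
        diff = coords[:, None, :] - coords[None, :, :]
        dist = np.linalg.norm(diff, axis=-1) + np.eye(len(coords))  # avoid /0
        w = adj * dist**(-beta)
        row = w.sum(axis=1, keepdims=True)
        P = np.divide(w, row, out=np.full_like(w, 1.0/len(w)), where=row > 0)
        r = np.full(len(w), 1.0 / len(w))
        for _ in range(n_iter):
            r = (1 - d) / len(w) + d * (P.T @ r)
        return r

    # Four nodes on a line, fully connected: rank should favor central nodes.
    coords = np.array([[0.0, 0], [1, 0], [2, 0], [3, 0]])
    adj = 1 - np.eye(4)
    print(distance_decay_pagerank(adj, coords))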

  16. Highlighted Steps of the Management Algorithm in Acute Lower Gastrointestinal Bleeding - Case Reports and Literature Review.

    PubMed

    Andrei, Gabriel Nicolae; Popa, Bogdan; Gulie, Laurentiu; Diaconescu, Bogdan Ionut; Martian, Bogdan Valeriu; Bejenaru, Mircea; Beuran, Mircea

    2016-01-01

    Acute lower gastrointestinal bleeding is a major problem worldwide, being a rare but life-threatening condition with a mortality rate between 2 and 4%. Acute lower gastrointestinal bleeding accounts for 1-2% of all hospital emergencies, with 15% presenting as massive bleeding and up to 5% requiring surgery. Lower gastrointestinal bleeding can be classified by its location in the small or large intestine. The small bowel is the rarest site of lower gastrointestinal bleeding, while at the same time being the commonest cause of obscure bleeding; 5% of all lower GI bleeding arises in the small bowel. When endoscopic therapy combined with medical treatment is insufficient, endovascular intervention can be lifesaving. Unfortunately, in some rare cases of acute lower gastrointestinal bleeding with hemodynamic instability in which angiography is unable to locate the source of bleeding, the last therapeutic resort remains surgery. In the following we present two cases of acute lower gastrointestinal bleeding that were resolved in different ways, followed by a thorough description of the different types of available treatment; finally, in the conclusions, we systematize the most important stages of the management algorithm in acute lower gastrointestinal bleeding.

  17. Cyber-Physical Trade-Offs in Distributed Detection Networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rao, Nageswara S; Yao, David K. Y.; Chin, J. C.

    2010-01-01

    We consider a network of sensors that measure the scalar intensity due to the background, or a source combined with background, inside a two-dimensional monitoring area. The sensor measurements may be random due to the underlying nature of the source and background, due to sensor errors, or both. The detection problem is to infer the presence of a source of unknown intensity and location based on sensor measurements. In the conventional approach, detection decisions are made at the individual sensors and then combined at the fusion center, for example using the majority rule. At increased communication and computation costs, we show that a more complex fusion algorithm based on raw measurements achieves better detection performance under smooth and non-smooth source intensity functions, Lipschitz conditions on probability ratios, and a minimum packing number for the state space. We show that these conditions for trade-offs between the cyber costs and physical detection performance are applicable to two detection problems: (i) point radiation sources amidst background radiation, and (ii) sources and background with Gaussian distributions.
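
    A toy instance of the trade-off: decision fusion (per-sensor thresholds combined by majority rule) versus measurement fusion (a joint log-likelihood ratio computed at the center) for Gaussian background and source-plus-background readings. The parameters are illustrative, not from the paper.

    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(8)
    N, MU0, MU1, SIG = 9, 0.0, 0.6, 1.0   # sensors, background mean, source mean

    def detect(x):
        # (i) Decision fusion: per-sensor threshold, then majority rule.
        majority = (x > (MU0 + MU1) / 2).sum() > N // 2
        # (ii) Measurement fusion: joint log-likelihood ratio at the center.
        llr = np.sum(norm.logpdf(x, MU1, SIG) - norm.logpdf(x, MU0, SIG))
        return majority, llr > 0.0

    hits = np.zeros(2)
    truth = rng.random(4000) < 0.5
    for present in truth:
        x = rng.normal(MU1 if present else MU0, SIG, N)
        hits += np.array(detect(x)) == present
    print(hits / truth.size)   # measurement fusion should score higher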

  18. Monitoring performance for hydraulic fracturing using synthetic microseismic catalogue at the Wysin site (Poland)

    NASA Astrophysics Data System (ADS)

    Ángel López Comino, José; Cesca, Simone; Kriegerowski, Marius; Heimann, Sebastian; Dahm, Torsten; Mirek, Janusz; Lasocky, Stanislaw

    2017-04-01

    A prior analysis of the monitoring performance of a dedicated seismic network is always useful to determine its capability of detecting, locating and characterizing the target seismicity. This work focuses on a hydrofracking experiment in Poland, which is monitored in the framework of the SHEER (SHale gas Exploration and Exploitation induced Risks) EU project. The seismic installation is located near Wysin (Poland), in the central-western part of the Peribaltic synclise at Pomerania. The network setup includes a distributed network of six broadband stations, three shallow borehole stations and three small-scale arrays. We assess the monitoring performance prior to operations, using synthetic seismograms. Realistic full waveforms are generated and combined with real noise recorded before the fracking operations, to produce either event-based or continuous synthetic waveforms. Background seismicity is modelled by double-couple (DC) focal mechanisms. Non-DC sources resemble induced tensile fractures, opening in the direction of the minimal compressive stress and closing in the same direction after the injection. Microseismic sources are combined with a realistic crustal model and distributions of hypocenters, magnitudes and source durations. The network detection performance is then assessed in terms of the magnitude of completeness (Mc) through two different techniques: (i) an amplitude-threshold approach, taking into account a station-dependent noise level and different values of the signal-to-noise ratio (SNR), and (ii) the application of an automatic detection algorithm to the continuous synthetic dataset. In the first case, we compare the maximal amplitudes of noise-free synthetic waveforms with the different noise levels. Imposing simultaneous detection at, e.g., 4 stations for a robust detection, the Mc is assessed and can be adjusted by empirical relationships for different SNR values. We find that different source mechanisms have different detection thresholds: the background seismicity (DC sources) is better detectable than the induced earthquakes (tensile crack mechanisms). Assuming an SNR of 2, we estimate an Mc of 0.55 around the fracking wells, with an increase of 0.05 during daytime hours. The Mc can be decreased to 0.45 around the fracking region by taking advantage of the array installations. The second approach applies a full-waveform detection and location algorithm based on the stacking of smooth characteristic functions and the identification of high coherence in the signals recorded at different stations. In this case the detection rate can be increased at the cost of more false detections, with an acceptable compromise found at Mc 0.1.

  19. Robust optimization model and algorithm for railway freight center location problem in uncertain environment.

    PubMed

    Liu, Xing-Cai; He, Shi-Wei; Song, Rui; Sun, Yang; Li, Hao-Dong

    2014-01-01

    Railway freight center location is an important issue in railway freight transport programming. This paper focuses on the railway freight center location problem in an uncertain environment. Because the expected value model ignores the negative influence of disadvantageous scenarios, a robust optimization model is proposed. The robust optimization model takes the expected cost and the deviation value of the scenarios as its objective. A cloud adaptive clonal selection algorithm (C-ACSA) is presented; it combines an adaptive clonal selection algorithm with the Cloud Model, which improves the convergence rate. The coding design and the steps of the algorithm are described. Results of the example demonstrate that the model and algorithm are effective. Compared with the expected value cases, the number of disadvantageous scenarios in the robust model is reduced from 163 to 21, which shows that the result of the robust model is more reliable.

  20. Assessing the population coverage of a health demographic surveillance system using satellite imagery and crowd-sourcing.

    PubMed

    Di Pasquale, Aurelio; McCann, Robert S; Maire, Nicolas

    2017-01-01

    Remotely sensed data can serve as an independent source of information about the location of residential structures in areas under demographic and health surveillance. We report on results obtained by combining satellite imagery, imported from Bing, with location data routinely collected using the built-in GPS sensors of tablet computers, to assess the completeness of population coverage in a Health and Demographic Surveillance System (HDSS) in Malawi. The Majete Malaria Project HDSS, in Malawi, started in 2014 to support a project studying the reduction of malaria through an integrated control approach: rolling out insecticide-treated nets and improved case management, supplemented with house improvement and larval source management. To support the monitoring of the trial, an HDSS was established in the area surrounding the Majete Wildlife Reserve (1600 km2), using the OpenHDS data system. We compared house locations obtained from GPS recordings on mobile devices during the demographic surveillance census round with those acquired from satellite imagery. Volunteers were recruited through the crowdcrafting.org platform to identify building structures on the images, which enabled the compilation of a database of coordinates of potential residences. For every building identified on these satellite images by the volunteers (11,046 buildings identified, of which 3424 (ca. 30%) were within the censused area), we calculated the distance to the nearest house enumerated on the ground by fieldworkers during the census round of the HDSS. A random sample of buildings (85 structures) identified on satellite images without a nearby location enrolled in the census was visited by a fieldworker to determine how many, if any, were missed during the baseline census survey. The findings from this ground-truthing effort suggest that high population coverage was achieved in the census survey; however, the crowd-sourcing did not locate many of the inhabited structures (52.3% of the 6543 recorded during the census round). We conclude that auxiliary data can play a useful role in quality assurance for population-based health surveillance, but improved algorithms would be needed if crowd-sourced house locations are to be used as the basis of population databases.

  1. Non-invasive Investigation of Human Hippocampal Rhythms Using Magnetoencephalography: A Review.

    PubMed

    Pu, Yi; Cheyne, Douglas O; Cornwell, Brian R; Johnson, Blake W

    2018-01-01

    Hippocampal rhythms are believed to support crucial cognitive processes including memory, navigation, and language. Due to the location of the hippocampus deep in the brain, studying hippocampal rhythms using non-invasive magnetoencephalography (MEG) recordings has generally been assumed to be methodologically challenging. However, with the advent of whole-head MEG systems in the 1990s and development of advanced source localization techniques, simulation and empirical studies have provided evidence that human hippocampal signals can be sensed by MEG and reliably reconstructed by source localization algorithms. This paper systematically reviews simulation studies and empirical evidence of the current capacities and limitations of MEG "deep source imaging" of the human hippocampus. Overall, these studies confirm that MEG provides a unique avenue to investigate human hippocampal rhythms in cognition, and can bridge the gap between animal studies and human hippocampal research, as well as elucidate the functional role and the behavioral correlates of human hippocampal oscillations.

  2. Equivalent radiation source of 3D package for electromagnetic characteristics analysis

    NASA Astrophysics Data System (ADS)

    Li, Jun; Wei, Xingchang; Shu, Yufei

    2017-10-01

    An equivalent radiation source method is proposed in this paper to characterize the electromagnetic emission and interference of complex three-dimensional integrated circuits (ICs). The method uses amplitude-only near-field scanning data to reconstruct an equivalent magnetic dipole array, and a differential evolution optimization algorithm is proposed to extract the locations, orientations and moments of the dipoles. By importing the equivalent dipole model into a 3D full-wave simulator together with the victim circuit model, electromagnetic interference issues in mixed RF/digital systems can be predicted well. A commercial IC is used to validate the accuracy and efficiency of the proposed method. The coupled power at the victim antenna port calculated from the equivalent radiation source is compared with the measured data. Good consistency is obtained, which confirms the validity and efficiency of the method. Project supported by the National Nature Science Foundation of China (No. 61274110).

  3. Non-invasive Investigation of Human Hippocampal Rhythms Using Magnetoencephalography: A Review

    PubMed Central

    Pu, Yi; Cheyne, Douglas O.; Cornwell, Brian R.; Johnson, Blake W.

    2018-01-01

    Hippocampal rhythms are believed to support crucial cognitive processes including memory, navigation, and language. Due to the location of the hippocampus deep in the brain, studying hippocampal rhythms using non-invasive magnetoencephalography (MEG) recordings has generally been assumed to be methodologically challenging. However, with the advent of whole-head MEG systems in the 1990s and development of advanced source localization techniques, simulation and empirical studies have provided evidence that human hippocampal signals can be sensed by MEG and reliably reconstructed by source localization algorithms. This paper systematically reviews simulation studies and empirical evidence of the current capacities and limitations of MEG “deep source imaging” of the human hippocampus. Overall, these studies confirm that MEG provides a unique avenue to investigate human hippocampal rhythms in cognition, and can bridge the gap between animal studies and human hippocampal research, as well as elucidate the functional role and the behavioral correlates of human hippocampal oscillations. PMID:29755314

  4. The design and realization of a three-dimensional video system by means of a CCD array

    NASA Astrophysics Data System (ADS)

    Boizard, J. L.

    1985-12-01

    The design features, principles, and initial tests of a prototype three-dimensional robot vision system based on a laser source and a CCD detector array are described. The use of a laser as a coherent illumination source permits the relief to be determined with a single emitter, since the location of the source is known with low distortion. The CCD detector array furnishes an acceptable signal-to-noise ratio and, when wired to an appropriate signal processing system, furnishes real-time data on the return signals, i.e., the characteristic points of an object being scanned. Signal processing involves integration of 29 kB of data per 100 samples, with sampling occurring at a rate of 5 MHz (the CCDs) and yielding an image every 12 msec. Algorithms for filtering errors from the data stream are discussed.

  5. Using pattern recognition to automatically localize reflection hyperbolas in data from ground penetrating radar

    NASA Astrophysics Data System (ADS)

    Maas, Christian; Schmalzl, Jörg

    2013-08-01

    Ground Penetrating Radar (GPR) is used for the localization of supply lines, land mines, pipes and many other buried objects. These objects can be recognized in the recorded data as reflection hyperbolas with a typical shape depending on the depth and material of the object and the surrounding material. To obtain these parameters, the shape of the hyperbola has to be fitted. In recent years several methods have been developed to automate this task during post-processing. In this paper we show another approach for the automated localization of reflection hyperbolas in GPR data by solving a pattern recognition problem in grayscale images. In contrast to other methods, our detection program is also able to immediately mark potential objects in real-time. For this task we use a version of the Viola-Jones learning algorithm, which is part of the open source library "OpenCV". This algorithm was initially developed for face recognition, but can be adapted to any other simple shape. In our program it is used to narrow down the location of reflection hyperbolas to certain areas in the GPR data. In order to extract the exact location and velocity of the hyperbolas, we apply a simple Hough Transform for hyperbolas. Because the Viola-Jones algorithm dramatically reduces the input to the computationally expensive Hough Transform, the detection system can also be implemented on ordinary field computers, so on-site application is possible. The developed detection system shows promising results and detection rates in unprocessed radargrams. In order to improve the detection results and apply the program to noisy radar images, more data from different GPR systems are needed as input for the learning algorithm.
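
    A minimal sketch of this two-stage pipeline, assuming a hypothetical cascade file hyperbola_cascade.xml trained on hyperbola patches and an input image radargram.png; the Hough step is reduced to a vote over the apex (x0, t0) for a fixed shape parameter v:

        import cv2
        import numpy as np

        # Stage 1: a Viola-Jones cascade narrows the search to candidate regions.
        cascade = cv2.CascadeClassifier("hyperbola_cascade.xml")   # hypothetical cascade
        radargram = cv2.imread("radargram.png", cv2.IMREAD_GRAYSCALE)
        candidates = cascade.detectMultiScale(radargram, scaleFactor=1.1, minNeighbors=3)

        # Stage 2: a simple Hough vote for the apex (x0, t0) of t^2 = t0^2 + ((x-x0)/v)^2.
        def hough_apex(window, v=8.0):
            edges = cv2.Canny(window, 50, 150)
            ts, xs = np.nonzero(edges)            # rows ~ two-way time, cols ~ position
            h, w = window.shape
            votes = np.zeros((h, w), dtype=int)
            for t, x in zip(ts, xs):
                for x0 in range(w):
                    t0_sq = t * t - ((x - x0) / v) ** 2
                    if t0_sq >= 0 and int(round(t0_sq ** 0.5)) < h:
                        votes[int(round(t0_sq ** 0.5)), x0] += 1
            return np.unravel_index(votes.argmax(), votes.shape)   # best (t0, x0)

        for (x, y, w, h) in candidates:
            print("apex (t0, x0) in window:", hough_apex(radargram[y:y + h, x:x + w]))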

  6. A firefly algorithm for solving competitive location-design problem: a case study

    NASA Astrophysics Data System (ADS)

    Sadjadi, Seyed Jafar; Ashtiani, Milad Gorji; Ramezanian, Reza; Makui, Ahmad

    2016-12-01

    This paper aims at determining the optimal number of new facilities, as well as their optimal locations and design levels, under a budget constraint in a competitive environment, using a novel hybrid continuous and discrete firefly algorithm. A real-world application, locating new chain stores in the city of Tehran, Iran, is presented and the results are analyzed. In addition, several examples have been solved to evaluate the efficiency of the proposed model and algorithm. The results demonstrate that the proposed method provides good-quality solutions for the test problems.

  7. An ensemble-based algorithm for optimizing the configuration of an in situ soil moisture monitoring network

    NASA Astrophysics Data System (ADS)

    De Vleeschouwer, Niels; Verhoest, Niko E. C.; Gobeyn, Sacha; De Baets, Bernard; Verwaeren, Jan; Pauwels, Valentijn R. N.

    2015-04-01

    The continuous monitoring of soil moisture in a permanent network can yield an interesting data product for use in hydrological modeling. Major advantages of in situ observations compared to remote sensing products are the potential vertical extent of the measurements, the finer temporal resolution of the observation time series, the smaller impact of land cover variability on the observation bias, etc. However, two major disadvantages are the typically small integration volume of in situ measurements and the often large spacing between monitoring locations. This causes only a small part of the modeling domain to be directly observed. Furthermore, the spatial configuration of the monitoring network is typically non-dynamic in time. Generally, e.g. when applying data assimilation, maximizing the observed information under given circumstances will lead to better qualitative and quantitative insight into the hydrological system. It is therefore advisable to perform a prior analysis in order to select those monitoring locations which are most predictive for the unobserved modeling domain. This research focuses on optimizing the configuration of a soil moisture monitoring network in the catchment of the Bellebeek, situated in Belgium. A recursive algorithm, strongly linked to the equations of the Ensemble Kalman Filter, has been developed to select the most predictive locations in the catchment. The basic idea behind the algorithm is twofold. On the one hand, the modeled soil moisture ensemble error covariance between the different monitoring locations is minimized. This causes the monitoring locations to be as independent as possible with regard to the modeled soil moisture dynamics. On the other hand, the modeled soil moisture ensemble error covariance between the monitoring locations and the unobserved modeling domain is maximized. The latter causes a selection of monitoring locations which are more predictive of unobserved locations. The main factors that influence the outcome of the algorithm are the following: the choice of the hydrological model, the uncertainty model applied for ensemble generation, the general wetness of the catchment during which the error covariance is computed, etc. In this research the influence of the latter two is examined in more depth. Furthermore, the optimal network configuration resulting from the newly developed algorithm is compared to network configurations obtained by two other algorithms. The first algorithm is based on a temporal stability analysis of the modeled soil moisture in order to identify catchment-representative monitoring locations with regard to average conditions. The second algorithm involves the clustering of available spatially distributed data (e.g. land cover and soil maps) that is not obtained by hydrological modeling.
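
    The selection criterion can be caricatured by a greedy pass over a synthetic ensemble, rewarding covariance with the rest of the domain and penalizing covariance with already selected stations; this is a simplification of the recursive, Ensemble-Kalman-Filter-linked algorithm described above, and all data are synthetic:

        import numpy as np

        rng = np.random.default_rng(2)
        n_locations, n_members, n_select = 200, 64, 5
        # toy ensemble of modelled soil moisture anomalies (locations x members); the
        # cumulative sum along the space axis induces spatial correlation
        ensemble = np.cumsum(rng.normal(size=(n_locations, n_members)), axis=0)
        cov = np.abs(np.cov(ensemble))       # ensemble error covariance magnitudes

        selected = []
        for _ in range(n_select):
            best, best_score = None, -np.inf
            for i in range(n_locations):
                if i in selected:
                    continue
                gain = cov[i].sum()                            # predictive of whole domain
                redundancy = sum(cov[i, j] for j in selected)  # dependence on chosen sites
                score = gain - 10.0 * redundancy
                if score > best_score:
                    best, best_score = i, score
            selected.append(best)
        print("selected monitoring locations:", sorted(selected))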

  8. Classifying seismic noise and sources from OBS data using unsupervised machine learning

    NASA Astrophysics Data System (ADS)

    Mosher, S. G.; Audet, P.

    2017-12-01

    The paradigm of plate tectonics was established mainly by recognizing the central role of oceanic plates in the production and destruction of tectonic plates at their boundaries. Since that realization, however, seismic studies of tectonic plates and their associated deformation have slowly shifted their attention toward continental plates due to the ease of installation and maintenance of high-quality seismic networks on land. The result has been a much more detailed understanding of the seismicity patterns associated with continental plate deformation in comparison with the low-magnitude deformation patterns within oceanic plates and at their boundaries. While the number of high-quality ocean-bottom seismometer (OBS) deployments within the past decade has demonstrated the potential to significantly increase our understanding of tectonic systems in oceanic settings, OBS data poses significant challenges to many of the traditional data processing techniques in seismology. In particular, problems involving the detection, location, and classification of seismic sources occurring within oceanic settings are much more difficult due to the extremely noisy seafloor environment in which data are recorded. However, classifying data without a priori constraints is a problem that is routinely pursued via unsupervised machine learning algorithms, which remain robust even in cases involving complicated datasets. In this research, we apply simple unsupervised machine learning algorithms (e.g., clustering) to OBS data from the Cascadia Initiative in an attempt to classify and detect a broad range of seismic sources, including various noise sources and tremor signals occurring within ocean settings.
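
    As a heavily simplified illustration of such a workflow, one can compute band-averaged spectral features per window and cluster them; the random stand-in traces, sampling rate, and number of clusters below are assumptions:

        import numpy as np
        from scipy import signal
        from sklearn.cluster import KMeans

        rng = np.random.default_rng(3)
        fs = 100.0                                   # sampling rate (Hz), assumed
        traces = rng.normal(size=(500, 3000))        # stand-in for windowed OBS records

        # simple spectral features per window: band-averaged log power spectra
        feats = []
        for tr in traces:
            f, pxx = signal.welch(tr, fs=fs, nperseg=256)
            bands = np.add.reduceat(pxx, np.arange(0, len(pxx), 16))  # coarse bands
            feats.append(np.log10(bands))
        feats = np.asarray(feats)

        # unsupervised grouping into putative source/noise classes
        labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(feats)
        print(np.bincount(labels))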

  9. A Space-Time-Frequency Dictionary for Sparse Cortical Source Localization.

    PubMed

    Korats, Gundars; Le Cam, Steven; Ranta, Radu; Louis-Dorr, Valerie

    2016-09-01

    Cortical source imaging aims at identifying activated cortical areas on the surface of the cortex from raw electroencephalogram (EEG) data. This problem is ill-posed, the number of channels being very low compared to the number of possible source positions. In some realistic physiological situations, the active areas are sparse in space and of short duration, and the amount of spatio-temporal data to carry out the inversion is then limited. In this study, we propose an original data-driven space-time-frequency (STF) dictionary which takes into account both spatial and time-frequency sparseness while preserving smoothness in time-frequency (i.e., nonstationary smooth time courses at sparse locations). Based on these assumptions, we take advantage of the matching pursuit (MP) framework for selecting the most relevant atoms in this highly redundant dictionary. We apply two recent MP algorithms, single best replacement (SBR) and source-deflated matching pursuit, and we compare the results using a spatial dictionary and the proposed STF dictionary to demonstrate the improvements of our multidimensional approach. We also provide comparisons using the well-established inversion methods FOCUSS and RAP-MUSIC, analyzing performance under different degrees of nonstationarity and signal-to-noise ratio. Our STF dictionary combined with the SBR approach provides robust performance on realistic simulations. From a computational point of view, the algorithm is embedded in the wavelet domain, ensuring high efficiency in terms of computation time. The proposed approach ensures fast and accurate sparse cortical localizations on highly nonstationary and noisy data.
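
    The atom-selection step underlying both SBR and source-deflated MP is ordinary matching pursuit; here is a generic one-dimensional sketch with a random dictionary (not the STF dictionary of the paper):

        import numpy as np

        def matching_pursuit(signal_vec, dictionary, n_atoms=5):
            # Greedy MP: repeatedly pick the unit-norm dictionary column most
            # correlated with the residual and subtract its contribution.
            residual = signal_vec.astype(float).copy()
            coeffs = np.zeros(dictionary.shape[1])
            for _ in range(n_atoms):
                corr = dictionary.T @ residual
                k = np.argmax(np.abs(corr))
                coeffs[k] += corr[k]
                residual -= corr[k] * dictionary[:, k]
            return coeffs, residual

        rng = np.random.default_rng(4)
        D = rng.normal(size=(128, 512))
        D /= np.linalg.norm(D, axis=0)            # unit-norm atoms
        x = 2.0 * D[:, 7] - 1.5 * D[:, 300]       # sparse ground truth
        c, r = matching_pursuit(x, D, n_atoms=10)
        print(np.nonzero(np.round(c, 2))[0], np.linalg.norm(r))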

  10. A Rotor Tip Vortex Tracing Algorithm for Image Post-Processing

    NASA Technical Reports Server (NTRS)

    Overmeyer, Austin D.

    2015-01-01

    A neurite tracing algorithm, originally developed for medical image processing, was used to trace the location of the rotor tip vortex in density gradient flow visualization images. The tracing algorithm was applied to several representative test images to form case studies. The accuracy of the tracing algorithm was compared to that of two current methods: a manual point-and-click method and a cross-correlation template method. It is shown that the neurite tracing algorithm can reduce the post-processing time to trace the vortex by a factor of 10 to 15 without compromising the accuracy of the tip vortex location compared to other methods presented in the literature.

  11. A rapid application of GA-MODFLOW combined approach to optimization of well placement and operation for drought-ready groundwater reservoir design

    NASA Astrophysics Data System (ADS)

    Park, C.; Kim, Y.; Jang, H.

    2016-12-01

    Poor temporal distribution of precipitation increases winter drought risks in mountain valley areas of Korea. Since perennial streams or reservoirs for water use are rare in these areas, groundwater is usually the major water resource. A significant amount of the precipitation contributing to groundwater recharge occurs during the summer season. However, the volume of groundwater recharge is limited by rapid runoff due to topographic characteristics such as steep hills and slopes. A groundwater reservoir using an artificial recharge method with rainwater reuse can be a suitable solution for securing water resources in mountain valley areas. Successful groundwater reservoir design depends on the optimization of well placement and operation. This study introduces a combined approach using a GA (Genetic Algorithm) and MODFLOW, and its rapid application. The methodology is based on the RAD (Rapid Application Development) concept in order to minimize implementation cost. DEAP (Distributed Evolutionary Algorithms in Python), a framework for prototyping and testing evolutionary algorithms, is applied for quick code development, and CUDA (Compute Unified Device Architecture), a parallel computing platform using the GPU (Graphics Processing Unit), is introduced to reduce runtime. The approach was successfully applied to Samdeok-ri, Gosung, Korea. The site is located in a mountain valley area, and unconfined aquifers are the major source of water. The application produced the best well locations and an optimized operation schedule, including pumping and injection.
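
    A minimal DEAP sketch of the GA side of the coupling; the evaluate function is a stand-in at the point where the real workflow would run MODFLOW and score a candidate design of two wells (the encoding and constants are assumptions):

        import random
        from deap import base, creator, tools, algorithms

        # Hypothetical stand-in for a MODFLOW run: score a design of two wells
        # encoded as [x1, y1, x2, y2]; the real coupling would call the model here.
        def evaluate(ind):
            x1, y1, x2, y2 = ind
            spread = (x1 - x2) ** 2 + (y1 - y2) ** 2      # reward well separation
            return (spread - 0.1 * (x1 + y1 + x2 + y2),)  # DEAP expects a tuple

        creator.create("FitnessMax", base.Fitness, weights=(1.0,))
        creator.create("Individual", list, fitness=creator.FitnessMax)
        tb = base.Toolbox()
        tb.register("attr", random.uniform, 0.0, 1.0)
        tb.register("individual", tools.initRepeat, creator.Individual, tb.attr, n=4)
        tb.register("population", tools.initRepeat, list, tb.individual)
        tb.register("evaluate", evaluate)
        tb.register("mate", tools.cxBlend, alpha=0.5)
        tb.register("mutate", tools.mutGaussian, mu=0.0, sigma=0.1, indpb=0.3)
        tb.register("select", tools.selTournament, tournsize=3)

        pop = tb.population(n=40)
        pop, _ = algorithms.eaSimple(pop, tb, cxpb=0.6, mutpb=0.3, ngen=25, verbose=False)
        print(tools.selBest(pop, 1)[0])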

  12. Inverse Source Data-Processing Strategies for Radio-Frequency Localization in Indoor Environments.

    PubMed

    Gennarelli, Gianluca; Al Khatib, Obada; Soldovieri, Francesco

    2017-10-27

    Indoor positioning of mobile devices plays a key role in many aspects of our daily life. These include real-time people tracking and monitoring, activity recognition, emergency detection, navigation, and numerous location-based services. Although many wireless technologies and data-processing algorithms have been developed in recent years, indoor positioning is still a problem under intensive research. This paper deals with active radio-frequency (RF) source localization in indoor scenarios. The localization task is carried out at the physical layer by means of receiving sensor arrays deployed on the border of the surveillance region to record the signal emitted by the source. The localization problem is formulated as an imaging one by taking advantage of the inverse source approach. Different measurement configurations and data-processing/fusion strategies are examined to investigate their effectiveness in terms of localization accuracy under both line-of-sight (LOS) and non-line-of-sight (NLOS) conditions. Numerical results based on full-wave synthetic data are reported to support the analysis.

  13. shiftNMFk 1.1: Robust Nonnegative matrix factorization with kmeans clustering and signal shift, for allocation of unknown physical sources, toy version for open sourcing with publications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Alexandrov, Boian S.; Lliev, Filip L.; Stanev, Valentin G.

    This code is a toy (short) version of CODE-2016-83. From a general perspective, the code implements an unsupervised adaptive machine learning algorithm that allows efficient, high-performance de-mixing and feature extraction of a multitude of non-negative signals mixed and recorded by a network of uncorrelated sensor arrays. The code identifies the number of mixed original signals and their locations. Furthermore, the code allows deciphering of signals that have been delayed with respect to the mixing process at each sensor. The code is highly customizable and can be used efficiently for fast macro-analysis of data. It is applicable to a plethora of distinct problems: chemical decomposition, pressure transient decomposition, unknown source/signal allocation, and EM signal decomposition. An additional procedure for allocation of the unknown sources is incorporated in the code.
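
    The core NMFk idea, without the signal-shift handling of this release, can be sketched with scikit-learn: factorize repeatedly for each candidate number of sources and judge the stability of the resulting basis vectors by clustering them:

        import numpy as np
        from sklearn.decomposition import NMF
        from sklearn.cluster import KMeans

        rng = np.random.default_rng(5)
        # toy mixtures: 8 sensors recording 3 non-negative sources
        true_W = rng.random((8, 3))
        true_H = rng.random((3, 400))
        X = true_W @ true_H

        # NMFk idea: factorize several times per candidate k, cluster the resulting
        # basis vectors with k-means, and keep the k whose clusters are most stable
        for k in range(2, 6):
            Ws = []
            for seed in range(10):
                model = NMF(n_components=k, init="random", random_state=seed, max_iter=500)
                W = model.fit_transform(X)
                Ws.append(W / np.linalg.norm(W, axis=0))
            stacked = np.hstack(Ws).T                 # all basis vectors, all runs
            km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(stacked)
            print(k, "cluster spread:", round(km.inertia_ / len(stacked), 4))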

  14. Inverse Source Data-Processing Strategies for Radio-Frequency Localization in Indoor Environments

    PubMed Central

    Gennarelli, Gianluca; Al Khatib, Obada; Soldovieri, Francesco

    2017-01-01

    Indoor positioning of mobile devices plays a key role in many aspects of our daily life. These include real-time people tracking and monitoring, activity recognition, emergency detection, navigation, and numerous location-based services. Although many wireless technologies and data-processing algorithms have been developed in recent years, indoor positioning is still a problem under intensive research. This paper deals with active radio-frequency (RF) source localization in indoor scenarios. The localization task is carried out at the physical layer by means of receiving sensor arrays deployed on the border of the surveillance region to record the signal emitted by the source. The localization problem is formulated as an imaging one by taking advantage of the inverse source approach. Different measurement configurations and data-processing/fusion strategies are examined to investigate their effectiveness in terms of localization accuracy under both line-of-sight (LOS) and non-line-of-sight (NLOS) conditions. Numerical results based on full-wave synthetic data are reported to support the analysis. PMID:29077071

  15. Analytical investigation of adaptive control of radiated inlet noise from turbofan engines

    NASA Technical Reports Server (NTRS)

    Risi, John D.; Burdisso, Ricardo A.

    1994-01-01

    An analytical model has been developed to predict the far-field radiation from a turbofan engine inlet. A feedforward control algorithm was simulated to predict the controlled far-field radiation resulting from the destructive combination of fan noise and secondary control sources. Numerical results were developed for two system configurations, with the resulting controlled far-field radiation patterns showing varying degrees of attenuation and spillover. With one axial station of twelve control sources and error sensors at equal relative angular positions, nearly global attenuation is achieved. Shifting the angular position of one error sensor resulted in an increase of spillover at the extreme sidelines. The complex control inputs for each configuration were investigated to identify the structure of the wave pattern created by the control sources, giving an indication of the performance of the system configuration. It is deduced that the locations of the error sensors and the configuration of the control sources are equally critical to the operation of the active noise control system.

  16. Guaranteeing Failsafe Operation of Extended-Scene Shack-Hartmann Wavefront Sensor Algorithm

    NASA Technical Reports Server (NTRS)

    Sidick, Erikin

    2009-01-01

    A Shack-Hartmann sensor (SHS) is an optical instrument consisting of a lenslet array and a camera. It is widely used for wavefront sensing in optical testing and astronomical adaptive optics. The camera is placed at the focal point of the lenslet array and points at a star or any other point source. The image captured is an array of spot images. When the wavefront error at the lenslet array changes, the position of each spot measurably shifts from its original position. Determining the shifts of the spot images from their reference points shows the extent of the wavefront error. An adaptive cross-correlation (ACC) algorithm has been developed to use scenes as well as point sources for wavefront error detection. Qualifying an extended scene image is often not an easy task due to changing conditions in scene content, illumination level, background, Poisson noise, read-out noise, dark current, sampling format, and field of view. The proposed new technique, based on the ACC algorithm, analyzes the effects of these conditions on the performance of the ACC algorithm and determines the viability of an extended scene image. If it is viable, it can be used for error correction; if it is not, the image fails and will not be processed further. By potentially testing for a wide variety of conditions, the algorithm's accuracy can be virtually guaranteed. In a typical application, the ACC algorithm finds the image shifts of more than 500 Shack-Hartmann camera sub-images relative to a reference sub-image or cell when performing one wavefront sensing iteration. In the proposed new technique, a pair of test and reference cells is selected from the same frame, preferably from two well-separated locations. The test cell is shifted by an integer number of pixels, say from m = -5 to 5 along the x-direction, by choosing a different area on the same sub-image, and the shifts are estimated using the ACC algorithm. The same is done in the y-direction. If the resulting shift estimate errors are less than a pre-determined threshold (e.g., 0.03 pixel), the image is accepted. Otherwise, it is rejected.
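
    The shift-measurement core of such a sensor is the cross-correlation of a test cell against a reference cell; below is a plain FFT-based integer-pixel version (the ACC algorithm refines this adaptively and to sub-pixel accuracy):

        import numpy as np

        def integer_shift(test, ref):
            # peak of the circular cross-correlation gives the (dy, dx) displacement
            spec = np.fft.fft2(test) * np.conj(np.fft.fft2(ref))
            corr = np.fft.ifft2(spec).real
            dy, dx = np.unravel_index(corr.argmax(), corr.shape)
            n, m = ref.shape
            # wrap indices into the range [-n/2, n/2)
            return ((dy + n // 2) % n) - n // 2, ((dx + m // 2) % m) - m // 2

        rng = np.random.default_rng(11)
        ref = rng.random((64, 64))
        test = np.roll(ref, (3, -4), axis=(0, 1))   # impose a known shift
        print(integer_shift(test, ref))             # expect (3, -4)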

  17. New numerical approaches for modeling thermochemical convection in a compositionally stratified fluid

    NASA Astrophysics Data System (ADS)

    Puckett, Elbridge Gerry; Turcotte, Donald L.; He, Ying; Lokavarapu, Harsha; Robey, Jonathan M.; Kellogg, Louise H.

    2018-03-01

    Geochemical observations of mantle-derived rocks favor a nearly homogeneous upper mantle, the source of mid-ocean ridge basalts (MORB), and heterogeneous lower mantle regions. Plumes that generate ocean island basalts are thought to sample the lower mantle regions and exhibit more heterogeneity than MORB. These regions have been associated with lower mantle structures known as large low shear velocity provinces (LLSVPs) below Africa and the South Pacific. The isolation of these regions is attributed to compositional differences and density stratification that, consequently, have been the subject of computational and laboratory modeling designed to determine the parameter regime in which layering is stable and to understand how layering evolves. Mathematical models of persistent compositional interfaces in the Earth's mantle may be inherently unstable, at least in some regions of the parameter space relevant to the mantle. Computing approximations to solutions of such problems presents severe challenges, even to state-of-the-art numerical methods. Some numerical algorithms for modeling the interface between distinct compositions smear the interface at the boundary between compositions, such as methods that add numerical diffusion or 'artificial viscosity' in order to stabilize the algorithm. We present two new algorithms for maintaining high-resolution and sharp computational boundaries in computations of these types of problems: a discontinuous Galerkin method with a bound-preserving limiter and a Volume-of-Fluid interface tracking algorithm. We compare these new methods with two approaches widely used for modeling the advection of two distinct thermally driven compositional fields in mantle convection computations: a high-order accurate finite element advection algorithm with entropy viscosity and a particle method that carries a scalar quantity representing the location of each compositional field. All four algorithms are implemented in the open source finite element code ASPECT, which we use to compute the velocity, pressure, and temperature associated with the underlying flow field. We compare the performance of these four algorithms on three problems, including computing an approximation to the solution of an initially compositionally stratified fluid at Ra = 10^5 with buoyancy numbers B that vary from no stratification at B = 0 to stratified flow at large B.

  18. Global dust sources detection using MODIS Deep Blue Collection 6 aerosol products

    NASA Astrophysics Data System (ADS)

    Pérez García-Pando, C.; Ginoux, P. A.

    2015-12-01

    Our understanding of the global dust cycle is limited by a dearth of information about dust sources, especially small-scale features which could account for a large fraction of global emissions. Remote sensing sensors are the most useful tools for locating dust sources. These sensors include microwave and visible channels, and lidar. On the global scale, major dust source regions have been identified using polar-orbiting satellite instruments. The MODIS Deep Blue algorithm has been particularly useful for detecting small-scale sources such as floodplains, alluvial fans, rivers, and wadis, as well as for identifying anthropogenic sources associated with agriculture. The recent release of the Collection 6 MODIS aerosol products allows dust source detection to be extended to all land surfaces, which is quite useful for identifying mid- to high-latitude dust sources and for detecting not only dust from agriculture but also fugitive dust from transport and industrial activities. This presentation will give an overview of the advantages and drawbacks of using MODIS Deep Blue for dust detection, compared to other instruments (polar-orbiting and geostationary). The results of Collection 6 with a new dust screening will be compared against AERONET. Applications to long-range transport of anthropogenic dust will be presented.

  19. An RFID Indoor Positioning Algorithm Based on Bayesian Probability and K-Nearest Neighbor.

    PubMed

    Xu, He; Ding, Ye; Li, Peng; Wang, Ruchuan; Li, Yizhu

    2017-08-05

    The Global Positioning System (GPS) is widely used for outdoor positioning. However, GPS cannot support indoor positioning because there is no usable positioning signal in an indoor environment. Nowadays, there are many situations which require indoor positioning, such as searching for a book in a library, looking for luggage in an airport, emergency navigation for fire alarms, robot location, etc. Many technologies, such as ultrasound, sensors, Bluetooth, WiFi, magnetic fields, Radio Frequency Identification (RFID), etc., are used for indoor positioning. Compared with other technologies, RFID-based indoor positioning is more cost- and energy-efficient. The traditional RFID indoor positioning algorithm LANDMARC uses a Received Signal Strength (RSS) indicator to track objects. However, the RSS value is easily affected by environmental noise and other interference. In this paper, our purpose is to reduce the location fluctuation and error caused by multipath and environmental interference in LANDMARC. We propose a novel indoor positioning algorithm based on Bayesian probability and K-Nearest Neighbor (BKNN). The experimental results show that a Gaussian filter can remove abnormal RSS values. The proposed BKNN algorithm has the smallest location error compared with the Gaussian-based algorithm, LANDMARC, and an improved KNN algorithm. The average location estimation error is about 15 cm using our method.
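
    A compressed sketch of the ingredients: Gaussian filtering of the RSS time series to suppress abnormal readings, followed by a weighted K-nearest-neighbor estimate in signal space. The Bayesian weighting of the actual BKNN algorithm is reduced to inverse-distance weights here, and the grid, reader model, and noise are invented:

        import numpy as np
        from scipy.ndimage import gaussian_filter1d

        rng = np.random.default_rng(6)
        # reference tags on a grid with known positions and fingerprinted RSS vectors
        ref_pos = np.array([[x, y] for x in range(5) for y in range(5)], dtype=float)
        ref_rss = (-40.0
                   - 2.0 * np.linalg.norm(ref_pos - 2.0, axis=1, keepdims=True)
                   + rng.normal(0.0, 0.5, (25, 4)))   # toy RSS fingerprints at 4 readers

        def locate(target_rss_series, k=4):
            # Gaussian filtering suppresses abnormal RSS readings over time
            smoothed = gaussian_filter1d(target_rss_series, sigma=2.0, axis=0).mean(axis=0)
            d = np.linalg.norm(ref_rss - smoothed, axis=1)   # signal-space distance
            nearest = np.argsort(d)[:k]
            w = 1.0 / (d[nearest] + 1e-9)                    # inverse-distance weights
            return (w[:, None] * ref_pos[nearest]).sum(axis=0) / w.sum()

        target = ref_rss[12] + rng.normal(0.0, 1.0, (50, 4))  # noisy readings near tag 12
        print("estimated position:", locate(target), "truth:", ref_pos[12])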

  20. Iterative algorithm for joint zero diagonalization with application in blind source separation.

    PubMed

    Zhang, Wei-Tao; Lou, Shun-Tian

    2011-07-01

    A new iterative algorithm for the nonunitary joint zero diagonalization of a set of matrices is proposed for blind source separation applications. On one hand, since the zero diagonalizer of the proposed algorithm is constructed iteratively by successive multiplications of an invertible matrix, the singular solutions that occur in the existing nonunitary iterative algorithms are naturally avoided. On the other hand, compared to the algebraic method for joint zero diagonalization, the proposed algorithm requires fewer matrices to be zero diagonalized to yield even better performance. The extension of the algorithm to the complex and nonsquare mixing cases is also addressed. Numerical simulations on both synthetic data and blind source separation using time-frequency distributions illustrate the performance of the algorithm and provide a comparison to the leading joint zero diagonalization schemes.

  1. Logistics Distribution Center Location Evaluation Based on Genetic Algorithm and Fuzzy Neural Network

    NASA Astrophysics Data System (ADS)

    Shao, Yuxiang; Chen, Qing; Wei, Zhenhua

    Logistics distribution center location evaluation is a dynamic, fuzzy, open and complicated nonlinear system, which makes it difficult to evaluate distribution center locations with traditional analysis methods. This paper proposes a distribution center location evaluation system which uses a fuzzy neural network combined with a genetic algorithm. In this model, the neural network is adopted to construct the fuzzy system. The genetic algorithm is used to optimize and train the parameters of the neural network so as to improve the fuzzy system’s abilities of self-study and self-adaptation. Finally, the sampled data are trained and tested in MATLAB. The simulation results indicate that the proposed identification model has very small errors.

  2. A new peak detection algorithm for MALDI mass spectrometry data based on a modified Asymmetric Pseudo-Voigt model.

    PubMed

    Wijetunge, Chalini D; Saeed, Isaam; Boughton, Berin A; Roessner, Ute; Halgamuge, Saman K

    2015-01-01

    Mass Spectrometry (MS) is a ubiquitous analytical tool in biological research and is used to measure the mass-to-charge ratio of bio-molecules. Peak detection is the essential first step in MS data analysis. Precise estimation of peak parameters such as peak summit location and peak area is critical to identify underlying bio-molecules and to estimate their abundances accurately. We propose a new method to detect and quantify peaks in mass spectra. It uses dual-tree complex wavelet transformation along with Stein's unbiased risk estimator for spectrum smoothing. Then, a new method, based on the modified Asymmetric Pseudo-Voigt (mAPV) model and hierarchical particle swarm optimization, is used for peak parameter estimation. Using simulated data, we demonstrated the benefit of using the mAPV model over Gaussian, Lorentz and Bi-Gaussian functions for MS peak modelling. The proposed mAPV model achieved the best fitting accuracy for asymmetric peaks, with percentage errors in peak summit location estimation that were 0.17% to 4.46% lower than those of the other models. It also outperformed the other models in peak area estimation, delivering percentage errors about 0.7% lower than its closest competitor, the Bi-Gaussian model. In addition, using data generated from a MALDI-TOF computer model, we showed that the proposed overall algorithm outperformed the existing methods mainly in terms of sensitivity. It achieved a sensitivity of 85%, compared to 77% and 71% for the two benchmark algorithms, the continuous wavelet transformation-based method and Cromwell, respectively. The proposed algorithm is particularly useful for peak detection and parameter estimation in MS data with overlapping peak distributions and asymmetric peaks. The algorithm is implemented using MATLAB and the source code is freely available at http://mapv.sourceforge.net.
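
    For illustration, an asymmetric pseudo-Voigt profile with side-dependent width (a simplification of the published mAPV parameterization) can be written directly:

        import numpy as np

        def asymmetric_pseudo_voigt(x, x0, amp, w_left, w_right, eta):
            # weighted sum of Gaussian and Lorentzian profiles whose FWHM differs
            # on each side of the summit x0; eta mixes the two shapes
            w = np.where(x < x0, w_left, w_right)
            gauss = np.exp(-4.0 * np.log(2.0) * ((x - x0) / w) ** 2)
            lorentz = 1.0 / (1.0 + 4.0 * ((x - x0) / w) ** 2)
            return amp * (eta * lorentz + (1.0 - eta) * gauss)

        x = np.linspace(990.0, 1010.0, 400)          # m/z axis (arbitrary)
        peak = asymmetric_pseudo_voigt(x, 1000.0, 1.0, 1.0, 2.5, 0.4)
        area = np.sum(peak) * (x[1] - x[0])          # simple peak-area estimate
        print("summit:", x[peak.argmax()], "area:", round(area, 3))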

  3. A new peak detection algorithm for MALDI mass spectrometry data based on a modified Asymmetric Pseudo-Voigt model

    PubMed Central

    2015-01-01

    Background Mass Spectrometry (MS) is a ubiquitous analytical tool in biological research and is used to measure the mass-to-charge ratio of bio-molecules. Peak detection is the essential first step in MS data analysis. Precise estimation of peak parameters such as peak summit location and peak area is critical to identify underlying bio-molecules and to estimate their abundances accurately. We propose a new method to detect and quantify peaks in mass spectra. It uses dual-tree complex wavelet transformation along with Stein's unbiased risk estimator for spectrum smoothing. Then, a new method, based on the modified Asymmetric Pseudo-Voigt (mAPV) model and hierarchical particle swarm optimization, is used for peak parameter estimation. Results Using simulated data, we demonstrated the benefit of using the mAPV model over Gaussian, Lorentz and Bi-Gaussian functions for MS peak modelling. The proposed mAPV model achieved the best fitting accuracy for asymmetric peaks, with percentage errors in peak summit location estimation that were 0.17% to 4.46% lower than those of the other models. It also outperformed the other models in peak area estimation, delivering percentage errors about 0.7% lower than its closest competitor, the Bi-Gaussian model. In addition, using data generated from a MALDI-TOF computer model, we showed that the proposed overall algorithm outperformed the existing methods mainly in terms of sensitivity. It achieved a sensitivity of 85%, compared to 77% and 71% for the two benchmark algorithms, the continuous wavelet transformation-based method and Cromwell, respectively. Conclusions The proposed algorithm is particularly useful for peak detection and parameter estimation in MS data with overlapping peak distributions and asymmetric peaks. The algorithm is implemented using MATLAB and the source code is freely available at http://mapv.sourceforge.net. PMID:26680279

  4. A Bayesian Approach to Real-Time Earthquake Phase Association

    NASA Astrophysics Data System (ADS)

    Benz, H.; Johnson, C. E.; Earle, P. S.; Patton, J. M.

    2014-12-01

    Real-time location of seismic events requires a robust and extremely efficient means of associating and identifying seismic phases with hypothetical sources. An association algorithm converts a series of phase arrival times into a catalog of earthquake hypocenters. The classical approach, based on time-space stacking of the locus of possible hypocenters for each phase arrival using the principle of acoustic reciprocity, has been in use for many years. One of the most significant problems that has emerged with this approach relates to the extreme variations in seismic station density throughout the global seismic network. To address this problem we have developed a novel Bayesian association algorithm, which treats the association problem as a dynamically evolving complex system of "many-to-many relationships". While the end result must be an array of one-to-many relations (one earthquake, many phases), during the association process the situation is quite different: both the evolving possible hypocenters and the relationships between phases and all nascent hypocenters are many-to-many (many earthquakes, many phases). The computational framework we use to address this is a responsive, NoSQL graph database in which the earthquake-phase associations are represented as intersecting Bayesian learning networks. The approach directly addresses the network inhomogeneity issue while allowing the inclusion of other kinds of data (e.g., seismic beams, station noise characteristics, priors on the estimated location of the seismic source) by representing the locus of intersecting hypothetical loci for a given datum as a joint probability density function.

  5. A realistic multimodal modeling approach for the evaluation of distributed source analysis: application to sLORETA.

    PubMed

    Cosandier-Rimélé, D; Ramantani, G; Zentner, J; Schulze-Bonhage, A; Dümpelmann, M

    2017-10-01

    Electrical source localization (ESL) deriving from scalp EEG and, in recent years, from intracranial EEG (iEEG), is an established method in epilepsy surgery workup. We aimed to validate the distributed ESL derived from scalp EEG and iEEG, particularly regarding the spatial extent of the source, using a realistic epileptic spike activity simulator. ESL was applied to the averaged scalp EEG and iEEG spikes of two patients with drug-resistant structural epilepsy. The ESL results for both patients were used to outline the location and extent of epileptic cortical patches, which served as the basis for designing a spatiotemporal source model. EEG signals for both modalities were then generated for different anatomic locations and spatial extents. ESL was subsequently performed on simulated signals with sLORETA, a commonly used distributed algorithm. ESL accuracy was quantitatively assessed for iEEG and scalp EEG. The source volume was overestimated by sLORETA at both EEG scales, with the error increasing with source size, particularly for iEEG. For larger sources, ESL accuracy drastically decreased, and reconstruction volumes shifted to the center of the head for iEEG, while remaining stable for scalp EEG. Overall, the mislocalization of the reconstructed source was more pronounced for iEEG. We present a novel multiscale framework for the evaluation of distributed ESL, based on realistic multiscale EEG simulations. Our findings support that reconstruction results for scalp EEG are often more accurate than for iEEG, owing to the superior 3D coverage of the head. Particularly the iEEG-derived reconstruction results for larger, widespread generators should be treated with caution.

  6. A realistic multimodal modeling approach for the evaluation of distributed source analysis: application to sLORETA

    NASA Astrophysics Data System (ADS)

    Cosandier-Rimélé, D.; Ramantani, G.; Zentner, J.; Schulze-Bonhage, A.; Dümpelmann, M.

    2017-10-01

    Objective. Electrical source localization (ESL) deriving from scalp EEG and, in recent years, from intracranial EEG (iEEG), is an established method in epilepsy surgery workup. We aimed to validate the distributed ESL derived from scalp EEG and iEEG, particularly regarding the spatial extent of the source, using a realistic epileptic spike activity simulator. Approach. ESL was applied to the averaged scalp EEG and iEEG spikes of two patients with drug-resistant structural epilepsy. The ESL results for both patients were used to outline the location and extent of epileptic cortical patches, which served as the basis for designing a spatiotemporal source model. EEG signals for both modalities were then generated for different anatomic locations and spatial extents. ESL was subsequently performed on simulated signals with sLORETA, a commonly used distributed algorithm. ESL accuracy was quantitatively assessed for iEEG and scalp EEG. Main results. The source volume was overestimated by sLORETA at both EEG scales, with the error increasing with source size, particularly for iEEG. For larger sources, ESL accuracy drastically decreased, and reconstruction volumes shifted to the center of the head for iEEG, while remaining stable for scalp EEG. Overall, the mislocalization of the reconstructed source was more pronounced for iEEG. Significance. We present a novel multiscale framework for the evaluation of distributed ESL, based on realistic multiscale EEG simulations. Our findings support that reconstruction results for scalp EEG are often more accurate than for iEEG, owing to the superior 3D coverage of the head. Particularly the iEEG-derived reconstruction results for larger, widespread generators should be treated with caution.

  7. Geoacoustic inversion of a shallow fresh-water environment

    NASA Astrophysics Data System (ADS)

    Stotts, Steven A.; Knobles, David P.; Koch, Robert A.; Piper, James N.; Keller, Jason A.

    2003-10-01

    A recent experiment was conducted at The University of Texas/Applied Research Laboratories test station located at Lake Travis, Austin, TX. Implosive (light bulb), explosive (firecracker), and tonal sources were recorded on a dual-receiver system located on the bottom next to a range-independent underwater river channel. Inversion results for the broadband time series obtained at ranges of less than 1.5 km were used to predict measured transmission loss at several tonal frequencies in the band from 250-1000 Hz. The average water depth was approximately 38 m along the channel during the experiment. Sound speed profiles were calculated from temperature readings recorded as a function of depth. Implosive source spectra were measured and used to evaluate a model/data correlation cost function in a simulated annealing algorithm. Comparisons of inversion results using both a normal mode and a ray-based plane-wave reflection coefficient forward model [Stotts et al., J. Acoust. Soc. Am. (submitted)] are discussed. Transmission loss predicted from the inversion results is compared to the measured transmission loss. Differences between fluid and elastic layer bottom models are also presented.

  8. A radiation and energy budget algorithm for forest canopies

    NASA Astrophysics Data System (ADS)

    Tunick, A.

    2006-01-01

    Previously, it was shown that a one-dimensional, physics-based (conservation-law) computer model can provide a useful mathematical representation of the wind flow, temperatures, and turbulence inside and above a uniform forest stand. A key element of this calculation was a radiation and energy budget algorithm (implemented to predict the heat source). However, to keep the earlier publication brief, a full description of the radiation and energy budget algorithm was not given. Hence, this paper presents our equation set for calculating the incoming total radiation at the canopy top as well as the transmission, reflection, absorption, and emission of the solar flux through a forest stand. In addition, example model output is presented from three interesting numerical experiments, which were conducted to simulate the canopy microclimate for a forest stand that borders the Blossom Point Field Test Facility (located near La Plata, Maryland along the Potomac River). It is anticipated that the current numerical study will be useful to researchers and experimental planners who will be collecting acoustic and meteorological data at the Blossom Point Facility in the near future.

  9. Short-Term Solar Forecasting Performance of Popular Machine Learning Algorithms: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Florita, Anthony R; Elgindy, Tarek; Hodge, Brian S

    A framework for assessing the performance of short-term solar forecasting is presented in conjunction with a range of numerical results using global horizontal irradiation (GHI) from the open-source Surface Radiation Budget (SURFRAD) data network. A suite of popular machine learning algorithms is compared according to a set of statistically distinct metrics and benchmarked against the persistence-of-cloudiness forecast and a cloud motion forecast. Results show significant improvement compared to the benchmarks, with trade-offs among the machine learning algorithms depending on the desired error metric. Training inputs include time series observations of GHI for a history of years, historical weather and atmospheric measurements, and corresponding date and time stamps, such that training sensitivities might be inferred. Prediction outputs are GHI forecasts for 1, 2, 3, and 4 hours ahead of the issue time, and they are made for every month of the year for 7 locations. Photovoltaic power and energy outputs can then be computed from the solar forecasts to better understand power system impacts.

  10. Optimal configuration of power grid sources based on optimal particle swarm algorithm

    NASA Astrophysics Data System (ADS)

    Wen, Yuanhua

    2018-04-01

    In order to optimize the configuration of power grid sources, an improved particle swarm optimization algorithm is proposed. First, the concepts of multi-objective optimization and the Pareto solution set are reviewed. Then, the performance of the classical genetic algorithm, the classical particle swarm optimization algorithm and the improved particle swarm optimization algorithm is analyzed, and the three algorithms are simulated. Comparison of the test results demonstrates the superiority of the improved algorithm in convergence and optimization performance, which lays the foundation for the subsequent solution of the micro-grid power configuration problem.
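
    The classical particle swarm update that such improved variants start from is compact enough to show in full; the objective below is a placeholder for the grid-configuration cost:

        import numpy as np

        rng = np.random.default_rng(7)

        def cost(x):                       # stand-in for the grid-configuration objective
            return ((x - 0.3) ** 2).sum(axis=-1)

        n_particles, n_dim, iters = 30, 5, 200
        pos = rng.random((n_particles, n_dim))
        vel = np.zeros_like(pos)
        pbest, pbest_val = pos.copy(), cost(pos)
        gbest = pbest[pbest_val.argmin()]

        w, c1, c2 = 0.7, 1.5, 1.5          # inertia and acceleration coefficients
        for _ in range(iters):
            r1, r2 = rng.random((2, n_particles, n_dim))
            vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
            pos += vel
            val = cost(pos)
            improved = val < pbest_val
            pbest[improved], pbest_val[improved] = pos[improved], val[improved]
            gbest = pbest[pbest_val.argmin()]
        print("best configuration:", gbest.round(3))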

  11. The spectral positioning algorithm of new spectrum vehicle based on convex programming in wireless sensor network

    NASA Astrophysics Data System (ADS)

    Zhang, Yongjun; Lu, Zhixin

    2017-10-01

    Spectrum resources are very precious, so it is increasingly important to locate interference signals rapidly. Convex programming algorithms are often used as localization algorithms in wireless sensor networks. However, because the traditional convex programming algorithm suffers from excessive overlap among wireless sensor nodes, which lowers positioning accuracy, this paper proposes a new algorithm. The new algorithm is mainly based on the traditional convex programming algorithm: the spectrum vehicle dispatches unmanned aerial vehicles (UAVs) that record data periodically along different trajectories. According to the probability density distribution, the positioning area is segmented to further reduce the location area. Because the algorithm only adds the exchange of power values between the unknown node and the sensor nodes, the advantages of the convex programming algorithm are largely preserved, keeping the method simple and real-time. The experimental results show that the improved algorithm has better positioning accuracy than the original convex programming algorithm.

  12. Investigation of the optimum location of external markers for patient setup accuracy enhancement at external beam radiotherapy

    PubMed Central

    Torshabi, Ahmad Esmaili; Nankali, Saber

    2016-01-01

    In external beam radiotherapy, one of the most common and reliable methods for patient geometrical setup and/or prediction of the tumor location is the use of external markers. In this study, the main challenge is to increase the accuracy of patient setup by investigating the locations of the external markers. Since the location of each external marker may yield a different patient setup accuracy, it is important to assess different locations of external markers using appropriate selection algorithms. To do this, two algorithms, (a) canonical correlation analysis (CCA) and (b) principal component analysis (PCA), were proposed as input selection algorithms. They work on the basis of maximum correlation coefficient and minimum variance between given datasets. The proposed input selection algorithms work in combination with an adaptive neuro-fuzzy inference system (ANFIS) as a correlation model that gives patient positioning information as output. The proposed algorithms accurately provide the input file for the ANFIS correlation model. The required dataset for this study was prepared by means of a NURBS-based 4D XCAT anthropomorphic phantom that can model the shape and structure of complex organs in the human body along with motion information of dynamic organs. Moreover, a database of four real patients undergoing radiation therapy for lung cancers was utilized in this study for validation of the proposed strategy. The final analyzed results demonstrate that the input selection algorithms can reasonably select specific external markers from those areas of the thorax region where the root mean square error (RMSE) of the ANFIS model has its minimum values. It is also found that the selected marker locations lie close to those areas where the surface point motion has a large amplitude and a high correlation. PACS number(s): 87.55.km, 87.55.N PMID:27929479
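
    The PCA branch of the marker-selection idea can be sketched on surrogate data: markers that load most heavily on the leading principal component carry the dominant (respiratory) motion variance. Everything below, including the latent tumor trace, is synthetic:

        import numpy as np
        from sklearn.decomposition import PCA

        rng = np.random.default_rng(8)
        # surrogate data: motion traces of 20 candidate external markers (rows are
        # time samples) plus a latent tumor trace they partially correlate with
        t = np.linspace(0.0, 60.0, 600)
        tumor = np.sin(2 * np.pi * 0.25 * t)                 # ~respiratory frequency
        markers = np.outer(tumor, rng.uniform(0.2, 1.0, 20)) + rng.normal(0, 0.3, (600, 20))

        # PCA-based selection: keep markers with the largest absolute loadings on
        # the leading component, i.e. those carrying the dominant motion variance
        pca = PCA(n_components=1).fit(markers)
        loadings = np.abs(pca.components_[0])
        best = np.argsort(loadings)[::-1][:3]
        print("selected marker indices:", best)

        # sanity check: selected markers should correlate strongly with the tumor trace
        print([round(np.corrcoef(markers[:, i], tumor)[0, 1], 2) for i in best])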

  13. Correlating DMSP and NOAA Ion Precipitation Observations with Low Altitude ENA Emissions During the Declining Phase of Solar Cycle 23

    NASA Astrophysics Data System (ADS)

    Mackler, D. A.; Jahn, J. M.; Perez, J. D.; Pollock, C. J.

    2014-12-01

    Plasma sheet particles with sufficiently low mirror points will interact with thermospheric neutrals through charge exchange. The resulting ENAs are no longer magnetically bound and can therefore be detected by remote platforms outside the ionosphere/lower atmosphere. These ENAs, closely associated with ion precipitation, are termed Low Altitude Emissions (LAEs). They are non-isotropic in velocity space and mimic the corresponding ion pitch angle distribution. In this study we present a statistical correlation between remote observations of LAE emission characteristics and ion precipitation maps determined in situ over the declining phase of solar cycle 23 (2000-2005). We discuss the strength and derived location (MLT, iMLAT) of LAEs as a function of geomagnetic activity level in relation to the simultaneously measured strength, location, and spectral characteristics of in situ ion precipitation. These comparisons may allow us to use ENA images to assess where and how much energy is deposited during any type of enhanced geomagnetic activity. The precipitating ion differential directional flux maps are built up by combining NOAA-14/15/16 TED and DMSP-13/14/15 SSJ4 data. Low altitude ENA source locations are identified algorithmically using IMAGE/MENA images. ENA flux maps are derived by computing the LAE source locations assuming an ENA emission altitude (h) of 650 km, then projecting each image pixel onto a sphere with radius Re+h to determine the local time and latitude extent of the ENA source. The IGRF magnetic field model is used in combination with the Solar Magnetic coordinates of LAE pixels to compute the pitch angle of the escaping neutrals (previously ions, before charge exchange). Pitch angles larger than 90° will have mirror points deeper in the atmosphere than the assumed emission altitude.

  14. Optimization of a large-scale microseismic monitoring network in northern Switzerland

    NASA Astrophysics Data System (ADS)

    Kraft, Toni; Mignan, Arnaud; Giardini, Domenico

    2013-10-01

    We have developed a network optimization method for regional-scale microseismic monitoring networks and applied it to optimize the densification of the existing seismic network in northeastern Switzerland. The new network will form the backbone of a 10-yr study on the neotectonic activity of this area that will help to better constrain the seismic hazard imposed on nuclear power plants and waste repository sites. This task defined the requirements regarding location precision (0.5 km in epicentre and 2 km in source depth) and detection capability [magnitude of completeness Mc = 1.0 (ML)]. The goal of the optimization was to find the geometry and size of the network that met these requirements. Existing stations in Switzerland, Germany and Austria were considered in the optimization procedure. We based the optimization on the simulated annealing approach proposed by Hardt & Scherbaum, which aims to minimize the volume of the error ellipsoid of the linearized earthquake location problem (D-criterion). We extended their algorithm to: calculate traveltimes of seismic body waves using a finite difference ray tracer and the 3-D velocity model of Switzerland; calculate seismic body-wave amplitudes at arbitrary stations assuming the Brune source model and using scaling and attenuation relations recently derived for Switzerland; and estimate the noise level at arbitrary locations within Switzerland using a first-order ambient seismic noise model based on 14 land-use classes defined by the EU project CORINE and open GIS data. We calculated optimized geometries for networks with 10-35 added stations and tested the stability of the optimization result by repeated runs with changing initial conditions. Further, we estimated the attainable magnitude of completeness (Mc) for the different-sized optimal networks using the Bayesian Magnitude of Completeness (BMC) method introduced by Mignan et al. The algorithm developed in this study is also applicable to smaller optimization problems, for example, small local monitoring networks. Possible applications include volcano monitoring and the surveillance of induced seismicity associated with geotechnical operations. Our algorithm is especially useful for optimizing networks in populated areas with heterogeneous noise conditions and where complex velocity structures or existing stations have to be considered.
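
    The following is a minimal sketch of the optimization principle only, assuming straight rays in a constant-velocity 2-D medium rather than the study's 3-D ray tracing, amplitude and noise models: simulated annealing moves station coordinates to shrink a proxy for the error-ellipse volume (the D-criterion) averaged over a grid of hypothetical epicentres.

```python
# Hedged sketch of D-criterion network optimization by simulated annealing.
# Geometry, grid, and cooling schedule are invented for illustration.
import numpy as np

rng = np.random.default_rng(1)
v = 6.0                                    # km/s, assumed constant velocity
targets = np.array([[x, y] for x in (20, 40, 60) for y in (20, 40, 60)],
                   dtype=float)            # hypothetical epicentre grid (km)

def ellipsoid_cost(stations):
    """Mean sqrt(det((G^T G)^-1)) over targets: error-ellipse volume proxy."""
    cost = 0.0
    for src in targets:
        d = stations - src
        r = np.linalg.norm(d, axis=1) + 1e-9
        G = d / (r[:, None] * v)           # d(traveltime)/d(source coords)
        cost += np.sqrt(1.0 / np.linalg.det(G.T @ G))
    return cost / len(targets)

cur = rng.uniform(0, 80, size=(8, 2))      # initial random 8-station network
cur_cost = ellipsoid_cost(cur)
best, best_cost = cur.copy(), cur_cost
T = 1.0
for _ in range(2000):
    trial = cur + rng.normal(0, 2.0, size=cur.shape)
    np.clip(trial, 0, 80, out=trial)       # keep stations inside the region
    c = ellipsoid_cost(trial)
    if c < cur_cost or rng.random() < np.exp((cur_cost - c) / T):
        cur, cur_cost = trial, c
        if c < best_cost:
            best, best_cost = trial.copy(), c
    T *= 0.998                             # geometric cooling schedule

print(f"optimized mean error-ellipse proxy: {best_cost:.4f}")
```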

  15. A Lightning Channel Retrieval Algorithm for the North Alabama Lightning Mapping Array (LMA)

    NASA Technical Reports Server (NTRS)

    Koshak, William; Arnold, James E. (Technical Monitor)

    2002-01-01

    A new multi-station VHF time-of-arrival (TOA) antenna network is, at the time of this writing, coming on-line in Northern Alabama. The network, called the Lightning Mapping Array (LMA), employs GPS timing and detects VHF radiation from discrete segments (effectively point emitters) that comprise the channel of lightning strokes within cloud and ground flashes. The network will support on-going ground validation activities of the low Earth orbiting Lightning Imaging Sensor (LIS) satellite developed at NASA Marshall Space Flight Center (MSFC) in Huntsville, Alabama. It will also provide for many interesting and detailed studies of the distribution and evolution of thunderstorms and lightning in the Tennessee Valley, and will offer many interesting comparisons with other meteorological/geophysical data sets associated with lightning and thunderstorms. In order to take full advantage of these benefits, it is essential that the LMA channel mapping accuracy (in both space and time) be fully characterized and optimized. In this study, a new revised channel mapping retrieval algorithm is introduced. The algorithm is an extension of earlier work provided in Koshak and Solakiewicz (1996) in the analysis of the NASA Kennedy Space Center (KSC) Lightning Detection and Ranging (LDAR) system. As in the 1996 study, direct algebraic solutions are obtained by inverting a simple linear system of equations, thereby making computer searches through a multi-dimensional parameter domain of a Chi-Squared function unnecessary. However, the new algorithm is developed completely in spherical Earth-centered coordinates (longitude, latitude, altitude), rather than in the (x, y, z) Cartesian coordinates employed in the 1996 study. Hence, no mathematical transformations from (x, y, z) into spherical coordinates are required (such transformations involve more numerical error propagation, more computer program coding, and slightly more CPU computing time). The new algorithm also has a more realistic definition of source altitude that accounts for Earth oblateness (this can become important for sources that are hundreds of kilometers away from the network). In addition, the new algorithm is being applied to analyze computer simulated LMA datasets in order to obtain detailed location/time retrieval error maps for sources in and around the LMA network. These maps will provide a more comprehensive analysis of retrieval errors for LMA than the 1996 study did of LDAR retrieval errors. Finally, we note that the new algorithm can be applied to LDAR, and essentially any other multi-station TOA network that depends on direct line-of-sight antenna excitation.
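
    To illustrate the algebraic idea in its simplest form, the sketch below solves the TOA problem in flat Cartesian coordinates (the paper's contribution is precisely to avoid this flat-Earth simplification): differencing squared arrival-time equations cancels the quadratic source terms and leaves a linear least-squares problem in the source position and emission time. The geometry and timing values are synthetic.

```python
# Hedged sketch of a direct algebraic TOA solution via a linear system.
import numpy as np

c = 0.2998                                  # km/us, speed of light
rng = np.random.default_rng(2)
# Stations are given an artificial 3-D spread; real VHF networks are nearly
# planar, one reason the full algorithm needs care (and spherical coords).
stations = rng.uniform(-30, 30, size=(6, 3))
src, t0 = np.array([5.0, -8.0, 7.0]), 12.0  # true source (km) and time (us)
t = t0 + np.linalg.norm(stations - src, axis=1) / c

# Differencing squared range equations against station 0 gives, per station i:
# 2(s_i - s_0).x - 2c^2(t_i - t_0) t0 = |s_i|^2 - |s_0|^2 - c^2(t_i^2 - t_0^2)
A = np.column_stack([2 * (stations[1:] - stations[0]),
                     -2 * c**2 * (t[1:] - t[0])])
b = (np.sum(stations[1:]**2, axis=1) - np.sum(stations[0]**2)
     - c**2 * (t[1:]**2 - t[0]**2))
sol, *_ = np.linalg.lstsq(A, b, rcond=None)
print("recovered source (km):", sol[:3].round(3),
      " emission time (us):", round(float(sol[3]), 3))
```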

  16. Quantum connectivity optimization algorithms for entanglement source deployment in a quantum multi-hop network

    NASA Astrophysics Data System (ADS)

    Zou, Zhen-Zhen; Yu, Xu-Tao; Zhang, Zai-Chen

    2018-04-01

    The entanglement source deployment problem, which has a significant influence on quantum connectivity, is first studied in a quantum multi-hop network. Two optimization algorithms for a limited number of entanglement sources are then introduced in this paper. A deployment algorithm based on node position (DNP) improves connectivity by guaranteeing that all overlapping areas of the distribution ranges of the entanglement sources contain nodes. In addition, a deployment algorithm based on an improved genetic algorithm (DIGA) is implemented by dividing the region into grids. From the simulation results, DNP and DIGA improve quantum connectivity by 213.73% and 248.83% compared to random deployment, respectively, and the latter performs better in terms of connectivity. However, DNP is more flexible and adaptive to change, as it stops running when all nodes are covered.

  17. Analysis of precipitation data in Bangladesh through hierarchical clustering and multidimensional scaling

    NASA Astrophysics Data System (ADS)

    Rahman, Md. Habibur; Matin, M. A.; Salma, Umma

    2017-12-01

    The precipitation patterns of seventeen locations in Bangladesh from 1961 to 2014 were studied using cluster analysis and metric multidimensional scaling. In doing so, the current research applies four major hierarchical clustering methods to precipitation in conjunction with different dissimilarity measures and metric multidimensional scaling. A variety of clustering algorithms were used to provide multiple clustering dendrograms for a mixture of distance measures. The dendrogram of pre-monsoon rainfall for the seventeen locations formed five clusters. The pre-monsoon precipitation data for the areas of Srimangal and Sylhet were located in two clusters across the combination of five dissimilarity measures and four hierarchical clustering algorithms. The single linkage algorithm with Euclidean and Manhattan distances, the average linkage algorithm with the Minkowski distance, and Ward's linkage algorithm provided similar results with regard to monsoon precipitation. The results for the post-monsoon and winter precipitation data are shown in different types of dendrograms with disparate combinations of sub-clusters. The schematic geometrical representations of the precipitation data using metric multidimensional scaling showed that the post-monsoon rainfall of Cox's Bazar was located far from those of the other locations. The results of a box-and-whisker plot, different clustering techniques, and metric multidimensional scaling indicated that the precipitation behaviour of Srimangal and Sylhet during the pre-monsoon season, Cox's Bazar and Sylhet during the monsoon season, Maijdi Court and Cox's Bazar during the post-monsoon season, and Cox's Bazar and Khulna during the winter differed from that at other locations in Bangladesh.
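
    A minimal sketch of this kind of pipeline, with synthetic data and a reduced station list, might look as follows; the linkage/distance combinations mirror those named above, and the classical metric MDS embedding is computed by double-centering the squared distance matrix.

```python
# Hedged sketch: hierarchical clustering under several linkage/distance
# choices, plus classical (metric) MDS. Station names and values are synthetic.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist, squareform

rng = np.random.default_rng(3)
stations = ["Srimangal", "Sylhet", "Dhaka", "Khulna",
            "Barisal", "Rangpur", "CoxsBazar"]
X = rng.gamma(2.0, 50.0, size=(7, 54))      # 54 years of seasonal totals

for metric in ("euclidean", "cityblock", "minkowski"):
    for method in ("single", "average", "ward"):
        if method == "ward" and metric != "euclidean":
            continue                         # Ward assumes Euclidean input
        Z = linkage(X, method=method, metric=metric)
        labels = fcluster(Z, t=3, criterion="maxclust")
        print(metric, method, dict(zip(stations, labels)))

# Classical MDS: double-centre the squared distance matrix, eigen-decompose.
D2 = squareform(pdist(X)) ** 2
J = np.eye(7) - np.ones((7, 7)) / 7
B = -0.5 * J @ D2 @ J
w, V = np.linalg.eigh(B)                     # ascending eigenvalues
coords = V[:, -2:] * np.sqrt(np.maximum(w[-2:], 0))  # top-2 embedding
print(dict(zip(stations, coords.round(1).tolist())))
```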

  18. Advanced Source Deconvolution Methods for Compton Telescopes

    NASA Astrophysics Data System (ADS)

    Zoglauer, Andreas

    The next generation of space telescopes utilizing Compton scattering for astrophysical observations is destined to one day unravel the mysteries behind Galactic nucleosynthesis, to determine the origin of the positron annihilation excess near the Galactic center, and to uncover the hidden emission mechanisms behind gamma-ray bursts. Besides astrophysics, Compton telescopes are establishing themselves in heliophysics, planetary sciences, medical imaging, accelerator physics, and environmental monitoring. Since the COMPTEL days, great advances in the achievable energy and position resolution were possible, creating an extremely vast, but also extremely sparsely sampled data space. Unfortunately, an optimum way to analyze the data from the next generation of Compton telescopes, one that retrieves all source parameters (location, spectrum, polarization, flux) and at the same time achieves the best possible resolution and sensitivity, has not yet been found. This is especially important for all science objectives looking at the inner Galaxy: the large number of expected sources, the high background (internal and Galactic diffuse emission), and the limited angular resolution make it the most taxing case for data analysis. In general, two key challenges exist: First, what are the best data space representations to answer the specific science questions? Second, what is the best way to deconvolve the data to fully retrieve the source parameters? For modern Compton telescopes, the existing data space representations can either correctly reconstruct the absolute flux (binned mode) or achieve the best possible resolution (list-mode); both together have not been possible up to now. Here we propose to develop a two-stage hybrid reconstruction method which combines the best aspects of both. Using a proof-of-concept implementation we can for the first time show that it is possible to alternate during each deconvolution step between a binned-mode approach to get the flux right and a list-mode approach to get the best angular resolution, achieving both at the same time. The second open question concerns the best deconvolution algorithm. For example, several algorithms have been investigated for the famous COMPTEL 26Al map, which resulted in significantly different images. There is no clear answer as to which approach provides the most accurate result, largely due to the fact that detailed simulations to test and verify the approaches and their limitations were not possible at that time. This has changed, and therefore we propose to evaluate several deconvolution algorithms (e.g. Richardson-Lucy, Maximum-Entropy, MREM, and stochastic origin ensembles) with simulations of typical observations to find the best algorithm for each application and for each stage of the hybrid reconstruction approach. We will adapt, implement, and fully evaluate the hybrid source reconstruction approach as well as the various deconvolution algorithms with simulations of synthetic benchmarks and simulations of key science objectives such as diffuse nuclear line science and continuum science of point sources, as well as with calibrations/observations of the COSI balloon telescope.
This proposal for "development of new data analysis methods for future satellite missions" will significantly improve the source deconvolution techniques for modern Compton telescopes and will allow unlocking the full potential of envisioned satellite missions using Compton-scatter technology in astrophysics, heliophysics and planetary sciences, and ultimately help them to "discover how the universe works" and to better "understand the sun". Ultimately it will also benefit ground based applications such as nuclear medicine and environmental monitoring as all developed algorithms will be made publicly available within the open-source Compton telescope analysis framework MEGAlib.
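
    Of the deconvolution algorithms named above, Richardson-Lucy is the simplest to sketch; the toy below applies its multiplicative update to a 1-D Poisson-noise problem with an invented Gaussian response matrix, purely to show the form of the iteration.

```python
# Hedged sketch of the Richardson-Lucy update: non-negative, count-preserving
# multiplicative deconvolution. Response, data sizes, and iteration count
# are illustrative only.
import numpy as np

rng = np.random.default_rng(4)
n = 64
truth = np.zeros(n); truth[20] = 100.0; truth[45] = 60.0  # two point sources
# Toy instrument response: Gaussian blur as an n x n matrix.
x = np.arange(n)
H = np.exp(-0.5 * ((x[:, None] - x[None, :]) / 3.0) ** 2)
H /= H.sum(axis=0, keepdims=True)            # column-normalized response
data = rng.poisson(H @ truth).astype(float)

u = np.full(n, data.sum() / n)               # flat non-negative start
for _ in range(200):
    ratio = data / np.maximum(H @ u, 1e-12)
    u *= H.T @ ratio                         # RL multiplicative update
print("recovered peaks near bins:", np.sort(np.argsort(u)[-2:]))
```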

  19. Passive acoustic localization of the Atlantic bottlenose dolphin using whistles and echolocation clicks.

    PubMed

    Freitag, L E; Tyack, P L

    1993-04-01

    A method for localization and tracking of calling marine mammals was tested under realistic field conditions that include noise, multipath, and arbitrarily located sensors. Experiments were performed in two locations using four and six hydrophones with captive Atlantic bottlenose dolphins (Tursiops truncatus). Acoustic signals from the animals were collected in the field using a digital acoustic data acquisition system. The data were then processed off-line to determine relative hydrophone positions and the animal locations. Accurate hydrophone position estimates are achieved by pinging sequentially from each hydrophone to all the others. A two-step least-squares algorithm is then used to determine sensor locations from the calibration data. Animal locations are determined by estimating the time differences of arrival of the dolphin signals at the different sensors. The peak of a matched filter output or the first cycle of the observed waveform is used to determine arrival time of an echolocation click. Cross correlation between hydrophones is used to determine inter-sensor time delays of whistles. Calculation of source location using the time difference of arrival measurements is done using a least-squares solution to minimize error. These preliminary experimental results based on a small set of data show that realistic trajectories for moving animals may be generated from consecutive location estimates.
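
    The whistle-processing step, estimating the inter-sensor delay from the cross-correlation peak, can be sketched as follows; the sample rate and the synthetic call are invented for the demo.

```python
# Hedged sketch of time-delay estimation by cross-correlation, the step the
# paper applies to whistles (matched filtering is used for clicks).
import numpy as np

fs = 48_000                              # Hz, assumed sample rate
t = np.arange(0, 0.1, 1 / fs)
whistle = np.sin(2 * np.pi * (8000 + 20000 * t) * t)   # chirp-like call
true_delay = 123                         # samples

rng = np.random.default_rng(5)
h1 = np.concatenate([whistle, np.zeros(true_delay)])
h2 = np.concatenate([np.zeros(true_delay), whistle])
h1 += 0.3 * rng.standard_normal(h1.size)
h2 += 0.3 * rng.standard_normal(h2.size)

xc = np.correlate(h2, h1, mode="full")
lag = np.argmax(xc) - (h1.size - 1)      # positive lag: h2 arrives later
print(f"estimated delay: {lag} samples ({lag / fs * 1e3:.2f} ms)")
```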

  20. An Indoor Pedestrian Positioning Method Using HMM with a Fuzzy Pattern Recognition Algorithm in a WLAN Fingerprint System

    PubMed Central

    Ni, Yepeng; Liu, Jianbo; Liu, Shan; Bai, Yaxin

    2016-01-01

    With the rapid development of smartphones and wireless networks, indoor location-based services have become more and more prevalent. Due to the sophisticated propagation of radio signals, the Received Signal Strength Indicator (RSSI) shows a significant variation during pedestrian walking, which introduces critical errors in deterministic indoor positioning. To solve this problem, we present a novel method to improve the indoor pedestrian positioning accuracy by embedding a fuzzy pattern recognition algorithm into a Hidden Markov Model. The fuzzy pattern recognition algorithm follows the rule that the RSSI fading has a positive correlation to the distance between the measuring point and the AP location even during a dynamic positioning measurement. Through this algorithm, we use the RSSI variation trend to replace the specific RSSI value to achieve a fuzzy positioning. The transition probability of the Hidden Markov Model is trained by the fuzzy pattern recognition algorithm with pedestrian trajectories. Using the Viterbi algorithm with the trained model, we can obtain a set of hidden location states. In our experiments, we demonstrate that, compared with the deterministic pattern matching algorithm, our method can greatly improve the positioning accuracy and shows robust environmental adaptability. PMID:27618053
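
    The final decoding step is standard Viterbi; a minimal sketch with invented location states and transition/emission probabilities (not the paper's trained values) is shown below.

```python
# Hedged sketch of Viterbi decoding over discrete location states.
import numpy as np

states = ["roomA", "corridor", "roomB"]          # hypothetical locations
pi = np.array([0.6, 0.3, 0.1])                   # initial distribution
A = np.array([[0.7, 0.3, 0.0],                   # transition probabilities
              [0.2, 0.6, 0.2],
              [0.0, 0.3, 0.7]])
B = np.array([[0.8, 0.1, 0.1],                   # P(observation | state)
              [0.2, 0.6, 0.2],
              [0.1, 0.1, 0.8]])
obs = [0, 1, 1, 2, 2]                            # fuzzy-matched RSSI symbols

logd = np.log(pi + 1e-12) + np.log(B[:, obs[0]] + 1e-12)
back = []
for o in obs[1:]:
    trans = logd[:, None] + np.log(A + 1e-12)    # all state-to-state moves
    back.append(np.argmax(trans, axis=0))        # best predecessor per state
    logd = trans.max(axis=0) + np.log(B[:, o] + 1e-12)

path = [int(np.argmax(logd))]
for bp in reversed(back):                        # backtrack the best path
    path.append(int(bp[path[-1]]))
path.reverse()
print(" -> ".join(states[s] for s in path))
```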

  1. Continuous energy adjoint transport for photons in PHITS

    NASA Astrophysics Data System (ADS)

    Malins, Alex; Machida, Masahiko; Niita, Koji

    2017-09-01

    Adjoint Monte Carlo can be an efficient algorithm for solving photon transport problems where the size of the tally is relatively small compared to the source. Such problems are typical in environmental radioactivity calculations, where natural or fallout radionuclides spread over a large area contribute to the air dose rate at a particular location. Moreover photon transport with continuous energy representation is vital for accurately calculating radiation protection quantities. Here we describe the incorporation of an adjoint Monte Carlo capability for continuous energy photon transport into the Particle and Heavy Ion Transport code System (PHITS). An adjoint cross section library for photon interactions was developed based on the JENDL-4.0 library, by adding cross sections for adjoint incoherent scattering and pair production. PHITS reads in the library and implements the adjoint transport algorithm by Hoogenboom. Adjoint pseudo-photons are spawned within the forward tally volume and transported through space. Currently pseudo-photons can undergo coherent and incoherent scattering within the PHITS adjoint function. Photoelectric absorption is treated implicitly. The calculation result is recovered from the pseudo-photon flux calculated over the true source volume. A new adjoint tally function facilitates this conversion. This paper gives an overview of the new function and discusses potential future developments.

  2. Localization of marine mammals near Hawaii using an acoustic propagation model

    NASA Astrophysics Data System (ADS)

    Tiemann, Christopher O.; Porter, Michael B.; Frazer, L. Neil

    2004-06-01

    Humpback whale songs were recorded on six widely spaced receivers of the Pacific Missile Range Facility (PMRF) hydrophone network near Hawaii during March of 2001. These recordings were used to test a new approach to localizing the whales that exploits the time-difference of arrival (time lag) of their calls as measured between receiver pairs in the PMRF network. The usual technique for estimating source position uses the intersection of hyperbolic curves of constant time lag, but a drawback of this approach is its assumption of a constant wave speed and straight-line propagation to associate acoustic travel time with range. In contrast to hyperbolic fixing, the algorithm described here uses an acoustic propagation model to account for waveguide and multipath effects when estimating travel time from hypothesized source positions. A comparison between predicted and measured time lags forms an ambiguity surface, or visual representation of the most probable whale position in a horizontal plane around the array. This is an important benefit because it allows for automated peak extraction to provide a location estimate. Examples of whale localizations using real and simulated data in algorithms of increasing complexity are provided.
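
    A stripped-down version of the ambiguity-surface computation might look as follows, with straight-ray travel times standing in for the acoustic propagation model the paper actually uses; the geometry and noise values are invented.

```python
# Hedged sketch of an ambiguity surface: grid candidate positions, predict
# pair-wise time lags, and score the match against measured lags.
import numpy as np

c = 1.5                                          # km/s nominal sound speed
phones = np.array([[0, 0], [10, 0], [0, 10], [10, 10]], dtype=float)
true_src = np.array([6.0, 3.0])

def lags(src):
    t = np.linalg.norm(phones - src, axis=1) / c
    return t[1:] - t[0]                          # lags relative to phone 0

measured = lags(true_src) + 0.005 * np.random.default_rng(6).standard_normal(3)

xs = ys = np.linspace(-2, 12, 141)
surface = np.zeros((ys.size, xs.size))
for i, y in enumerate(ys):
    for j, x in enumerate(xs):
        resid = lags(np.array([x, y])) - measured
        surface[i, j] = -np.sum(resid**2)        # higher = better match

i, j = np.unravel_index(np.argmax(surface), surface.shape)
print(f"peak of ambiguity surface at ({xs[j]:.1f}, {ys[i]:.1f}) km")
```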

  3. Focal mechanism determination for induced seismicity using the neighbourhood algorithm

    NASA Astrophysics Data System (ADS)

    Tan, Yuyang; Zhang, Haijiang; Li, Junlun; Yin, Chen; Wu, Furong

    2018-06-01

    Induced seismicity is widely detected during hydraulic fracture stimulation. To better understand the fracturing process, a thorough knowledge of the source mechanism is required. In this study, we develop a new method to determine the focal mechanism for induced seismicity. Three misfit functions are used in our method to measure the differences between observed and modeled data from different aspects, including the waveform, P wave polarity and S/P amplitude ratio. We minimize these misfit functions simultaneously using the neighbourhood algorithm. Through synthetic data tests, we show the ability of our method to yield reliable focal mechanism solutions and study the effect of velocity inaccuracy and location error on the solutions. To mitigate the impact of the uncertainties, we develop a joint inversion method to find the optimal source depth and focal mechanism simultaneously. Using the proposed method, we determine the focal mechanisms of 40 stimulation induced seismic events in an oil/gas field in Oman. By investigating the results, we find that the reactivation of pre-existing faults is the main cause of the induced seismicity in the monitored area. Other observations obtained from the focal mechanism solutions are also consistent with earlier studies in the same area.

  4. Collaborative human-machine analysis to disambiguate entities in unstructured text and structured datasets

    NASA Astrophysics Data System (ADS)

    Davenport, Jack H.

    2016-05-01

    Intelligence analysts demand rapid information fusion capabilities to develop and maintain accurate situational awareness and understanding of dynamic enemy threats in asymmetric military operations. The ability to extract relationships between people, groups, and locations from a variety of text datasets is critical to proactive decision making. The derived network of entities must be automatically created and presented to analysts to assist in decision making. DECISIVE ANALYTICS Corporation (DAC) provides capabilities to automatically extract entities, relationships between entities, semantic concepts about entities, and network models of entities from text and multi-source datasets. DAC's Natural Language Processing (NLP) Entity Analytics model entities as complex systems of attributes and interrelationships which are extracted from unstructured text via NLP algorithms. The extracted entities are automatically disambiguated via machine learning algorithms, and resolution recommendations are presented to the analyst for validation; the analyst's expertise is leveraged in this hybrid human/computer collaborative model. Military capability is enhanced by these NLP Entity Analytics because analysts can now create/update an entity profile with intelligence automatically extracted from unstructured text, thereby fusing entity knowledge from structured and unstructured data sources. Operational and sustainment costs are reduced since analysts do not have to manually tag and resolve entities.

  5. Multi-Agent Patrolling under Uncertainty and Threats.

    PubMed

    Chen, Shaofei; Wu, Feng; Shen, Lincheng; Chen, Jing; Ramchurn, Sarvapali D

    2015-01-01

    We investigate a multi-agent patrolling problem where information is distributed alongside threats in environments with uncertainties. Specifically, the information and threat at each location are independently modelled as multi-state Markov chains, whose states are not observed until the location is visited by an agent. While agents will obtain information at a location, they may also suffer damage from the threat at that location. Therefore, the goal of the agents is to gather as much information as possible while mitigating the damage incurred. To address this challenge, we formulate the single-agent patrolling problem as a Partially Observable Markov Decision Process (POMDP) and propose a computationally efficient algorithm to solve this model. Building upon this, to compute patrols for multiple agents, the single-agent algorithm is extended for each agent with the aim of maximising its marginal contribution to the team. We empirically evaluate our algorithm on problems of multi-agent patrolling and show that it outperforms a baseline algorithm by up to 44% for 10 agents and by 21% for 15 agents in large domains.

  6. Development of an FBG Sensor Array for Multi-Impact Source Localization on CFRP Structures.

    PubMed

    Jiang, Mingshun; Sai, Yaozhang; Geng, Xiangyi; Sui, Qingmei; Liu, Xiaohui; Jia, Lei

    2016-10-24

    We proposed and studied an impact detection system based on a fiber Bragg grating (FBG) sensor array and the multiple signal classification (MUSIC) algorithm to determine the location and number of low-velocity impacts on a carbon fiber-reinforced polymer (CFRP) plate. An FBG linear array, consisting of seven FBG sensors, was used for detecting the ultrasonic signals from impacts. The edge-filter method was employed for signal demodulation. The Shannon wavelet transform was used to extract narrow-band signals from the impacts. The Gerschgorin disc theorem was used for estimating the number of impacts. We used the MUSIC algorithm to obtain the coordinates of multiple impacts. The impact detection system was tested on a 500 mm × 500 mm × 1.5 mm CFRP plate. The results show that the maximum and average errors of multi-impact localization are 9.2 mm and 7.4 mm, respectively.
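
    A minimal sketch of MUSIC on a uniform linear array, standing in for the seven-sensor FBG line (the array geometry, SNR and source angles are invented), is given below: eigen-decompose the sample covariance and scan the noise-subspace pseudospectrum for peaks.

```python
# Hedged sketch of MUSIC direction finding with a 7-element uniform array.
import numpy as np

rng = np.random.default_rng(7)
m, snapshots = 7, 400                 # sensors, time samples
d_over_lambda = 0.5                   # element spacing in wavelengths
true_doas = np.deg2rad([-20.0, 35.0]) # two simultaneous sources

def steering(theta):
    k = np.arange(m)
    return np.exp(2j * np.pi * d_over_lambda * k * np.sin(theta))

A = np.column_stack([steering(t) for t in true_doas])
S = rng.standard_normal((2, snapshots)) + 1j * rng.standard_normal((2, snapshots))
N = 0.1 * (rng.standard_normal((m, snapshots))
           + 1j * rng.standard_normal((m, snapshots)))
X = A @ S + N

R = X @ X.conj().T / snapshots        # sample covariance
w, V = np.linalg.eigh(R)              # ascending eigenvalues
En = V[:, : m - 2]                    # noise subspace (2 sources assumed)

scan = np.deg2rad(np.linspace(-90, 90, 721))
p = np.array([1.0 / np.linalg.norm(En.conj().T @ steering(t)) ** 2
              for t in scan])
pk = np.where((p[1:-1] > p[:-2]) & (p[1:-1] > p[2:]))[0] + 1  # local maxima
best = pk[np.argsort(p[pk])][-2:]
print("estimated DOAs (deg):", np.sort(np.rad2deg(scan[best])).round(1))
```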

  7. Multiple Solutions of Real-time Tsunami Forecasting Using Short-term Inundation Forecasting for Tsunamis Tool

    NASA Astrophysics Data System (ADS)

    Gica, E.

    2016-12-01

    The Short-term Inundation Forecasting for Tsunamis (SIFT) tool, developed by NOAA Center for Tsunami Research (NCTR) at the Pacific Marine Environmental Laboratory (PMEL), is used in forecast operations at the Tsunami Warning Centers in Alaska and Hawaii. The SIFT tool relies on a pre-computed tsunami propagation database, real-time DART buoy data, and an inversion algorithm to define the tsunami source. The tsunami propagation database is composed of 50 km × 100 km unit sources, simulated basin-wide for at least 24 hours. Different combinations of unit sources, DART buoys, and length of real-time DART buoy data can generate a wide range of results within the defined tsunami source. For an inexperienced SIFT user, the primary challenge is to determine which solution, among multiple solutions for a single tsunami event, would provide the best forecast in real time. This study investigates how the use of different tsunami sources affects simulated tsunamis at tide gauge locations. Using the tide gauge at Hilo, Hawaii, a total of 50 possible solutions for the 2011 Tohoku tsunami are considered. Maximum tsunami wave amplitude and root mean square error results are used to compare tide gauge data and the simulated tsunami time series. Results of this study will facilitate SIFT users' efforts to determine if the simulated tide gauge tsunami time series from a specific tsunami source solution would be within the range of possible solutions. This study will serve as the basis for investigating more historical tsunami events and tide gauge locations.

  8. Anthropogenic Sulphur Dioxide Load over China as Observed from Different Satellite Sensors

    NASA Technical Reports Server (NTRS)

    Koukouli, M. E.; Balis, D. S.; Johannes Van Der A, Ronald; Theys, N.; Hedelt, P.; Richter, A.; Krotkov, N.; Li, Can; Taylor, M.

    2016-01-01

    China, with its rapid economic growth and immense exporting power, has been the focus of many studies during this previous decade quantifying its increasing emissions contribution to the Earth's atmosphere. With a population slowly shifting towards enlarged power and purchasing needs, the ceaseless inauguration of new power plants, smelters, refineries and industrial parks leads infallibly to increases in sulphur dioxide, SO2, emissions. The recent capability of next generation algorithms as well as new space-borne instruments to detect anthropogenic SO2 loads has enabled a fast advancement in this field. In the following work, algorithms providing total SO2 columns over China based on SCIAMACHY/Envisat, OMI/Aura and GOME2/MetopA observations are presented. The need for post-processing and gridding of the SO2 fields is further revealed in this work, following the path of previous publications. Further, it is demonstrated that the usage of appropriate statistical tools permits studying parts of the datasets typically excluded, such as the winter months loads. Focusing on actual point sources, such as megacities and known power plant locations, instead of entire provinces, monthly mean time series have been examined in detail. The sharp decline in SO2 emissions in more than 90% - 95% of the locations studied confirms the recent implementation of government desulphurisation legislation; however, locations with increases, even for the previous five years, are also identified. These belong to provinces with emerging economies which are in haste to install power plants and are possibly viewed leniently by the authorities, in favour of growth. The SO2 load seasonality has also been examined in detail with a novel mathematical tool, with 70% of the point sources having a statistically significant annual cycle with highs in winter and lows in summer, following the heating requirements of the Chinese population.

  9. Anthropogenic sulphur dioxide load over China as observed from different satellite sensors

    NASA Astrophysics Data System (ADS)

    Koukouli, M. E.; Balis, D. S.; van der A, Ronald Johannes; Theys, N.; Hedelt, P.; Richter, A.; Krotkov, N.; Li, C.; Taylor, M.

    2016-11-01

    China, with its rapid economic growth and immense exporting power, has been the focus of many studies during this previous decade quantifying its increasing emissions contribution to the Earth's atmosphere. With a population slowly shifting towards enlarged power and purchasing needs, the ceaseless inauguration of new power plants, smelters, refineries and industrial parks leads infallibly to increases in sulphur dioxide, SO2, emissions. The recent capability of next generation algorithms as well as new space-borne instruments to detect anthropogenic SO2 loads has enabled a fast advancement in this field. In the following work, algorithms providing total SO2 columns over China based on SCIAMACHY/Envisat, OMI/Aura and GOME2/MetopA observations are presented. The need for post-processing and gridding of the SO2 fields is further revealed in this work, following the path of previous publications. Further, it is demonstrated that the usage of appropriate statistical tools permits studying parts of the datasets typically excluded, such as the winter months loads. Focusing on actual point sources, such as megacities and known power plant locations, instead of entire provinces, monthly mean time series have been examined in detail. The sharp decline in SO2 emissions in more than 90%-95% of the locations studied confirms the recent implementation of government desulphurisation legislation; however, locations with increases, even for the previous five years, are also identified. These belong to provinces with emerging economies which are in haste to install power plants and are possibly viewed leniently by the authorities, in favour of growth. The SO2 load seasonality has also been examined in detail with a novel mathematical tool, with 70% of the point sources having a statistically significant annual cycle with highs in winter and lows in summer, following the heating requirements of the Chinese population.

  10. Can the BMS Algorithm Decode Up to $\lfloor \frac{d_G-g-1}{2} \rfloor$ Errors? Yes, but with Some Additional Remarks

    NASA Astrophysics Data System (ADS)

    Sakata, Shojiro; Fujisawa, Masaya

    It is a well-known fact [7], [9] that the BMS algorithm with majority voting can decode up to half the Feng-Rao designed distance $d_{FR}$. Since $d_{FR}$ is not smaller than the Goppa designed distance $d_G$, that algorithm can correct up to $\lfloor \frac{d_G-1}{2} \rfloor$ errors. On the other hand, it has been considered to be evident that the original BMS algorithm (without voting) [1], [2] can correct up to $\lfloor \frac{d_G-g-1}{2} \rfloor$ errors, similarly to the basic algorithm by Skorobogatov-Vladut. But is it true? In this short paper, we show that it is true, although we need a few remarks and some additional procedures for determining the Groebner basis of the error locator ideal exactly. In fact, as the basic algorithm gives a set of polynomials whose zero set contains the error locators as a subset, it cannot always give the exact error locators, unless the syndrome equation is solved to find the error values in addition.

  11. Detection and characterization of cultural noise sources in magnetotelluric data: individual and joint analysis of the polarization attributes of the electric and magnetic field time-series in the time-frequency domain

    NASA Astrophysics Data System (ADS)

    Escalas, M.; Queralt, P.; Ledo, J.; Marcuello, A.

    2012-04-01

    The magnetotelluric (MT) method is a passive electromagnetic technique currently used to characterize sites for the geological storage of CO2. Such sites are usually located near industrialized, urban, or farming areas, where man-made electromagnetic (EM) signals contaminate the MT data. The identification and characterization of the artificial EM sources that generate this so-called "cultural noise" is an important challenge in obtaining the most reliable results from the MT method. The polarization attributes of an EM signal (tilt angle, ellipticity, and phase difference between its orthogonal components) are related to the character of its source. In a previous work (Escalas et al. 2011), we proposed a method to distinguish natural signal from cultural noise in raw MT data. It is based on the polarization analysis of the MT time-series in the time-frequency domain, using a wavelet scheme. We developed an algorithm to implement the method and tested it with both synthetic and field data. In 2010, we carried out a controlled-source electromagnetic (CSEM) experiment at the Hontomín site (the Research Laboratory on Geological Storage of CO2 in Spain). MT time-series were contaminated at different frequencies with the signal emitted by a controlled artificial EM source: two electric dipoles (1 km long, arranged in North-South and East-West directions). The analysis of the electric field time-series acquired in this experiment with our algorithm was successful: the polarization attributes of both the natural and artificial signals were obtained in the time-frequency domain, highlighting their differences. In the present work, we have processed the magnetic field time-series acquired in the Hontomín experiment. This new analysis of the polarization attributes of the magnetic field data has provided additional information for detecting the contribution of the artificial source in the measured data. Moreover, the joint analysis of the polarization attributes of the electric and magnetic fields has been crucial to fully characterize the properties and the location of the noise source. Escalas, M., Queralt, P., Ledo, J., Marcuello, A., 2011. Identification of cultural noise sources in magnetotelluric data: estimating polarization attributes in the time-frequency domain using wavelet analysis. Geophysical Research Abstracts Vol. 13, EGU2011-6085. EGU General Assembly 2011.
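
    As a simplified illustration of polarization-attribute estimation (time-domain covariance analysis rather than the paper's wavelet-based time-frequency scheme), the sketch below recovers the tilt angle and an ellipticity estimate for a synthetic elliptically polarized two-component record.

```python
# Hedged sketch: eigen-decomposition of the 2x2 covariance of (Ex, Ey)
# yields the tilt of the polarization ellipse and a minor/major axis ratio.
import numpy as np

rng = np.random.default_rng(8)
t = np.linspace(0, 1, 2000)
# Elliptically polarized "cultural noise": major axis rotated by 30 deg.
ex0 = np.cos(2 * np.pi * 50 * t)
ey0 = 0.4 * np.sin(2 * np.pi * 50 * t)          # quadrature minor axis
ang = np.deg2rad(30)
ex = np.cos(ang) * ex0 - np.sin(ang) * ey0 + 0.05 * rng.standard_normal(t.size)
ey = np.sin(ang) * ex0 + np.cos(ang) * ey0 + 0.05 * rng.standard_normal(t.size)

C = np.cov(np.vstack([ex, ey]))                 # 2x2 covariance matrix
w, V = np.linalg.eigh(C)                        # ascending eigenvalues
major = V[:, 1]                                 # major-axis direction
tilt = np.degrees(np.arctan2(major[1], major[0])) % 180
ellipticity = np.sqrt(w[0] / w[1])              # minor/major axis ratio
print(f"tilt = {tilt:.1f} deg, ellipticity = {ellipticity:.2f}")
```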

  12. Fast 3D elastic micro-seismic source location using new GPU features

    NASA Astrophysics Data System (ADS)

    Xue, Qingfeng; Wang, Yibo; Chang, Xu

    2016-12-01

    In this paper, we describe new GPU features and their application to passive seismic (micro-seismic) source location. Locating micro-seismic events is quite important in seismic exploration, especially when searching for unconventional oil and gas resources. Unlike traditional ray-based methods, wave-equation methods, such as the one used in this paper, have a remarkable advantage in adapting to low signal-to-noise conditions and require no manual data selection. However, their conspicuous deficiency, high computational cost, has kept these methods from wide use in industrial fields. To make the method practical, we implement imaging-like wave-equation micro-seismic location in 3D elastic media and use GPUs to accelerate our algorithm. We also introduce some new GPU features into the implementation to solve the data transfer and GPU utilization problems. Numerical and field data experiments show that our method can achieve more than a 30% performance improvement in the GPU implementation simply by using these new features.

  13. A Two-Phase Time Synchronization-Free Localization Algorithm for Underwater Sensor Networks.

    PubMed

    Luo, Junhai; Fan, Liying

    2017-03-30

    Underwater Sensor Networks (UWSNs) can enable a broad range of applications such as resource monitoring, disaster prevention, and navigation assistance. Sensor node localization in UWSNs is an especially relevant topic. Global Positioning System (GPS) information is not suitable for use in UWSNs because of underwater propagation problems. Hence, some localization algorithms proposed for UWSNs that rely on precise time synchronization between sensor nodes are not feasible. In this paper, we propose a localization algorithm called the Two-Phase Time Synchronization-Free Localization Algorithm (TP-TSFLA). TP-TSFLA contains two phases, namely, a range-based estimation phase and a range-free evaluation phase. In the first phase, we address a time synchronization-free localization scheme based on the Particle Swarm Optimization (PSO) algorithm to obtain the coordinates of the unknown sensor nodes. In the second phase, we propose a Circle-based Range-Free Localization Algorithm (CRFLA) to locate the unlocalized sensor nodes which cannot obtain location information through the first phase. In the second phase, sensor nodes which are localized in the first phase act as new anchor nodes to help realize localization. Hence, in this algorithm, we use a small number of mobile beacons to obtain location information without any other anchor nodes. Besides, to improve the precision of the range-free method, CRFLA is extended with a coordinate adjustment scheme. The simulation results show that TP-TSFLA can achieve a relatively high localization ratio without time synchronization.

  14. A Two-Phase Time Synchronization-Free Localization Algorithm for Underwater Sensor Networks

    PubMed Central

    Luo, Junhai; Fan, Liying

    2017-01-01

    Underwater Sensor Networks (UWSNs) can enable a broad range of applications such as resource monitoring, disaster prevention, and navigation assistance. Sensor node localization in UWSNs is an especially relevant topic. Global Positioning System (GPS) information is not suitable for use in UWSNs because of underwater propagation problems. Hence, some localization algorithms proposed for UWSNs that rely on precise time synchronization between sensor nodes are not feasible. In this paper, we propose a localization algorithm called the Two-Phase Time Synchronization-Free Localization Algorithm (TP-TSFLA). TP-TSFLA contains two phases, namely, a range-based estimation phase and a range-free evaluation phase. In the first phase, we address a time synchronization-free localization scheme based on the Particle Swarm Optimization (PSO) algorithm to obtain the coordinates of the unknown sensor nodes. In the second phase, we propose a Circle-based Range-Free Localization Algorithm (CRFLA) to locate the unlocalized sensor nodes which cannot obtain location information through the first phase. In the second phase, sensor nodes which are localized in the first phase act as new anchor nodes to help realize localization. Hence, in this algorithm, we use a small number of mobile beacons to obtain location information without any other anchor nodes. Besides, to improve the precision of the range-free method, CRFLA is extended with a coordinate adjustment scheme. The simulation results show that TP-TSFLA can achieve a relatively high localization ratio without time synchronization. PMID:28358342
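
    A minimal sketch of the range-based first phase, a plain PSO fitted to noisy beacon ranges, is given below; the swarm size and coefficients are generic PSO defaults, not values from the paper.

```python
# Hedged sketch: particle swarm optimization of a node position against
# noisy ranges from (mobile) beacon positions. Geometry is invented.
import numpy as np

rng = np.random.default_rng(9)
beacons = np.array([[0, 0, 0], [50, 0, 5], [0, 50, 10], [50, 50, 0]], float)
node = np.array([22.0, 31.0, 14.0])                       # unknown truth
ranges = np.linalg.norm(beacons - node, axis=1) + rng.normal(0, 0.5, 4)

def cost(p):
    return np.sum((np.linalg.norm(beacons - p, axis=1) - ranges) ** 2)

n, dim = 30, 3
pos = rng.uniform(0, 50, (n, dim))
vel = np.zeros((n, dim))
pbest = pos.copy()
pbest_c = np.array([cost(p) for p in pos])
gbest = pbest[np.argmin(pbest_c)].copy()

for _ in range(200):
    r1, r2 = rng.random((n, dim)), rng.random((n, dim))
    vel = 0.72 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos += vel
    c = np.array([cost(p) for p in pos])
    improved = c < pbest_c
    pbest[improved], pbest_c[improved] = pos[improved], c[improved]
    gbest = pbest[np.argmin(pbest_c)].copy()

print("estimated node position:", gbest.round(2))
```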

  15. Improved earthquake monitoring in the central and eastern United States in support of seismic assessments for critical facilities

    USGS Publications Warehouse

    Leith, William S.; Benz, Harley M.; Herrmann, Robert B.

    2011-01-01

    Evaluation of seismic monitoring capabilities in the central and eastern United States for critical facilities - including nuclear powerplants - focused on specific improvements to understand better the seismic hazards in the region. The report is not an assessment of seismic safety at nuclear plants. To accomplish the evaluation and to provide suggestions for improvements using funding from the American Recovery and Reinvestment Act of 2009, the U.S. Geological Survey examined addition of new strong-motion seismic stations in areas of seismic activity and addition of new seismic stations near nuclear power-plant locations, along with integration of data from the Transportable Array of some 400 mobile seismic stations. Some 38 and 68 stations, respectively, were suggested for addition in active seismic zones and near-power-plant locations. Expansion of databases for strong-motion and other earthquake source-characterization data also was evaluated. Recognizing pragmatic limitations of station deployment, augmentation of existing deployments provides improvements in source characterization by quantification of near-source attenuation in regions where larger earthquakes are expected. That augmentation also supports systematic data collection from existing networks. The report further utilizes the application of modeling procedures and processing algorithms, with the additional stations and the improved seismic databases, to leverage the capabilities of existing and expanded seismic arrays.

  16. Comparing Different Fault Identification Algorithms in Distributed Power System

    NASA Astrophysics Data System (ADS)

    Alkaabi, Salim

    A power system is a large, complex system that delivers electrical power from the generation units to the consumers. As the demand for electrical power increases, distributed power generation has been introduced into the power system. Faults may occur in the power system at any time and in different locations. These faults cause severe damage to the system, as they may lead to full failure of the power system. The use of distributed generation in the power system makes it even harder to identify the location of faults in the system. The main objective of this work is to test different fault location identification algorithms on a power system with varying amounts of power injected by distributed generators. As faults may lead the system to full failure, this is an important area for research. In this thesis, different fault location identification algorithms have been tested and compared while varying the amount of power injected from distributed generators. The algorithms were tested on the IEEE 34-node test feeder using MATLAB, and the results were compared to find when these algorithms might fail and to assess their reliability.

  17. Intrusion-Tolerant Location Information Services in Intelligent Vehicular Networks

    NASA Astrophysics Data System (ADS)

    Yan, Gongjun; Yang, Weiming; Shaner, Earl F.; Rawat, Danda B.

    Intelligent Vehicular Networks, known as Vehicle-to-Vehicle and Vehicle-to-Roadside wireless communications (also called Vehicular Ad hoc Networks), are revolutionizing our daily driving with better safety and more infotainment. Most, if not all, applications will depend on accurate location information. Thus, it is of importance to provide intrusion-tolerant location information services. In this paper, we describe an adaptive algorithm that detects and filters the false location information injected by intruders. Given a noisy environment of mobile vehicles, the algorithm estimates the high resolution location of a vehicle by refining low resolution location input. We also investigate results of simulations and evaluate the quality of the intrusion-tolerant location service.

  18. 3D-Web-GIS RFID location sensing system for construction objects.

    PubMed

    Ko, Chien-Ho

    2013-01-01

    Construction site managers could benefit from being able to visualize on-site construction objects. Radio frequency identification (RFID) technology has been shown to improve the efficiency of construction object management. The objective of this study is to develop a 3D-Web-GIS RFID location sensing system for construction objects. An RFID 3D location sensing algorithm combining Simulated Annealing (SA) and a gradient descent method is proposed to determine target object location. In the algorithm, SA is used to stabilize the search process and the gradient descent method is used to reduce errors. The locations of the analyzed objects are visualized using the 3D-Web-GIS system. A real construction site is used to validate the applicability of the proposed method, with results indicating that the proposed approach can provide faster, more accurate, and more stable 3D positioning results than other location sensing algorithms. The proposed system allows construction managers to better understand worksite status, thus enhancing managerial efficiency.

  19. 3D-Web-GIS RFID Location Sensing System for Construction Objects

    PubMed Central

    2013-01-01

    Construction site managers could benefit from being able to visualize on-site construction objects. Radio frequency identification (RFID) technology has been shown to improve the efficiency of construction object management. The objective of this study is to develop a 3D-Web-GIS RFID location sensing system for construction objects. An RFID 3D location sensing algorithm combining Simulated Annealing (SA) and a gradient descent method is proposed to determine target object location. In the algorithm, SA is used to stabilize the search process and the gradient descent method is used to reduce errors. The locations of the analyzed objects are visualized using the 3D-Web-GIS system. A real construction site is used to validate the applicability of the proposed method, with results indicating that the proposed approach can provide faster, more accurate, and more stable 3D positioning results than other location sensing algorithms. The proposed system allows construction managers to better understand worksite status, thus enhancing managerial efficiency. PMID:23864821
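
    The hybrid idea, global search stabilized by simulated annealing followed by gradient-descent error reduction, can be sketched on a range-residual cost as follows; the reader/tag geometry is invented for the demo.

```python
# Hedged sketch of SA-then-gradient-descent positioning on ranges.
import numpy as np

rng = np.random.default_rng(10)
readers = np.array([[0, 0, 0], [30, 0, 3], [0, 30, 3], [30, 30, 0]], float)
tag = np.array([12.0, 18.0, 1.5])                # unknown truth
ranges = np.linalg.norm(readers - tag, axis=1) + rng.normal(0, 0.3, 4)

def cost(p):
    return np.sum((np.linalg.norm(readers - p, axis=1) - ranges) ** 2)

def grad(p):
    d = p - readers
    dist = np.linalg.norm(d, axis=1)
    return 2 * np.sum(((dist - ranges) / dist)[:, None] * d, axis=0)

# Stage 1: simulated annealing stabilizes the global search.
cur = rng.uniform(0, 30, 3); cur_c = cost(cur); T = 5.0
best, best_c = cur.copy(), cur_c
for _ in range(1500):
    trial = cur + rng.normal(0, 1.0, 3)
    c = cost(trial)
    if c < cur_c or rng.random() < np.exp((cur_c - c) / T):
        cur, cur_c = trial, c
        if c < best_c:
            best, best_c = trial.copy(), c
    T *= 0.995

# Stage 2: gradient descent reduces the remaining error.
p = best
for _ in range(200):
    p = p - 0.05 * grad(p)
print("estimated tag position:", p.round(2))
```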

  20. A linked simulation-optimization model for solving the unknown groundwater pollution source identification problems.

    PubMed

    Ayvaz, M Tamer

    2010-09-20

    This study proposes a linked simulation-optimization model for solving unknown groundwater pollution source identification problems. In the proposed model, the MODFLOW and MT3DMS packages are used to simulate the flow and transport processes in the groundwater system. These models are then integrated with an optimization model based on the heuristic harmony search (HS) algorithm. In the proposed simulation-optimization model, the locations and release histories of the pollution sources are treated as the explicit decision variables and determined through the optimization model. Also, an implicit solution procedure is proposed to determine the optimum number of pollution sources, which is an advantage of this model. The performance of the proposed model is evaluated on two hypothetical examples for simple and complex aquifer geometries, measurement error conditions, and different HS solution parameter sets. The results indicated that the proposed simulation-optimization model is effective and may be used to solve inverse pollution source identification problems. Copyright (c) 2010 Elsevier B.V. All rights reserved.

  1. A hybrid flower pollination algorithm based modified randomized location for multi-threshold medical image segmentation.

    PubMed

    Wang, Rui; Zhou, Yongquan; Zhao, Chengyan; Wu, Haizhou

    2015-01-01

    Multi-threshold image segmentation is a powerful image processing technique that is used for the preprocessing of pattern recognition and computer vision. However, traditional multilevel thresholding methods are computationally expensive because they exhaustively search for the optimal thresholds that optimize the objective functions. To overcome this drawback, this paper proposes a flower pollination algorithm with a randomized location modification. The proposed algorithm is used to find optimal threshold values maximizing Otsu's objective function on eight medical grayscale images. When benchmarked against other state-of-the-art evolutionary algorithms, the new algorithm proves robust and effective, as shown by numerical experimental results including Otsu's objective values and standard deviations.
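
    The objective being maximized is Otsu's between-class variance; the sketch below evaluates it for a pair of thresholds on a synthetic histogram, with simple random search standing in for the flower pollination algorithm.

```python
# Hedged sketch of the two-threshold Otsu objective and a stochastic search.
import numpy as np

rng = np.random.default_rng(11)
# Synthetic 8-bit intensity samples drawn from three populations.
img = np.concatenate([rng.normal(60, 10, 4000),
                      rng.normal(130, 12, 4000),
                      rng.normal(200, 8, 4000)]).clip(0, 255).astype(int)
hist = np.bincount(img, minlength=256) / img.size
levels = np.arange(256)
mu_total = np.sum(levels * hist)

def otsu(th):
    """Between-class variance for thresholds th = (t1, t2)."""
    edges = [0, *sorted(th), 256]
    var = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        w = hist[lo:hi].sum()
        if w > 0:
            mu = np.sum(levels[lo:hi] * hist[lo:hi]) / w
            var += w * (mu - mu_total) ** 2
    return var

best = max((tuple(sorted(rng.integers(1, 255, 2))) for _ in range(5000)),
           key=otsu)
print("thresholds:", best, " objective:", round(otsu(best), 1))
```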

  2. dLocAuth: a dynamic multifactor authentication scheme for mCommerce applications using independent location-based obfuscation

    NASA Astrophysics Data System (ADS)

    Kuseler, Torben; Lami, Ihsan A.

    2012-06-01

    This paper proposes a new technique to obfuscate an authentication-challenge program (named LocProg) using randomly generated data together with a client's current location in real-time. LocProg can be used to enable any handset application on mobile devices (e.g. mCommerce on smartphones) that requires authentication with a remote authenticator (e.g. a bank). The motivation for this novel technique is to a) enhance security against replay attacks, which currently relies on real-time nonce(s), and b) add a new security factor, location verified by two independent sources, to challenge/response methods for authentication. To assure a secure live transaction, thus reducing the possibility of replay and other remote attacks, the authors have devised a novel technique to obtain the client's location from two independent sources: GPS on the client's side and the cellular network on the authenticator's side. The algorithm of LocProg is based on obfuscating "random elements plus a client's data" with a location-based key generated on the bank's side. LocProg is then sent to the client and is designed so that it will automatically integrate into the target application on the client's handset. The client can then de-obfuscate LocProg if s/he is within a certain range around the location calculated by the bank and if the correct personal data is supplied. LocProg also has features to protect against trial/error attacks. Analysis of LocAuth's security (trust, threat and system models) and trials based on a prototype implementation (on the Android platform) prove the viability and novelty of LocAuth.

  3. Estimation of distributed Fermat-point location for wireless sensor networking.

    PubMed

    Huang, Po-Hsian; Chen, Jiann-Liang; Larosa, Yanuarius Teofilus; Chiang, Tsui-Lien

    2011-01-01

    This work presents a localization scheme for use in wireless sensor networks (WSNs) that is based on a proposed connectivity-based RF localization strategy called the distributed Fermat-point location estimation algorithm (DFPLE). DFPLE estimates location from the triangle formed by the intersections of three neighboring beacon nodes. The Fermat point is determined as the point minimizing the total distance to the three vertices of the triangle. The estimated location area is then refined using the Fermat point to minimize the error in estimating sensor node locations. DFPLE solves the problems of large errors and poor performance encountered by localization schemes based on a bounding box algorithm. Performance analysis of a 200-node development environment reveals that, first, when the number of sensor nodes is below 150, the mean error decreases rapidly as the node density increases, and when the number of sensor nodes exceeds 170, the mean error remains below 1% as the node density increases. Second, when the number of beacon nodes is less than 60, normal nodes lack sufficient beacon nodes to enable their locations to be estimated; however, the mean error changes only slightly as the number of beacon nodes increases above 60. Simulation results revealed that the proposed algorithm for estimating sensor positions is more accurate than existing algorithms, and improves upon conventional bounding box strategies.
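
    The Fermat-point computation at the heart of DFPLE can be sketched with Weiszfeld's iteration, which converges to the point minimizing total distance to the three vertices whenever no triangle angle reaches 120 degrees; the coordinates below are invented.

```python
# Hedged sketch: Weiszfeld's iteration for the Fermat point of a triangle.
import numpy as np

def fermat_point(vertices, iters=100):
    p = vertices.mean(axis=0)                  # start at the centroid
    for _ in range(iters):
        d = np.linalg.norm(vertices - p, axis=1)
        if np.any(d < 1e-12):                  # sitting on a vertex
            break
        w = 1.0 / d                            # inverse-distance weights
        p = (vertices * w[:, None]).sum(axis=0) / w.sum()
    return p

tri = np.array([[0.0, 0.0], [4.0, 0.0], [1.0, 3.0]])
p = fermat_point(tri)
print("Fermat point:", p.round(3),
      "total distance:", np.linalg.norm(tri - p, axis=1).sum().round(3))
```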

  4. Reduced-Order Modeling and Wavelet Analysis of Turbofan Engine Structural Response Due to Foreign Object Damage (FOD) Events

    NASA Technical Reports Server (NTRS)

    Turso, James; Lawrence, Charles; Litt, Jonathan

    2004-01-01

    The development of a wavelet-based feature extraction technique specifically targeting FOD-event induced vibration signal changes in gas turbine engines is described. The technique performs wavelet analysis of accelerometer signals from specified locations on the engine and is shown to be robust in the presence of significant process and sensor noise. It is envisioned that the technique will be combined with Kalman filter thermal/health parameter estimation for FOD-event detection via information fusion from these (and perhaps other) sources. Due to the lack of high-frequency FOD-event test data in the open literature, a reduced-order turbofan structural model (ROM) was synthesized from a finite element model modal analysis to support the investigation. In addition to providing test data for algorithm development, the ROM is used to determine the optimal sensor location for FOD-event detection. In the presence of significant noise, precise location of the FOD event in time was obtained using the developed wavelet-based feature.

  5. Reduced-Order Modeling and Wavelet Analysis of Turbofan Engine Structural Response Due to Foreign Object Damage "FOD" Events

    NASA Technical Reports Server (NTRS)

    Turso, James A.; Lawrence, Charles; Litt, Jonathan S.

    2007-01-01

    The development of a wavelet-based feature extraction technique specifically targeting FOD-event induced vibration signal changes in gas turbine engines is described. The technique performs wavelet analysis of accelerometer signals from specified locations on the engine and is shown to be robust in the presence of significant process and sensor noise. It is envisioned that the technique will be combined with Kalman filter thermal/ health parameter estimation for FOD-event detection via information fusion from these (and perhaps other) sources. Due to the lack of high-frequency FOD-event test data in the open literature, a reduced-order turbofan structural model (ROM) was synthesized from a finite-element model modal analysis to support the investigation. In addition to providing test data for algorithm development, the ROM is used to determine the optimal sensor location for FOD-event detection. In the presence of significant noise, precise location of the FOD event in time was obtained using the developed wavelet-based feature.

  6. A false-alarm aware methodology to develop robust and efficient multi-scale infrared small target detection algorithm

    NASA Astrophysics Data System (ADS)

    Moradi, Saed; Moallem, Payman; Sabahi, Mohamad Farzan

    2018-03-01

    False alarm rate and detection rate are still two contradictory metrics for infrared small target detection in an infrared search and track system (IRST), despite the development of new detection algorithms. In certain circumstances, not detecting true targets is more tolerable than detecting false items as true targets. Hence, considering background clutter and detector noise as the sources of false alarms in an IRST system, in this paper, a false alarm aware methodology is presented to reduce false alarm rate while the detection rate remains undegraded. To this end, advantages and disadvantages of each detection algorithm are investigated and the sources of the false alarms are determined. Two target detection algorithms having independent false alarm sources are chosen in a way that the disadvantages of the one algorithm can be compensated by the advantages of the other one. In this work, multi-scale average absolute gray difference (AAGD) and Laplacian of point spread function (LoPSF) are utilized as the cornerstones of the desired algorithm of the proposed methodology. After presenting a conceptual model for the desired algorithm, it is implemented through the most straightforward mechanism. The desired algorithm effectively suppresses background clutter and eliminates detector noise. Also, since the input images are processed through just four different scales, the desired algorithm has good capability for real-time implementation. Simulation results in terms of signal-to-clutter ratio and background suppression factor on real and simulated images demonstrate the effectiveness and performance of the proposed methodology. Since the desired algorithm was developed based on independent false alarm sources, our proposed methodology is expandable to any pair of detection algorithms which have different false alarm sources.

  7. Monitoring microearthquakes with the San Andreas fault observatory at depth

    USGS Publications Warehouse

    Oye, V.; Ellsworth, W.L.

    2007-01-01

    In 2005, the San Andreas Fault Observatory at Depth (SAFOD) was drilled through the San Andreas Fault zone at a depth of about 3.1 km. The borehole has subsequently been instrumented with high-frequency geophones in order to better constrain locations and source processes of nearby microearthquakes that will be targeted in the upcoming phase of SAFOD. The microseismic monitoring software MIMO, developed by NORSAR, has been installed at SAFOD to provide near-real time locations and magnitude estimates using the high sampling rate (4000 Hz) waveform data. To improve the detection and location accuracy, we incorporate data from the nearby, shallow borehole (~250 m) seismometers of the High Resolution Seismic Network (HRSN). The event association algorithm of the MIMO software incorporates HRSN detections provided by the USGS real time earthworm software. The concept of the new event association is based on the generalized beam forming, primarily used in array seismology. The method requires the pre-computation of theoretical travel times in a 3D grid of potential microearthquake locations to the seismometers of the current station network. By minimizing the differences between theoretical and observed detection times an event is associated and the location accuracy is significantly improved.

  8. Effect of anisoplanatism on the measurement accuracy of an extended-source Hartmann-Shack wavefront sensor

    NASA Astrophysics Data System (ADS)

    Woeger, Friedrich; Rimmele, Thomas

    2009-10-01

    We analyze the effect of anisoplanatic atmospheric turbulence on the measurement accuracy of an extended-source Hartmann-Shack wavefront sensor (HSWFS). We have numerically simulated an extended-source HSWFS, using a scenery of the solar surface that is imaged through anisoplanatic atmospheric turbulence and imaging optics. Solar extended-source HSWFSs often use cross-correlation algorithms in combination with subpixel shift finding algorithms to estimate the wavefront gradient, two of which were tested for their effect on the measurement accuracy. We find that the measurement error of an extended-source HSWFS is governed mainly by the optical geometry of the HSWFS, employed subpixel finding algorithm, and phase anisoplanatism. Our results show that effects of scintillation anisoplanatism are negligible when cross-correlation algorithms are used.

  9. Correlation approach to identify coding regions in DNA sequences

    NASA Technical Reports Server (NTRS)

    Ossadnik, S. M.; Buldyrev, S. V.; Goldberger, A. L.; Havlin, S.; Mantegna, R. N.; Peng, C. K.; Simons, M.; Stanley, H. E.

    1994-01-01

    Recently, it was observed that noncoding regions of DNA sequences possess long-range power-law correlations, whereas coding regions typically display only short-range correlations. We develop an algorithm based on this finding that enables investigators to perform a statistical analysis on long DNA sequences to locate possible coding regions. The algorithm is particularly successful in predicting the location of lengthy coding regions. For example, for the complete genome of yeast chromosome III (315,344 nucleotides), at least 82% of the predictions correspond to putative coding regions; the algorithm correctly identified all coding regions larger than 3000 nucleotides, 92% of coding regions between 2000 and 3000 nucleotides long, and 79% of coding regions between 1000 and 2000 nucleotides. The predictive ability of this new algorithm supports the claim that there is a fundamental difference in the correlation property between coding and noncoding sequences. This algorithm, which is not species-dependent, can be implemented with other techniques for rapidly and accurately locating relatively long coding regions in genomic sequences.
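
    The record does not reproduce the correlation statistic itself, but the distinction the algorithm exploits can be illustrated with detrended fluctuation analysis (DFA), a measure introduced by the same group: the scaling exponent of a DNA walk stays near 0.5 for sequences with only short-range correlations (coding-like) and exceeds 0.5 in the presence of long-range power-law correlations. The sketch below is a generic DFA estimator under that assumption, not the paper's exact coding-region statistic.

      import numpy as np

      def dfa_exponent(x, scales=(16, 32, 64, 128, 256)):
          # Detrended fluctuation analysis scaling exponent alpha:
          # alpha ~ 0.5 for uncorrelated (short-range) sequences,
          # alpha > 0.5 for long-range power-law correlations.
          y = np.cumsum(x - np.mean(x))               # integrated profile
          fluct = []
          for s in scales:
              n = len(y) // s
              segs = y[:n * s].reshape(n, s)
              t = np.arange(s)
              f2 = []
              for seg in segs:
                  a, b = np.polyfit(t, seg, 1)        # linear detrend per window
                  f2.append(np.mean((seg - (a * t + b)) ** 2))
              fluct.append(np.sqrt(np.mean(f2)))
          return np.polyfit(np.log(scales), np.log(fluct), 1)[0]

      # Hypothetical usage: a purine/pyrimidine (+1/-1) walk over a DNA window
      rng = np.random.default_rng(1)
      walk = rng.choice([-1, 1], size=4096)           # uncorrelated -> alpha ~ 0.5
      print(round(dfa_exponent(walk), 2))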

  10. Optimal Doppler centroid estimation for SAR data from a quasi-homogeneous source

    NASA Technical Reports Server (NTRS)

    Jin, M. Y.

    1986-01-01

    This correspondence briefly describes two Doppler centroid estimation (DCE) algorithms, provides a performance summary for these algorithms, and presents the experimental results. These algorithms include that of Li et al. (1985) and a newly developed one that is optimized for quasi-homogeneous sources. The performance enhancement achieved by the optimal DCE algorithm is clearly demonstrated by the experimental results.

  11. Pricing and location decisions in multi-objective facility location problem with M/M/m/k queuing systems

    NASA Astrophysics Data System (ADS)

    Tavakkoli-Moghaddam, Reza; Vazifeh-Noshafagh, Samira; Taleizadeh, Ata Allah; Hajipour, Vahid; Mahmoudi, Amin

    2017-01-01

    This article presents a new multi-objective model for a facility location problem with congestion and pricing policies. This model considers situations in which immobile service facilities are congested by a stochastic demand following M/M/m/k queues. The presented model belongs to the class of mixed-integer nonlinear programming models and NP-hard problems. To solve such a hard model, a new multi-objective optimization algorithm based on vibration theory, namely multi-objective vibration damping optimization (MOVDO), is developed. In order to tune the algorithm's parameters, the Taguchi approach using a response metric is implemented. The computational results are compared with those of the non-dominated ranking genetic algorithm and the non-dominated sorting genetic algorithm. The outputs demonstrate the robustness of the proposed MOVDO on large-sized problems.

  12. The optimal algorithm for Multi-source RS image fusion.

    PubMed

    Fu, Wei; Huang, Shui-Guang; Li, Zeng-Shun; Shen, Hao; Li, Jun-Shuai; Wang, Peng-Yuan

    2016-01-01

    In order to address the issue that available fusion methods cannot self-adaptively adjust the fusion rules according to the subsequent processing requirements of a Remote Sensing (RS) image, this paper puts forward GSDA (genetic-iterative self-organizing data analysis algorithm), which integrates the merits of the genetic algorithm with the advantages of the iterative self-organizing data analysis algorithm, for multi-source RS image fusion. The proposed algorithm takes the translation-invariant wavelet transform as the model operator and the contrast pyramid transform as the observation operator. It then constructs the objective function as a weighted sum of evaluation indices and optimizes this function with GSDA so as to obtain a higher-resolution RS image. The main points of the article are summarized as follows:
    • The contribution proposes the iterative self-organizing data analysis algorithm for multi-source RS image fusion.
    • The article presents the GSDA algorithm for the self-adaptive adjustment of the fusion rules.
    • The text proposes the model operator and the observation operator as the fusion scheme for RS images based on GSDA.
    The proposed algorithm opens up a novel algorithmic pathway for multi-source RS image fusion by means of GSDA.

  13. A micro-Doppler sonar for acoustic surveillance in sensor networks

    NASA Astrophysics Data System (ADS)

    Zhang, Zhaonian

    Wireless sensor networks have been employed in a wide variety of applications, despite the limited energy and communication resources at each sensor node. Low-power custom VLSI chips implementing passive acoustic sensing algorithms have been successfully integrated into an acoustic surveillance unit and demonstrated for detection and location of sound sources. In this dissertation, I explore active and passive acoustic sensing techniques, signal processing and classification algorithms for detection and classification in a multinodal sensor network environment. I will present the design and characterization of a continuous-wave micro-Doppler sonar to image objects with articulated moving components. As an example application for this system, we use it to image gaits of humans and four-legged animals. I will present the micro-Doppler gait signatures of a walking person, a dog and a horse. I will discuss the resolution and range of this micro-Doppler sonar and use experimental results to support the theoretical analyses. In order to reduce the data rate and make the system amenable to wireless sensor networks, I will present a second micro-Doppler sonar that uses bandpass sampling for data acquisition. Speech recognition algorithms are explored for biometric identification from one's gait, and I will present and compare the classification performance of the two systems. The acoustic micro-Doppler sonar design and biometric identification results are the first in the field, as previous work used either video cameras or microwave technology. I will also review bearing estimation algorithms and present results of applying these algorithms to bearing estimation and tracking of moving vehicles. Another major source of power consumption at each sensor node is the wireless interface. To address the need for low-power communications in a wireless sensor network, I will also discuss the design and implementation of ultra-wideband transmitters in a three-dimensional silicon-on-insulator process. Lastly, a prototype of neuromorphic interconnects using ultra-wideband radio will be presented.

  14. Long Period (LP) volcanic earthquake source location at Merapi volcano by using dense array techniques

    NASA Astrophysics Data System (ADS)

    Metaxian, Jean Philippe; Budi Santoso, Agus; Laurin, Antoine; Subandriyo, Subandriyo; Widyoyudo, Wiku; Arshab, Ghofar

    2015-04-01

    Since 2010, Merapi has shown unusual activity compared to recent decades. Powerful phreatic explosions have been observed, some of them preceded by LP signals. In the literature, LP seismicity is thought to originate within the fluid and therefore to be representative of the pressurization state of the volcano plumbing system. Another model suggests that LP events are caused by slow, quasi-brittle, low-stress-drop failure driven by transient upper-edifice deformations. Knowledge of the spatial distribution of LP events is fundamental for better understanding the physical processes occurring in the conduit, as well as for monitoring and improving eruption forecasting. LP events recorded at Merapi have a spectral content dominated by frequencies between 0.8 and 3 Hz. To locate the source of these events, we installed a seismic antenna composed of 4 broadband CMG-6TD Güralp stations. This network has an aperture of 300 m. It is located on the site of Pasarbubar, between 500 and 800 m from the crater rim. Two multi-parameter stations (seismic, tiltmeter, S-P) located in the same area, equipped with broadband CMG-40T Güralp sensors, may also be used to complement the antenna data. The source of LP events is located using different approaches. In the first, we use a method based on measuring the time delays between the onsets of LP events at each array receiver. The observed time-delay differences obtained for each pair of receivers are compared to theoretical values calculated from travel times computed between grid nodes positioned in the structure and each receiver. In a second approach, we estimate the slowness vector by applying the MUSIC algorithm to three-component data. From the slowness vector, we deduce the back-azimuth and the incidence angle, which give an estimate of the LP source depth in the conduit. This work is part of the Domerapi project funded by the French Agence Nationale de la Recherche (https://sites.google.com/site/domerapi2).

  15. Scintillator-based transverse proton beam profiler for laser-plasma ion sources.

    PubMed

    Dover, N P; Nishiuchi, M; Sakaki, H; Alkhimova, M A; Faenov, A Ya; Fukuda, Y; Kiriyama, H; Kon, A; Kondo, K; Nishitani, K; Ogura, K; Pikuz, T A; Pirozhkov, A S; Sagisaka, A; Kando, M; Kondo, K

    2017-07-01

    A high repetition rate scintillator-based transverse beam profile diagnostic for laser-plasma accelerated proton beams has been designed and commissioned. The proton beam profiler uses differential filtering to provide coarse energy resolution and a flexible design to allow optimisation for expected beam energy range and trade-off between spatial and energy resolution depending on the application. A plastic scintillator detector, imaged with a standard 12-bit scientific camera, allows data to be taken at a high repetition rate. An algorithm encompassing the scintillator non-linearity is described to estimate the proton spectrum at different spatial locations.

  16. Predicting Gene Structures from Multiple RT-PCR Tests

    NASA Astrophysics Data System (ADS)

    Kováč, Jakub; Vinař, Tomáš; Brejová, Broňa

    It has been demonstrated that the use of additional information such as ESTs and protein homology can significantly improve accuracy of gene prediction. However, many sources of external information are still being omitted from consideration. Here, we investigate the use of product lengths from RT-PCR experiments in gene finding. We present hardness results and practical algorithms for several variants of the problem and apply our methods to a real RT-PCR data set in the Drosophila genome. We conclude that the use of RT-PCR data can improve the sensitivity of gene prediction and locate novel splicing variants.

  17. Coupled optics reconstruction from TBT data using MAD-X

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Alexahin, Y.; Gianfelice-Wendt, E.; /Fermilab

    2007-06-01

    Turn-by-turn BPM data provide immediate information on the coupled optics functions at BPM locations. In the case of small deviations from the known (design) uncoupled optics some cognizance of the sources of perturbation, BPM calibration errors and tilts can also be inferred without detailed lattice modeling. In practical situations, however, fitting the lattice model with the help of some optics code would lead to more reliable results. We present an algorithm for coupled optics reconstruction from TBT data on the basis of MAD-X and give examples of its application for the Fermilab Tevatron accelerator.

  18. Innovative signal processing for Johnson Noise thermometry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ezell, N. Dianne Bull; Britton, Jr, Charles L.; Roberts, Michael

    This report summarizes the newly developed algorithm that subtracts electromagnetic interference (EMI). EMI performance is very important to this measurement because any interference in the form of pickup from external signal sources, such as fluorescent lighting ballasts or motors, can skew the measurement. Two methods of removing EMI were developed and tested at various locations. This report also summarizes the testing performed at facilities outside Oak Ridge National Laboratory using both EMI removal techniques. The first EMI removal technique was reviewed in previous milestone reports; this report therefore details the second method.

  19. Advanced computer techniques for inverse modeling of electric current in cardiac tissue

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hutchinson, S.A.; Romero, L.A.; Diegert, C.F.

    1996-08-01

    For many years, ECGs and vector cardiograms have been the tools of choice for non-invasive diagnosis of cardiac conduction problems, such as found in reentrant tachycardia or Wolff-Parkinson-White (WPW) syndrome. Through skillful analysis of these skin-surface measurements of cardiac generated electric currents, a physician can deduce the general location of heart conduction irregularities. Using a combination of high-fidelity geometry modeling, advanced mathematical algorithms and massively parallel computing, Sandia's approach would provide much more accurate information and thus allow the physician to pinpoint the source of an arrhythmia or abnormal conduction pathway.

  20. Site Selection and Resource Allocation of Oil Spill Emergency Base for Offshore Oil Facilities

    NASA Astrophysics Data System (ADS)

    Li, Yunbin; Liu, Jingxian; Wei, Lei; Wu, Weihuang

    2018-02-01

    Based on an analysis of historical data on oil spill accidents in the Bohai Sea, this paper discretizes the oil spill sources into a limited number of spill points. According to the probability of oil spill risk, the demand for salvage forces at each oil spill point is evaluated. For the specific locations of candidate rescue bases around the Bohai Sea, a cost-benefit analysis is conducted to determine the total disaster cost for each rescue base. Based on the relationship between the oil spill points and the rescue sites, a multi-objective optimization location model for oil spill rescue bases in the Bohai Sea region is established. A genetic algorithm is used to solve the optimization problem and to determine the optimal configuration of emergency rescue bases and the allocation ratios of emergency resources.
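
    The record does not give the genetic algorithm's encoding, so the sketch below shows one plausible single-objective simplification: choose k base sites from a candidate set to minimize the risk-weighted distance from every spill point to its nearest open base, evolved with union crossover and a repair step. All data, the value of k, and the operators are hypothetical; the paper's model is multi-objective and considerably richer.

      import numpy as np

      rng = np.random.default_rng(2)

      # Hypothetical data: 8 candidate base sites, 30 spill points with risk weights
      sites = rng.uniform(0, 100, (8, 2))
      spills = rng.uniform(0, 100, (30, 2))
      risk = rng.uniform(0.1, 1.0, 30)
      k = 3                                            # number of bases to open

      def cost(mask):
          # Risk-weighted distance from each spill point to its nearest open base
          d = np.linalg.norm(spills[:, None, :] - sites[mask][None, :, :], axis=2)
          return float((risk * d.min(axis=1)).sum())

      def random_mask():
          m = np.zeros(len(sites), dtype=bool)
          m[rng.choice(len(sites), k, replace=False)] = True
          return m

      pop = [random_mask() for _ in range(30)]
      for _ in range(100):
          elite = sorted(pop, key=cost)[:10]           # truncation selection
          children = []
          for _ in range(20):
              a, b = rng.choice(10, 2, replace=False)
              child = elite[a] | elite[b]              # union crossover
              while child.sum() > k:                   # repair to exactly k bases
                  child[rng.choice(np.flatnonzero(child))] = False
              if rng.random() < 0.3:                   # swap mutation
                  on, off = np.flatnonzero(child), np.flatnonzero(~child)
                  child[rng.choice(on)] = False
                  child[rng.choice(off)] = True
              children.append(child)
          pop = elite + children
      best = min(pop, key=cost)
      print("open bases:", np.flatnonzero(best), "cost:", round(cost(best), 1))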

  1. From Morphology to Neural Information: The Electric Sense of the Skate

    PubMed Central

    Camperi, Marcelo; Tricas, Timothy C; Brown, Brandon R

    2007-01-01

    Morphology typically enhances the fidelity of sensory systems. Sharks, skates, and rays have a well-developed electrosense that presents strikingly unique morphologies. Here, we model the dynamics of the peripheral electrosensory system of the skate, a dorsally flattened batoid, moving near an electric dipole source (e.g., a prey organism). We compute the coincident electric signals that develop across an array of the skate's electrosensors, using electrodynamics married to precise morphological measurements of sensor location, infrastructure, and vector projection. Our results demonstrate that skate morphology enhances electrosensory information. Not only could the skate locate prey using a simple population vector algorithm, but its morphology also specifically leads to quick shifts in firing rates that are well-suited to the demonstrated bandwidth of the electrosensory system. Finally, we propose electrophysiology trials to test the modeling scheme. PMID:17571918

  2. TDRS orbit determination by radio interferometry

    NASA Technical Reports Server (NTRS)

    Pavloff, Michael S.

    1994-01-01

    In support of a NASA study on the application of radio interferometry to satellite orbit determination, MITRE developed a simulation tool for assessing interferometry tracking accuracy. The Orbit Determination Accuracy Estimator (ODAE) models the general batch maximum likelihood orbit determination algorithms of the Goddard Trajectory Determination System (GTDS) with the group and phase delay measurements from radio interferometry. ODAE models the statistical properties of tracking error sources, including inherent observable imprecision, atmospheric delays, clock offsets, station location uncertainty, and measurement biases, and through Monte Carlo simulation, ODAE calculates the statistical properties of errors in the predicted satellites state vector. This paper presents results from ODAE application to orbit determination of the Tracking and Data Relay Satellite (TDRS) by radio interferometry. Conclusions about optimal ground station locations for interferometric tracking of TDRS are presented, along with a discussion of operational advantages of radio interferometry.

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Utama, Muhammad Reza July, E-mail: muhammad.reza@bmkg.go.id; Indonesian Meteorological, Climatological and Geophysical Agency; Nugraha, Andri Dian

    Precise hypocenter locations were determined using the double-difference method around the subduction zone in the Moluccas area, eastern Indonesia. The initial hypocenter locations came from the MCGA data catalogue of 1,945 earthquake events. The principle of the double-difference algorithm is that if the separation between two earthquake hypocenters is very small compared to the distance from the station to the earthquake sources, the ray paths to both earthquakes can be considered nearly identical. The results show that initial events with a fixed depth (10 km) were relocated and can be interpreted more reliably in terms of seismicity and geological setting. The relocation of the intra-slab earthquakes beneath the Banda Arc is also clearly observed down to a depth of about 400 km. The precisely relocated hypocenters will give invaluable seismicity information for other seismological and tectonic studies, especially seismic hazard analysis in this region.

  4. Spatial-temporal travel pattern mining using massive taxi trajectory data

    NASA Astrophysics Data System (ADS)

    Zheng, Linjiang; Xia, Dong; Zhao, Xin; Tan, Longyou; Li, Hang; Chen, Li; Liu, Weining

    2018-07-01

    Deep understanding of residents' travel patterns would provide helpful insights into the mechanisms of many socioeconomic phenomena. With the rapid development of location-aware computing technologies, researchers have easy access to large quantities of travel data. As an important data source, taxi trajectory data are featured by their high quality, good continuity and wide distribution, making them suitable for travel pattern mining. In this paper, we use taxi trajectory data to study the spatial-temporal characterization of urban residents' travel patterns from two aspects: attractive areas and hot paths. Firstly, a framework of trajectory preprocessing, including data cleaning and extraction of taxi passenger pick-up/drop-off points, is presented to reduce the noise and redundancy in raw trajectory data. Then, a grid-density based clustering algorithm is proposed to discover travel attractive areas in different periods of a day. On this basis, we put forward a spatial-temporal trajectory clustering method to discover hot paths among travel attractive areas. Compared with previous algorithms, which only consider the spatial constraint between trajectories, the temporal constraint is also considered in our method. Through the experiments, we discuss how to determine the optimal parameters of the two clustering algorithms and verify their effectiveness using real data. Furthermore, we analyze the spatial-temporal characterization of Chongqing residents' travel patterns.

  5. Joint source-channel coding for motion-compensated DCT-based SNR scalable video.

    PubMed

    Kondi, Lisimachos P; Ishtiaq, Faisal; Katsaggelos, Aggelos K

    2002-01-01

    In this paper, we develop an approach toward joint source-channel coding for motion-compensated DCT-based scalable video coding and transmission. A framework for the optimal selection of the source and channel coding rates over all scalable layers is presented such that the overall distortion is minimized. The algorithm utilizes universal rate distortion characteristics which are obtained experimentally and show the sensitivity of the source encoder and decoder to channel errors. The proposed algorithm allocates the available bit rate between scalable layers and, within each layer, between source and channel coding. We present the results of this rate allocation algorithm for video transmission over a wireless channel using the H.263 Version 2 signal-to-noise ratio (SNR) scalable codec for source coding and rate-compatible punctured convolutional (RCPC) codes for channel coding. We discuss the performance of the algorithm with respect to the channel conditions, coding methodologies, layer rates, and number of layers.
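
    The allocation described above, splitting a total bit budget between scalable layers and, within each layer, between source and channel coding so that overall distortion is minimized, reduces to a small discrete search once the rate-distortion characteristics are tabulated. The option tables, rates and distortion values below are invented placeholders for the experimentally obtained universal rate-distortion characteristics the paper uses.

      import itertools

      # Hypothetical per-layer options: (source kbps, channel code rate,
      # expected distortion contribution); measured experimentally in practice.
      layer_options = [
          [(32, 1/2, 140.0), (48, 2/3, 110.0), (64, 3/4, 125.0)],  # base layer
          [(16, 1/2, 60.0), (32, 2/3, 48.0), (48, 3/4, 57.0)],     # enhancement
      ]
      total_budget = 120.0  # kbps on the channel, source plus coding overhead

      def transmitted_rate(src_kbps, code_rate):
          # A channel code of rate r inflates the source bit rate by 1/r.
          return src_kbps / code_rate

      best = None
      for combo in itertools.product(*layer_options):
          rate = sum(transmitted_rate(s, c) for s, c, _ in combo)
          dist = sum(d for _, _, d in combo)
          if rate <= total_budget and (best is None or dist < best[0]):
              best = (dist, combo)
      print(best)   # minimum-distortion allocation within the budget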

  6. Automated linking of suspicious findings between automated 3D breast ultrasound volumes

    NASA Astrophysics Data System (ADS)

    Gubern-Mérida, Albert; Tan, Tao; van Zelst, Jan; Mann, Ritse M.; Karssemeijer, Nico

    2016-03-01

    Automated breast ultrasound (ABUS) is a 3D imaging technique which is rapidly emerging as a safe and relatively inexpensive modality for screening women with dense breasts. However, reading ABUS examinations is a very time-consuming task, since radiologists need to manually identify suspicious findings in all the different ABUS volumes available for each patient. Image analysis techniques that automatically link findings across volumes are required to speed up the clinical workflow and make ABUS screening more efficient. In this study, we propose an automated system that, given a location in the ABUS volume being inspected (the source), finds the corresponding location in a target volume. The target volume can be a different view of the same study or the same view from a prior examination. The algorithm was evaluated using 118 linkages between suspicious abnormalities annotated in a dataset of ABUS images of 27 patients participating in a high-risk screening program. The distance between the predicted location and the center of the annotated lesion in the target volume was computed for evaluation. The mean ± stdev and median distance errors achieved by the presented algorithm for linkages between volumes of the same study were 7.75 ± 6.71 mm and 5.16 mm, respectively. The performance was 9.54 ± 7.87 mm and 8.00 mm (mean ± stdev and median) for linkages between volumes from current and prior examinations. The proposed approach has the potential to minimize user interaction in finding correspondences among ABUS volumes.

  7. An experimental comparison of various methods of nearfield acoustic holography

    DOE PAGES

    Chelliah, Kanthasamy; Raman, Ganesh; Muehleisen, Ralph T.

    2017-05-19

    An experimental comparison of four different methods of nearfield acoustic holography (NAH) is presented in this study for planar acoustic sources. The four NAH methods considered are based on: (1) the spatial Fourier transform, (2) the equivalent sources model, (3) boundary element methods and (4) statistically optimized NAH. Two-dimensional measurements were obtained at different distances in front of a tonal sound source, and the NAH methods were used to reconstruct the sound field at the source surface. The reconstructed particle velocity and acoustic pressure fields presented in this study showed that the equivalent sources model based algorithm along with Tikhonov regularization provided the best localization of the sources. Reconstruction errors were found to be smaller for the equivalent sources model based algorithm and the statistically optimized NAH algorithm. The effect of hologram distance on the performance of the various algorithms is discussed in detail. The study also compares the computational time required by each algorithm. Four different regularization parameter choice methods were compared. The L-curve method provided more accurate reconstructions than generalized cross validation and the Morozov discrepancy principle. Finally, the performance of fixed-parameter regularization was comparable to that of the L-curve method.

  8. An experimental comparison of various methods of nearfield acoustic holography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chelliah, Kanthasamy; Raman, Ganesh; Muehleisen, Ralph T.

    An experimental comparison of four different methods of nearfield acoustic holography (NAH) is presented in this study for planar acoustic sources. The four NAH methods considered are based on: (1) the spatial Fourier transform, (2) the equivalent sources model, (3) boundary element methods and (4) statistically optimized NAH. Two-dimensional measurements were obtained at different distances in front of a tonal sound source, and the NAH methods were used to reconstruct the sound field at the source surface. The reconstructed particle velocity and acoustic pressure fields presented in this study showed that the equivalent sources model based algorithm along with Tikhonov regularization provided the best localization of the sources. Reconstruction errors were found to be smaller for the equivalent sources model based algorithm and the statistically optimized NAH algorithm. The effect of hologram distance on the performance of the various algorithms is discussed in detail. The study also compares the computational time required by each algorithm. Four different regularization parameter choice methods were compared. The L-curve method provided more accurate reconstructions than generalized cross validation and the Morozov discrepancy principle. Finally, the performance of fixed-parameter regularization was comparable to that of the L-curve method.

  9. Algorithm and program for information processing with the filin apparatus

    NASA Technical Reports Server (NTRS)

    Gurin, L. S.; Morkrov, V. S.; Moskalenko, Y. I.; Tsoy, K. A.

    1979-01-01

    The reduction of spectral radiation data from space sources is described. The algorithm and program for identifying segments of information obtained from the Filin telescope-spectrometer on Salyut-4 are presented. The information segments represent suspected X-ray sources. The proposed algorithm is an algorithm of the lowest level. Following evaluation, information free of uninformative segments is subjected to further processing with algorithms of a higher level. The language used is FORTRAN 4.

  10. Location-assured, multifactor authentication on smartphones via LTE communication

    NASA Astrophysics Data System (ADS)

    Kuseler, Torben; Lami, Ihsan A.; Al-Assam, Hisham

    2013-05-01

    With the added security provided by LTE, geographical location has become an important factor for authentication to enhance the security of remote client authentication during mCommerce applications using Smartphones. A tight combination of geographical location with classic authentication factors like PINs/biometrics in a real-time, remote verification scheme over the LTE layer connection assures the authenticator of the client itself (via PIN/biometric) as well as of the client's current location, thus defining the important aspects of "who", "when", and "where" of the authentication attempt without eavesdropping or man-in-the-middle attacks. To securely integrate location as an authentication factor into the remote authentication scheme, the client's location must be verified independently, i.e., the authenticator should not rely solely on the location determined on and reported by the client's Smartphone. The latest wireless data communication technology for mobile phones (4G LTE, Long-Term Evolution), recently being rolled out in various networks, can be employed to meet this requirement of independent location verification. LTE's Control Plane LBS provisions, when integrated with user-based authentication and an independent source of localisation factors, ensure secure, efficient, continuous location tracking of the Smartphone. This can be performed during normal operation of the LTE-based communication between client and network operator, enabling the authenticator to verify the client's claimed location more securely and accurately. Trials and experiments show that such an algorithm implementation is viable for today's Smartphone-based banking via LTE communication.

  11. Real-Time Joint Streaming Data Processing from Social and Physical Sensors

    NASA Astrophysics Data System (ADS)

    Kropivnitskaya, Y. Y.; Qin, J.; Tiampo, K. F.; Bauer, M.

    2014-12-01

    The technological breakthroughs in computing that have taken place over the last few decades make it possible to achieve emergency management objectives that focus on saving human lives and decreasing economic losses. In particular, the integration of a wide variety of information sources, including observations from spatially referenced physical sensors and new social media sources, enables better real-time seismic hazard analysis through distributed computing networks. The main goal of this work is to utilize innovative computational algorithms for better real-time seismic risk analysis by integrating different data sources and processing tools into streaming and cloud computing applications. The Geological Survey of Canada operates the Canadian National Seismograph Network (CNSN) with over 100 high-gain instruments and 60 low-gain or strong motion seismographs. Processing the continuous data streams from each station of the CNSN provides the opportunity to detect possible earthquakes in near real time. The information from physical sources is combined to calculate a location and magnitude for an earthquake. The automatically calculated results are not always sufficiently precise and prompt, which can significantly increase the response time to a felt or damaging earthquake. Social sensors, here represented by Twitter users, can provide information earlier to the general public and more rapidly to emergency planning and disaster relief agencies. We introduce joint streaming data processing from social and physical sensors in real time, based on the idea that social media observations serve as proxies for physical sensors. By using streams of data in the form of Twitter messages, each of which has an associated time and location, we can extract information related to a target event and perform enhanced analysis by combining it with physical sensor data. Results of this work suggest that the use of social media data, in conjunction with innovative computing algorithms and sensor data, can provide a new paradigm for real-time earthquake detection, facilitating rapid and inexpensive natural risk reduction.

  12. The Langley Parameterized Shortwave Algorithm (LPSA) for Surface Radiation Budget Studies. 1.0

    NASA Technical Reports Server (NTRS)

    Gupta, Shashi K.; Kratz, David P.; Stackhouse, Paul W., Jr.; Wilber, Anne C.

    2001-01-01

    An efficient algorithm was developed during the late 1980s and early 1990s by W. F. Staylor at NASA/LaRC for the purpose of deriving shortwave surface radiation budget parameters on a global scale. While the algorithm produced results in good agreement with observations, the lack of proper documentation resulted in weak acceptance by the science community. The primary purpose of this report is to develop detailed documentation of the algorithm. In the process, the algorithm was modified whenever discrepancies were found between the algorithm and its referenced literature sources. In some instances, assumptions made in the algorithm could not be justified and were replaced with those that were justifiable. The algorithm uses satellite and operational meteorological data for inputs. Most of the original data sources have been replaced by more recent, higher quality data sources, and fluxes are now computed on a higher spatial resolution. Many more changes to the basic radiation scheme and meteorological inputs have been proposed to improve the algorithm and make the product more useful for new research projects. Because of the many changes already in place and more planned for the future, the algorithm has been renamed the Langley Parameterized Shortwave Algorithm (LPSA).

  13. Using Bluetooth proximity sensing to determine where office workers spend time at work.

    PubMed

    Clark, Bronwyn K; Winkler, Elisabeth A; Brakenridge, Charlotte L; Trost, Stewart G; Healy, Genevieve N

    2018-01-01

    Most wearable devices that measure movement in workplaces cannot determine the context in which people spend time. This study examined the accuracy of Bluetooth sensing (10-second intervals) via the ActiGraph GT9X Link monitor to determine location in an office setting, using two simple, bespoke algorithms. For one work day (mean±SD 6.2±1.1 hours), 30 office workers (30% men, aged 38±11 years) simultaneously wore chest-mounted cameras (video recording) and Bluetooth-enabled monitors (initialised as receivers) on the wrist and thigh. Additional monitors (initialised as beacons) were placed in the entry, kitchen, photocopy room, corridors, and the wearer's office. Firstly, participant presence/absence at each location was predicted from the presence/absence of signals at that location (ignoring all other signals). Secondly, using the information gathered at multiple locations simultaneously, a simple heuristic model was used to predict at which location the participant was present. The Bluetooth-determined location for each algorithm was tested against the camera in terms of F-scores. When considering locations individually, the accuracy obtained was excellent in the office (F-score = 0.98 and 0.97 for thigh and wrist positions) but poor in other locations (F-score = 0.04 to 0.36), stemming primarily from a high false positive rate. The multi-location algorithm exhibited high accuracy for the office location (F-score = 0.97 for both wear positions). It also improved the F-scores obtained in the remaining locations, but not always to levels indicating good accuracy (e.g., F-score for photocopy room ≈0.1 in both wear positions). The Bluetooth signalling function shows promise for determining where workers spend most of their time (i.e., their office). Placing beacons in multiple locations and using a rule-based decision model improved classification accuracy; however, for workplace locations visited infrequently or with considerable movement, accuracy was below desirable levels. Further development of algorithms is warranted.
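
    The paper's exact rule-based decision model is not reproduced in this record; the sketch below assumes a simple majority vote over the beacon IDs received during a time window, which captures the flavour of predicting the single location at which the wearer is present. The names and window layout are illustrative.

      from collections import Counter

      def locate(window_scans):
          # Assign the wearer to the beacon location seen most often in a
          # window. window_scans is a list of sets, one per 10-second scan,
          # each holding the beacon IDs received in that scan. Returns the
          # winning location, or None if no beacon was heard.
          counts = Counter(b for scan in window_scans for b in scan)
          return counts.most_common(1)[0][0] if counts else None

      # Hypothetical one-minute window (six 10-second scans)
      scans = [{"office"}, {"office", "corridor"}, {"office"},
               set(), {"office"}, {"corridor"}]
      print(locate(scans))   # -> "office"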

  14. Assessing performance of Bayesian state-space models fit to Argos satellite telemetry locations processed with Kalman filtering.

    PubMed

    Silva, Mónica A; Jonsen, Ian; Russell, Deborah J F; Prieto, Rui; Thompson, Dave; Baumgartner, Mark F

    2014-01-01

    Argos recently implemented a new algorithm to calculate locations of satellite-tracked animals that uses a Kalman filter (KF). The KF algorithm is reported to increase the number and accuracy of estimated positions over the traditional Least Squares (LS) algorithm, with potential advantages to the application of state-space methods to model animal movement data. We tested the performance of two Bayesian state-space models (SSMs) fitted to satellite tracking data processed with KF algorithm. Tracks from 7 harbour seals (Phoca vitulina) tagged with ARGOS satellite transmitters equipped with Fastloc GPS loggers were used to calculate the error of locations estimated from SSMs fitted to KF and LS data, by comparing those to "true" GPS locations. Data on 6 fin whales (Balaenoptera physalus) were used to investigate consistency in movement parameters, location and behavioural states estimated by switching state-space models (SSSM) fitted to data derived from KF and LS methods. The model fit to KF locations improved the accuracy of seal trips by 27% over the LS model. 82% of locations predicted from the KF model and 73% of locations from the LS model were <5 km from the corresponding interpolated GPS position. Uncertainty in KF model estimates (5.6 ± 5.6 km) was nearly half that of LS estimates (11.6 ± 8.4 km). Accuracy of KF and LS modelled locations was sensitive to precision but not to observation frequency or temporal resolution of raw Argos data. On average, 88% of whale locations estimated by KF models fell within the 95% probability ellipse of paired locations from LS models. Precision of KF locations for whales was generally higher. Whales' behavioural mode inferred by KF models matched the classification from LS models in 94% of the cases. State-space models fit to KF data can improve spatial accuracy of location estimates over LS models and produce equally reliable behavioural estimates.

  15. Assessing Performance of Bayesian State-Space Models Fit to Argos Satellite Telemetry Locations Processed with Kalman Filtering

    PubMed Central

    Silva, Mónica A.; Jonsen, Ian; Russell, Deborah J. F.; Prieto, Rui; Thompson, Dave; Baumgartner, Mark F.

    2014-01-01

    Argos recently implemented a new algorithm to calculate locations of satellite-tracked animals that uses a Kalman filter (KF). The KF algorithm is reported to increase the number and accuracy of estimated positions over the traditional Least Squares (LS) algorithm, with potential advantages to the application of state-space methods to model animal movement data. We tested the performance of two Bayesian state-space models (SSMs) fitted to satellite tracking data processed with KF algorithm. Tracks from 7 harbour seals (Phoca vitulina) tagged with ARGOS satellite transmitters equipped with Fastloc GPS loggers were used to calculate the error of locations estimated from SSMs fitted to KF and LS data, by comparing those to “true” GPS locations. Data on 6 fin whales (Balaenoptera physalus) were used to investigate consistency in movement parameters, location and behavioural states estimated by switching state-space models (SSSM) fitted to data derived from KF and LS methods. The model fit to KF locations improved the accuracy of seal trips by 27% over the LS model. 82% of locations predicted from the KF model and 73% of locations from the LS model were <5 km from the corresponding interpolated GPS position. Uncertainty in KF model estimates (5.6±5.6 km) was nearly half that of LS estimates (11.6±8.4 km). Accuracy of KF and LS modelled locations was sensitive to precision but not to observation frequency or temporal resolution of raw Argos data. On average, 88% of whale locations estimated by KF models fell within the 95% probability ellipse of paired locations from LS models. Precision of KF locations for whales was generally higher. Whales’ behavioural mode inferred by KF models matched the classification from LS models in 94% of the cases. State-space models fit to KF data can improve spatial accuracy of location estimates over LS models and produce equally reliable behavioural estimates. PMID:24651252

  16. Reconstruction of reflectance data using an interpolation technique.

    PubMed

    Abed, Farhad Moghareh; Amirshahi, Seyed Hossein; Abed, Mohammad Reza Moghareh

    2009-03-01

    A linear interpolation method is applied for reconstruction of reflectance spectra of Munsell as well as ColorChecker SG color chips from the corresponding colorimetric values under a given set of viewing conditions. Hence, different types of lookup tables (LUTs) have been created to connect the colorimetric and spectrophotometric data as the source and destination spaces in this approach. To optimize the algorithm, different color spaces and light sources have been used to build different types of LUTs. The effects of the applied color datasets as well as the employed color spaces are investigated. Results of recovery are evaluated by the mean and the maximum color difference values under other sets of standard light sources. The mean and the maximum values of root mean square (RMS) error between the reconstructed and the actual spectra are also calculated. Since the speed of reflectance reconstruction is a key point in the LUT algorithm, the processing time spent for interpolation of spectral data has also been measured for each model. Finally, the performance of the suggested interpolation technique is compared with that of the common principal component analysis method. According to the results, using the CIEXYZ tristimulus values as a source space shows priority over the CIELAB color space. Moreover, the colorimetric position of a desired sample is a key factor in the success of the approach. In fact, because of the nature of the interpolation technique, the colorimetric position of the desired samples should be located inside the color gamut of available samples in the dataset. The resultant spectra reconstructed by this technique show considerable improvement in terms of RMS error between the actual and the reconstructed reflectance spectra, as well as CIELAB color differences under the other light source, in comparison with those obtained from the standard PCA technique.
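
    As a sketch of the lookup-table idea, the snippet below builds a barycentric-linear interpolant from tristimulus values to reflectance spectra with SciPy; it returns NaN for queries outside the convex hull of the training colours, mirroring the abstract's point that the target colour must lie inside the gamut of the available samples. The training data here are random placeholders, not Munsell or ColorChecker SG measurements.

      import numpy as np
      from scipy.interpolate import LinearNDInterpolator

      # Hypothetical dataset: XYZ tristimulus values of training chips and their
      # measured reflectance spectra (31 bands, 400-700 nm at 10 nm intervals)
      rng = np.random.default_rng(3)
      xyz_train = rng.uniform(0, 100, (500, 3))
      spectra_train = rng.uniform(0, 1, (500, 31))

      # The LUT: linear interpolation inside the convex hull of the training
      # colours; NaN outside the gamut of the dataset.
      lut = LinearNDInterpolator(xyz_train, spectra_train)

      xyz_query = np.array([[41.2, 35.8, 20.1]])
      recovered = lut(xyz_query)            # (1, 31) reflectance estimate
      print(recovered.shape, np.isnan(recovered).any())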

  17. Fault gouge evolution during rupture and healing: Continual active-seismic observations across laboratory-scale fault zones

    NASA Astrophysics Data System (ADS)

    Krysta, M.; Kusmierczyk-Michulec, J.; Nikkinen, M.; Carter, J. A.

    2011-12-01

    In order to support its mission of monitoring compliance with the treaty banning nuclear explosions, the Comprehensive Nuclear-Test-Ban Treaty Organization (CTBTO) operates four global networks of, respectively, seismic, infrasound and hydroacoustic sensors, and air samplers accompanied by radionuclide detectors. The role of the International Data Centre (IDC) of the CTBTO is to associate the signals detected in the monitoring networks with the physical phenomena that emitted these signals, by forming events. One aspect of associating detections with emitters is the problem of inferring the sources of radionuclides from the detections made at CTBTO radionuclide network stations. This task is particularly challenging because the average transport distance between a release point and the detectors is large. Complex processes of turbulent diffusion are responsible for efficient mixing and consequently for decreasing the information content of detections with increasing distance from the source. The problem is generally addressed in a two-step process. In the first step, an atmospheric transport model establishes a link between the detections and the regions of possible source location. In the second step this link is inverted to infer source information from the detections. In this presentation, we will discuss enhancements of the presently used regression-based inversion algorithm for reconstructing a source of radionuclides. To this aim, modern inversion algorithms accounting for prior information and appropriately regularizing an under-determined reconstruction problem will be briefly introduced. Emphasis will be on the CTBTO context and the choice of inversion methods. An illustration of the first tests will be provided using a framework of twin experiments, i.e., fictitious detections in the CTBTO radionuclide network generated with an atmospheric transport model.

  18. A low-complexity geometric bilateration method for localization in Wireless Sensor Networks and its comparison with Least-Squares methods.

    PubMed

    Cota-Ruiz, Juan; Rosiles, Jose-Gerardo; Sifuentes, Ernesto; Rivas-Perea, Pablo

    2012-01-01

    This research presents a distributed and formula-based bilateration algorithm that can be used to provide an initial set of node locations. In this scheme, each node uses distance estimates to anchors to solve a set of circle-circle intersection (CCI) problems through a purely geometric formulation. The resulting CCI solutions are processed to pick those that cluster together, and their average is taken to produce an initial node location. The algorithm is compared in terms of accuracy and computational complexity with a Least-Squares localization algorithm based on the Levenberg-Marquardt methodology. Results on accuracy versus computational performance show that the bilateration algorithm is competitive with well-known optimized localization algorithms.
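
    The geometric core of such a scheme is the closed-form intersection of two ranging circles; clustering and averaging the resulting points then yields an initial position estimate. Below is a standard two-circle intersection formula as a sketch; the paper's exact formulation may differ in details such as handling noisy, non-intersecting circles.

      import math

      def circle_circle_intersection(p0, r0, p1, r1):
          # Intersection points of two circles centred at anchor positions
          # p0, p1 with range estimates r0, r1. Returns 0, 1 or 2 (x, y) points.
          (x0, y0), (x1, y1) = p0, p1
          d = math.hypot(x1 - x0, y1 - y0)
          if d == 0 or d > r0 + r1 or d < abs(r0 - r1):
              return []                                # no intersection
          a = (r0**2 - r1**2 + d**2) / (2 * d)         # distance to chord midpoint
          h = math.sqrt(max(r0**2 - a**2, 0.0))
          xm = x0 + a * (x1 - x0) / d
          ym = y0 + a * (y1 - y0) / d
          if h == 0:
              return [(xm, ym)]                        # tangent circles
          dx, dy = h * (y1 - y0) / d, -h * (x1 - x0) / d
          return [(xm + dx, ym + dy), (xm - dx, ym - dy)]

      # Two anchors 10 m apart, noisy range estimates to an unknown node
      print(circle_circle_intersection((0, 0), 7.1, (10, 0), 7.0))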

  19. Comparison and optimization of machine learning methods for automated classification of circulating tumor cells.

    PubMed

    Lannin, Timothy B; Thege, Fredrik I; Kirby, Brian J

    2016-10-01

    Advances in rare cell capture technology have made possible the interrogation of circulating tumor cells (CTCs) captured from whole patient blood. However, locating captured cells in the device by manual counting bottlenecks data processing by being tedious (hours per sample) and compromises the results by being inconsistent and prone to user bias. Some recent work has been done to automate the cell location and classification process to address these problems, employing image processing and machine learning (ML) algorithms to locate and classify cells in fluorescent microscope images. However, the type of machine learning method used is a part of the design space that has not been thoroughly explored. Thus, we have trained four ML algorithms on three different datasets. The trained ML algorithms locate and classify thousands of possible cells in a few minutes rather than a few hours, representing an order of magnitude increase in processing speed. Furthermore, some algorithms have a significantly (P < 0.05) higher area under the receiver operating characteristic curve than do other algorithms. Additionally, significant (P < 0.05) losses to performance occur when training on cell lines and testing on CTCs (and vice versa), indicating the need to train on a system that is representative of future unlabeled data. Optimal algorithm selection depends on the peculiarities of the individual dataset, indicating the need of a careful comparison and optimization of algorithms for individual image classification tasks. © 2016 International Society for Advancement of Cytometry.

  20. Structures vibration control via Tuned Mass Dampers using a co-evolution Coral Reefs Optimization algorithm

    NASA Astrophysics Data System (ADS)

    Salcedo-Sanz, S.; Camacho-Gómez, C.; Magdaleno, A.; Pereira, E.; Lorenzana, A.

    2017-04-01

    In this paper we tackle a problem of optimal design and location of Tuned Mass Dampers (TMDs) for structures subjected to earthquake ground motions, using a novel meta-heuristic algorithm. Specifically, the Coral Reefs Optimization (CRO) with Substrate Layer (CRO-SL) is proposed as a competitive co-evolution algorithm with different exploration procedures within a single population of solutions. The proposed approach is able to solve the TMD design and location problem, by exploiting the combination of different types of searching mechanisms. This promotes a powerful evolutionary-like algorithm for optimization problems, which is shown to be very effective in this particular problem of TMDs tuning. The proposed algorithm's performance has been evaluated and compared with several reference algorithms in two building models with two and four floors, respectively.

  1. HI-bearing Ultra Diffuse Galaxies in the ALFALFA Survey

    NASA Astrophysics Data System (ADS)

    Leisman, Lukas; Janowiecki, Steven; Jones, Michael G.; ALFALFA Almost Darks Team

    2018-01-01

    The Arecibo Legacy Fast ALFA (Arecibo L-band Feed Array) extragalactic HI survey, with over 30,000 high-significance extragalactic sources, is well positioned to locate gas-bearing, low surface brightness sources missed by optical detection algorithms. We investigate the nature of a population of HI-bearing sources in ALFALFA with properties similar to "ultra-diffuse" galaxies (UDGs): galaxies with the stellar masses of dwarf galaxies but the radii of L* galaxies. These "HI-bearing ultra-diffuse" sources (HUDS) constitute a small but pertinent fraction of the dwarf-mass galaxies in ALFALFA. They are bluer and have more irregular morphologies than the optically selected UDGs found in clusters, and they appear to be gas-rich for their stellar mass, indicating low star formation efficiency. To illuminate potential explanations for the extreme properties of these sources, we explore their environments and estimate their halo properties. We conclude that environmental mechanisms are unlikely to be the cause of HUDS' properties, as they exist in environments equivalent to those of other ALFALFA sources of similar HI masses. However, we do find some suggestion that these HUDS may reside in high-spin-parameter halos, a potential explanation for their "ultra-diffuse" nature.

  2. Algorithmic localisation of noise sources in the tip region of a low-speed axial flow fan

    NASA Astrophysics Data System (ADS)

    Tóth, Bence; Vad, János

    2017-04-01

    An objective and algorithmised methodology is proposed to analyse beamform data obtained for axial fans. Its application is demonstrated in a case study regarding the tip region of a low-speed cooling fan. First, beamforming is carried out in a co-rotating frame of reference. Then, a distribution of source strength is extracted along the circumference of the rotor at the blade tip radius in each analysed third-octave band. The circumferential distributions are expanded into Fourier series, which allows for filtering out the effects of perturbations, on the basis of an objective criterion. The remaining Fourier components are then considered as base sources to determine the blade-passage-periodic flow mechanisms responsible for the broadband noise. Based on their frequency and angular location, the base sources are grouped together. This is done using the fuzzy c-means clustering method to allow the overlap of the source mechanisms. The number of clusters is determined in a validity analysis. Finally, the obtained clusters are assigned to source mechanisms based on the literature. Thus, turbulent boundary layer - trailing edge interaction noise, tip leakage flow noise, and double leakage flow noise are identified.
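
    Two steps of this pipeline lend themselves to a compact illustration: expanding the circumferential source-strength distribution into a Fourier series and keeping only blade-passage-periodic components. The sketch below filters a sampled distribution by retaining the mean and harmonics of the blade count; this filtering criterion, the blade count, and the data are illustrative stand-ins for the paper's objective criterion.

      import numpy as np

      def filter_circumferential(strength, blade_count, n_harmonics=3):
          # Keep only blade-passage-periodic Fourier components of a
          # circumferential source-strength distribution sampled at equally
          # spaced rotor angles; the mean and multiples of the blade count
          # survive, while non-periodic perturbations are discarded.
          coeffs = np.fft.rfft(strength)
          keep = np.zeros_like(coeffs)
          keep[0] = coeffs[0]
          for k in range(1, n_harmonics + 1):
              idx = k * blade_count
              if idx < len(coeffs):
                  keep[idx] = coeffs[idx]
          return np.fft.irfft(keep, n=len(strength))

      # Hypothetical 7-bladed rotor: periodic signal plus broadband noise
      theta = np.linspace(0, 2 * np.pi, 360, endpoint=False)
      rng = np.random.default_rng(4)
      raw = 1 + 0.5 * np.cos(7 * theta) + 0.2 * rng.normal(size=360)
      smooth = filter_circumferential(raw, blade_count=7)
      print(round(float(smooth.mean()), 2))   # mean level is retained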

  3. Fiber Bragg grating sensor for fault detection in high voltage overhead transmission lines

    NASA Astrophysics Data System (ADS)

    Moghadas, Amin

    2011-12-01

    A fiber optic based sensor capable of fault detection in both radial and network overhead transmission power line systems is investigated. The Bragg wavelength shift is used to measure the fault current and detect faults in power systems. Magnetic fields generated by currents in the overhead transmission lines cause a strain in magnetostrictive material, which is then detected by fiber Bragg grating (FBG) sensors. The fiber Bragg interrogator senses the reflected FBG signals, the Bragg wavelength shift is calculated, and the signals are processed. A broadband light source in the control room scans the shift in the reflected signals. Any surge in the magnetic field relates to an increased fault current at a certain location. Fault location can also be precisely defined with an artificial neural network (ANN) algorithm, which can be easily coordinated with other protective devices. It is shown that faults in the overhead transmission line cause a detectable wavelength shift in the reflected signal of the FBG sensors and can be used to detect and classify different kinds of faults. The proposed method has been extensively tested by simulation, and the results confirm that the proposed scheme is able to detect different kinds of faults in both radial and network systems.

  4. qPMS9: An Efficient Algorithm for Quorum Planted Motif Search

    NASA Astrophysics Data System (ADS)

    Nicolae, Marius; Rajasekaran, Sanguthevar

    2015-01-01

    Discovering patterns in biological sequences is a crucial problem. For example, the identification of patterns in DNA sequences has resulted in the determination of open reading frames, identification of gene promoter elements, intron/exon splicing sites, and shRNAs, location of RNA degradation signals, identification of alternative splicing sites, etc. In protein sequences, patterns have led to domain identification, location of protease cleavage sites, identification of signal peptides, protein interactions, determination of protein degradation elements, identification of protein trafficking elements, discovery of short functional motifs, etc. In this paper we focus on the identification of an important class of patterns, namely, motifs. We study the (l, d) motif search problem, or Planted Motif Search (PMS). PMS receives as input n strings and two integers l and d. It returns all sequences M of length l that occur in each input string, where each occurrence differs from M in at most d positions. Another formulation is quorum PMS (qPMS), where the motif appears in at least q% of the strings. We introduce qPMS9, a parallel exact qPMS algorithm that offers significant runtime improvements on DNA and protein datasets. qPMS9 solves the challenging DNA (l, d)-instances (28, 12) and (30, 13). The source code is available at https://code.google.com/p/qpms9/.
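
    A brute-force reference implementation makes the (l, d) problem statement concrete: enumerate every candidate l-mer over the DNA alphabet and keep those with a d-neighbour occurrence in every input string (for qPMS, in at least q% of them). This is only a didactic sketch; it is exponential in l, whereas qPMS9 solves instances such as (28, 12) that are far beyond such enumeration.

      from itertools import product

      def pms(strings, l, d):
          # Brute-force (l, d) Planted Motif Search over the DNA alphabet.
          # Returns every l-mer M such that each input string contains a
          # substring within Hamming distance d of M.
          def within(s, motif):
              return any(sum(a != b for a, b in zip(s[i:i + l], motif)) <= d
                         for i in range(len(s) - l + 1))
          motifs = []
          for cand in product("ACGT", repeat=l):
              motif = "".join(cand)
              if all(within(s, motif) for s in strings):
                  motifs.append(motif)
          return motifs

      print(pms(["ACGTAC", "CCGTTT", "AGGTAA"], l=4, d=1))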

  5. Fiber Bragg Grating Sensor for Fault Detection in Radial and Network Transmission Lines

    PubMed Central

    Moghadas, Amin A.; Shadaram, Mehdi

    2010-01-01

    In this paper, a fiber optic based sensor capable of fault detection in both radial and network overhead transmission power line systems is investigated. The Bragg wavelength shift is used to measure the fault current and detect faults in power systems. Magnetic fields generated by currents in the overhead transmission lines cause a strain in magnetostrictive material, which is then detected by a Fiber Bragg Grating (FBG). The Fiber Bragg interrogator senses the reflected FBG signals, the Bragg wavelength shift is calculated, and the signals are processed. A broadband light source in the control room scans the shift in the reflected signal. Any surge in the magnetic field relates to an increased fault current at a certain location. Fault location can also be precisely defined with an artificial neural network (ANN) algorithm, which can be easily coordinated with other protective devices. It is shown that faults in the overhead transmission line cause a detectable wavelength shift in the reflected signal of the FBG and can be used to detect and classify different kinds of faults. The proposed method has been extensively tested by simulation, and the results confirm that the proposed scheme is able to detect different kinds of faults in both radial and network systems. PMID:22163416

  6. Automated detection and cataloging of global explosive volcanism using the International Monitoring System infrasound network

    NASA Astrophysics Data System (ADS)

    Matoza, Robin S.; Green, David N.; Le Pichon, Alexis; Shearer, Peter M.; Fee, David; Mialle, Pierrick; Ceranna, Lars

    2017-04-01

    We experiment with a new method to search systematically through multiyear data from the International Monitoring System (IMS) infrasound network to identify explosive volcanic eruption signals originating anywhere on Earth. Detecting, quantifying, and cataloging the global occurrence of explosive volcanism helps toward several goals in Earth sciences and has direct applications in volcanic hazard mitigation. We combine infrasound signal association across multiple stations with source location using a brute-force, grid-search, cross-bearings approach. The algorithm corrects for a background prior rate of coherent unwanted infrasound signals (clutter) in a global grid, without needing to screen array processing detection lists from individual stations prior to association. We develop the algorithm using case studies of explosive eruptions: 2008 Kasatochi, Alaska; 2009 Sarychev Peak, Kurile Islands; and 2010 Eyjafjallajökull, Iceland. We apply the method to global IMS infrasound data from 2005-2010 to construct a preliminary acoustic catalog that emphasizes sustained explosive volcanic activity (long-duration signals or sequences of impulsive transients lasting hours to days). This work represents a step toward the goal of integrating IMS infrasound data products into global volcanic eruption early warning and notification systems. Additionally, a better understanding of volcanic signal detection and location with the IMS helps improve operational event detection, discrimination, and association capabilities.
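
    The brute-force, grid-search, cross-bearings idea can be sketched in a few lines: for each node of a geographic grid, compare the azimuth from every detecting station to the node against that station's observed back-azimuth and keep the node with the smallest misfit. The sketch below uses a flat-earth azimuth approximation and invented detections; the published method works on a global grid with great-circle geometry and a clutter-rate correction.

      import numpy as np

      def cross_bearings(station_lonlat, bearings_deg, grid_lon, grid_lat):
          # For every grid node, accumulate the squared (wrapped) difference
          # between the azimuth to the node and each station's observed
          # back-azimuth; return the node with the smallest total misfit.
          lon, lat = np.meshgrid(grid_lon, grid_lat)
          misfit = np.zeros_like(lon)
          for (slon, slat), obs in zip(station_lonlat, bearings_deg):
              az = np.degrees(np.arctan2(lon - slon, lat - slat)) % 360
              dif = np.abs(az - obs)
              dif = np.minimum(dif, 360 - dif)       # wrap azimuth residual
              misfit += dif ** 2
          i, j = np.unravel_index(np.argmin(misfit), misfit.shape)
          return grid_lon[j], grid_lat[i]

      # Hypothetical stations and back-azimuth observations (degrees)
      stations = [(-150.0, 61.0), (-140.0, 55.0), (-160.0, 52.0)]
      obs = [200.0, 260.0, 90.0]
      glon, glat = np.linspace(-170, -130, 81), np.linspace(45, 65, 41)
      print(cross_bearings(stations, obs, glon, glat))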

  7. An artificial bee colony algorithm for locating the critical slip surface in slope stability analysis

    NASA Astrophysics Data System (ADS)

    Kang, Fei; Li, Junjie; Ma, Zhenyue

    2013-02-01

    Determination of the critical slip surface with the minimum factor of safety of a slope is a difficult constrained global optimization problem. In this article, an artificial bee colony algorithm with a multi-slice adjustment method is proposed for locating the critical slip surfaces of soil slopes, and the Spencer method is employed to calculate the factor of safety. Six benchmark examples are presented to illustrate the reliability and efficiency of the proposed technique, and it is also compared with some well-known or recent algorithms for the problem. The results show that the new algorithm is promising in terms of accuracy and efficiency.
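    As a sketch of the optimizer itself, the following is a compact artificial bee colony loop with the usual employed, onlooker, and scout phases. In the paper the objective would be the Spencer-method factor of safety evaluated on a candidate slip surface under the multi-slice adjustment; here a toy quadratic objective stands in for it.

    ```python
    import numpy as np
    rng = np.random.default_rng(0)

    def abc_minimize(f, bounds, n_food=20, limit=30, iters=200):
        """Compact artificial bee colony. f is a stand-in test objective, not the
        paper's Spencer factor-of-safety evaluation."""
        lo, hi = bounds[:, 0], bounds[:, 1]
        dim = len(lo)
        x = lo + rng.random((n_food, dim)) * (hi - lo)
        fx = np.array([f(v) for v in x])
        trials = np.zeros(n_food, dtype=int)

        def try_neighbor(i):
            k = rng.integers(n_food - 1)
            k += k >= i                                # random partner != i
            j = rng.integers(dim)
            cand = x[i].copy()
            cand[j] += rng.uniform(-1, 1) * (x[i, j] - x[k, j])
            cand = np.clip(cand, lo, hi)
            fc = f(cand)
            if fc < fx[i]:
                x[i], fx[i], trials[i] = cand, fc, 0   # greedy replacement
            else:
                trials[i] += 1

        for _ in range(iters):
            for i in range(n_food):                    # employed bee phase
                try_neighbor(i)
            fit = 1.0 / (1.0 + fx - fx.min())          # onlooker selection weights
            p = fit / fit.sum()
            for i in rng.choice(n_food, n_food, p=p):  # onlooker bee phase
                try_neighbor(i)
            for i in np.where(trials > limit)[0]:      # scouts replace stale sources
                x[i] = lo + rng.random(dim) * (hi - lo)
                fx[i] = f(x[i])
                trials[i] = 0
        best = np.argmin(fx)
        return x[best], fx[best]

    bounds = np.array([[-5.0, 5.0]] * 2)
    print(abc_minimize(lambda v: (v ** 2).sum(), bounds))   # minimum near the origin
    ```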

  8. An ant colony based algorithm for overlapping community detection in complex networks

    NASA Astrophysics Data System (ADS)

    Zhou, Xu; Liu, Yanheng; Zhang, Jindong; Liu, Tuming; Zhang, Di

    2015-06-01

    Community detection is of great importance for understanding the structures and functions of networks. Overlap is a significant feature of many networks, and overlapping community detection has attracted increasing attention. Many algorithms have been presented to detect overlapping communities. In this paper, we present an ant colony based overlapping community detection algorithm comprising three phases: ants' location initialization, ants' movement, and post-processing. An initialization strategy assigns each ant an initial location and initializes the label list stored in each node. During the movement phase, all ants move according to a transition probability matrix, with a redefined heuristic information measure of the similarity between two nodes. Each node maintains its label list through the ants' cooperation until a termination criterion is reached. A post-processing phase then derives the final overlapping community structure from the label lists. We illustrate the capability of our algorithm through experiments on both synthetic and real world networks. The results demonstrate that our algorithm outperforms state-of-the-art algorithms in finding overlapping communities and overlapping nodes on synthetic and real world datasets.
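    The movement phase can be sketched as follows: ants step between adjacent nodes with probability proportional to pheromone times heuristic information, i.e. a row-normalized transition probability matrix over the graph. The Jaccard neighborhood similarity below is an assumed stand-in for the paper's redefined similarity measure.

    ```python
    import numpy as np

    A = np.array([[0, 1, 1, 0],
                  [1, 0, 1, 0],
                  [1, 1, 0, 1],
                  [0, 0, 1, 0]], dtype=float)        # adjacency matrix of a toy graph
    tau = np.ones_like(A)                             # pheromone, initially uniform

    def jaccard(i, j):
        """Neighborhood similarity of nodes i and j (nodes count as own neighbors)."""
        ni = set(np.nonzero(A[i])[0]) | {i}
        nj = set(np.nonzero(A[j])[0]) | {j}
        return len(ni & nj) / len(ni | nj)

    n = len(A)
    eta = np.array([[jaccard(i, j) if A[i, j] else 0.0 for j in range(n)]
                    for i in range(n)])               # heuristic information
    w = tau * eta * A
    P = w / w.sum(axis=1, keepdims=True)              # transition probability matrix
    print(np.round(P, 2))                             # each row sums to 1
    ```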

  9. Addressing Data Analysis Challenges in Gravitational Wave Searches Using the Particle Swarm Optimization Algorithm

    NASA Astrophysics Data System (ADS)

    Weerathunga, Thilina Shihan

    2017-08-01

    Gravitational waves are a fundamental prediction of Einstein's General Theory of Relativity. The first experimental proof of their existence was provided by the Nobel Prize-winning discovery by Taylor and Hulse of orbital decay in a binary pulsar system. The first detection of gravitational waves incident on Earth from an astrophysical source was announced in 2016 by the LIGO Scientific Collaboration, launching the new era of gravitational wave (GW) astronomy. The signal detected was from the merger of two black holes, an example of the class of sources called Compact Binary Coalescences (CBCs). Data analysis strategies used in the search for CBC signals are derivatives of the Maximum-Likelihood (ML) method. The ML method applied to data from a network of geographically distributed GW detectors, called fully coherent network analysis, is currently the best approach for estimating source location and GW polarization waveforms. However, in the case of CBCs, especially for lower mass systems (of order 1 solar mass) such as double neutron star binaries, fully coherent network analysis is computationally expensive. The ML method requires locating the global maximum of the likelihood function over a nine-dimensional parameter space, where the computation of the likelihood at each point requires correlations involving O(10^4) to O(10^6) samples between the data and the corresponding candidate signal waveform template. Approximations, such as semi-coherent coincidence searches, are currently used to circumvent the computational barrier but incur a concomitant loss in sensitivity. We explored the effectiveness of Particle Swarm Optimization (PSO), a well-known algorithm in the field of swarm intelligence, in addressing the fully coherent network analysis problem. As an example, we used a four-detector network consisting of the two LIGO detectors at Hanford and Livingston, Virgo and KAGRA, all having initial LIGO noise power spectral densities, and show that PSO can locate the global maximum with less than 240,000 likelihood evaluations for a component mass range of 1.0 to 10.0 solar masses at a realistic coherent network signal to noise ratio of 9.0. Our results show that PSO can successfully deliver a fully-coherent all-sky search with less than 1/10 the number of likelihood evaluations needed for a grid-based search. Used as a follow-up step, the savings in the number of likelihood evaluations may also reduce latency in obtaining ML estimates of source parameters in semi-coherent searches.
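    A plain global-best PSO loop, the core of the approach, looks like the sketch below. In the thesis the fitness would be the coherent network likelihood over the nine-dimensional CBC parameter space; the toy fitness and swarm settings here are assumptions for illustration.

    ```python
    import numpy as np
    rng = np.random.default_rng(1)

    def pso_maximize(f, bounds, n_particles=40, iters=100, w=0.7, c1=1.5, c2=1.5):
        """Plain global-best PSO; f is an assumed toy fitness, not the coherent
        network likelihood of the thesis."""
        lo, hi = bounds[:, 0], bounds[:, 1]
        x = lo + rng.random((n_particles, len(lo))) * (hi - lo)
        v = np.zeros_like(x)
        pbest, pval = x.copy(), np.array([f(p) for p in x])
        g = pbest[np.argmax(pval)].copy()
        for _ in range(iters):
            r1, r2 = rng.random(x.shape), rng.random(x.shape)
            v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)   # velocity update
            x = np.clip(x + v, lo, hi)
            fx = np.array([f(p) for p in x])
            improved = fx > pval                                    # personal bests
            pbest[improved], pval[improved] = x[improved], fx[improved]
            g = pbest[np.argmax(pval)].copy()                       # global best
        return g, pval.max()

    bounds = np.array([[-5.0, 5.0]] * 2)
    print(pso_maximize(lambda p: -np.sum(p ** 2), bounds))   # peak at the origin
    ```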

  10. Optimization methods for locating lightning flashes using magnetic direction finding networks

    NASA Technical Reports Server (NTRS)

    Goodman, Steven J.

    1989-01-01

    Techniques for producing best point estimates of target position using direction finder bearing information are reviewed. The use of an algorithm that calculates the cloud-to-ground flash location given multiple bearings is illustrated and the position errors are described. This algorithm can be used to analyze direction finder network performance.
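    A minimal version of such a point estimator: each station-bearing pair defines a ray, and the flash position is taken as the least squares intersection of the corresponding lines (the reviewed techniques additionally weight and iterate on bearing errors). The station layout and bearings below are synthetic.

    ```python
    import numpy as np

    stations = np.array([[0.0, 0.0], [200.0, 0.0], [0.0, 200.0]])   # DF sites (km)
    bearings = np.radians([45.0, 315.0, 135.0])                     # clockwise from north

    # Each bearing line satisfies n . x = n . s, with n normal to the ray direction.
    N = np.column_stack([np.cos(bearings), -np.sin(bearings)])
    b = np.sum(N * stations, axis=1)
    flash_xy, *_ = np.linalg.lstsq(N, b, rcond=None)
    print(flash_xy)   # ~ [100, 100] for these mutually consistent bearings
    ```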

  11. An Optimal Algorithm towards Successive Location Privacy in Sensor Networks with Dynamic Programming

    NASA Astrophysics Data System (ADS)

    Zhao, Baokang; Wang, Dan; Shao, Zili; Cao, Jiannong; Chan, Keith C. C.; Su, Jinshu

    In wireless sensor networks, preserving location privacy under successive inference attacks is extremely critical. Although this problem is NP-complete in general, we propose a dynamic programming-based algorithm and prove it is optimal in special cases where correlation exists only among p immediately adjacent observations.

  12. GeoSearcher: Location-Based Ranking of Search Engine Results.

    ERIC Educational Resources Information Center

    Watters, Carolyn; Amoudi, Ghada

    2003-01-01

    Discussion of Web queries with geospatial dimensions focuses on an algorithm that assigns location coordinates dynamically to Web sites based on the URL. Describes a prototype search system that uses the algorithm to re-rank search engine results for queries with a geospatial dimension, thus providing an alternative ranking order for search engine…
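    Only the re-ranking step lends itself to a short sketch; the article's URL-to-coordinate assignment is not reproduced. Assuming each result already carries coordinates, results are reordered by great-circle distance to the query location (the hostnames and coordinates below are hypothetical).

    ```python
    from math import radians, sin, cos, asin, sqrt

    def haversine_km(lat1, lon1, lat2, lon2):
        """Great-circle distance between two points, in kilometres."""
        lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
        a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
        return 2 * 6371.0 * asin(sqrt(a))

    # Hypothetical results with coordinates already assigned from their URLs
    results = [("siteA.ca", 44.65, -63.57), ("siteB.us", 40.71, -74.01)]
    query_loc = (45.42, -75.70)   # e.g. a query issued from Ottawa
    ranked = sorted(results, key=lambda r: haversine_km(*query_loc, r[1], r[2]))
    print([r[0] for r in ranked])   # nearest result first
    ```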

  13. A measurement-based generalized source model for Monte Carlo dose simulations of CT scans

    PubMed Central

    Ming, Xin; Feng, Yuanming; Liu, Ransheng; Yang, Chengwen; Zhou, Li; Zhai, Hezheng; Deng, Jun

    2018-01-01

    The goal of this study is to develop a generalized source model (GSM) for accurate Monte Carlo dose simulations of CT scans based solely on measurement data, without a priori knowledge of scanner specifications. The proposed generalized source model consists of an extended circular source located at the x-ray target level, with its energy spectrum, source distribution and fluence distribution derived from a set of measurement data conveniently available in the clinic. Specifically, the central axis percent depth dose (PDD) curves measured in water and the cone output factors measured in air were used to derive the energy spectrum and the source distribution, respectively, with a Levenberg-Marquardt algorithm. The in-air film measurement of fan-beam dose profiles at fixed gantry was back-projected to generate the fluence distribution of the source model. A benchmarked Monte Carlo user code was used to simulate the dose distributions in water with the developed source model as beam input. The feasibility and accuracy of the proposed source model was tested on GE LightSpeed and Philips Brilliance Big Bore multi-detector CT (MDCT) scanners available in our clinic. In general, the Monte Carlo simulations of the PDDs in water and dose profiles along lateral and longitudinal directions agreed with the measurements within 4%/1 mm for both CT scanners. The absolute dose comparison using two CTDI phantoms (16 cm and 32 cm in diameter) indicated a better than 5% agreement between the Monte Carlo-simulated and the ion chamber-measured doses at a variety of locations for the two scanners. Overall, this study demonstrated that a generalized source model can be constructed based only on a set of measurement data and used for accurate Monte Carlo dose simulations of patients' CT scans, which would facilitate patient-specific CT organ dose estimation and cancer risk management in diagnostic and therapeutic radiology. PMID:28079526
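    The spectrum-derivation step can be sketched as a damped least squares fit: find non-negative weights of a few energy bins so that the weighted sum of per-bin depth-dose curves matches the measured PDD. The exponential basis curves below are an assumed stand-in for Monte Carlo-computed ones, and SciPy's bounded trust-region solver stands in for the plain Levenberg-Marquardt step of the paper.

    ```python
    import numpy as np
    from scipy.optimize import least_squares

    depths = np.linspace(0.5, 20.0, 40)                     # measurement depths (cm)
    mu = np.array([0.25, 0.20, 0.15, 0.10])                 # assumed per-bin attenuation (1/cm)
    basis = np.exp(-np.outer(depths, mu))                   # toy mono-energetic PDD basis

    # Synthetic "measured" PDD from known weights plus 1% noise
    true_w = np.array([0.1, 0.3, 0.4, 0.2])
    noise = 0.01 * np.random.default_rng(2).standard_normal(len(depths))
    measured = basis @ true_w * (1 + noise)

    # Bounded damped least squares (the paper uses Levenberg-Marquardt proper)
    fit = least_squares(lambda w: basis @ w - measured,
                        x0=np.full(4, 0.25), bounds=(0, np.inf))
    print(np.round(fit.x, 3))                               # approximately true_w
    ```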

  14. A measurement-based generalized source model for Monte Carlo dose simulations of CT scans

    NASA Astrophysics Data System (ADS)

    Ming, Xin; Feng, Yuanming; Liu, Ransheng; Yang, Chengwen; Zhou, Li; Zhai, Hezheng; Deng, Jun

    2017-03-01

    The goal of this study is to develop a generalized source model for accurate Monte Carlo dose simulations of CT scans based solely on measurement data, without a priori knowledge of scanner specifications. The proposed generalized source model consists of an extended circular source located at the x-ray target level, with its energy spectrum, source distribution and fluence distribution derived from a set of measurement data conveniently available in the clinic. Specifically, the central axis percent depth dose (PDD) curves measured in water and the cone output factors measured in air were used to derive the energy spectrum and the source distribution, respectively, with a Levenberg-Marquardt algorithm. The in-air film measurement of fan-beam dose profiles at fixed gantry was back-projected to generate the fluence distribution of the source model. A benchmarked Monte Carlo user code was used to simulate the dose distributions in water with the developed source model as beam input. The feasibility and accuracy of the proposed source model was tested on GE LightSpeed and Philips Brilliance Big Bore multi-detector CT (MDCT) scanners available in our clinic. In general, the Monte Carlo simulations of the PDDs in water and dose profiles along lateral and longitudinal directions agreed with the measurements within 4%/1 mm for both CT scanners. The absolute dose comparison using two CTDI phantoms (16 cm and 32 cm in diameter) indicated a better than 5% agreement between the Monte Carlo-simulated and the ion chamber-measured doses at a variety of locations for the two scanners. Overall, this study demonstrated that a generalized source model can be constructed based only on a set of measurement data and used for accurate Monte Carlo dose simulations of patients' CT scans, which would facilitate patient-specific CT organ dose estimation and cancer risk management in diagnostic and therapeutic radiology.

  15. Imbalanced multi-modal multi-label learning for subcellular localization prediction of human proteins with both single and multiple sites.

    PubMed

    He, Jianjun; Gu, Hong; Liu, Wenqi

    2012-01-01

    It is well known that an important step toward understanding the functions of a protein is to determine its subcellular location. Although numerous prediction algorithms have been developed, most of them focus on proteins with only one location. In recent years, researchers have begun to pay attention to subcellular localization prediction for proteins with multiple sites. However, almost all existing approaches fail to take into account the correlations among locations induced by proteins with multiple sites, which may be important information for improving prediction accuracy for such proteins. In this paper, a new algorithm that can effectively exploit the correlations among locations is proposed using a Gaussian process model. The algorithm also realizes an optimal linear combination of various feature extraction techniques and is robust to imbalanced data sets. Experimental results on a human protein data set show that the proposed algorithm is valid and achieves better performance than existing approaches.

  16. Fast localization of optic disc and fovea in retinal images for eye disease screening

    NASA Astrophysics Data System (ADS)

    Yu, H.; Barriga, S.; Agurto, C.; Echegaray, S.; Pattichis, M.; Zamora, G.; Bauman, W.; Soliz, P.

    2011-03-01

    Optic disc (OD) and fovea locations are two important anatomical landmarks in automated analysis of retinal disease in color fundus photographs. This paper presents a new, fast, fully automatic optic disc and fovea localization algorithm developed for diabetic retinopathy (DR) screening. The methodology comprises two steps. First, the OD location is identified using template matching and a directional matched filter. To reduce false positives due to bright areas of pathology, we exploit vessel characteristics inside the optic disc. The location of the fovea is estimated as the point of lowest matched filter response within a search area determined by the optic disc location. Second, optic disc segmentation is performed: based on the detected OD location, a fast hybrid level-set algorithm that combines region information and edge gradient to drive the curve evolution is used to segment the optic disc boundary. Extensive evaluation was performed on 1200 images (Messidor) composed of 540 images of healthy retinas, 431 images with DR but no risk of macular edema (ME), and 229 images with DR and risk of ME. The OD localization methodology achieved a 98.3% success rate, while fovea localization achieved a 95% success rate. The average mean absolute distance (MAD) between the OD segmentation and the "gold standard" is 10.5% of the estimated OD radius. Qualitatively, 97% of the images achieved Excellent to Fair performance for OD segmentation. The segmentation algorithm performs well even on blurred images.
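    A sketch of the first stage only (template matching for a bright, roughly circular region) is given below; the directional matched filter, the vessel-based false-positive rejection, and the level-set segmentation are not reproduced, and the synthetic image and assumed 40-pixel disc radius are illustrative.

    ```python
    import cv2
    import numpy as np

    # Synthetic stand-in for a fundus photograph: a bright blob at (450, 180)
    H, W, r = 400, 600, 40
    yy, xx = np.mgrid[0:H, 0:W]
    img = 40 + 120 * np.exp(-(((xx - 450) ** 2 + (yy - 180) ** 2) / (2 * r ** 2)))
    img = img.astype(np.float32)

    # Bright disc template at the assumed OD radius
    ty, tx = np.mgrid[-r:r + 1, -r:r + 1]
    template = ((tx ** 2 + ty ** 2) <= r ** 2).astype(np.float32)

    resp = cv2.matchTemplate(img, template, cv2.TM_CCOEFF_NORMED)
    _, _, _, (x, y) = cv2.minMaxLoc(resp)          # peak of the correlation map
    print("estimated OD centre:", (x + r, y + r))  # ~ (450, 180), offset undone
    ```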

  17. Experimental demonstration of passive acoustic imaging in the human skull cavity using CT-based aberration corrections.

    PubMed

    Jones, Ryan M; O'Reilly, Meaghan A; Hynynen, Kullervo

    2015-07-01

    This study experimentally verifies a previously described technique for performing passive acoustic imaging through an intact human skull using noninvasive, computed tomography (CT)-based aberration corrections [Jones et al., Phys. Med. Biol. 58, 4981-5005 (2013)]. A sparse hemispherical receiver array (30 cm diameter) consisting of 128 piezoceramic discs (2.5 mm diameter, 612 kHz center frequency) was used to passively listen through ex vivo human skullcaps (n = 4) to acoustic emissions from a narrow-band fixed source (1 mm diameter, 516 kHz center frequency) and from ultrasound-stimulated (5 cycle bursts, 1 Hz pulse repetition frequency, estimated in situ peak negative pressure 0.11-0.33 MPa, 306 kHz driving frequency) Definity™ microbubbles flowing through a thin-walled tube phantom. Initial in vivo feasibility testing of the method was performed. The performance of the method was assessed through comparisons to images generated without skull corrections, with invasive source-based corrections, and with water-path control images. For source locations at least 25 mm from the inner skull surface, the modified reconstruction algorithm successfully restored a single focus within the skull cavity at a location within 1.25 mm of the true position of the narrow-band source. The results obtained from imaging single bubbles are in good agreement with numerical simulations of point source emitters and the authors' previous experimental measurements using source-based skull corrections [O'Reilly et al., IEEE Trans. Biomed. Eng. 61, 1285-1294 (2014)]. In a rat model, microbubble activity was mapped through an intact human skull at pressure levels below and above the threshold for focused ultrasound-induced blood-brain barrier opening. During bursts that led to coherent bubble activity, the location of maximum intensity in images generated with CT-based skull corrections was found to deviate by less than 1 mm, on average, from the position obtained using source-based corrections. Taken together, these results demonstrate the feasibility of using the method to guide bubble-mediated ultrasound therapies in the brain. The technique may also have application in ultrasound-based cerebral angiography.
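    The reconstruction can be sketched as free-field delay-and-sum passive acoustic mapping: each candidate point is scored by the energy of the coherently delayed-and-summed receiver traces. The CT-based skull correction of the paper amounts to per-element phase and amplitude adjustments, which this water-path toy omits; the array geometry and signal parameters are loosely modeled on the paper's setup.

    ```python
    import numpy as np

    c, fs = 1480.0, 5e6                    # sound speed in water (m/s), sample rate (Hz)
    rng = np.random.default_rng(3)
    rx = rng.uniform(-0.15, 0.15, (128, 3))    # 128 receiver positions (m)
    rx[:, 2] = np.abs(rx[:, 2])                # keep elements on one side (hemisphere-like)

    src = np.array([0.01, -0.005, 0.06])       # true emitter position
    t = np.arange(2048) / fs
    burst = np.sin(2 * np.pi * 516e3 * t) * np.exp(-((t - 2e-4) / 3e-5) ** 2)

    # Synthesize received traces with the true source-to-element delays
    delays = np.linalg.norm(rx - src, axis=1) / c
    traces = np.array([np.interp(t - d, t, burst, left=0, right=0) for d in delays])

    def pam_intensity(p):
        """Energy of the coherent sum after back-propagation delays to point p."""
        d = np.linalg.norm(rx - p, axis=1) / c
        aligned = np.array([np.interp(t + di, t, tr, left=0, right=0)
                            for di, tr in zip(d, traces)])
        return np.sum(aligned.sum(axis=0) ** 2)

    # Coherent gain at the true source versus a 10 mm offset point
    print(pam_intensity(src) / pam_intensity(src + np.array([0.0, 0.01, 0.0])))
    ```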

  18. Experimental demonstration of passive acoustic imaging in the human skull cavity using CT-based aberration corrections

    PubMed Central

    Jones, Ryan M.; O’Reilly, Meaghan A.; Hynynen, Kullervo

    2015-01-01

    Purpose: Experimentally verify a previously described technique for performing passive acoustic imaging through an intact human skull using noninvasive, computed tomography (CT)-based aberration corrections [Jones et al., Phys. Med. Biol. 58, 4981–5005 (2013)]. Methods: A sparse hemispherical receiver array (30 cm diameter) consisting of 128 piezoceramic discs (2.5 mm diameter, 612 kHz center frequency) was used to passively listen through ex vivo human skullcaps (n = 4) to acoustic emissions from a narrow-band fixed source (1 mm diameter, 516 kHz center frequency) and from ultrasound-stimulated (5 cycle bursts, 1 Hz pulse repetition frequency, estimated in situ peak negative pressure 0.11–0.33 MPa, 306 kHz driving frequency) Definity™ microbubbles flowing through a thin-walled tube phantom. Initial in vivo feasibility testing of the method was performed. The performance of the method was assessed through comparisons to images generated without skull corrections, with invasive source-based corrections, and with water-path control images. Results: For source locations at least 25 mm from the inner skull surface, the modified reconstruction algorithm successfully restored a single focus within the skull cavity at a location within 1.25 mm of the true position of the narrow-band source. The results obtained from imaging single bubbles are in good agreement with numerical simulations of point source emitters and the authors’ previous experimental measurements using source-based skull corrections [O’Reilly et al., IEEE Trans. Biomed. Eng. 61, 1285–1294 (2014)]. In a rat model, microbubble activity was mapped through an intact human skull at pressure levels below and above the threshold for focused ultrasound-induced blood–brain barrier opening. During bursts that led to coherent bubble activity, the location of maximum intensity in images generated with CT-based skull corrections was found to deviate by less than 1 mm, on average, from the position obtained using source-based corrections. Conclusions: Taken together, these results demonstrate the feasibility of using the method to guide bubble-mediated ultrasound therapies in the brain. The technique may also have application in ultrasound-based cerebral angiography. PMID:26133635

  19. Three-dimensional multi bioluminescent sources reconstruction based on adaptive finite element method

    NASA Astrophysics Data System (ADS)

    Ma, Xibo; Tian, Jie; Zhang, Bo; Zhang, Xing; Xue, Zhenwen; Dong, Di; Han, Dong

    2011-03-01

    Among the many optical molecular imaging modalities, bioluminescence imaging (BLI) has found increasingly wide application in tumor detection and in the evaluation of pharmacodynamics, toxicity, and pharmacokinetics, owing to its noninvasive detection at the molecular and cellular levels, high sensitivity, and low cost compared with other imaging technologies. However, BLI cannot present the accurate location and intensity of inner bioluminescence sources, such as those in bone, liver, or lung. Bioluminescence tomography (BLT) shows its advantage in determining the bioluminescence source distribution inside a small animal or phantom. To overcome the limitations of two-dimensional imaging, we developed three-dimensional tomography to reconstruct the bioluminescence source distribution in transgenic mOC-Luc mouse bone from boundary measurement data. In this paper, to study osteocalcin (OC) accumulation in transgenic mOC-Luc mouse bone, a BLT reconstruction method based on a multilevel adaptive finite element method (FEM) algorithm was used to localize and quantify multiple bioluminescence sources. Optical and anatomical information about the tissues is incorporated as a priori knowledge, which reduces the ill-posedness of BLT. The data were acquired by a dual-modality BLT and micro-CT prototype system developed by our group. Through temperature control and absolute intensity calibration, relatively accurate intensities could be calculated. The reconstructed location of the OC accumulation was consistent with the principles of bone differentiation. This result was also confirmed by an ex vivo experiment in a black 96-well plate using the BLI system and a chemiluminescence apparatus.

  20. Practical target location and accuracy indicator in digital close range photogrammetry using consumer grade cameras

    NASA Astrophysics Data System (ADS)

    Moriya, Gentaro; Chikatsu, Hirofumi

    2011-07-01

    Recently, the pixel counts and functions of consumer grade digital cameras have increased remarkably thanks to modern semiconductor and digital technology, and many low-priced consumer grade digital cameras with more than 10 megapixels are on the market in Japan. In these circumstances, digital photogrammetry using consumer grade cameras is in great demand in various application fields. There is a large body of literature on the calibration of consumer grade digital cameras and on circular target location. Target location with subpixel accuracy has been investigated as a star tracker problem, and many target location algorithms have been proposed. It is widely accepted that least squares ellipse fitting is the most accurate algorithm. However, problems remain for efficient digital close range photogrammetry: reconfirming subpixel target location algorithms for consumer grade digital cameras, clarifying the relationship between the number of edge points along the target boundary and accuracy, and establishing an indicator for estimating the accuracy of normal digital close range photogrammetry using consumer grade cameras. With this motive, this paper empirically tests several subpixel target location algorithms and investigates an accuracy indicator, using real data acquired indoors with 7 consumer grade digital cameras ranging from 7.2 to 14.7 megapixels.
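    As a baseline for the algorithms under test, the simplest subpixel operator is an intensity-weighted centroid over a background-subtracted window, sketched below on a synthetic target; least squares ellipse fitting refines this by fitting the target's edge points. The blob parameters are illustrative.

    ```python
    import numpy as np

    # Synthetic circular target with true subpixel centre (10.3, 9.7)
    yy, xx = np.mgrid[0:21, 0:21]
    img = np.exp(-(((xx - 10.3) ** 2 + (yy - 9.7) ** 2) / 8.0))

    # Background subtraction, then intensity-weighted centroid
    w = img - img[img < np.percentile(img, 50)].mean()
    w[w < 0] = 0
    x_hat = (w * xx).sum() / w.sum()
    y_hat = (w * yy).sum() / w.sum()
    print(f"subpixel centre: ({x_hat:.2f}, {y_hat:.2f})")   # ~ (10.30, 9.70)
    ```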
