Sound source localization method in an environment with flow based on Amiet-IMACS
NASA Astrophysics Data System (ADS)
Wei, Long; Li, Min; Qin, Sheng; Fu, Qiang; Yang, Debin
2017-05-01
A sound source localization method is proposed to localize and analyze sound sources in an environment with airflow. It combines the improved mapping of acoustic correlated sources (IMACS) method with Amiet's method and is called Amiet-IMACS. It can localize uncorrelated and correlated sound sources in airflow. To implement this approach, Amiet's method is used to correct the sound propagation path in 3D, which improves the accuracy of the array manifold matrix and decreases the position error of the localized source. Then, the mapping of acoustic correlated sources (MACS) method, a high-resolution sound source localization algorithm, is improved by self-adjusting the constraint parameter at each iteration to increase convergence speed. A sound source localization experiment using a pair of loudspeakers in an anechoic wind tunnel under different flow speeds was conducted. The experiment demonstrates the advantage of Amiet-IMACS in localizing the sound source position more accurately than IMACS alone in an environment with flow. Moreover, the aerodynamic noise produced by a NASA EPPLER 862 STRUT airfoil model in airflow with a velocity of 80 m/s is localized using the proposed method, further proving its effectiveness in a flow environment. Finally, the relationship between the source position of this airfoil model and its frequency, along with its generation mechanism, is determined and interpreted.
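The path-correction step above amounts to replacing the free-field delay in each steering vector with a flow-corrected delay. A minimal sketch of such a correction, assuming a uniform mean flow rather than Amiet's full shear-layer geometry (the coordinates and flow values below are illustrative, not from the paper):

```python
import math

def convected_delay(src, mic, mach, c=343.0):
    """Propagation time from src to mic in a uniform flow of Mach number
    `mach` along the +x axis (simplified convection correction; Amiet's
    method additionally models refraction through the shear layer)."""
    dx = mic[0] - src[0]
    dy = mic[1] - src[1]
    dz = mic[2] - src[2]
    r2 = dx * dx + dy * dy + dz * dz
    beta2 = 1.0 - mach * mach
    return (-mach * dx + math.sqrt(mach * mach * dx * dx + beta2 * r2)) / (c * beta2)

# sanity checks: zero flow reduces to r/c; a downstream microphone
# receives the wavefront sooner because sound is convected at c + U
no_flow = convected_delay((0, 0, 0), (1.0, 0, 0), 0.0)
downstream = convected_delay((0, 0, 0), (1.0, 0, 0), 0.2)
```

For purely axial propagation this reduces to the familiar one-way times 1/(c(1+M)) downstream and 1/(c(1-M)) upstream, which is a quick way to check the formula.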
Sun, Miao; Tang, Yuquan; Yang, Shuang; Li, Jun; Sigrist, Markus W; Dong, Fengzhong
2016-06-06
We propose a method for localizing a fire source using an optical fiber distributed temperature sensor system. A section of two parallel optical fibers employed as the sensing element is installed near the ceiling of a closed room in which the fire source is located. By measuring the temperature of hot air flows, the problem of three-dimensional fire source localization is reduced to two dimensions. The method is verified with experiments using burning alcohol as the fire source, and it is demonstrated that the method represents a robust and reliable technique for localizing a fire source, even over long sensing ranges.
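The two-parallel-fiber idea can be sketched in a few lines: the along-fiber coordinate comes from the temperature peak on each fiber, and the across-fiber coordinate from the relative peak amplitudes. The fiber geometry, Gaussian hot-spot model, and weighting rule below are illustrative assumptions, not the paper's calibration:

```python
import math

def hotspot_position(profile_a, profile_b, positions, separation):
    """profile_a/profile_b: temperature sampled along two parallel fibres.
    The along-fibre coordinate is read from the peaks; the across-fibre
    coordinate is interpolated from the relative peak amplitudes
    (a crude stand-in for the paper's calibrated model)."""
    ia = max(range(len(profile_a)), key=lambda i: profile_a[i])
    ib = max(range(len(profile_b)), key=lambda i: profile_b[i])
    x = 0.5 * (positions[ia] + positions[ib])   # along-fibre estimate
    ta, tb = profile_a[ia], profile_b[ib]
    y = separation * tb / (ta + tb)             # amplitude-weighted estimate
    return x, y

# synthetic hot spot at x = 3.0 m, y = 0.4 m between fibres 1 m apart
positions = [i * 0.1 for i in range(100)]
src_x, src_y, sep = 3.0, 0.4, 1.0
prof_a = [math.exp(-((p - src_x) ** 2 + src_y ** 2)) for p in positions]
prof_b = [math.exp(-((p - src_x) ** 2 + (sep - src_y) ** 2)) for p in positions]
x, y = hotspot_position(prof_a, prof_b, positions, sep)
```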
An iterative method for the localization of a neutron source in a large box (container)
NASA Astrophysics Data System (ADS)
Dubinski, S.; Presler, O.; Alfassi, Z. B.
2007-12-01
The localization of an unknown neutron source in a bulky box was studied. This can be used for the inspection of cargo, to prevent the smuggling of neutron and α emitters. It is important to localize the source from the outside for safety reasons, and source localization is necessary in order to determine its activity. A previous study showed that, by using six detectors, three on each of two parallel faces of the box (460 × 420 × 200 mm³), the source can be located with an average distance of 4.73 cm between the real source position and the calculated one, and a maximal distance of about 9 cm. Accuracy was improved in this work by applying an iteration method based on four fixed detectors and successive repositioning of an external calibrating source. The initial position of the calibrating source is in the plane of detectors 1 and 2. This method finds the unknown source location with an average distance of 0.78 cm between the real source position and the calculated one and a maximum distance of 3.66 cm for the same box. For larger boxes, localization without iterations requires an increase in the number of detectors, while localization with iterations requires only an increase in the number of iteration steps. In addition to source localization, two methods for determining the activity of the unknown source were also studied.
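The basic geometric idea behind count-based localization can be illustrated with a pure inverse-square model along one axis: two detectors on opposite faces see count rates whose ratio fixes the source depth. This ignores absorption and scattering, which is precisely what the paper's iteration with a calibrating source corrects for; the numbers below are made up:

```python
import math

def depth_from_count_ratio(c_near, c_far, box_depth):
    """Distance of the source from the near face, assuming pure 1/r^2
    attenuation between two detectors on opposite faces:
    c_near / c_far = ((L - x) / x)^2  =>  x = L / (1 + sqrt(c_near / c_far))
    """
    return box_depth / (1.0 + math.sqrt(c_near / c_far))

# source 6 cm from the near face of a 20 cm deep box, exact 1/r^2 counts
x_true, L = 6.0, 20.0
c_near = 1.0 / x_true ** 2
c_far = 1.0 / (L - x_true) ** 2
x_est = depth_from_count_ratio(c_near, c_far, L)
```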
MR-based source localization for MR-guided HDR brachytherapy
NASA Astrophysics Data System (ADS)
Beld, E.; Moerland, M. A.; Zijlstra, F.; Viergever, M. A.; Lagendijk, J. J. W.; Seevinck, P. R.
2018-04-01
For the purpose of MR-guided high-dose-rate (HDR) brachytherapy, a method for real-time localization of an HDR brachytherapy source was developed, which requires high spatial and temporal resolutions. MR-based localization of an HDR source serves two main aims. First, it enables real-time treatment verification by determination of the HDR source positions during treatment. Second, when using a dummy source, MR-based source localization provides an automatic detection of the source dwell positions after catheter insertion, allowing elimination of the catheter reconstruction procedure. Localization of the HDR source was conducted by simulation of the MR artifacts, followed by a phase correlation localization algorithm applied to the MR images and the simulated images, to determine the position of the HDR source in the MR images. To increase the temporal resolution of the MR acquisition, the spatial resolution was decreased, and a subpixel localization operation was introduced. Furthermore, parallel imaging (sensitivity encoding) was applied to further decrease the MR scan time. The localization method was validated by a comparison with CT, and the accuracy and precision were investigated. The results demonstrated that the described method could be used to determine the HDR source position with a high accuracy (0.4–0.6 mm) and a high precision (⩽0.1 mm), at high temporal resolutions (0.15–1.2 s per slice). This would enable real-time treatment verification as well as an automatic detection of the source dwell positions.
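The phase-correlation step is a standard FFT-domain registration: normalize the cross-power spectrum of the acquired image and the simulated artifact template, and the inverse transform peaks at the shift. A 1D sketch of the principle (deliberately small; real use would be 2D with an FFT library, and the signals here are made up):

```python
import cmath

def dft(x):
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n) for t in range(n))
            for k in range(n)]

def idft(X):
    n = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * t / n) for k in range(n)) / n
            for t in range(n)]

def phase_correlation_shift(reference, shifted):
    """Estimate the circular shift between two 1D signals from the peak of
    the inverse transform of the normalized cross-power spectrum (the same
    principle the paper applies between simulated artifact images and
    acquired MR images)."""
    A, B = dft(reference), dft(shifted)
    cross = [a.conjugate() * b for a, b in zip(A, B)]
    norm = [c / (abs(c) + 1e-12) for c in cross]   # keep phase, drop magnitude
    r = idft(norm)
    return max(range(len(r)), key=lambda i: r[i].real)

ref = [0.0] * 16
ref[2], ref[3], ref[4] = 1.0, 2.0, 1.0            # a small blob
shift = 5
moved = [ref[(i - shift) % 16] for i in range(16)]  # circularly shifted copy
found = phase_correlation_shift(ref, moved)
```

Subpixel refinement, as used in the paper, would then interpolate around the correlation peak instead of taking the integer argmax.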
Chowdhury, Rasheda Arman; Lina, Jean Marc; Kobayashi, Eliane; Grova, Christophe
2013-01-01
Localizing the generators of epileptic activity in the brain using Electro-EncephaloGraphy (EEG) or Magneto-EncephaloGraphy (MEG) signals is of particular interest during the pre-surgical investigation of epilepsy. Epileptic discharges can be detected against background brain activity, provided they are associated with spatially extended generators. Using realistic simulations of epileptic activity, this study evaluates the ability of distributed source localization methods to accurately estimate the location of the generators and their sensitivity to the spatial extent of such generators when using MEG data. Source localization methods based on two types of realistic models have been investigated: (i) brain activity may be modeled using cortical parcels and (ii) brain activity is assumed to be locally smooth within each parcel. A Data Driven Parcellization (DDP) method was used to segment the cortical surface into non-overlapping parcels and diffusion-based spatial priors were used to model local spatial smoothness within parcels. These models were implemented within the Maximum Entropy on the Mean (MEM) and the Hierarchical Bayesian (HB) source localization frameworks. We proposed new methods in this context and compared them with other standard ones using Monte Carlo simulations of realistic MEG data involving sources of several spatial extents and depths. Detection accuracy of each method was quantified using Receiver Operating Characteristic (ROC) analysis and localization error metrics. Our results showed that methods implemented within the MEM framework were sensitive to all spatial extents of the sources, ranging from 3 cm² to 30 cm², regardless of the number and size of the parcels defining the model. To reach a similar level of accuracy within the HB framework, a model using parcels larger than the size of the sources should be considered.
NASA Astrophysics Data System (ADS)
Tam, Kai-Chung; Lau, Siu-Kit; Tang, Shiu-Keung
2016-07-01
A microphone array signal processing method for locating a stationary point source over a locally reactive ground and for estimating ground impedance is examined in detail in the present study. A non-linear least-squares approach using the Levenberg-Marquardt method is proposed to overcome the problem of unknown ground impedance. The multiple signal classification (MUSIC) method is used to give the initial estimate of the source location, while the technique of forward-backward spatial smoothing is adopted as a pre-processor of the source localization to minimize the effects of source coherence. The accuracy and robustness of the proposed signal processing method are examined. Results show that source localization in the horizontal direction by MUSIC is satisfactory. However, source coherence drastically reduces the accuracy in estimating the source height. The further application of the Levenberg-Marquardt method, with the results from MUSIC as the initial inputs, significantly improves the accuracy of source height estimation. The proposed method provides effective and robust estimation of the ground surface impedance.
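The MUSIC initialization step can be sketched for the simplest case of a single narrowband source on a half-wavelength linear array: estimate the signal eigenvector of the sample covariance by power iteration, then scan the pseudospectrum over an angle grid. Array size, angle, noise level, and snapshot count below are all illustrative assumptions:

```python
import cmath, math, random

M = 6  # half-wavelength spaced linear array (illustrative)

def steering(theta):
    return [cmath.exp(1j * math.pi * m * math.sin(theta)) for m in range(M)]

def music_doa(snapshots, grid):
    """Single-source MUSIC: dominant eigenvector of the sample covariance
    via power iteration, then argmax of 1 / ||P_noise a(theta)||^2."""
    T = len(snapshots)
    R = [[sum(x[i] * x[j].conjugate() for x in snapshots) / T for j in range(M)]
         for i in range(M)]
    v = [1.0 + 0j] * M
    for _ in range(100):                          # power iteration
        w = [sum(R[i][j] * v[j] for j in range(M)) for i in range(M)]
        norm = math.sqrt(sum(abs(c) ** 2 for c in w))
        v = [c / norm for c in w]
    best, best_p = grid[0], -1.0
    for theta in grid:
        a = steering(theta)
        proj = abs(sum(v[i].conjugate() * a[i] for i in range(M))) ** 2
        p = 1.0 / (M - proj + 1e-9)               # ||a||^2 == M here
        if p > best_p:
            best, best_p = theta, p
    return best

random.seed(0)
true_theta = math.radians(20.0)
a0 = steering(true_theta)
snapshots = []
for t in range(200):
    s = cmath.exp(2j * math.pi * 0.05 * t)        # unit-power narrowband signal
    snapshots.append([s * a0[m]
                      + 0.1 * complex(random.gauss(0, 1), random.gauss(0, 1))
                      for m in range(M)])
grid = [math.radians(g / 2.0) for g in range(-180, 181)]  # -90..90 deg, 0.5 deg
est = music_doa(snapshots, grid)
```

In the paper this estimate only seeds the Levenberg-Marquardt refinement, which also fits the unknown ground impedance.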
An Exact Model-Based Method for Near-Field Sources Localization with Bistatic MIMO System.
Singh, Parth Raj; Wang, Yide; Chargé, Pascal
2017-03-30
In this paper, we propose an exact model-based method for near-field source localization with a bistatic multiple-input multiple-output (MIMO) radar system and compare it with an approximated model-based method. The aim of this paper is to propose an efficient way to use the exact model of the received signals of near-field sources, in order to eliminate the systematic error introduced by the use of an approximated model in most existing near-field source localization techniques. The proposed method uses parallel factor (PARAFAC) decomposition to deal with the exact model. Thanks to the exact model, the proposed method has better precision and resolution than the compared approximated model-based method. The simulation results demonstrate the performance of the proposed method.
Improving the local wavenumber method by automatic DEXP transformation
NASA Astrophysics Data System (ADS)
Abbas, Mahmoud Ahmed; Fedi, Maurizio; Florio, Giovanni
2014-12-01
In this paper we present a new method for source parameter estimation, based on the local wavenumber function. We make use of the stable properties of the Depth from EXtreme Points (DEXP) method, in which the depth to the source is determined at the extreme points of the field scaled with a power law of the altitude. The method is thus particularly suited to high-order local wavenumbers, as it is able to overcome their known instability caused by the use of high-order derivatives. The DEXP transformation has a notable property when applied to the local wavenumber function: the scaling law is independent of the structural index. Thus, unlike the DEXP transformation applied directly to potential fields, the local wavenumber DEXP transformation is fully automatic and may be implemented as a very fast imaging method, mapping every kind of source at the correct depth. The simultaneous presence of sources with different degrees of homogeneity can also be treated easily and correctly. The method was applied to synthetic examples and to real examples from Bulgaria and Italy, and the results agree well with known information about the causative sources.
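The DEXP principle itself is compact enough to show in one dimension: scale the field measured at increasing altitude by a power of the altitude and read the depth at the extreme point. For a simple pole-like source the exponent is τ = N/2 with structural index N; the paper's local-wavenumber variant removes exactly this dependence on N. The source model and grid below are illustrative:

```python
def dexp_depth(field_at_altitude, altitudes, tau):
    """1D DEXP imaging: scale the upward-continued field by z**tau and
    return the altitude of the extreme point, which equals the source
    depth for the matching scaling exponent."""
    scaled = [(z ** tau) * f for z, f in zip(altitudes, field_at_altitude)]
    k = max(range(len(scaled)), key=lambda i: scaled[i])
    return altitudes[k]

# monopole-like field f(z) = 1/(h + z)**2 with depth h = 5 (arbitrary units);
# structural index N = 2, so tau = N/2 = 1
h = 5.0
alts = [0.1 * i for i in range(1, 201)]   # altitudes 0.1 .. 20.0
field = [1.0 / (h + z) ** 2 for z in alts]
depth = dexp_depth(field, alts, tau=1.0)
```

Differentiating z/(h+z)² confirms the maximum sits exactly at z = h, which is why the extreme point maps the depth.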
[EEG source localization using LORETA (low resolution electromagnetic tomography)].
Puskás, Szilvia
2011-03-30
Electroencephalography (EEG) has excellent temporal resolution, but its spatial resolution is poor. Different source localization methods exist to solve the so-called inverse problem, thus increasing the accuracy of spatial localization. This paper provides an overview of the history of source localization, and the main categories of techniques are discussed. LORETA (low resolution electromagnetic tomography) is introduced in detail: technical details are discussed, and the localization properties of the LORETA method are compared to other inverse solutions. Validation of the method with different imaging techniques is also discussed. This paper reviews several publications using LORETA, both in healthy persons and in persons with different neurological and psychiatric diseases. Finally, possible future applications are discussed.
Cao, Meng-Li; Meng, Qing-Hao; Zeng, Ming; Sun, Biao; Li, Wei; Ding, Cheng-Jun
2014-06-27
This paper investigates the problem of locating a continuous chemical source using the concentration measurements provided by a wireless sensor network (WSN). Such a problem exists in various applications: eliminating explosives or drugs, detecting the leakage of noxious chemicals, etc. The limited power and bandwidth of WSNs have motivated collaborative in-network processing, which is the focus of this paper. We propose a novel distributed least-squares estimation (DLSE) method to solve the chemical source localization (CSL) problem using a WSN. The DLSE method is realized by iteratively forming convex combinations of the locally estimated chemical source locations in a distributed manner. Performance assessments of our method are conducted using both simulations and real experiments. In the experiments, we propose a fitting method to identify both the release rate and the eddy diffusivity. The results show that the proposed DLSE method can overcome the negative interference of local minima and saddle points of the objective function, which would hinder the convergence of local search methods, especially in the case of locating a remote chemical source.
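The distributed-averaging core of such a scheme, iterated convex combination of neighbors' local estimates, fits in a few lines. The network topology, equal weights, and initial estimates below are illustrative assumptions, not the paper's DLSE weights:

```python
def consensus_fuse(estimates, neighbors, steps=60):
    """Each node repeatedly replaces its source estimate with the average of
    its own and its neighbors' estimates (a convex combination), so all
    nodes converge to a common fused estimate without a central node."""
    for _ in range(steps):
        nxt = []
        for i, est in enumerate(estimates):
            group = [estimates[j] for j in neighbors[i]] + [est]
            nxt.append(tuple(sum(c) / len(group) for c in zip(*group)))
        estimates = nxt
    return estimates

# 4 nodes on a ring, each averaging with its two neighbors; equal weights on
# a regular graph are doubly stochastic, so every node converges to the mean
# of the initial local estimates, here (2, 2)
init = [(0.0, 0.0), (4.0, 0.0), (4.0, 4.0), (0.0, 4.0)]
ring = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
fused = consensus_fuse(init, ring)
```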
NASA Astrophysics Data System (ADS)
Alajlouni, Sa'ed; Albakri, Mohammad; Tarazaga, Pablo
2018-05-01
An algorithm is introduced to solve the general multilateration (source localization) problem in a dispersive waveguide. The algorithm is designed to localize impact forces on a dispersive floor, and can potentially be used to localize and track occupants in a building using vibration sensors attached to the lower surface of the walking floor. The lower the wave frequencies generated by the impact force, the more accurate the localization is expected to be. An impact force acting on a floor generates a seismic wave that is distorted as it travels away from the source. This distortion is noticeable even over relatively short distances and is caused mainly by dispersion; consequently, conventional localization/multilateration methods produce localization errors that are highly variable and occasionally large. The proposed approach is based on the fact that the wave's energy, calculated over some time window, decays exponentially as the wave travels away from the source. Although localization methods that assume exponential decay exist in the literature (in the field of wireless communications), they have only been considered for wave propagation in non-dispersive media, and they carry the limiting assumption that the source must not coincide with a sensor location. As a result, these methods cannot be applied to the indoor localization problem in their current form. We show how our proposed method differs from these methods and that it overcomes the source-sensor coincidence limitation. Theoretical analysis and experimental data are used to motivate and justify the proposed approach for localization in a dispersive medium.
Additionally, hammer impacts on an instrumented floor section inside an operational building, as well as finite element model simulations, are used to evaluate the performance of the algorithm. It is shown that the algorithm produces promising results providing a foundation for further future development and optimization.
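The exponential energy-decay model lends itself to a simple grid search: at each candidate position, log-energy is linear in distance (log E_i = log A − α d_i), so a least-squares line fit gives the residual, and the candidate with the smallest residual wins. The sensor layout, decay rate, and noiseless energies below are illustrative assumptions, not the paper's algorithm:

```python
import math

def localize_by_energy_decay(sensors, energies, grid):
    """Grid search over candidate positions; at each candidate, fit
    log(E_i) = icept + slope * d_i by simple linear regression and keep
    the candidate with the smallest squared residual."""
    best, best_err = None, float("inf")
    logs = [math.log(e) for e in energies]
    for cx, cy in grid:
        d = [math.hypot(cx - sx, cy - sy) for sx, sy in sensors]
        n = len(d)
        dm, lm = sum(d) / n, sum(logs) / n
        denom = sum((di - dm) ** 2 for di in d)
        if denom < 1e-12:
            continue  # degenerate: all candidate-to-sensor distances equal
        slope = sum((di - dm) * (li - lm) for di, li in zip(d, logs)) / denom
        icept = lm - slope * dm
        err = sum((icept + slope * di - li) ** 2 for di, li in zip(d, logs))
        if err < best_err:
            best, best_err = (cx, cy), err
    return best

sensors = [(0.0, 0.0), (10.0, 0.0), (10.0, 8.0), (0.0, 8.0), (5.0, 0.0)]
src, A, alpha = (3.0, 2.0), 2.0, 0.5
energies = [A * math.exp(-alpha * math.hypot(src[0] - sx, src[1] - sy))
            for sx, sy in sensors]
grid = [(0.5 * i, 0.5 * j) for i in range(21) for j in range(17)]
est = localize_by_energy_decay(sensors, energies, grid)
```

Note the fit estimates A and α jointly with the position, so no prior calibration of the decay rate is needed in this sketch.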
Beniczky, Sándor; Lantz, Göran; Rosenzweig, Ivana; Åkeson, Per; Pedersen, Birthe; Pinborg, Lars H; Ziebell, Morten; Jespersen, Bo; Fuglsang-Frederiksen, Anders
2013-10-01
Although precise identification of the seizure-onset zone is an essential element of presurgical evaluation, source localization of ictal electroencephalography (EEG) signals has received little attention. The aim of our study was to estimate the accuracy of source localization of rhythmic ictal EEG activity using a distributed source model. Source localization of rhythmic ictal scalp EEG activity was performed in 42 consecutive cases fulfilling the inclusion criteria. The study was designed according to recommendations for studies on diagnostic accuracy (STARD). The initial ictal EEG signals were selected using a standardized method based on frequency analysis and the voltage distribution of the ictal activity. A distributed source model, local autoregressive average (LAURA), was used for the source localization. Sensitivity, specificity, and measurement of agreement (kappa) were determined with respect to the reference standard: the consensus conclusion of the multidisciplinary epilepsy surgery team. Predictive values were calculated from the surgical outcomes of the operated patients. To estimate the clinical value of the ictal source analysis, we compared the likelihood ratios of concordant and discordant results. Source localization was performed blinded to the clinical data and before the surgical decision. The reference standard was available for 33 patients. The ictal source localization had a sensitivity of 70% and a specificity of 76%. The mean measurement of agreement (kappa) was 0.61, corresponding to substantial agreement (95% confidence interval (CI) 0.38-0.84). Twenty patients underwent resective surgery. The positive predictive value (PPV) for seizure freedom was 92% and the negative predictive value (NPV) was 43%. The likelihood ratio was nine times higher for the concordant results than for the discordant ones.
Source localization of rhythmic ictal activity using a distributed source model (LAURA) for the ictal EEG signals selected with a standardized method is feasible in clinical practice and has a good diagnostic accuracy. Our findings encourage clinical neurophysiologists assessing ictal EEGs to include this method in their armamentarium. Wiley Periodicals, Inc. © 2013 International League Against Epilepsy.
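The agreement statistic reported above is standard Cohen's kappa, which is short enough to reproduce; the binary ratings below are illustrative, not the study's data:

```python
def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two binary ratings: (p_o - p_e) / (1 - p_e),
    where p_o is observed agreement and p_e is chance agreement."""
    n = len(rater_a)
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    p_a1 = sum(rater_a) / n
    p_b1 = sum(rater_b) / n
    p_e = p_a1 * p_b1 + (1 - p_a1) * (1 - p_b1)
    if p_e == 1.0:
        return 1.0  # degenerate case: both raters constant and identical
    return (p_o - p_e) / (1 - p_e)

perfect = cohens_kappa([1, 0, 1, 0], [1, 0, 1, 0])   # full agreement
chance = cohens_kappa([1, 1, 0, 0], [1, 0, 0, 1])    # agreement at chance level
```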
Unsupervised Segmentation of Head Tissues from Multi-modal MR Images for EEG Source Localization.
Mahmood, Qaiser; Chodorowski, Artur; Mehnert, Andrew; Gellermann, Johanna; Persson, Mikael
2015-08-01
In this paper, we present and evaluate an automatic unsupervised segmentation method, the hierarchical segmentation approach (HSA) with Bayesian-based adaptive mean shift (BAMS), for use in the construction of a patient-specific head conductivity model for electroencephalography (EEG) source localization. It is based on an HSA and BAMS for segmenting the tissues from multi-modal magnetic resonance (MR) head images. The proposed method was evaluated both directly, in terms of segmentation accuracy, and indirectly, in terms of source localization accuracy. The direct evaluation was performed relative to a commonly used reference method, the brain extraction tool (BET) combined with FMRIB's automated segmentation tool (FAST), and four variants of the HSA, using both synthetic data and real data from ten subjects. The synthetic data include multiple realizations of four different noise levels and several realizations of typical noise with a 20% bias field level. The Dice index and the Hausdorff distance were used to measure segmentation accuracy. The indirect evaluation was performed relative to the reference method BET-FAST using synthetic two-dimensional (2D) multimodal MR data with 3% noise and synthetic EEG (generated for a prescribed source). The source localization accuracy was determined in terms of localization error and relative error of potential. The experimental results demonstrate the efficacy of HSA-BAMS and its robustness to noise and the bias field, and show that it provides better segmentation accuracy than the reference method and the variants of the HSA. They also show that it leads to higher source localization accuracy than the commonly used reference method, and suggest that it has potential as a surrogate for expert manual segmentation for the EEG source localization problem.
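The Dice index used for the direct evaluation is a one-line overlap measure between binary masks; the tiny masks below are illustrative:

```python
def dice_index(mask_a, mask_b):
    """Dice overlap 2|A ∩ B| / (|A| + |B|) between two binary segmentation
    masks (flattened to 1D lists here for simplicity)."""
    inter = sum(a and b for a, b in zip(mask_a, mask_b))
    size = sum(mask_a) + sum(mask_b)
    return 2.0 * inter / size if size else 1.0

a = [1, 1, 1, 0, 0, 0]
b = [0, 1, 1, 1, 0, 0]
score = dice_index(a, b)   # 2 * 2 / (3 + 3) = 2/3
```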
NASA Astrophysics Data System (ADS)
Yao, Jiachi; Xiang, Yang; Qian, Sichong; Li, Shengyang; Wu, Shaowei
2017-11-01
In order to separate and identify the combustion noise and the piston slap noise of a diesel engine, a noise source separation and identification method that combines a binaural sound localization method with a blind source separation method is proposed. During a diesel engine noise and vibration test, because a diesel engine has many complex noise sources, a lead covering was applied to the engine to isolate interference noise from cylinders No. 1-5; only the No. 6 cylinder parts were left bare. Two microphones simulating the human ears were used to measure the radiated noise signals 1 m away from the diesel engine. First, the binaural sound localization method was adopted to separate noise sources located in different places. Then, for noise sources in the same place, a blind source separation method was used to further separate and identify them. Finally, a coherence function method, continuous wavelet time-frequency analysis, and prior knowledge of the diesel engine were combined to further verify the separation results. The results show that the proposed method can effectively separate and identify the combustion noise and the piston slap noise of a diesel engine. The combustion noise and the piston slap noise are concentrated at 4350 Hz and 1988 Hz, respectively. Compared with the blind source separation method alone, the proposed method has superior separation and identification performance, and the separation results contain fewer interference components from other noise sources.
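The binaural cue such a method relies on is the inter-channel time difference, which can be estimated by maximizing the cross-correlation over candidate lags. A minimal sketch with a synthetic signal (all parameters illustrative):

```python
import math

def tdoa_lag(left, right, max_lag):
    """Inter-channel delay in samples: the lag maximizing the
    cross-correlation sum left[i] * right[i + lag]."""
    n = len(left)
    best_lag, best_c = 0, float("-inf")
    for lag in range(-max_lag, max_lag + 1):
        c = sum(left[i] * right[i + lag]
                for i in range(max(0, -lag), min(n, n - lag)))
        if c > best_c:
            best_lag, best_c = lag, c
    return best_lag

# decaying sinusoid; right channel lags the left by 7 samples
sig = [math.sin(0.3 * i) * math.exp(-0.01 * i) for i in range(200)]
delay = 7
left = sig
right = [0.0] * delay + sig[:-delay]
lag = tdoa_lag(left, right, max_lag=20)
```

With the microphone spacing and sampling rate known, the lag converts to an arrival-angle estimate for separating sources in different places.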
NASA Astrophysics Data System (ADS)
Hu, Jin; Tian, Jie; Pan, Xiaohong; Liu, Jiangang
2007-03-01
The purpose of this paper is to compare EEG source localization with fMRI during emotional processing. 108 pictures for EEG (categorized as positive, negative, and neutral) and 72 pictures for fMRI were presented to 24 healthy, right-handed subjects. The fMRI data were analyzed using statistical parametric mapping with SPM2. LORETA was applied to the grand-averaged ERP data to localize intracranial sources. Statistical analysis was implemented to compare the spatiotemporal activation found with fMRI and EEG. The fMRI results are in accordance with the EEG source localization to some extent, although some mismatch in localization between the two methods was also observed. In future work, simultaneous recording of EEG and fMRI should be applied to this study.
Chandra, Rohit; Balasingham, Ilangko
2015-01-01
A microwave imaging-based technique for 3D localization of an in-body RF source is presented. Such a technique can be useful for localizing an RF source, as in wireless capsule endoscopes, for positioning any abnormality in the gastrointestinal tract. Microwave imaging is used to determine the dielectric properties (relative permittivity and conductivity) of the tissues, which are required for precise localization. A 2D microwave imaging algorithm is used to determine the dielectric properties. A calibration method is developed to remove the error introduced by applying the 2D imaging algorithm to imaging data of a 3D body. The developed method is tested on a simple 3D heterogeneous phantom through finite-difference time-domain simulations. Additive white Gaussian noise at a signal-to-noise ratio of 30 dB is added to the simulated data to make them more realistic. The developed calibration method improves the imaging and localization accuracy. Statistics on the localization accuracy are generated by randomly placing the RF source at various positions inside the small intestine of the phantom, and the cumulative distribution function of the localization error is plotted. In 90% of the cases, the localization error was within 1.67 cm, showing the capability of the developed method for 3D localization.
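The "within 1.67 cm in 90% of cases" figure is one point of an empirical CDF, which is trivial to compute; the error samples below are made up for illustration:

```python
def cdf_at(errors, threshold):
    """Fraction of localization errors not exceeding the threshold, i.e. one
    point of the empirical cumulative distribution function."""
    return sum(e <= threshold for e in errors) / len(errors)

errors_cm = [0.4, 0.9, 1.1, 1.3, 1.5, 1.6, 1.6, 1.7, 1.9, 2.3]
frac = cdf_at(errors_cm, 1.67)   # 7 of 10 errors are <= 1.67 cm
```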
EEG source localization: Sensor density and head surface coverage.
Song, Jasmine; Davey, Colin; Poulsen, Catherine; Luu, Phan; Turovets, Sergei; Anderson, Erik; Li, Kai; Tucker, Don
2015-12-30
The accuracy of EEG source localization depends on a sufficient sampling of the surface potential field, an accurate conducting volume estimation (head model), and a suitable and well-understood inverse technique. The goal of the present study is to examine the effect of sampling density and coverage on the ability to accurately localize sources at different depths, using common linear inverse weight techniques. Several inverse methods are examined, using commonly used head conductivity values. Simulation studies were employed to examine the effect of spatial sampling of the potential field at the head surface, in terms of sensor density and coverage of the inferior and superior head regions. In addition, the effects of sensor density and coverage are investigated in the source localization of epileptiform EEG. Greater sensor density improves source localization accuracy. Moreover, across all sampling densities and inverse methods, adding samples on the inferior surface improves the accuracy of source estimates at all depths. More accurate source localization of EEG data can be achieved with high spatial sampling of the head surface. The most accurate source localization is obtained when the voltage surface is densely sampled over both the superior and inferior surfaces. Copyright © 2015 The Authors. Published by Elsevier B.V. All rights reserved.
Multimodal Medical Image Fusion by Adaptive Manifold Filter.
Geng, Peng; Liu, Shuaiqi; Zhuang, Shanna
2015-01-01
Medical image fusion plays an important role in the diagnosis and treatment of disease, for example in image-guided radiotherapy and surgery. Modified local contrast information is proposed to fuse multimodal medical images. First, the adaptive manifold filter is introduced to filter the source images, providing the low-frequency part of the modified local contrast. Second, the modified spatial frequency of the source images is adopted as the high-frequency part of the modified local contrast. Finally, the pixel with the larger modified local contrast is selected into the fused image. The presented scheme outperforms the guided filter method in the spatial domain, the dual-tree complex wavelet transform-based method, the nonsubsampled contourlet transform-based method, and four classic fusion methods in terms of visual quality. Furthermore, for the six pairs of source images, the mutual information values obtained by the presented method are on average 55%, 41%, and 62% higher than those of the three aforementioned methods, and the edge-based similarity measure values are on average 13%, 33%, and 14% higher.
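The final selection rule reduces to a pixel-wise comparison of the two contrast maps. In this sketch the contrast maps are given directly, standing in for the adaptive-manifold-filtered low-frequency part and modified-spatial-frequency high-frequency part described above; all values are illustrative:

```python
def fuse_by_contrast(img_a, img_b, contrast_a, contrast_b):
    """Pixel-wise fusion: keep the pixel from whichever source image has the
    larger (modified) local contrast at that position. Inputs are flat
    lists of equal length."""
    return [pa if ca >= cb else pb
            for pa, pb, ca, cb in zip(img_a, img_b, contrast_a, contrast_b)]

a = [10, 20, 30, 40]
b = [15, 5, 35, 25]
ca = [0.9, 0.2, 0.8, 0.1]
cb = [0.3, 0.6, 0.4, 0.7]
fused = fuse_by_contrast(a, b, ca, cb)   # -> [10, 5, 30, 25]
```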
Localization of incipient tip vortex cavitation using ray based matched field inversion method
NASA Astrophysics Data System (ADS)
Kim, Dongho; Seong, Woojae; Choo, Youngmin; Lee, Jeunghoon
2015-10-01
Cavitation of a marine propeller is one of the main contributors to broadband radiated ship noise. In this research, an algorithm for the source localization of incipient vortex cavitation is proposed. Incipient cavitation is modeled as a monopole-type source, and a matched-field inversion method is applied to find the source position by comparing the spatial correlation between measured and replicated pressure fields at the receiver array. The accuracy of source localization is improved by a broadband matched-field inversion technique that enhances the correlation by incoherently averaging the correlations of individual frequencies. The proposed localization algorithm is verified using a known virtual source and a model test conducted in the Samsung ship model basin cavitation tunnel. It is found that the algorithm enables efficient localization of incipient tip vortex cavitation using a few pressure measurements on the outer hull above the propeller, and that it is practically applicable to the model-scale experiments typically performed in a cavitation tunnel at the early design stage.
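The broadband matched-field idea can be sketched directly: generate monopole replica fields on a candidate grid, correlate each with the measured field via the normalized Bartlett processor, and average the correlations incoherently over frequency. Receiver layout, wavenumbers, and the free-space monopole replica below are illustrative assumptions (a real tunnel would need the guided-field Green's function):

```python
import cmath, math

def replica(src, receivers, k):
    """Free-space monopole pressure exp(i k r) / r at each receiver."""
    return [cmath.exp(1j * k * math.dist(src, rx)) / math.dist(src, rx)
            for rx in receivers]

def bartlett(measured, rep):
    """Normalized Bartlett correlation |w^H p|^2 / (|w|^2 |p|^2), in [0, 1]."""
    num = abs(sum(w.conjugate() * p for w, p in zip(rep, measured))) ** 2
    den = sum(abs(w) ** 2 for w in rep) * sum(abs(p) ** 2 for p in measured)
    return num / den

def broadband_locate(fields_by_freq, receivers, wavenumbers, candidates):
    """Incoherent average of single-frequency correlations, maximized over
    candidate source positions."""
    def score(c):
        return sum(bartlett(field, replica(c, receivers, k))
                   for field, k in zip(fields_by_freq, wavenumbers))
    return max(candidates, key=score)

receivers = [(x, 0.0, 0.0) for x in (0.0, 0.3, 0.6, 0.9, 1.2)]  # hull sensors
true_src = (0.5, 0.4, 0.8)
ks = [40.0, 55.0, 70.0]                       # several frequencies (rad/m)
fields = [replica(true_src, receivers, k) for k in ks]
candidates = [(0.1 * i, 0.4, 0.1 * j) for i in range(13) for j in range(1, 13)]
est = broadband_locate(fields, receivers, ks, candidates)
```

The incoherent frequency average is what suppresses the sidelobes a single-frequency correlation surface would exhibit.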
Localization of diffusion sources in complex networks with sparse observations
NASA Astrophysics Data System (ADS)
Hu, Zhao-Long; Shen, Zhesi; Tang, Chang-Bing; Xie, Bin-Bin; Lu, Jian-Feng
2018-04-01
Locating sources in a large network is of paramount importance for reducing the spread of disruptive behavior. Based on a backward diffusion-based method and integer programming, we propose an efficient approach to locate sources in complex networks with a limited number of observers. The results on model networks and empirical networks demonstrate that, for a given fraction of observers, the accuracy of our method improves as the network size increases. Moreover, compared with the previous maximum-minimum method, our method performs much better with a small fraction of observers, especially in heterogeneous networks. Furthermore, our method is more robust in noisy environments and against different strategies for choosing observers.
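The observer-based principle can be illustrated with a brute-force stand-in for the paper's integer program: the true source is the node whose shortest-path distances best explain the observers' arrival times, i.e. the residuals t_i − d(s, i) should be (near-)constant. The toy graph and unit spreading speed below are illustrative:

```python
from collections import deque

def bfs_distances(adj, start):
    """Hop distances from `start` on an unweighted graph."""
    dist = {start: 0}
    q = deque([start])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def locate_source(adj, observed):
    """Pick the candidate node minimizing the variance of the residuals
    (arrival time minus hop distance) over the observers; for noise-free
    unit-speed spreading the true source gives zero variance."""
    best, best_var = None, float("inf")
    for s in adj:
        d = bfs_distances(adj, s)
        resid = [t - d[o] for o, t in observed.items()]
        mean = sum(resid) / len(resid)
        var = sum((r - mean) ** 2 for r in resid)
        if var < best_var:
            best, best_var = s, var
    return best

# spreading starts at node 0 at time 2 and advances one hop per step;
# only four nodes are observed (arrival time = 2 + hop distance from 0)
adj = {0: [1, 2], 1: [0, 3], 2: [0, 3, 4], 3: [1, 2, 5], 4: [2, 5], 5: [3, 4]}
observed = {1: 3, 3: 4, 4: 4, 5: 5}
src = locate_source(adj, observed)
```

The unknown start time drops out because only the spread of the residuals matters, which is why sparse observers suffice.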
Xue, Bing; Qu, Xiaodong; Fang, Guangyou; Ji, Yicai
2017-01-01
In this paper, methods and analysis for estimating the location of a three-dimensional (3-D) single source buried in a lossy medium are presented, using a uniform circular array (UCA). A mathematical model of the signal in the lossy medium is proposed. Using information in the covariance matrix obtained from the sensors' outputs, equations for the source location (azimuth angle, elevation angle, and range) are derived. The phase and amplitude of the covariance matrix function are then used to perform source localization in the lossy medium. By analyzing the characteristics of the proposed methods and of the multiple signal classification (MUSIC) method, the computational complexity and the valid scope of these methods are given. From the results, whether the loss is known or not, the best method can be chosen for each case (localization in a lossless or a lossy medium). PMID:28574467
Crowd-sourced pictures geo-localization method based on street view images and 3D reconstruction
NASA Astrophysics Data System (ADS)
Cheng, Liang; Yuan, Yi; Xia, Nan; Chen, Song; Chen, Yanming; Yang, Kang; Ma, Lei; Li, Manchun
2018-07-01
People are increasingly accustomed to taking photos of everyday life in modern cities and uploading them to major photo-sharing social media sites. These sites contain numerous pictures, but some have incomplete or blurred location information. Geo-localization of crowd-sourced pictures enriches the information they contain and is applicable to activities such as urban construction, urban landscape analysis, and crime tracking. However, geo-localization faces considerable technical challenges. This paper proposes a method for large-scale geo-localization of crowd-sourced pictures. Our approach uses structured, organized Street View images as a reference dataset and employs a three-step strategy: coarse geo-localization by image retrieval, selection of reliable matches by image registration, and fine geo-localization by 3D reconstruction to attach geographic tags to pictures from unidentified sources. In the study area, 3D reconstruction based on close-range photogrammetry is used to restore the 3D geographic information of the crowd-sourced pictures; compared with the previous method, the proposed method improves the median error from 256.7 m to 69.0 m and raises the percentage of query pictures geo-localized with an error under 50 m from 17.2% to 43.2%. A further finding concerns the causes of reconstruction error: shorter distances from the cameras to the main objects in the query pictures tend to produce lower errors, and the error component parallel to the road contributes more significantly to the total error. The proposed method is not limited to small areas and could be extended to cities and larger areas owing to its flexible parameters.
Lie, Octavian V; Papanastassiou, Alexander M; Cavazos, José E; Szabó, Ákos C
2015-10-01
Poor seizure outcomes after epilepsy surgery often reflect incorrect localization of the epileptic sources by standard intracranial EEG interpretation because of limited electrode coverage of the epileptogenic zone. This study investigates whether, in such conditions, source modeling can provide more accurate source localization than the standard clinical method and can be used prospectively to improve surgical resection planning. Suboptimal epileptogenic zone sampling is simulated by subsets of the electrode configuration used to record intracranial EEG in a patient rendered seizure-free after surgery. sLORETA and clinical-method solutions are applied to interictal spikes sampled with these electrode subsets and compared for colocalization with the resection volume and for displacement due to electrode downsampling. sLORETA often provides congruent, and at times more accurate, source localization compared with the standard clinical method. However, with electrode downsampling, individual sLORETA solution locations can vary considerably and shift consistently toward the remaining electrodes. Applying sLORETA can improve source localization based on the clinical method but does not reliably compensate for suboptimal electrode placement. Incorporating sLORETA solutions based on intracranial EEG into surgical planning should proceed cautiously in cases where electrode repositioning is planned on clinical grounds.
NASA Astrophysics Data System (ADS)
Wang, Ji; Zhang, Ru; Yan, Yuting; Dong, Xiaoqiang; Li, Jun Ming
2017-05-01
Hazardous gas leaks in the atmosphere can cause significant economic losses in addition to environmental hazards such as fires and explosions. A three-stage hazardous gas leak source localization method was developed that uses movable and stationary gas concentration sensors. The method calculates a preliminary source inversion with a modified genetic algorithm (MGA) that allows crossover with individuals eliminated from the population after selection of the best candidate. The method then determines a search zone using Markov chain Monte Carlo (MCMC) sampling with a partial evaluation strategy. The leak source is finally localized accurately using a modified guaranteed-convergence particle swarm optimization algorithm that retains several poorly performing individuals, following selection of the most successful individual with dynamic updates. The first two stages are based on data collected by stationary sensors, and the last stage is based on data from movable robots carrying sensors. The adaptability to measurement error and the effect of the leak source location were analyzed. The test results showed that this three-stage localization process can localize a leak source to within 1.0 m of the source for different leak source locations, with a measurement error standard deviation smaller than 2.0.
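The MCMC search-zone stage can be illustrated with a plain Metropolis sampler over a toy concentration model. The sensor layout, noise level, and inverse-square dispersion model below are invented for the sketch and are far simpler than a real plume model:

```python
import numpy as np

rng = np.random.default_rng(3)
sensors = [(0.0, 0.0), (4.0, 0.0), (0.0, 4.0), (4.0, 4.0)]  # stationary sensors
true = (1.5, 2.5)                                            # true leak position

def conc(src, s):
    """Toy concentration model: inverse-square decay from the source."""
    d2 = (s[0] - src[0]) ** 2 + (s[1] - src[1]) ** 2 + 0.1
    return 1.0 / d2

sigma = 0.01
obs = [conc(true, s) + sigma * rng.standard_normal() for s in sensors]

def logp(src):
    """Gaussian log-likelihood of the observed concentrations."""
    return -sum((conc(src, s) - o) ** 2 for s, o in zip(sensors, obs)) / (2 * sigma ** 2)

# Metropolis sampling: the cloud of accepted samples defines the search zone
x, lp = np.array([2.0, 2.0]), logp((2.0, 2.0))
samples = []
for _ in range(5000):
    prop = x + 0.3 * rng.standard_normal(2)
    lpp = logp(prop)
    if np.log(rng.random()) < lpp - lp:
        x, lp = prop, lpp
    samples.append(x.copy())

est = np.mean(samples[1000:], axis=0)  # posterior mean after burn-in
```

The spread of the retained samples, not just their mean, is what delimits the search zone handed to the final particle-swarm refinement stage.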
Spatio-temporal Reconstruction of Neural Sources Using Indirect Dominant Mode Rejection.
Jafadideh, Alireza Talesh; Asl, Babak Mohammadzadeh
2018-04-27
Adaptive minimum variance based beamformers (MVB) have been successfully applied to magnetoencephalogram (MEG) and electroencephalogram (EEG) data to localize brain activity. However, the performance of these beamformers degrades when correlated or interfering sources are present. To overcome this problem, we propose applying an indirect dominant mode rejection (iDMR) beamformer to brain source localization. By modifying the measurement covariance matrix, this method makes the MVB applicable to source localization in the presence of correlated and interfering sources. Numerical results on both EEG and MEG data demonstrate that the presented approach accurately reconstructs the time courses of active sources and localizes those sources with high spatial resolution. In addition, results on real AEF data show the good performance of iDMR in empirical situations. Hence, iDMR can be reliably used for brain source localization, especially when correlated and interfering sources are present.
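For context, the baseline minimum variance (MVDR) beamformer that iDMR builds on can be sketched as follows. The array geometry, source directions, and the assumption that the interference-plus-noise covariance is known exactly are all simplifications for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
M, N = 8, 500          # sensors, snapshots
d = 0.5                # element spacing in wavelengths

def steer(theta_deg):
    th = np.deg2rad(theta_deg)
    return np.exp(-2j * np.pi * d * np.arange(M) * np.sin(th))

# Source at 10 deg, interference at -40 deg, plus white noise
s = np.sign(rng.standard_normal(N)).astype(complex)                 # source time course
i = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)
X = np.outer(steer(10.0), s) + np.outer(steer(-40.0), i)
X += 0.1 * (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N)))

# MVDR weights w = R^-1 a / (a^H R^-1 a), with R the interference-plus-noise
# covariance (assumed known here for clarity)
R = np.outer(steer(-40.0), steer(-40.0).conj()) + 0.02 * np.eye(M)
Ri = np.linalg.inv(R)
a = steer(10.0)
w = Ri @ a / (a.conj() @ Ri @ a)

y = w.conj() @ X                      # reconstructed source time course
err = np.mean(np.abs(y - s) ** 2)     # small: the interference is nulled
```

When sources are highly correlated, the measured covariance no longer separates signal and interference cleanly, which is the failure mode the iDMR modification of the covariance matrix targets.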
Yan, Yongsheng; Wang, Haiyan; Shen, Xiaohong; Leng, Bing; Li, Shuangquan
2018-05-21
Energy readings are an efficient and attractive measure for collaborative acoustic source localization in practical applications, owing to their savings in both energy and computational capability. Maximum likelihood problems are derived by fusing received acoustic energy readings transmitted from local sensors. To efficiently handle the nonconvex objective of the optimization problem, we present an approximate estimator of the original problem. A direct norm relaxation and a semidefinite relaxation are then used to derive second-order cone programs, semidefinite programs, or mixtures of the two for both sensor self-localization and source localization. Furthermore, by taking colored energy-reading noise into account, several minimax optimization problems are formulated, which are likewise relaxed via the direct norm relaxation and the semidefinite relaxation into convex optimization problems. A performance comparison with existing acoustic energy-based source localization methods is given, and the results show the validity of the proposed methods.
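The underlying energy-decay model can be illustrated with a simple least-squares grid search standing in for the maximum-likelihood/SDP machinery of the paper. The sensor layout, noise level, and known inverse-square gain model below are invented for this sketch:

```python
import numpy as np

rng = np.random.default_rng(5)
sensors = np.array([[0.0, 0.0], [5.0, 0.0], [0.0, 5.0], [5.0, 5.0], [2.5, 0.0]])
true, S_true = np.array([1.5, 3.0]), 10.0     # source position and power

def gains(cell):
    """Toy propagation model: energy decays as 1/d^2 (with a small standoff)."""
    d2 = np.sum((sensors - cell) ** 2, axis=1) + 0.5
    return 1.0 / d2

# Noisy energy readings collected at the local sensors
E = S_true * gains(true) + 0.02 * rng.standard_normal(len(sensors))

# Grid search; for each candidate cell the source power is linear in the model,
# so its least-squares estimate has a closed form
xs = np.linspace(0.0, 5.0, 51)
best, best_res = None, np.inf
for x in xs:
    for y in xs:
        g = gains(np.array([x, y]))
        S = float(g @ E) / float(g @ g)       # LS-optimal power for this cell
        res = float(np.sum((E - S * g) ** 2))
        if res < best_res:
            best, best_res = (x, y), res
```

The paper replaces this exhaustive search with convex relaxations (SOCP/SDP) of the maximum-likelihood objective, which scale much better and handle colored noise.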
Shariat, M H; Gazor, S; Redfearn, D
2015-08-01
Atrial fibrillation (AF), the most common sustained cardiac arrhythmia, is an extremely costly public health problem. Catheter-based ablation is a common minimally invasive procedure to treat AF. Contemporary mapping methods depend heavily on accurate anatomic localization of rotor sources within the atria. In this paper, using simulated atrial intracardiac electrograms (IEGMs) during AF, we propose a computationally efficient method for localizing the tip of an electrical rotor with an Archimedean (arithmetic) spiral wavefront. The proposed method uses the locations of the catheter electrodes and the activation times of their IEGMs to estimate the unknown parameters of the spiral wavefront, including its tip location. The method can localize the spiral as soon as the wave hits three electrodes of the catheter. Our simulation results show that the method can efficiently localize spiral wavefronts rotating either clockwise or counterclockwise.
Blind source separation and localization using microphone arrays
NASA Astrophysics Data System (ADS)
Sun, Longji
The blind source separation and localization problem for audio signals is studied using microphone arrays. Pure-delay mixtures of source signals, typically encountered in outdoor environments, are considered. Our proposed approach utilizes subspace methods, including the multiple signal classification (MUSIC) and estimation of signal parameters via rotational invariance techniques (ESPRIT) algorithms, to estimate the directions of arrival (DOAs) of the sources from the collected mixtures. Since audio signals are generally broadband, the DOA estimates at the frequencies with the largest sums of squared amplitudes are combined to obtain the final DOA estimates. Using the estimated DOAs, the corresponding mixing and demixing matrices are computed, and the source signals are recovered using the inverse short-time Fourier transform. Subspace methods take advantage of the spatial covariance matrix of the collected mixtures to achieve robustness to noise. While subspace methods have been studied for localizing radio-frequency signals, audio signals have special properties: they are nonstationary, naturally broadband, and analog. All of these make separation and localization more challenging for audio signals. Moreover, our algorithm is essentially equivalent to the beamforming technique, which suppresses signals in unwanted directions and recovers only the signals in the estimated DOAs. Several crucial issues related to our algorithm and their solutions are discussed, including source number estimation, spatial aliasing, artifact filtering, different ways of generating mixtures, and source coordinate estimation using multiple arrays. Additionally, comprehensive simulations and experiments have been conducted to examine various aspects of the algorithm.
Unlike the existing blind source separation and localization methods, which are generally time consuming, our algorithm needs signal mixtures of only a short duration and therefore supports real-time implementation.
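The narrowband core of the MUSIC step used above can be sketched for a uniform linear array. The array size, spacing, DOAs, and noise level below are illustrative, and the number of sources is assumed known:

```python
import numpy as np

rng = np.random.default_rng(0)
M, d = 8, 0.5                        # sensors, spacing in wavelengths
true_doas = [-20.0, 35.0]            # degrees
K, N = len(true_doas), 200           # number of sources, snapshots

def steer(theta_deg):
    th = np.deg2rad(theta_deg)
    return np.exp(-2j * np.pi * d * np.arange(M) * np.sin(th))

A = np.stack([steer(t) for t in true_doas], axis=1)
S = rng.standard_normal((K, N)) + 1j * rng.standard_normal((K, N))
X = A @ S + 0.1 * (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N)))

R = X @ X.conj().T / N               # spatial covariance matrix
evals, V = np.linalg.eigh(R)         # eigenvalues ascending
En = V[:, :M - K]                    # noise subspace

scan = np.linspace(-90.0, 90.0, 1801)
P = np.array([1.0 / np.linalg.norm(En.conj().T @ steer(t)) ** 2 for t in scan])

# Pick the K largest well-separated peaks of the pseudospectrum
est_deg = []
for idx in np.argsort(P)[::-1]:
    if all(abs(scan[idx] - e) > 5.0 for e in est_deg):
        est_deg.append(scan[idx])
    if len(est_deg) == K:
        break
est_deg = sorted(est_deg)            # close to the true DOAs
```

For broadband audio this scan would be repeated per frequency bin and, as described above, the estimates from the most energetic bins combined into the final DOAs.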
Multiscale Methods for Nuclear Reactor Analysis
NASA Astrophysics Data System (ADS)
Collins, Benjamin S.
The ability to accurately predict local pin powers in nuclear reactors is necessary to understand the mechanisms that cause fuel pin failure during steady-state and transient operation. In the research presented here, methods are developed to improve the local solution using high-order methods with boundary conditions from a low-order global solution. Several different core configurations were tested to determine the improvement in the local pin powers compared with the standard techniques that use diffusion theory and pin power reconstruction (PPR). Two different multiscale methods were developed and analyzed: the post-refinement multiscale method and the embedded multiscale method. The post-refinement multiscale method uses the global solution to determine boundary conditions for the local solution. The local solution is solved using either a fixed boundary source or an albedo boundary condition; this solution is "post-refinement" and thus has no impact on the global solution. The embedded multiscale method allows the local solver to change the global solution, providing an improved global and local solution. The post-refinement multiscale method is assessed using three core designs. When the local solution has more energy groups, the fixed-source method has some difficulties near the interface; however, the albedo method works well for all cases. To remedy the boundary-condition errors of the fixed-source method, a buffer region is used to act as a filter, which decreases the sensitivity of the solution to the boundary condition. Both the albedo and fixed-source methods benefit from the use of a buffer region. Unlike the post-refinement method, the embedded multiscale method alters the global solution. The ability to change the global solution allows for refinement in areas where the errors of few-group nodal diffusion are typically large.
The embedded method is shown to improve the global solution when it is applied to a MOX/LEU assembly interface, the fuel/reflector interface, and assemblies where control rods are inserted. The embedded method also allows for multiple solution levels to be applied in a single calculation. The addition of intermediate levels to the solution improves the accuracy of the method. Both multiscale methods considered here have benefits and drawbacks, but both can provide improvements over the current PPR methodology.
Gollob, Stephan; Kocur, Georg Karl; Schumacher, Thomas; Mhamdi, Lassaad; Vogel, Thomas
2017-02-01
In acoustic emission analysis, common source location algorithms assume, independently of the nature of the propagation medium, a straight (shortest) wave path between the source and the sensors. For heterogeneous media such as concrete, the wave travels along complex paths due to interaction with the dissimilar material contents and with any geometrical and material irregularities present in these media. For instance, cracks and large air voids in concrete significantly influence the way the wave travels by causing wave-path deviations. Neglecting these deviations by assuming straight paths can introduce significant errors into the source location results. In this paper, a novel source localization method called FastWay is proposed. Contrary to most available shortest-path-based methods, it accounts for the different effects of material discontinuities (cracks and voids). FastWay, based on a heterogeneous velocity model, uses the fastest rather than the shortest travel paths between the source and each sensor. The method was evaluated both numerically and experimentally, and the results of both evaluation tests show that, in general, FastWay was able to locate the sources of acoustic emissions more accurately and reliably than traditional source localization methods.
Pilot-aided feedforward data recovery in optical coherent communications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Qi, Bing
2017-09-19
A method and a system for pilot-aided feedforward data recovery are provided. The method and system include a receiver with a strong local oscillator operating in a free-running mode, independent of the signal light source. The phase relation between the signal light source and the local oscillator is determined from quadrature measurements on pilot pulses from the signal light source. Using this phase relation, information encoded in an incoming signal can be recovered, optionally for use with both classical coherent communication protocols and quantum communication protocols.
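The feedforward recovery idea can be sketched in a few lines: estimate the signal-to-LO phase from the pilot quadratures, then rotate the payload symbols by the conjugate phase. The pilot count, noise level, and BPSK payload below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(4)
phi = 0.7                                   # unknown signal-LO phase offset (rad)

# Pilot pulses: known unit amplitude and zero phase at the transmitter
pilots = np.exp(1j * phi) * np.ones(100)
pilots += 0.05 * (rng.standard_normal(100) + 1j * rng.standard_normal(100))

# Quadrature measurements of the pilots give the phase relation directly
phi_hat = np.angle(np.mean(pilots))

# Feedforward correction of BPSK payload symbols rotated by the same offset
data = np.exp(1j * (phi + np.array([0.0, np.pi])))
recovered = data * np.exp(-1j * phi_hat)
bits = (recovered.real < 0).astype(int)     # decoded payload
```

Because the correction is computed from already-received pilots, no optical phase-locked loop is needed, which is the point of running the local oscillator free.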
Near-Field Noise Source Localization in the Presence of Interference
NASA Astrophysics Data System (ADS)
Liang, Guolong; Han, Bo
In order to suppress the influence of interference sources on noise source localization in the near field, near-field broadband source localization in the presence of interference is studied. An oblique projection is constructed from the array measurements and the steering manifold of the interference sources and is used to filter the interference signals out. The 2D-MUSIC algorithm is applied to the data at each frequency, and the results are then averaged across frequencies to localize the broadband noise sources. Simulations show that this method suppresses the interference sources effectively and is capable of locating a source that lies in the same direction as the interference source.
Code of Federal Regulations, 2011 CFR
2011-07-01
... received from local non-tax sources such as interest, bake sales, gifts, donations, and in-kind... pupil from local interest, bake sales, in-kind contributions, and other non-tax local sources. The... ($700/$700). The local revenue received from interest, bake sales, in-kind contributions and other non...
Cicmil, Nela; Bridge, Holly; Parker, Andrew J.; Woolrich, Mark W.; Krug, Kristine
2014-01-01
Magnetoencephalography (MEG) allows the physiological recording of human brain activity at high temporal resolution. However, spatial localization of the source of the MEG signal is an ill-posed problem as the signal alone cannot constrain a unique solution and additional prior assumptions must be enforced. An adequate source reconstruction method for investigating the human visual system should place the sources of early visual activity in known locations in the occipital cortex. We localized sources of retinotopic MEG signals from the human brain with contrasting reconstruction approaches (minimum norm, multiple sparse priors, and beamformer) and compared these to the visual retinotopic map obtained with fMRI in the same individuals. When reconstructing brain responses to visual stimuli that differed by angular position, we found reliable localization to the appropriate retinotopic visual field quadrant by a minimum norm approach and by beamforming. Retinotopic map eccentricity in accordance with the fMRI map could not consistently be localized using an annular stimulus with any reconstruction method, but confining eccentricity stimuli to one visual field quadrant resulted in significant improvement with the minimum norm. These results inform the application of source analysis approaches for future MEG studies of the visual system, and indicate some current limits on localization accuracy of MEG signals. PMID:24904268
Localization of synchronous cortical neural sources.
Zerouali, Younes; Herry, Christophe L; Jemel, Boutheina; Lina, Jean-Marc
2013-03-01
Neural synchronization is a key mechanism in a wide variety of brain functions, such as cognition, perception, and memory. The high temporal resolution of EEG recordings allows the study of the dynamical properties of synchronous patterns of activity at a very fine temporal scale, but with very low spatial resolution. Spatial resolution can be improved by retrieving the neural sources of the EEG signal, thus solving the so-called inverse problem. Although many methods have been proposed to solve the inverse problem and localize brain activity, few of them target synchronous brain regions. In this paper, we propose a novel algorithm aimed specifically at localizing synchronous brain regions and reconstructing the time course of their activity. Using multivariate wavelet ridge analysis, we extract signals capturing the synchronous events buried in the EEG and then solve the inverse problem on these signals. Using simulated data, we compare the source reconstruction accuracy achieved by our method with that of a standard source reconstruction approach. We show that the proposed method performs better across a wide range of noise levels and source configurations. In addition, we applied our method to a real dataset and successfully identified cortical areas involved in the functional network underlying visual face perception. We conclude that the proposed approach allows accurate localization of synchronous brain regions and robust estimation of their activity.
NASA Astrophysics Data System (ADS)
Nishiura, Takanobu; Nakamura, Satoshi
2002-11-01
It is very important to capture distant-talking speech with high quality for a hands-free speech interface. A microphone array is an ideal candidate for this purpose. However, this approach requires localizing the target talker. Conventional talker localization algorithms in multiple-sound-source environments not only have difficulty localizing multiple sound sources accurately, but also have difficulty localizing the target talker among known multiple sound source positions. To cope with these problems, we propose a new talker localization algorithm consisting of two parts. One is a DOA (direction of arrival) estimation algorithm for multiple sound source localization based on the CSP (cross-power spectrum phase) coefficient addition method. The other is a statistical sound source identification algorithm based on a GMM (Gaussian mixture model) for localizing the target talker position among the localized multiple sound sources. In this paper, we particularly focus on the talker localization performance achieved by combining these two algorithms with a microphone array. We conducted evaluation experiments in real noisy reverberant environments. As a result, we confirmed that multiple sound signals can be accurately classified as "speech" or "non-speech" by the proposed algorithm. [Work supported by ATR and MEXT of Japan.]
Source localization in an ocean waveguide using supervised machine learning.
Niu, Haiqiang; Reeves, Emma; Gerstoft, Peter
2017-09-01
Source localization in ocean acoustics is posed as a machine learning problem in which data-driven methods learn source ranges directly from observed acoustic data. The pressure received by a vertical linear array is preprocessed by constructing a normalized sample covariance matrix and used as the input for three machine learning methods: feed-forward neural networks (FNN), support vector machines (SVM), and random forests (RF). The range estimation problem is solved both as a classification problem and as a regression problem by these three machine learning algorithms. The results of range estimation for the Noise09 experiment are compared for FNN, SVM, RF, and conventional matched-field processing and demonstrate the potential of machine learning for underwater source localization.
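The preprocessing described here (a normalized sample covariance matrix as the classifier input) can be sketched as follows. A nearest-class-mean rule stands in for the FNN/SVM/RF classifiers of the paper, and the toy range-dependent phase model replacing real ocean acoustics is purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
M, N = 6, 50                       # array sensors, snapshots
ranges = [1.0, 2.0, 3.0]           # candidate source ranges (km), illustrative

def scm_feature(r, noise=0.05):
    """Normalized sample covariance matrix of toy array data, as a real vector."""
    a = np.exp(1j * np.arange(M) / r)               # range-dependent phasing (toy)
    s = rng.standard_normal((1, N)) + 1j * rng.standard_normal((1, N))
    x = a[:, None] * s
    x += noise * (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N)))
    x /= np.linalg.norm(x, axis=0, keepdims=True)   # per-snapshot normalization
    C = x @ x.conj().T / N
    C /= np.linalg.norm(C)                          # normalized SCM
    f = C[np.triu_indices(M)]
    return np.concatenate([f.real, f.imag])

# "Training": average features per labeled range class
train = {r: np.mean([scm_feature(r) for _ in range(20)], axis=0) for r in ranges}

def classify(feat):
    """Range estimation posed as classification: nearest class mean."""
    return min(ranges, key=lambda r: np.linalg.norm(feat - train[r]))
```

In the paper the same feature vectors feed a neural network, SVM, or random forest, and range can alternatively be treated as a regression target rather than a class label.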
Joint Inversion of Earthquake Source Parameters with local and teleseismic body waves
NASA Astrophysics Data System (ADS)
Chen, W.; Ni, S.; Wang, Z.
2011-12-01
In the classical source parameter inversion algorithm CAP (Cut and Paste method, by Zhao and Helmberger), waveform data at near distances (typically less than 500 km) are partitioned into Pnl and surface waves to account for uncertainties in the crustal models and for the different amplitude weights of body and surface waves. The classical CAP algorithm has proven effective for resolving source parameters (focal mechanism, depth, and moment) for earthquakes well recorded on a relatively dense seismic network. However, for regions covered by sparse stations, it is challenging to obtain precise source parameters. In such cases, a moderate earthquake of ~M6 is usually recorded at only one or two local stations with epicentral distances less than 500 km. Fortunately, an earthquake of ~M6 is well recorded by global seismic networks. Since the ray paths of teleseismic and local body waves sample different portions of the focal sphere, combining teleseismic and local body wave data helps constrain the source parameters better. Here we present a new CAP method (CAPjoint) that exploits both teleseismic body waveforms (P and SH waves) and local waveforms (Pnl, Rayleigh, and Love waves) to determine source parameters. For an earthquake in Nevada that is well recorded by a dense local network (USArray stations), we compare the results from CAPjoint with those from the traditional CAP method involving only local waveforms, and we explore the efficiency with bootstrapping statistics to show that the results derived by CAPjoint are stable and reliable. Even with only one local station included in the joint inversion, the accuracy of source parameters such as moment and strike is much improved.
Rajabioun, Mehdi; Nasrabadi, Ali Motie; Shamsollahi, Mohammad Bagher
2017-09-01
Effective connectivity is one of the most important considerations in brain functional mapping via EEG. It describes the effects of one active brain region on others. In this paper, a new method based on the dual Kalman filter is proposed. In this method, a brain activity localization method (standardized low-resolution brain electromagnetic tomography) is first applied to the EEG signal to extract the active regions, and an appropriate temporal model (a multivariate autoregressive model) is fitted to the extracted active sources to evaluate their activity and the time dependence between them. A dual Kalman filter is then used to estimate the model parameters, i.e., the effective connectivity between the active regions. The advantage of this method is that the activity of different brain regions is estimated simultaneously with the calculation of the effective connectivity between them. By combining the dual Kalman filter with brain source localization methods, the source activity is updated over time in addition to the connectivity estimation. The performance of the proposed method was first evaluated on simulated EEG signals with simulated interacting connectivity between active regions. Noisy simulated signals with different signal-to-noise ratios were used to evaluate the method's sensitivity to noise and to compare its performance with other methods. The method was then applied to real signals, and the estimation error over a sliding window was calculated. Across the different scenarios (simulated and real signals), the proposed method gives acceptable results, with the least mean square error under noisy and real conditions.
Majdak, Piotr; Goupell, Matthew J; Laback, Bernhard
2010-02-01
The ability to localize sound sources in three-dimensional space was tested in humans. In Experiment 1, naive subjects listened to noises filtered with subject-specific head-related transfer functions. The tested conditions included the pointing method (head or manual pointing) and the visual environment (VE; darkness or virtual VE). The localization performance was not significantly different between the pointing methods. The virtual VE significantly improved the horizontal precision and reduced the number of front-back confusions. These results show the benefit of using a virtual VE in sound localization tasks. In Experiment 2, subjects were provided with sound localization training. Over the course of training, the performance improved for all subjects, with the largest improvements occurring during the first 400 trials. The improvements beyond the first 400 trials were smaller. After the training, there was still no significant effect of pointing method, showing that the choice of either head- or manual-pointing method plays a minor role in sound localization performance. The results of Experiment 2 reinforce the importance of perceptual training for at least 400 trials in sound localization studies.
A Subspace Pursuit–based Iterative Greedy Hierarchical Solution to the Neuromagnetic Inverse Problem
Babadi, Behtash; Obregon-Henao, Gabriel; Lamus, Camilo; Hämäläinen, Matti S.; Brown, Emery N.; Purdon, Patrick L.
2013-01-01
Magnetoencephalography (MEG) is an important non-invasive method for studying activity within the human brain. Source localization methods can be used to estimate spatiotemporal activity from MEG measurements with high temporal resolution, but the spatial resolution of these estimates is poor due to the ill-posed nature of the MEG inverse problem. Recent developments in source localization methodology have emphasized temporal as well as spatial constraints to improve source localization accuracy, but these methods can be computationally intense. Solutions emphasizing spatial sparsity hold tremendous promise, since the underlying neurophysiological processes generating MEG signals are often sparse in nature, whether in the form of focal sources, or distributed sources representing large-scale functional networks. Recent developments in the theory of compressed sensing (CS) provide a rigorous framework to estimate signals with sparse structure. In particular, a class of CS algorithms referred to as greedy pursuit algorithms can provide both high recovery accuracy and low computational complexity. Greedy pursuit algorithms are difficult to apply directly to the MEG inverse problem because of the high-dimensional structure of the MEG source space and the high spatial correlation in MEG measurements. In this paper, we develop a novel greedy pursuit algorithm for sparse MEG source localization that overcomes these fundamental problems. This algorithm, which we refer to as the Subspace Pursuit-based Iterative Greedy Hierarchical (SPIGH) inverse solution, exhibits very low computational complexity while achieving very high localization accuracy. We evaluate the performance of the proposed algorithm using comprehensive simulations, as well as the analysis of human MEG data during spontaneous brain activity and somatosensory stimuli. 
These studies reveal substantial performance gains provided by the SPIGH algorithm in terms of computational complexity, localization accuracy, and robustness. PMID:24055554
DOE Office of Scientific and Technical Information (OSTI.GOV)
Miller, Erin A.; Robinson, Sean M.; Anderson, Kevin K.
2015-01-19
Here we present a novel technique for the localization of radiological sources in urban or rural environments from an aerial platform. The technique is based on a Bayesian approach to localization, in which measured count rates in a time series are compared with predicted count rates from a series of pre-calculated test sources to define a likelihood. Furthermore, this technique is expanded by using a localized treatment with a limited field of view (FOV), coupled with a likelihood ratio reevaluation, allowing for real-time computation on commodity hardware for arbitrarily complex detector models and terrain. In particular, detectors with inherent asymmetry of response (such as those employing internal collimation or self-shielding for enhanced directional awareness) are leveraged by this approach to provide improved localization. Results from the localization technique are shown for simulated flight data using monolithic as well as directionally aware detector models, and the capability of the methodology to locate radioisotopes is estimated for several test cases. This localization technique is shown to facilitate urban search by allowing quick and adaptive estimates of source location, in many cases from a single flyover near a source. In particular, this method represents a significant advancement over earlier methods such as full-field Bayesian likelihood, which is not generally fast enough to allow broad-field search in real time, and highest-net-counts estimation, whose localization error depends strongly on the flight path and which cannot generally operate without an exhaustive search.
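The count-rate likelihood comparison at the core of this approach can be sketched with a Poisson model. The flight line, candidate grid, source strength, and inverse-square detector response below are invented for the illustration (a real system would use pre-calculated responses for complex detector models and terrain):

```python
import numpy as np

rng = np.random.default_rng(2)
bkg, S = 20.0, 500.0                               # background and source terms (counts/s)
det = [(i * 0.5, 2.0) for i in range(12)]          # measurement points along a flight line
cells = [(0.5 * ix, 0.5 * iy) for ix in range(11) for iy in range(5)]  # test-source grid
true = (1.0, 0.0)

def rate(cell, p):
    """Predicted count rate: background plus inverse-square source response."""
    d2 = (p[0] - cell[0]) ** 2 + (p[1] - cell[1]) ** 2 + 1.0
    return bkg + S / d2

counts = np.array([rng.poisson(rate(true, p)) for p in det])  # measured time series

def loglik(cell):
    """Poisson log-likelihood of the measured counts for a test source cell."""
    lam = np.array([rate(cell, p) for p in det])
    return float(np.sum(counts * np.log(lam) - lam))

best = max(cells, key=loglik)  # maximum-likelihood source cell
```

The limited-FOV refinement in the paper restricts which cells and which time-series segments enter this comparison, which is what makes the evaluation cheap enough for real time.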
NASA Astrophysics Data System (ADS)
Saracco, Ginette; Moreau, Frédérique; Mathé, Pierre-Etienne; Hermitte, Daniel; Michel, Jean-Marie
2007-10-01
We have previously developed a method for characterizing and localizing `homogeneous' buried sources from measurements of potential anomalies at a fixed height above ground (magnetic, electric and gravity). This method is based on potential theory and uses the properties of the Poisson kernel (real by definition) and continuous wavelet theory. Here, we relax the assumption on sources and introduce a method that we call the `multiscale tomography'. Our approach is based on the harmonic extension of the observed magnetic field to produce a complex source by use of a complex Poisson kernel, a solution of the Laplace equation for a complex potential field. A phase and modulus are defined. We show that the phase provides additional information on the total magnetic inclination and the structure of sources, while the modulus allows us to characterize their spatial location, depth and `effective degree'. This method is compared to the `complex dipolar tomography', an extension of the Patella method that we previously developed. We applied both methods, together with classical electrical resistivity tomography, to detect and localize buried archaeological structures such as antique ovens from magnetic measurements at the Fox-Amphoux site (France). The estimates are then compared with the results of excavations.
Padois, Thomas; Prax, Christian; Valeau, Vincent; Marx, David
2012-10-01
The possibility of using the time-reversal technique to localize acoustic sources in a wind-tunnel flow is investigated. While the technique is widespread, it has scarcely been used in aeroacoustics up to now. The proposed method consists of two steps: in a first, experimental step, the acoustic pressure fluctuations are recorded over a linear array of microphones; in a second, numerical step, the experimental data are time-reversed and used as input data for a numerical code solving the linearized Euler equations. The simulation achieves the back-propagation of the waves from the array to the source and takes into account the effect of the mean flow on sound propagation. The ability of the method to localize a sound source in a typical wind-tunnel flow is first demonstrated using simulated data. A generic experiment is then set up in an anechoic wind tunnel to validate the proposed method with a flow at Mach number 0.11. Monopolar sources are considered first, either monochromatic or with narrow- or wide-band frequency content. The source position is estimated accurately, with an error smaller than the wavelength. An application to a dipolar sound source shows that this type of source is also very satisfactorily characterized.
Functional connectivity analysis in EEG source space: The choice of method
Knyazeva, Maria G.
2017-01-01
Functional connectivity (FC) is among the most informative features derived from EEG. However, the most straightforward sensor-space analysis of FC is unreliable owing to volume conductance effects. An alternative—source-space analysis of FC—is optimal for high- and mid-density EEG (hdEEG, mdEEG); however, it is questionable for widely used low-density EEG (ldEEG) because of inadequate surface sampling. Here, using simulations, we investigate the performance of the two source FC methods, the inverse-based source FC (ISFC) and the cortical partial coherence (CPC). To examine the effects of localization errors of the inverse method on the FC estimation, we simulated an oscillatory source with varying locations and SNRs. To compare the FC estimations by the two methods, we simulated two synchronized sources with varying between-source distance and SNR. The simulations were implemented for hdEEG, mdEEG, and ldEEG. We showed that the performance of both methods deteriorates for deep sources owing to their inaccurate localization and smoothing. The accuracy of both methods improves with the increasing between-source distance. The best ISFC performance was achieved using hd/mdEEG, while the best CPC performance was observed with ldEEG. In conclusion, with hdEEG, ISFC outperforms CPC and therefore should be the preferred method. In the studies based on ldEEG, the CPC is a method of choice. PMID:28727750
Partial differential equation-based localization of a monopole source from a circular array.
Ando, Shigeru; Nara, Takaaki; Levy, Tsukassa
2013-10-01
Wave source localization from a sensor array has long been among the most active research topics in both theory and application. In this paper, an explicit, time-domain inversion method for the direction and distance of a monopole source from a circular array is proposed. The approach is based on a mathematical technique, the weighted integral method, for signal/source parameter estimation. It begins with an exact form of the source-constraint partial differential equation that describes the unilateral propagation of wide-band waves from a single source, and leads to exact algebraic equations that include circular Fourier coefficients (phase mode measurements) as their coefficients. From them, nearly closed-form, single-shot and multishot algorithms are obtained that are suitable for use with band-pass/differential filter banks. Numerical evaluation and several experimental results obtained using a 16-element circular microphone array are presented to verify the validity of the proposed method.
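The phase-mode idea — reading a source bearing off circular Fourier coefficients — can be sketched for the simplest far-field, single-frequency case (a hedged illustration using only the first phase mode of a plane wave on a circular array; the paper's time-domain weighted-integral machinery and distance estimation are not reproduced here):

```python
import numpy as np

def bearing_from_phase_mode(p, phis):
    """Azimuth of a far-field source from the m = 1 circular Fourier coefficient.

    For a plane wave p_n = exp(1j*k*a*cos(phi_n - theta)) sampled at sensor
    angles phi_n, the Jacobi-Anger expansion gives the first phase mode
    c_1 = 1j * J_1(k*a) * exp(-1j*theta), so theta = pi/2 - arg(c_1)
    (valid while J_1(k*a) > 0, i.e. k*a below the first Bessel zero ~3.83).
    """
    c1 = np.mean(p * np.exp(-1j * phis))
    return np.pi / 2 - np.angle(c1)
```

The discrete mean over a uniform circular array extracts the m = 1 coefficient exactly up to aliasing from modes m ± N, which are negligible for small k*a.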
Methods of localization of Lamb wave sources on thin plates
NASA Astrophysics Data System (ADS)
Turkaya, Semih; Toussaint, Renaud; Kvalheim Eriksen, Fredrik; Daniel, Guillaume; Grude Flekkøy, Eirik; Jørgen Måløy, Knut
2015-04-01
Signal localization techniques are ubiquitous in both industry and academic communities. We propose a new localization method for plates based on energy amplitude attenuation and inverted source amplitude comparison. This inversion is tested on synthetic data using a Lamb wave propagation direct model and on an experimental dataset (recorded with four Brüel & Kjær Type 4374 miniature piezoelectric shock accelerometers, 1-26 kHz frequency range). We compare the performance of the technique to classical source localization algorithms: arrival-time localization, time-reversal localization, and localization based on energy amplitude. Furthermore, we measure and compare the accuracy of these techniques as a function of sampling rate, dynamic range, geometry, and signal-to-noise ratio, and we show that this very versatile technique works better than the classical ones over sampling rates of 100 kHz-1 MHz. The experimental setup consists of a glass plate of 80 cm x 40 cm with a thickness of 1 cm. Signals generated by a wooden hammer hit or a steel ball hit are captured by the sensors placed at different locations on the plate. Numerical simulations are done using a dispersive far-field approximation of plate waves. Signals are generated using a Hertzian loading over the plate. Using imaginary sources outside the plate boundaries, the effect of reflections is also included. The proposed method can be adapted to 3D environments and used to monitor industrial activities (e.g., borehole drilling/production) or natural brittle systems (e.g., earthquakes, volcanoes, avalanches).
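The inverted-source-amplitude idea can be sketched as a grid search that back-projects each sensor's amplitude to an implied source amplitude and picks the point where the implied values agree best (a minimal sketch assuming simple 1/sqrt(r) geometric spreading on a plate, ignoring dispersion, material attenuation, and reflections; the plate and sensor geometry below mirror the abstract but the model is illustrative):

```python
import numpy as np

def locate_by_amplitude(sensors, amps, grid):
    """Grid search: back-project each measured amplitude to an implied source
    amplitude and pick the point where the implied values agree best."""
    best, best_spread = None, np.inf
    for p in grid:
        r = np.linalg.norm(sensors - p, axis=1)
        src_amp = amps * np.sqrt(r)  # assumed ~1/sqrt(r) geometric spreading on a plate
        spread = np.std(src_amp) / np.mean(src_amp)  # relative disagreement
        if spread < best_spread:
            best, best_spread = p, spread
    return best
```

At the true source position the back-projected amplitudes coincide, so the relative spread vanishes; the method needs no arrival-time picking, which is its advantage at low sampling rates.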
NASA Astrophysics Data System (ADS)
Chen, Xin; Wang, Shuhong; Liu, Zhen; Wei, Xizhang
2017-07-01
Localization of a source whose half-wavelength is smaller than the array aperture suffers from a serious phase ambiguity problem, which also appears in recently proposed phase-based algorithms. In this paper, by using the centro-symmetry of a fixed uniform circular array (UCA) with an even number of sensors, the source's angles and range can be decoupled, and a novel ambiguity-resolving approach is proposed for phase-based algorithms of 3-D source localization (azimuth angle, elevation angle, and range). In the proposed method, by using the cosine property of unambiguous phase differences, ambiguity searching and actual-value matching are first employed to obtain actual phase differences and the corresponding source angles. Then, the unambiguous angles are utilized to estimate the source's range based on a one-dimensional multiple signal classification (1-D MUSIC) estimator. Finally, simulation experiments investigate the influence of search step size and SNR on the performance of ambiguity resolution and demonstrate the satisfactory estimation performance of the proposed method.
NASA Astrophysics Data System (ADS)
Gang, Yin; Yingtang, Zhang; Hongbo, Fan; Zhining, Li; Guoquan, Ren
2016-05-01
We have developed a method for automatic detection, localization and classification (DLC) of multiple dipole sources using magnetic gradient tensor data. First, we define modified tilt angles to estimate the approximate horizontal locations of the multiple dipole-like magnetic sources simultaneously and detect the number of magnetic sources using a fixed threshold. Secondly, based on the isotropy of the normalized source strength (NSS) response of a dipole, we obtain accurate horizontal locations of the dipoles. Then the vertical locations are calculated using magnitude magnetic transforms of the magnetic gradient tensor data. Finally, we invert for the magnetic moments of the sources using the measured magnetic gradient tensor data and a forward model. Synthetic and field data sets demonstrate the effectiveness and practicality of the proposed method.
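The final moment-inversion step is linear once the source locations are known, and can be sketched directly (a generic point-dipole field model in SI units, fitting field components rather than the paper's gradient tensor components; the tilt-angle and NSS detection steps are not reproduced):

```python
import numpy as np

MU0_4PI = 1e-7  # mu_0 / (4*pi), SI

def dipole_field(r_obs, r_src, m):
    """Magnetic flux density of a point dipole with moment m (SI units)."""
    r = r_obs - r_src
    rn = np.linalg.norm(r)
    return MU0_4PI * (3.0 * r * (m @ r) / rn**5 - m / rn**3)

def invert_moment(obs_pts, fields, r_src):
    """Least-squares moment inversion: the field is linear in m once the
    source location is fixed, so stacking one 3x3 block per station suffices."""
    rows = []
    for ro in obs_pts:
        r = ro - r_src
        rn = np.linalg.norm(r)
        rows.append(MU0_4PI * (3.0 * np.outer(r, r) / rn**5 - np.eye(3) / rn**3))
    G = np.vstack(rows)
    m, *_ = np.linalg.lstsq(G, np.concatenate(fields), rcond=None)
    return m
```

This separation — nonlinear search for positions, linear solve for moments — is a common design choice because it keeps the expensive nonlinear stage low-dimensional.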
Grova, Christophe; Aiguabella, Maria; Zelmann, Rina; Lina, Jean-Marc; Hall, Jeffery A; Kobayashi, Eliane
2016-05-01
Detection of epileptic spikes in MagnetoEncephaloGraphy (MEG) requires synchronized neuronal activity over a minimum of 4 cm2. We previously validated the Maximum Entropy on the Mean (MEM) as a source localization method able to recover the spatial extent of the epileptic spike generators. The purpose of this study was to evaluate quantitatively, using intracranial EEG (iEEG), the spatial extent recovered from MEG sources by estimating iEEG potentials generated by these MEG sources. We evaluated five patients with focal epilepsy who had a pre-operative MEG acquisition and iEEG with MRI-compatible electrodes. Individual MEG epileptic spikes were localized along the cortical surface segmented from a pre-operative MRI, which was co-registered with the MRI obtained with iEEG electrodes in place for identification of iEEG contacts. An iEEG forward model estimated the influence of every dipolar source of the cortical surface on each iEEG contact. This iEEG forward model was applied to MEG sources to estimate iEEG potentials that would have been generated by these sources. MEG-estimated iEEG potentials were compared with measured iEEG potentials using four source localization methods: two variants of MEM and two standard methods equivalent to minimum norm and LORETA estimates. Our results demonstrated an excellent MEG/iEEG correspondence in the presumed focus for four out of five patients. In one patient, the deep generator identified in iEEG could not be localized in MEG. MEG-estimated iEEG potentials are a promising way to evaluate which MEG sources can be retrieved and validated with iEEG data, providing accurate results especially when applied to MEM localizations. Hum Brain Mapp 37:1661-1683, 2016. © 2016 Wiley Periodicals, Inc.
The Green's functions for peridynamic non-local diffusion.
Wang, L J; Xu, J F; Wang, J X
2016-09-01
In this work, we develop the Green's function method for the solution of the peridynamic non-local diffusion model in which the spatial gradient of the generalized potential in the classical theory is replaced by an integral of a generalized response function in a horizon. We first show that the general solutions of the peridynamic non-local diffusion model can be expressed as functionals of the corresponding Green's functions for point sources, along with volume constraints for non-local diffusion. Then, we obtain the Green's functions by the Fourier transform method for unsteady and steady diffusions in infinite domains. We also demonstrate that the peridynamic non-local solutions converge to the classical differential solutions when the non-local length approaches zero. Finally, the peridynamic analytical solutions are applied to an infinite plate heated by a Gauss source, and the predicted variations of temperature are compared with the classical local solutions. The peridynamic non-local diffusion model predicts a lower rate of variation of the field quantities than that of the classical theory, which is consistent with experimental observations. The developed method is applicable to general diffusion-type problems.
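As a point of reference for the local limit discussed above, the classical 1-D diffusion Green's function that the peridynamic solution converges to (as the non-local horizon shrinks to zero) can be written down directly — a standard textbook kernel, not the peridynamic Green's function itself:

```python
import numpy as np

def heat_kernel(x, t, D=1.0):
    """Classical 1-D diffusion Green's function for a unit point source at x = 0:
    G(x, t) = exp(-x^2 / (4 D t)) / sqrt(4 pi D t)."""
    return np.exp(-x**2 / (4.0 * D * t)) / np.sqrt(4.0 * np.pi * D * t)
```

The kernel conserves the source mass and spreads with variance 2*D*t; the peridynamic solution's slower rate of variation, noted in the abstract, is measured against exactly this local benchmark.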
Mouthaan, Brian E; Rados, Matea; Barsi, Péter; Boon, Paul; Carmichael, David W; Carrette, Evelien; Craiu, Dana; Cross, J Helen; Diehl, Beate; Dimova, Petia; Fabo, Daniel; Francione, Stefano; Gaskin, Vladislav; Gil-Nagel, Antonio; Grigoreva, Elena; Guekht, Alla; Hirsch, Edouard; Hecimovic, Hrvoje; Helmstaedter, Christoph; Jung, Julien; Kalviainen, Reetta; Kelemen, Anna; Kimiskidis, Vasilios; Kobulashvili, Teia; Krsek, Pavel; Kuchukhidze, Giorgi; Larsson, Pål G; Leitinger, Markus; Lossius, Morten I; Luzin, Roman; Malmgren, Kristina; Mameniskiene, Ruta; Marusic, Petr; Metin, Baris; Özkara, Cigdem; Pecina, Hrvoje; Quesada, Carlos M; Rugg-Gunn, Fergus; Rydenhag, Bertil; Ryvlin, Philippe; Scholly, Julia; Seeck, Margitta; Staack, Anke M; Steinhoff, Bernhard J; Stepanov, Valentin; Tarta-Arsene, Oana; Trinka, Eugen; Uzan, Mustafa; Vogt, Viola L; Vos, Sjoerd B; Vulliémoz, Serge; Huiskamp, Geertjan; Leijten, Frans S S; Van Eijsden, Pieter; Braun, Kees P J
2016-05-01
In 2014 the European Union-funded E-PILEPSY project was launched to improve awareness of, and accessibility to, epilepsy surgery across Europe. We aimed to investigate the current use of neuroimaging, electromagnetic source localization, and imaging postprocessing procedures in participating centers. A survey on the clinical use of imaging, electromagnetic source localization, and postprocessing methods in epilepsy surgery candidates was distributed among the 25 centers of the consortium. A descriptive analysis was performed, and results were compared to existing guidelines and recommendations. Response rate was 96%. Standard epilepsy magnetic resonance imaging (MRI) protocols are acquired at 3 Tesla by 15 centers and at 1.5 Tesla by 9 centers. Three centers perform 3T MRI only if indicated. Twenty-six different MRI sequences were reported. Six centers follow all guideline-recommended MRI sequences with the proposed slice orientation and slice thickness or voxel size. Additional sequences are used by 22 centers. MRI postprocessing methods are used in 16 centers. Interictal positron emission tomography (PET) is available in 22 centers; all using 18F-fluorodeoxyglucose (FDG). Seventeen centers perform PET postprocessing. Single-photon emission computed tomography (SPECT) is used by 19 centers, of which 15 perform postprocessing. Four centers perform neither PET nor SPECT in children. Seven centers apply magnetoencephalography (MEG) source localization, and nine apply electroencephalography (EEG) source localization. Fourteen combinations of inverse methods and volume conduction models are used. We report a large variation in the presurgical diagnostic workup among epilepsy surgery centers across Europe. This diversity underscores the need for high-quality systematic reviews, evidence-based recommendations, and harmonization of available diagnostic presurgical methods. Wiley Periodicals, Inc. © 2016 International League Against Epilepsy.
Real-time realizations of the Bayesian Infrasonic Source Localization Method
NASA Astrophysics Data System (ADS)
Pinsky, V.; Arrowsmith, S.; Hofstetter, A.; Nippress, A.
2015-12-01
The Bayesian Infrasonic Source Localization method (BISL), introduced by Modrak et al. (2010) and upgraded by Marcillo et al. (2014), is intended for accurate estimation of atmospheric event origins at local, regional and global scales by seismic and infrasonic networks and arrays. BISL is based on probabilistic models of the source-station infrasonic signal propagation time, picking time and azimuth estimate, merged with prior knowledge about the celerity distribution. It requires, at each hypothetical source location, integration of the product of the corresponding source-station likelihood functions, multiplied by a prior probability density function of celerity, over the multivariate parameter space. The present BISL realization is a generally time-consuming procedure based on numerical integration. The computational scheme proposed here simplifies the target function so that the integrals are taken exactly and are represented via standard functions. This makes the procedure much faster and realizable in real time without practical loss of accuracy. The procedure, implemented as Python/Fortran code, demonstrates high performance on a set of model and real data.
Shen, Hui-min; Lee, Kok-Meng; Hu, Liang; Foong, Shaohui; Fu, Xin
2016-01-01
Localization of an active neural source (ANS) from measurements on the head surface is vital in magnetoencephalography. As neuron-generated magnetic fields are extremely weak, significant uncertainties caused by stochastic measurement interference complicate its localization. This paper presents a novel computational method based on reconstructed magnetic fields from sparse noisy measurements for enhanced ANS localization by suppressing the effects of unrelated noise. In this approach, the magnetic flux density (MFD) in the nearby current-free space outside the head is reconstructed from measurements by formulating the infinite series solution of Laplace's equation, where boundary condition (BC) integrals over the entire measurements provide a "smooth" reconstructed MFD with decreased unrelated noise. Using a gradient-based method, reconstructed MFDs with good fidelity are selected for enhanced ANS localization. The reconstruction model, spatial interpolation of the BC, the parametric equivalent current dipole-based inverse estimation algorithm using reconstruction, and the gradient-based selection are detailed and validated. The influences of various source depths and measurement signal-to-noise ratio levels on the estimated ANS location are analyzed numerically and compared with a traditional method (where measurements are used directly), and it is demonstrated that gradient-selected high-fidelity reconstructed data can effectively improve the accuracy of ANS localization.
Zhang, Lanyue; Ding, Dandan; Yang, Desen; Wang, Jia; Shi, Jie
2017-01-01
Spherical microphone arrays have been paid increasing attention for their ability to locate a sound source with arbitrary incident angle in three-dimensional space. Low-frequency sound sources are usually located by using spherical near-field acoustic holography. The reconstruction surface and holography surface are conformal surfaces in the conventional sound field transformation based on generalized Fourier transform. When the sound source is on the cylindrical surface, it is difficult to locate by using spherical surface conformal transform. The non-conformal sound field transformation by making a transfer matrix based on spherical harmonic wave decomposition is proposed in this paper, which can achieve the transformation of a spherical surface into a cylindrical surface by using spherical array data. The theoretical expressions of the proposed method are deduced, and the performance of the method is simulated. Moreover, the experiment of sound source localization by using a spherical array with randomly and uniformly distributed elements is carried out. Results show that the non-conformal surface sound field transformation from a spherical surface to a cylindrical surface is realized by using the proposed method. The localization deviation is around 0.01 m, and the resolution is around 0.3 m. The application of the spherical array is extended, and the localization ability of the spherical array is improved. PMID:28489065
Exploring three faint source detections methods for aperture synthesis radio images
NASA Astrophysics Data System (ADS)
Peracaula, M.; Torrent, A.; Masias, M.; Lladó, X.; Freixenet, J.; Martí, J.; Sánchez-Sutil, J. R.; Muñoz-Arjonilla, A. J.; Paredes, J. M.
2015-04-01
Wide-field radio interferometric images often contain a large population of faint compact sources. Due to their low intensity/noise ratio, these objects can be easily missed by automated detection methods, which have classically been based on thresholding techniques after local noise estimation. The aim of this paper is to present and analyse the performance of several alternative or complementary techniques to thresholding. We compare three different algorithms to increase the detection rate of faint objects. The first technique consists of combining wavelet decomposition with local thresholding. The second technique is based on the structural behaviour of the neighbourhood of each pixel. Finally, the third algorithm uses local features extracted from a bank of filters and a boosting classifier to perform the detections. The methods' performances are evaluated using simulations and radio mosaics from the Giant Metrewave Radio Telescope and the Australia Telescope Compact Array. We show that the new methods perform better than well-known state-of-the-art methods such as SEXTRACTOR, SAD and DUCHAMP at detecting faint sources in radio interferometric images.
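The classical baseline that these methods improve on — thresholding after local noise estimation — can be sketched in a few lines (a naive tile-based sigma clip with a MAD noise estimate; illustrative only, and far simpler than SEXTRACTOR or the paper's wavelet and boosting-classifier pipelines):

```python
import numpy as np

def detect_sources(img, k=5.0, box=16):
    """Flag pixels exceeding a per-tile threshold of median + k * sigma,
    with sigma estimated robustly from the median absolute deviation (MAD)."""
    det = np.zeros(img.shape, dtype=bool)
    for i in range(0, img.shape[0], box):
        for j in range(0, img.shape[1], box):
            tile = img[i:i + box, j:j + box]
            med = np.median(tile)
            sigma = 1.4826 * np.median(np.abs(tile - med))  # MAD -> Gaussian sigma
            det[i:i + box, j:j + box] = tile > med + k * (sigma + 1e-12)
    return det
```

The weakness the paper targets is visible in the threshold itself: any source below k sigma of the local noise is invisible to this scheme regardless of its spatial structure.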
Adjoint-tomography for a Local Surface Structure: Methodology and a Blind Test
NASA Astrophysics Data System (ADS)
Kubina, Filip; Michlik, Filip; Moczo, Peter; Kristek, Jozef; Stripajova, Svetlana
2017-04-01
We have developed a multiscale full-waveform adjoint-tomography method for local surface sedimentary structures with complicated interference wavefields. Local surface sedimentary basins and valleys are often responsible for anomalous earthquake ground motions and corresponding damage in earthquakes. In many cases only a relatively small number of records from a few local earthquakes is available for a site of interest. Consequently, prediction of earthquake ground motion at the site has to include numerical modeling for a realistic model of the local structure. Though limited, the information about the local structure encoded in the records is important and irreplaceable. It is therefore reasonable to have a method capable of using the limited information in records for improving a model of the local structure. A local surface structure and its interference wavefield require a specific multiscale approach. In order to verify our inversion method, we performed a blind test. We obtained synthetic seismograms at 8 receivers for 2 local sources, a complete description of the sources, the positions of the receivers and the material parameters of the bedrock. We considered the simplest possible starting model - a homogeneous halfspace made of the bedrock. Using our inversion method we obtained an inverted model. Given the starting model, synthetic seismograms simulated for the inverted model are surprisingly close to the synthetic seismograms simulated for the true structure in the target frequency range up to 4.5 Hz. We quantify the level of agreement between the true and inverted seismograms using the L2 and time-frequency misfits and, more importantly for earthquake-engineering applications, also using goodness-of-fit criteria based on earthquake-engineering characteristics of ground motion. We also verified the inverted model for other source-receiver configurations not used in the inversion.
Xu, Peng; Tian, Yin; Lei, Xu; Hu, Xiao; Yao, Dezhong
2008-12-01
How to localize neural electric activity within the brain effectively and precisely from scalp electroencephalogram (EEG) recordings is a critical issue in clinical neurology and cognitive neuroscience. In this paper, based on the charge source model and the iterative re-weighted strategy, we propose a new maximum-neighbor-weight based iterative sparse source imaging method, termed CMOSS (Charge source model based Maximum neighbOr weight Sparse Solution). Different from the weight used in the focal underdetermined system solver (FOCUSS), where the weight for each point in the discrete solution space is updated independently across iterations, the newly designed weight for each point in each iteration is determined by the source solution of the last iteration at both the point and its neighbors. Using such a weight, the next iteration has a better chance of rectifying the local source-location bias present in the previous iteration's solution. Simulation studies with comparison to FOCUSS and LORETA for various source configurations were conducted on a realistic 3-shell head model, and the results confirmed the validity of CMOSS for sparse EEG source localization. Finally, CMOSS was applied to localize sources elicited in a visual stimuli experiment, and the result was consistent with the source areas involved in visual processing reported in previous studies.
Localization of source with unknown amplitude using IPMC sensor arrays
NASA Astrophysics Data System (ADS)
Abdulsadda, Ahmad T.; Zhang, Feitian; Tan, Xiaobo
2011-04-01
The lateral line system, consisting of arrays of neuromasts functioning as flow sensors, is an important sensory organ for fish that enables them to detect predators, locate prey, perform rheotaxis, and coordinate schooling. Creating artificial lateral line systems is of significant interest since it will provide a new sensing mechanism for control and coordination of underwater robots and vehicles. In this paper we propose recursive algorithms for localizing a vibrating sphere, also known as a dipole source, based on measurements from an array of flow sensors. A dipole source is frequently used in the study of biological lateral lines as a surrogate for underwater motion sources such as a flapping fish fin. We first formulate a nonlinear estimation problem based on an analytical model for the dipole-generated flow field. Two algorithms are presented to estimate both the source location and the vibration amplitude, one based on the least squares method and the other based on the Newton-Raphson method. Simulation results show that both methods deliver comparable performance in source localization. A prototype artificial lateral line system comprising four ionic polymer-metal composite (IPMC) sensors is built, and experimental results are further presented to demonstrate the effectiveness of IPMC lateral line systems and the proposed estimation algorithms.
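The joint position-and-amplitude estimation can be sketched with a damped Gauss-Newton least-squares fit (a hedged stand-in using a simplified A/r^3 amplitude-decay model rather than the paper's full dipole flow field; the Newton-Raphson variant differs only in the update step):

```python
import numpy as np

def localize_dipole(sensors, meas, q0, n_iter=50):
    """Damped Gauss-Newton fit of source position (x, y) and amplitude A to
    measured amplitudes, assuming a simple A / r^3 dipole decay."""
    def model(q):
        r = np.hypot(sensors[:, 0] - q[0], sensors[:, 1] - q[1])
        return q[2] / r**3

    q = np.asarray(q0, dtype=float)
    for _ in range(n_iter):
        res = model(q) - meas
        # central-difference Jacobian in (x, y, A)
        J = np.empty((len(res), 3))
        for j in range(3):
            dq = np.zeros(3)
            dq[j] = 1e-6
            J[:, j] = (model(q + dq) - model(q - dq)) / 2e-6
        step, *_ = np.linalg.lstsq(J, res, rcond=None)
        # backtracking line search keeps the residual decreasing
        t = 1.0
        while t > 1e-4 and np.linalg.norm(model(q - t * step) - meas) >= np.linalg.norm(res):
            t *= 0.5
        q = q - t * step
        if np.linalg.norm(t * step) < 1e-12:
            break
    return q
```

Because the model is linear in the amplitude A, a common refinement is to solve for A in closed form at each candidate position, reducing the nonlinear search to two dimensions.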
Near-Field Source Localization by Using Focusing Technique
NASA Astrophysics Data System (ADS)
He, Hongyang; Wang, Yide; Saillard, Joseph
2008-12-01
We discuss two fast algorithms to localize multiple sources in the near field. The symmetry-based method proposed by Zhi and Chia (2007) is first improved by implementing a search-free procedure to reduce the computation cost. We then present a focusing-based method which does not require a symmetric array configuration. By using the focusing technique, the near-field signal model is transformed into a model possessing the same structure as in the far-field situation, which allows bearing estimation with well-studied far-field methods. With the estimated bearing, the range estimate of each source is consequently obtained by using the 1-D MUSIC method without parameter pairing. The performance of the improved symmetry-based method and the proposed focusing-based method is compared by Monte Carlo simulations, as well as against the Cramér-Rao bound. Unlike other near-field algorithms, these two approaches require neither high computation cost nor high-order statistics.
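The far-field bearing step that the focusing transformation enables can be sketched with a standard 1-D MUSIC spectrum on a uniform linear array (a generic textbook estimator, not the focusing transformation or the range stage; half-wavelength element spacing is assumed):

```python
import numpy as np

def music_spectrum(X, n_src, d=0.5, n_grid=1801):
    """1-D MUSIC pseudo-spectrum for a uniform linear array (d in wavelengths)."""
    M, N = X.shape
    R = X @ X.conj().T / N                    # sample covariance
    _, V = np.linalg.eigh(R)                  # eigenvalues in ascending order
    En = V[:, : M - n_src]                    # noise subspace
    thetas = np.linspace(-90.0, 90.0, n_grid)
    m = np.arange(M)
    spec = np.empty(n_grid)
    for i, th in enumerate(thetas):
        a = np.exp(2j * np.pi * d * m * np.sin(np.radians(th)))
        spec[i] = 1.0 / np.linalg.norm(En.conj().T @ a) ** 2
    return thetas, spec
```

In the focusing approach, the same estimator is reused a second time over a 1-D range grid with near-field steering vectors at the already-estimated bearing, which is why no parameter pairing is needed.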
NASA Technical Reports Server (NTRS)
Bosilovich, Michael G.; Atlas, Robert (Technical Monitor)
2002-01-01
Precipitation recycling is defined as the amount of water that evaporates from a region and precipitates within the same region. This is also interpreted as the local source of water for precipitation. In this study, the local and remote sources of water for precipitation have been diagnosed through the use of passive constituent tracers that represent regional evaporative sources along with their transport and precipitation. We will discuss the differences between this method and the simpler bulk diagnostic approach to precipitation recycling. A summer seasonal simulation has been analyzed for the regional sources of United States Great Plains precipitation. While the tropical Atlantic Ocean (including the Gulf of Mexico) and the local continental sources of precipitation are most dominant, the vertically integrated column of water contains substantial water content originating from the Northern Pacific Ocean, which is not precipitated. The vertical profiles of regional water sources indicate that the local Great Plains source of water dominates the lower troposphere, predominantly in the PBL. However, the Pacific Ocean source is dominant over a large portion of the middle to upper troposphere. The influence of the tropical Atlantic Ocean is reasonably uniform throughout the column. While the results are not unexpected given the formulation of the model's convective parameterization, the analysis provides a quantitative assessment of the impact of local evaporation on the occurrence of convective precipitation in the GCM. Further, these results suggest that the local source of water is not well mixed throughout the vertical column.
Sparse reconstruction localization of multiple acoustic emissions in large diameter pipelines
NASA Astrophysics Data System (ADS)
Dubuc, Brennan; Ebrahimkhanlou, Arvin; Salamone, Salvatore
2017-04-01
A sparse reconstruction localization method is proposed, which is capable of localizing multiple acoustic emission events occurring closely in time. The events may be due to a number of sources, such as the growth of corrosion patches or cracks. Such acoustic emissions may yield localization failure if a triangulation method is used. The proposed method is implemented both theoretically and experimentally on large diameter thin-walled pipes. Experimental examples are presented, which demonstrate the failure of a triangulation method when multiple sources are present in this structure, while highlighting the capabilities of the proposed method. The examples are generated from experimental data of simulated acoustic emission events. The data correspond to helical guided ultrasonic waves generated in a 3 m long large diameter pipe by pencil lead breaks on its outer surface. Acoustic emission waveforms are recorded by six sparsely distributed low-profile piezoelectric transducers instrumented on the outer surface of the pipe. The same array of transducers is used for both the proposed and the triangulation method. It is demonstrated that the proposed method is able to localize multiple events occurring closely in time. Furthermore, the matching pursuit algorithm and the basis pursuit denoising approach are each evaluated as potential numerical tools in the proposed sparse reconstruction method.
A Direct Position-Determination Approach for Multiple Sources Based on Neural Network Computation.
Chen, Xin; Wang, Ding; Yin, Jiexin; Wu, Ying
2018-06-13
The most widely used localization technology is the two-step method that localizes transmitters by measuring one or more specified positioning parameters. Direct position determination (DPD) is a promising technique that directly localizes transmitters from sensor outputs and can offer superior localization performance. However, existing DPD algorithms such as maximum likelihood (ML)-based and multiple signal classification (MUSIC)-based estimations are computationally expensive, making it difficult to satisfy real-time demands. To solve this problem, we propose the use of a modular neural network for multiple-source DPD. In this method, the area of interest is divided into multiple sub-areas. Multilayer perceptron (MLP) neural networks are employed to detect the presence of a source in a sub-area and filter sources in other sub-areas, and radial basis function (RBF) neural networks are utilized for position estimation. Simulation results show that a number of appropriately trained neural networks can be successfully used for DPD. The performance of the proposed MLP-MLP-RBF method is comparable to the performance of the conventional MUSIC-based DPD algorithm for various signal-to-noise ratios and signal power ratios. Furthermore, the MLP-MLP-RBF network is less computationally intensive than the classical DPD algorithm and is therefore an attractive choice for real-time applications.
Time reversal for localization of sources of infrasound signals in a windy stratified atmosphere.
Lonzaga, Joel B
2016-06-01
Time reversal is used for localizing sources of recorded infrasound signals propagating in a windy, stratified atmosphere. Due to the convective effect of the background flow, the back-azimuths of the recorded signals can be substantially different from the source back-azimuth, posing a significant difficulty in source localization. The back-propagated signals are characterized by negative group velocities from which the source back-azimuth and source-to-receiver (STR) distance can be estimated using the apparent back-azimuths and trace velocities of the signals. The method is applied to several distinct infrasound arrivals recorded by two arrays in the Netherlands. The infrasound signals were generated by the Buncefield oil depot explosion in the U.K. in December 2005. Analyses show that the method can be used to substantially enhance estimates of the source back-azimuth and the STR distance. In one of the arrays, for instance, the deviations between the measured back-azimuths of the signals and the known source back-azimuth are quite large (-1° to -7°), whereas the deviations between the predicted and known source back-azimuths are small with an absolute mean value of <1°. Furthermore, the predicted STR distance is off only by <5% of the known STR distance.
Jun, James Jaeyoon; Longtin, André; Maler, Leonard
2013-01-01
In order to survive, animals must quickly and accurately locate prey, predators, and conspecifics using the signals they generate. The signal source location can be estimated using multiple detectors and the inverse relationship between the received signal intensity (RSI) and the distance, but difficulty of the source localization increases if there is an additional dependence on the orientation of a signal source. In such cases, the signal source could be approximated as an ideal dipole for simplification. Based on a theoretical model, the RSI can be directly predicted from a known dipole location; but estimating a dipole location from RSIs has no direct analytical solution. Here, we propose an efficient solution to the dipole localization problem by using a lookup table (LUT) to store RSIs predicted by our theoretically derived dipole model at many possible dipole positions and orientations. For a given set of RSIs measured at multiple detectors, our algorithm found a dipole location having the closest matching normalized RSIs from the LUT, and further refined the location at higher resolution. Studying the natural behavior of weakly electric fish (WEF) requires efficiently computing their location and the temporal pattern of their electric signals over extended periods. Our dipole localization method was successfully applied to track single or multiple freely swimming WEF in shallow water in real-time, as each fish could be closely approximated by an ideal current dipole in two dimensions. Our optimized search algorithm found the animal’s positions, orientations, and tail-bending angles quickly and accurately under various conditions, without the need for calibrating individual-specific parameters. Our dipole localization method is directly applicable to studying the role of active sensing during spatial navigation, or social interactions between multiple WEF. 
Furthermore, our method could be extended to other application areas involving dipole source localization. PMID:23805244
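The lookup-table (LUT) approach described above can be sketched in two dimensions: precompute normalized received-signal intensities (RSIs) for an ideal dipole over a grid of candidate positions and orientations, then return the grid entry whose normalized RSIs best match the measurement. The detector layout, grid, and the simplified 2-D dipole model below are illustrative assumptions, not the authors' derived model or refinement stage.

```python
import numpy as np

# Hypothetical 2-D setup: eight detectors on a unit circle.
detectors = np.array([[np.cos(t), np.sin(t)]
                      for t in np.linspace(0, 2 * np.pi, 8, endpoint=False)])

def dipole_rsi(pos, theta, det):
    """Signal intensity of an idealized 2-D dipole at each detector."""
    p = np.array([np.cos(theta), np.sin(theta)])   # unit dipole moment
    r = det - pos
    d = np.linalg.norm(r, axis=1)
    return (r @ p) / d**3                          # simplified dipole falloff

# Build the lookup table over candidate positions and orientations.
xs = np.linspace(-0.8, 0.8, 17)
angles = np.linspace(0, 2 * np.pi, 36, endpoint=False)
entries, keys = [], []
for x in xs:
    for y in xs:
        for th in angles:
            v = dipole_rsi(np.array([x, y]), th, detectors)
            entries.append(v / np.linalg.norm(v))  # normalize out amplitude
            keys.append((x, y, th))
entries = np.array(entries)

def localize(measured):
    """Nearest LUT entry to the normalized measured RSIs."""
    m = measured / np.linalg.norm(measured)
    # Sign-invariant match (dipole polarity may be unknown).
    score = np.minimum(np.linalg.norm(entries - m, axis=1),
                       np.linalg.norm(entries + m, axis=1))
    return keys[int(np.argmin(score))]
```

A finer local search around the returned entry would play the role of the higher-resolution refinement step the abstract mentions.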
Multicompare tests of the performance of different metaheuristics in EEG dipole source localization.
Escalona-Vargas, Diana Irazú; Lopez-Arevalo, Ivan; Gutiérrez, David
2014-01-01
We study the use of nonparametric multicompare statistical tests on the performance of simulated annealing (SA), genetic algorithm (GA), particle swarm optimization (PSO), and differential evolution (DE), when used for electroencephalographic (EEG) source localization. Such a task can be posed as an optimization problem for which the referred metaheuristic methods are well suited. Hence, we evaluate the localization's performance in terms of the metaheuristics' operational parameters and for a fixed number of evaluations of the objective function. In this way, we are able to link the efficiency of the metaheuristics with a common measure of computational cost. Our results did not show significant differences in the metaheuristics' performance for the case of single source localization. In the case of localizing two correlated sources, we found that PSO (ring and tree topologies) and DE performed the worst; hence, they should not be considered in large-scale EEG source localization problems. Overall, the multicompare tests allowed us to demonstrate the little effect that the selection of a particular metaheuristic and the variations in their operational parameters have on this optimization problem.
The Green’s functions for peridynamic non-local diffusion
Wang, L. J.; Xu, J. F.
2016-01-01
In this work, we develop the Green’s function method for the solution of the peridynamic non-local diffusion model in which the spatial gradient of the generalized potential in the classical theory is replaced by an integral of a generalized response function in a horizon. We first show that the general solutions of the peridynamic non-local diffusion model can be expressed as functionals of the corresponding Green’s functions for point sources, along with volume constraints for non-local diffusion. Then, we obtain the Green’s functions by the Fourier transform method for unsteady and steady diffusions in infinite domains. We also demonstrate that the peridynamic non-local solutions converge to the classical differential solutions when the non-local length approaches zero. Finally, the peridynamic analytical solutions are applied to an infinite plate heated by a Gaussian source, and the predicted variations of temperature are compared with the classical local solutions. The peridynamic non-local diffusion model predicts a lower rate of variation of the field quantities than that of the classical theory, which is consistent with experimental observations. The developed method is applicable to general diffusion-type problems. PMID:27713658
NASA Astrophysics Data System (ADS)
Liu, Kai; Liu, Yuan; Liu, Yu-Rong; En, Yun-Fei; Li, Bin
2017-07-01
Channel mobility in p-type polycrystalline silicon thin film transistors (poly-Si TFTs) is extracted using the Hoffman method, the linear-region transconductance method, and the multi-frequency C-V method. Because of the non-negligible errors introduced when the dependence of the effective mobility on the gate-source voltage is neglected, the mobility extracted by the linear-region transconductance method and the Hoffman method is overestimated, especially in the lower gate-source voltage region. By considering the distribution of localized states in the band-gap, the frequency-independent capacitances due to localized charges in the sub-gap states and due to free channel electrons in the conduction band were extracted using the multi-frequency C-V method. The channel mobility was then extracted accurately based on charge transport theory. In addition, the effect of electric-field-dependent mobility degradation was also considered in the higher gate-source voltage region. Finally, the mobility results extracted with these three methods are compared and analyzed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wu, Qishi; Berry, M. L.; Grieme, M.
We propose a localization-based radiation source detection (RSD) algorithm using the Ratio of Squared Distance (ROSD) method. Compared with the triangulation-based method, the advantages of the ROSD method are threefold: (i) source location estimates based on four detectors are more accurate, (ii) ROSD provides closed-form source location estimates and thus eliminates the imaginary-roots issue, and (iii) ROSD produces a unique source location estimate, as opposed to the two real roots (if any) in triangulation, and obviates the need to distinguish real from phantom roots during clustering.
Waves on Thin Plates: A New (Energy Based) Method on Localization
NASA Astrophysics Data System (ADS)
Turkaya, Semih; Toussaint, Renaud; Kvalheim Eriksen, Fredrik; Lengliné, Olivier; Daniel, Guillaume; Grude Flekkøy, Eirik; Jørgen Måløy, Knut
2016-04-01
Noisy acoustic signal localization is a difficult problem with a wide range of applications. We propose a new localization method applicable to thin plates, based on energy amplitude attenuation and comparison of inverted source amplitudes. This inversion is tested on synthetic data using a direct model of Lamb wave propagation, and on an experimental dataset (recorded with 4 Brüel & Kjær Type 4374 miniature piezoelectric shock accelerometers, 1 - 26 kHz frequency range). We compare the performance of this technique with classical source localization algorithms: arrival time localization, time reversal localization, and localization based on energy amplitude. The experimental setup consists of a glass or plexiglass plate of 80 cm x 40 cm x 1 cm equipped with four accelerometers and an acquisition card. Signals are generated by the quasi-perpendicular impact of a steel, glass or polyamide ball of various sizes dropped from a height of 2-3 cm onto the plate, and are captured by sensors placed at different locations on the plate. We measure and compare the accuracy of these techniques as a function of sampling rate, dynamic range, array geometry, signal-to-noise ratio and computational time. We show that this new technique, which is very versatile, works better than conventional techniques over sampling rates from 8 kHz to 1 MHz. It is possible to obtain decent resolution (3 cm mean error) using very inexpensive equipment. The numerical simulations allow us to track the contributions of different error sources in the different methods. The effect of reflections is also included in our simulation by placing image sources outside the plate boundaries. The proposed method can easily be extended to applications in three-dimensional environments, to monitor industrial activities (e.g., borehole drilling and production activities) or natural brittle systems (e.g., earthquakes, volcanoes, avalanches).
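The energy-based idea above, inverting a source energy from each sensor under an assumed attenuation law and asking where those estimates agree, can be sketched as a grid search. The plate geometry matches the 80 cm x 40 cm setup, but the power-law attenuation exponent and the grid resolution are illustrative assumptions, not the calibrated Lamb-wave attenuation model of the paper.

```python
import numpy as np

# Hypothetical 2-D plate with four corner sensors (units: metres).
sensors = np.array([[0.0, 0.0], [0.8, 0.0], [0.8, 0.4], [0.0, 0.4]])
GAMMA = 2.0   # assumed attenuation exponent: E_received = E_src / d**GAMMA

def locate_by_energy(energies, nx=81, ny=41):
    """Grid search: the best source position is where the back-projected
    source energies E_i * d_i**GAMMA agree most across sensors."""
    xs = np.linspace(0, 0.8, nx)
    ys = np.linspace(0, 0.4, ny)
    best, best_spread = None, np.inf
    for x in xs:
        for y in ys:
            d = np.linalg.norm(sensors - [x, y], axis=1)
            if np.any(d < 1e-6):
                continue                       # skip sensor positions
            src = energies * d**GAMMA          # inverted source energies
            spread = np.std(np.log(src))       # relative disagreement
            if spread < best_spread:
                best, best_spread = (x, y), spread
    return best
```

Comparing inverted amplitudes in log space makes the criterion insensitive to the unknown absolute source energy, which is the point of the inversion.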
Increased Error-Related Negativity (ERN) in Childhood Anxiety Disorders: ERP and Source Localization
ERIC Educational Resources Information Center
Ladouceur, Cecile D.; Dahl, Ronald E.; Birmaher, Boris; Axelson, David A.; Ryan, Neal D.
2006-01-01
Background: In this study we used event-related potentials (ERPs) and source localization analyses to track the time course of neural activity underlying response monitoring in children diagnosed with an anxiety disorder compared to age-matched low-risk normal controls. Methods: High-density ERPs were examined following errors on a flanker task…
Explosion localization via infrasound.
Szuberla, Curt A L; Olson, John V; Arnoult, Kenneth M
2009-11-01
Two acoustic source localization techniques were applied to infrasonic data and their relative performance was assessed. The standard approach for low-frequency localization uses an ensemble of small arrays to separately estimate far-field source bearings, resulting in a solution from the various back azimuths. This method was compared to one developed by the authors that treats the smaller subarrays as a single, meta-array. In numerical simulation and a field experiment, the latter technique was found to provide improved localization precision everywhere in the vicinity of a 3-km-aperture meta-array, often by an order of magnitude.
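The standard far-field approach compared above, triangulating from the back-azimuths of separate small arrays, can be sketched as a least-squares intersection of bearing lines. The array positions and the north-clockwise bearing convention here are illustrative.

```python
import numpy as np

def triangulate(positions, bearings_deg):
    """Least-squares intersection of bearing lines from several arrays.
    Bearings are measured clockwise from north (degrees), a common
    convention for back-azimuths; positions are (east, north) pairs."""
    A, b = [], []
    for (px, py), brg in zip(positions, bearings_deg):
        th = np.radians(brg)
        u = np.array([np.sin(th), np.cos(th)])   # (east, north) direction
        n = np.array([-u[1], u[0]])              # normal to the bearing line
        A.append(n)                              # constraint: n . x = n . p
        b.append(n @ np.array([px, py]))
    x, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return x
```

With two arrays the system is exactly determined; additional arrays overdetermine it, and the least-squares solve returns the best-fit intersection point.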
Method and system for determining radiation shielding thickness and gamma-ray energy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Klann, Raymond T.; Vilim, Richard B.; de la Barrera, Sergio
2015-12-15
A system and method for determining the shielding thickness of a detected radiation source. The gamma ray spectrum of a radiation detector is utilized to estimate the shielding between the detector and the radiation source. The determination of the shielding may be used to adjust the information from known source-localization techniques to provide improved performance and accuracy of locating the source of radiation.
Moving Sound Source Localization Based on Sequential Subspace Estimation in Actual Room Environments
NASA Astrophysics Data System (ADS)
Tsuji, Daisuke; Suyama, Kenji
This paper presents a novel method for moving sound source localization and its performance evaluation in actual room environments. The method is based on MUSIC (MUltiple SIgnal Classification), one of the highest-resolution localization methods. MUSIC requires the computation of the eigenvectors of a correlation matrix, which often incurs a high computational cost. For a moving source this becomes a crucial drawback, because the estimation must be conducted at every observation time. Moreover, since the correlation matrix varies its characteristics due to spatial-temporal non-stationarity, the matrix has to be estimated using only a few observed samples, which degrades the estimation accuracy. In this paper, PAST (Projection Approximation Subspace Tracking) is applied to sequentially estimate the eigenvectors spanning the subspace. Because PAST does not require an eigen-decomposition, the computational cost is reduced. Several experimental results in actual room environments demonstrate the superior performance of the proposed method.
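For reference, the static narrowband MUSIC estimator on which this method builds can be sketched for a uniform linear array; the eigen-decomposition of the correlation matrix in this sketch is exactly the step that PAST replaces with a sequential update. The array geometry and scan grid are illustrative.

```python
import numpy as np

def music_spectrum(X, n_sources, d=0.5, angles=np.linspace(-90, 90, 361)):
    """Narrowband MUSIC pseudospectrum for a uniform linear array.
    X: (n_mics, n_snapshots) complex snapshots; d: spacing in wavelengths."""
    M = X.shape[0]
    R = X @ X.conj().T / X.shape[1]              # sample correlation matrix
    w, V = np.linalg.eigh(R)                     # eigenvalues ascending
    En = V[:, :M - n_sources]                    # noise-subspace eigenvectors
    k = np.arange(M)
    P = []
    for ang in angles:
        a = np.exp(2j * np.pi * d * k * np.sin(np.radians(ang)))
        # Peak where the steering vector is orthogonal to the noise subspace.
        P.append(1.0 / np.real(a.conj() @ En @ En.conj().T @ a))
    return angles, np.array(P)
```

The per-frame cost of `eigh` is what makes the tracking scenario expensive; PAST maintains an estimate of the signal subspace recursively instead.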
Fiber optic distributed temperature sensing for fire source localization
NASA Astrophysics Data System (ADS)
Sun, Miao; Tang, Yuquan; Yang, Shuang; Sigrist, Markus W.; Li, Jun; Dong, Fengzhong
2017-08-01
A method for localizing a fire source based on a distributed temperature sensor system is proposed. Two sections of optical fiber were placed orthogonally to each other as the sensing elements. A tray of alcohol was lit to act as a fire outbreak in a cabinet with an uneven ceiling to simulate a real fire scene. Experiments were carried out to demonstrate the feasibility of the method. Rather large fluctuations and systematic errors in predicting the exact room coordinates of the fire source, caused by the uneven ceiling, were observed. Two mathematical methods (smoothing the recorded temperature curves and finding the temperature peak positions) to improve the prediction accuracy are presented, and the experimental results indicate that the fluctuation ranges and systematic errors are significantly reduced. The proposed scheme is simple and appears reliable enough to locate a fire source in large spaces.
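The two corrections reported, smoothing the recorded temperature curves and then finding the temperature peak positions, can be sketched for a pair of orthogonal fibers; the moving-average window and the direct mapping from fiber position to room coordinate are illustrative assumptions.

```python
import numpy as np

def fire_position(temp_x, temp_y, positions, window=5):
    """Smooth each fiber's temperature trace with a moving average,
    then take the peak position along each axis as the (x, y) estimate."""
    k = np.ones(window) / window
    sx = np.convolve(temp_x, k, mode='same')
    sy = np.convolve(temp_y, k, mode='same')
    return positions[np.argmax(sx)], positions[np.argmax(sy)]
```

Smoothing suppresses narrow spurious spikes (e.g., from local ceiling irregularities) so that the broad hot-air maximum, rather than an outlier sample, determines the reported coordinates.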
NASA Astrophysics Data System (ADS)
Chen, Huaiyu; Cao, Li
2017-06-01
To study multiple sound source localization under room reverberation and background noise, we analyze the shortcomings of the traditional broadband MUSIC method and of ordinary auditory-filtering-based broadband MUSIC, and then propose a new broadband MUSIC algorithm with gammatone auditory filtering that controls the selection of frequency components and detects the ascending segment of the direct-sound component. The proposed algorithm restricts the frequency components to the band of interest in the multichannel bandpass filtering stage. Detecting the direct-sound component of the source to suppress room reverberation interference is also proposed; its merits are fast computation and the avoidance of more complex de-reverberation algorithms. Besides, the pseudo-spectra of the different frequency channels are weighted by their maximum amplitude for every speech frame. Simulations and experiments in a real reverberant room show that the proposed method performs well. Dynamic multiple sound source localization experiments indicate that the average absolute azimuth error of the proposed algorithm is smaller and that the histogram results have higher angular resolution.
NASA Astrophysics Data System (ADS)
Squizzato, Stefania; Masiol, Mauro
2015-10-01
Air quality is influenced by meteorology at meso- and synoptic scales. While local weather and mixing layer dynamics mainly drive the dispersion of sources at small scales, long-range transport affects the movement of air masses over regional, transboundary and even continental scales. Long-range transport may advect polluted air masses from hot-spots, increasing pollution levels at nearby or remote locations, or may further raise air pollution levels where external air masses originate from other hot-spots. Therefore, knowledge of ground-wind circulation and potential long-range transport is fundamental not only to evaluate how local or external sources may affect the air quality at a receptor site, but also to quantify their effect. This review focuses on establishing the relationships among PM2.5 sources, meteorological conditions and air mass origin in the Po Valley, one of the most polluted areas in Europe. We have chosen the results from a recent study carried out in Venice (Eastern Po Valley) and have analysed them using different statistical approaches to understand the influence of external and local contributions of PM2.5 sources. External contributions were evaluated by applying Trajectory Statistical Methods (TSMs) based on back-trajectory analysis, including (i) back-trajectory cluster analysis, (ii) the potential source contribution function (PSCF) and (iii) the concentration weighted trajectory (CWT). Furthermore, the relationships between the source contributions and ground-wind circulation patterns were investigated by using (iv) cluster analysis on wind data and (v) the conditional probability function (CPF). Finally, local source contributions were estimated by applying the Lenschow approach. In summary, the integrated approach of different techniques has successfully identified both local and external sources of particulate matter pollution in a European hot-spot with some of the worst air quality.
Improved Bayesian Infrasonic Source Localization for regional infrasound
Blom, Philip S.; Marcillo, Omar; Arrowsmith, Stephen J.
2015-10-20
The Bayesian Infrasonic Source Localization (BISL) methodology and the mathematical framework used therein are examined and simplified, providing a generalized method of estimating the source location and time for an infrasonic event. The likelihood function describing an infrasonic detection used in BISL has been redefined to include the von Mises distribution developed in directional statistics and propagation-based, physically derived celerity-range and azimuth deviation models. Frameworks for constructing propagation-based celerity-range and azimuth deviation statistics are presented to demonstrate how stochastic propagation modelling methods can be used to improve the precision and accuracy of the posterior probability density function describing the source localization. Infrasonic signals recorded at a number of arrays in the western United States produced by rocket motor detonations at the Utah Test and Training Range are used to demonstrate the application of the new mathematical framework and to quantify the improvement obtained by using the stochastic propagation modelling methods. Using propagation-based priors, the spatial and temporal confidence bounds of the source decreased by more than 40 per cent in all cases and by as much as 80 per cent in one case. Further, the accuracy of the estimates remained high, keeping the ground truth within the 99 per cent confidence bounds for all cases.
Developing a system for blind acoustic source localization and separation
NASA Astrophysics Data System (ADS)
Kulkarni, Raghavendra
This dissertation presents innovative methodologies for locating, extracting, and separating multiple incoherent sound sources in three-dimensional (3D) space, and applications of the time reversal (TR) algorithm to pinpoint the hyperactive neural activities inside the brain auditory structure that are correlated with tinnitus pathology. Specifically, an acoustic-modeling-based method is developed for locating arbitrary and incoherent sound sources in 3D space in real time by using a minimal number of microphones, and the Point Source Separation (PSS) method is developed for extracting target signals from directly measured mixed signals. Combining these two approaches leads to a novel technology known as Blind Sources Localization and Separation (BSLS) that enables one to locate multiple incoherent sound signals in 3D space and separate the original individual sources simultaneously, based on the directly measured mixed signals. These technologies have been validated through numerical simulations and experiments conducted in various non-ideal environments with non-negligible, unspecified sound reflections and reverberation as well as interference from random background noise. Another innovation presented in this dissertation concerns applications of the TR algorithm to pinpoint the exact locations of hyperactive neurons in the brain auditory structure that are directly correlated with tinnitus perception. Benchmark tests conducted on normal rats have confirmed the localization results provided by the TR algorithm. Results demonstrate that the spatial resolution of this source localization can be as high as the micrometer level. This high-precision localization may lead to a paradigm shift in tinnitus diagnosis, which may in turn produce a more cost-effective treatment for tinnitus than any of the existing ones.
Seeking History: Teaching with Primary Sources in Grades 4-6.
ERIC Educational Resources Information Center
Edinger, Monica
This book offers ideas about using primary sources to enhance students' understandings of history. It discusses the following resources, methods, and ideas: types of primary sources; tips on finding and preparing primary sources for student use; personal, local, and remote history activities; detailed descriptions of diverse projects; guidelines…
System and method for bullet tracking and shooter localization
Roberts, Randy S [Livermore, CA; Breitfeller, Eric F [Dublin, CA
2011-06-21
A system and method of processing infrared imagery to determine projectile trajectories and the locations of shooters with a high degree of accuracy. The method includes processing infrared image data to reduce noise and identify streak-shaped image features, using a Kalman filter to estimate optimal projectile trajectories, updating the Kalman filter with new image data, determining projectile source locations by solving a combinatorial least-squares problem over all optimal projectile trajectories, and displaying all of the projectile source locations. Such a shooter-localization system is of great interest for military and law enforcement applications to determine sniper locations, especially in urban combat scenarios.
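The Kalman filtering stage can be sketched with a constant-velocity model over 2-D image-plane detections; the state layout and the noise covariances below are illustrative assumptions, not the patented system's tuned values.

```python
import numpy as np

def kalman_track(measurements, dt=1.0, q=1e-3, r=0.5):
    """Constant-velocity Kalman filter over 2-D image-plane detections.
    State: [x, y, vx, vy]; measurements: (N, 2) detected positions."""
    F = np.array([[1, 0, dt, 0],
                  [0, 1, 0, dt],
                  [0, 0, 1,  0],
                  [0, 0, 0,  1]], float)       # state transition
    H = np.array([[1, 0, 0, 0],
                  [0, 1, 0, 0]], float)        # we observe position only
    Q = q * np.eye(4)                          # process noise (assumed)
    R = r * np.eye(2)                          # measurement noise (assumed)
    x = np.array([*measurements[0], 0.0, 0.0])
    P = np.eye(4) * 10.0                       # large initial uncertainty
    states = []
    for z in measurements:
        x = F @ x                              # predict
        P = F @ P @ F.T + Q
        S = H @ P @ H.T + R                    # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
        x = x + K @ (z - H @ x)                # update with detection
        P = (np.eye(4) - K @ H) @ P
        states.append(x.copy())
    return np.array(states)
```

Extrapolating the filtered trajectory backward along its estimated velocity is the natural input to the least-squares source-location step the abstract describes.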
Multi-Sensor Integration to Map Odor Distribution for the Detection of Chemical Sources.
Gao, Xiang; Acar, Levent
2016-07-04
This paper addresses the problem of mapping the odor distribution arising from a chemical source using multi-sensor integration and reasoning system design. Odor localization is the problem of finding the source of an odor or other volatile chemical. Most localization methods require a mobile vehicle to follow an odor plume along its entire path, which is time consuming and may be especially difficult in a cluttered environment. To address both of these challenges, this paper proposes a novel algorithm that combines data from odor and anemometer sensors and fuses sensor data collected at different positions. Initially, a multi-sensor integration method, together with the path of airflow, is used to map the pattern of odor particle movement. Then, more sensors are introduced at specific regions to determine the probable location of the odor source. Finally, the results of an odor source location simulation and a real experiment are presented.
Bioluminescence Tomography–Guided Radiation Therapy for Preclinical Research
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Bin; Wang, Ken Kang-Hsin, E-mail: kwang27@jhmi.edu; Yu, Jingjing
Purpose: In preclinical radiation research, it is challenging to localize soft tissue targets based on cone beam computed tomography (CBCT) guidance. As a more effective method to localize soft tissue targets, we developed an online bioluminescence tomography (BLT) system for the small-animal radiation research platform (SARRP). We demonstrated BLT-guided radiation therapy and validated targeting accuracy based on a newly developed reconstruction algorithm. Methods and Materials: The BLT system was designed to dock with the SARRP for image acquisition and to be detached before radiation delivery. A 3-mirror system was devised to reflect the bioluminescence emitted from the subject to a stationary charge-coupled device (CCD) camera. Multispectral BLT and the incomplete variables truncated conjugate gradient method with a permissible region shrinking strategy were used as the optimization scheme to reconstruct bioluminescent source distributions. To validate BLT targeting accuracy, a small cylindrical light source with high CBCT contrast was placed in a phantom and also in the abdomen of a mouse carcass. The center of mass (CoM) of the source was recovered from BLT and used to guide radiation delivery. The accuracy of the BLT-guided targeting was validated with films and compared with the CBCT-guided delivery. In vivo experiments were conducted to demonstrate BLT localization capability for various source geometries. Results: Online BLT was able to recover the CoM of the embedded light source with an average accuracy of 1 mm compared to that with CBCT localization. Differences between BLT- and CBCT-guided irradiation shown on the films were consistent with the source localization revealed in the BLT and CBCT images. In vivo results demonstrated that our BLT system could potentially be applied to multiple targets and tumors.
Conclusions: The online BLT/CBCT/SARRP system provides an effective solution for soft tissue targeting, particularly for small, nonpalpable, or orthotopic tumor models.
Zheng, Jianwen; Lu, Jing; Chen, Kai
2013-07-01
Several methods have been proposed for the generation of a focused source, usually a virtual monopole source positioned between the loudspeaker array and the listener. The problem of pre-echoes in the common analytical methods has been noted, and the most concise method to cope with this problem is the angular weight method. In this paper, the interaural time and level differences, which are closely related to the localization cues of the human auditory system, are used to further investigate the effectiveness of focused source generation methods. It is demonstrated that the combination of the angular weight method and the numerical pressure matching method has comparatively better performance in a given reconstructed area.
Local tsunamis and earthquake source parameters
Geist, Eric L.; Dmowska, Renata; Saltzman, Barry
1999-01-01
This chapter establishes the relationship among earthquake source parameters and the generation, propagation, and run-up of local tsunamis. In general terms, displacement of the seafloor during the earthquake rupture is modeled using the elastic dislocation theory for which the displacement field is dependent on the slip distribution, fault geometry, and the elastic response and properties of the medium. Specifically, nonlinear long-wave theory governs the propagation and run-up of tsunamis. A parametric study is devised to examine the relative importance of individual earthquake source parameters on local tsunamis, because the physics that describes tsunamis from generation through run-up is complex. Analysis of the source parameters of various tsunamigenic earthquakes have indicated that the details of the earthquake source, namely, nonuniform distribution of slip along the fault plane, have a significant effect on the local tsunami run-up. Numerical methods have been developed to address the realistic bathymetric and shoreline conditions. The accuracy of determining the run-up on shore is directly dependent on the source parameters of the earthquake, which provide the initial conditions used for the hydrodynamic models.
Explosion localization and characterization via infrasound using numerical modeling
NASA Astrophysics Data System (ADS)
Fee, D.; Kim, K.; Iezzi, A. M.; Matoza, R. S.; Jolly, A. D.; De Angelis, S.; Diaz Moreno, A.; Szuberla, C.
2017-12-01
Numerous methods have been applied to locate, detect, and characterize volcanic and anthropogenic explosions using infrasound. Far-field localization techniques typically use back-azimuths from multiple arrays (triangulation) or Reverse Time Migration (RTM, or back-projection). At closer ranges, networks surrounding a source may use Time Difference of Arrival (TDOA), semblance, station-pair double difference, etc. However, at volcanoes and in regions with topography or obstructions that block the direct path of sound, recent studies have shown that numerical modeling is necessary to provide an accurate source location. A heterogeneous and moving atmosphere (winds) may also affect the location. The time reversal mirror (TRM) application of Kim et al. (2015) back-propagates the wavefield using a Finite Difference Time Domain (FDTD) algorithm, with the source corresponding to the location of peak convergence. Although it provides high-resolution source localization and can account for complex wave propagation, TRM is computationally expensive and limited to individual events. Here we present a new technique, termed RTM-FDTD, which integrates RTM and FDTD. Travel time and transmission loss information is computed from each station to the entire potential source grid from 3-D Green's functions derived via FDTD. The wave energy is then back-projected and stacked at each grid point, with the maximum corresponding to the likely source. We apply our method to detect and characterize thousands of explosions from Yasur Volcano, Vanuatu and Etna Volcano, Italy, both of which present complex wave propagation and multiple source locations. We compare our results with those from more traditional methods (e.g., semblance), and suggest our method is preferred, as it is computationally less expensive than TRM but still integrates numerical modeling. RTM-FDTD could be applied to volcanic and anthropogenic sources at a wide variety of ranges and scenarios. Kim, K., Lees, J.M., 2015.
Imaging volcanic infrasound sources using time reversal mirror algorithm. Geophysical Journal International 202, 1663-1676.
NASA Astrophysics Data System (ADS)
Crane, P.; Silliman, S. E.; Boukari, M.; Atoro, I.; Azonsi, F.
2005-12-01
Deteriorating groundwater quality, as represented by high nitrates, in the Colline province of Benin, West Africa was identified by the Benin national water agency, Direction Hydraulique. For unknown reasons the Colline province had consistently higher nitrate levels than any other region of the country. In an effort to address this water quality issue, a collaborative team was created that incorporated professionals from the Universite d'Abomey-Calavi (Benin), the University of Notre Dame (USA), Direction l'Hydraulique (a government water agency in Benin), Centre Afrika Obota (an educational NGO in Benin), and the local population of the village of Adourekoman. The goals of the project were to: (i) identify the source of nitrates, (ii) test field techniques for long term, local monitoring, and (iii) identify possible solutions to the high levels of groundwater nitrates. In order to accomplish these goals, the following methods were utilized: regional sampling of groundwater quality, field methods that allowed the local population to regularly monitor village groundwater quality, isotopic analysis, and sociological methods of surveys, focus groups, and observations. It is through the combination of these multi-disciplinary methods that all three goals were successfully addressed leading to preliminary identification of the sources of nitrates in the village of Adourekoman, confirmation of utility of field techniques, and initial assessment of possible solutions to the contamination problem.
NASA Technical Reports Server (NTRS)
Tian, Jialin; Madaras, Eric I.
2009-01-01
The development of a robust and efficient leak detection and localization system within a space station environment presents a unique challenge. A plausible approach includes the implementation of an acoustic sensor network system that can successfully detect the presence of a leak and determine the location of the leak source. Traditional acoustic detection and localization schemes rely on the phase and amplitude information collected by the sensor array system. Furthermore, the acoustic source signals are assumed to be airborne and far-field. Likewise, there are similar applications in sonar. In solids, there are specialized methods for locating events that are used in geology and in acoustic emission testing that involve sensor arrays and depend on a discernable phase front to the received signal. These methods are ineffective if applied to a sensor detection system within the space station environment. In the case of acoustic signal location, there are significant baffling and structural impediments to the sound path and the source could be in the near-field of a sensor in this particular setting.
Dynamic Spatial Hearing by Human and Robot Listeners
NASA Astrophysics Data System (ADS)
Zhong, Xuan
This study consisted of several related projects on dynamic spatial hearing by both human and robot listeners. The first experiment investigated the maximum number of sound sources that human listeners could localize at the same time. Speech stimuli were presented simultaneously from different loudspeakers at multiple time intervals. The maximum number of perceived sound sources was close to four. The second experiment asked whether the amplitude modulation of multiple static sound sources could lead to the perception of auditory motion. On the horizontal and vertical planes, four independent noise sound sources with 60° spacing were amplitude modulated with successively larger phase delays. At lower modulation rates, motion could be perceived by human listeners in both cases. The third experiment asked whether several sources at static positions could serve as "acoustic landmarks" to improve the localization of other sources. Four continuous speech sound sources were placed on the horizontal plane with 90° spacing and served as the landmarks. The task was to localize a noise that was played for only three seconds while the listener was passively rotated in a chair in the middle of the loudspeaker array. The human listeners were better able to localize the sound sources with landmarks than without. The remaining experiments used an acoustic manikin in an attempt to fuse binaural recordings and motion data to localize sound sources. A dummy head with recording devices was mounted on top of a rotating chair and motion data were collected. The fourth experiment showed that an Extended Kalman Filter could be used to localize sound sources in a recursive manner. The fifth experiment demonstrated the use of a fitting method for separating multiple sound sources.
Veira, Andreas; Jackson, Peter L; Ainslie, Bruce; Fudge, Dennis
2013-07-01
This study investigates the development and application of a simple method to calculate annual and seasonal PM2.5 and PM10 background concentrations in small cities and rural areas. The Low Pollution Sectors and Conditions (LPSC) method is based on existing measured long-term data sets and is designed for locations where particulate matter (PM) monitors are only influenced by local anthropogenic emission sources from particular wind sectors. The LPSC method combines the analysis of measured hourly meteorological data, PM concentrations, and geographical emission source distributions. PM background levels emerge from measured data for specific wind conditions, where air parcel trajectories measured at a monitoring station are assumed to have passed over geographic sectors with negligible local emissions. Seasonal and annual background levels were estimated for two monitoring stations in Prince George, Canada, and the method was also applied to four other small cities (Burns Lake, Houston, Quesnel, Smithers) in northern British Columbia. The analysis showed reasonable background concentrations for both monitoring stations in Prince George, whereas annual PM10 background concentrations at two of the other locations and PM2.5 background concentrations at one other location were implausibly high. For those locations where the LPSC method was successful, annual background levels ranged between 1.8 +/- 0.1 microg/m3 and 2.5 +/- 0.1 microg/m3 for PM2.5 and between 6.3 +/- 0.3 microg/m3 and 8.5 +/- 0.3 microg/m3 for PM10. Precipitation effects and patterns of seasonal variability in the estimated background concentrations were detectable for all locations where the method was successful. Overall the method was dependent on the configuration of local geography and sources with respect to the monitoring location, and may fail at some locations and under some conditions. 
Where applicable, the LPSC method can provide a fast and cost-efficient way to estimate background PM concentrations for small cities in sparsely populated regions like northern British Columbia. In rural areas like northern British Columbia, particulate matter (PM) monitoring stations are usually located close to emission sources and residential areas in order to assess the PM impact on human health. Thus there is a lack of accurate PM background concentration data that represent PM ambient concentrations in the absence of local emissions. The background calculation method developed in this study uses observed meteorological data as well as local source emission locations and provides annual, seasonal and precipitation-related PM background concentrations that are comparable to literature values for four out of six monitoring stations.
Blom, Philip Stephen; Marcillo, Omar Eduardo
2016-12-05
A method is developed to apply acoustic tomography methods to a localized network of infrasound arrays with the intention of monitoring the atmospheric state in the region around the network using non-local sources, without requiring knowledge of the precise source location or the non-local atmospheric state. Closely spaced arrays provide a means to estimate phase velocities of signals that can place limiting bounds on certain characteristics of the atmosphere. Larger spacing between such clusters provides a means to estimate celerity from propagation times along multiple unique stratospherically or thermospherically ducted propagation paths and to compute more precise estimates of the atmospheric state. In order to avoid the commonly encountered complex, multimodal distributions for parametric atmosphere descriptions and to maximize the computational efficiency of the method, an optimal parametrization framework is constructed. This framework identifies the ideal combination of parameters for tomography studies in specific regions of the atmosphere, and statistical model selection analysis shows that high-quality corrections to the middle atmosphere winds can be obtained using as few as three parameters. Lastly, comparison of the resulting estimates for synthetic data sets shows qualitative agreement between the middle atmosphere winds and those estimated from infrasonic travel-time observations.
Fatigue crack localization with near-field acoustic emission signals
NASA Astrophysics Data System (ADS)
Zhou, Changjiang; Zhang, Yunfeng
2013-04-01
This paper presents an acoustic emission (AE) source localization technique using near-field AE signals induced by crack growth and propagation. The proposed technique is based on the phase difference between the AE signals measured by two identical AE sensing elements spaced apart at a pre-specified distance. This phase difference results in the canceling-out of certain frequency contents of the signals, which can be related to the AE source direction. Experimental data from simulated AE sources, such as pencil breaks, were used along with analytical results from moment tensor analysis. The theoretical predictions, numerical simulations, and experimental test results are in good agreement. Real data from field monitoring of an existing fatigue crack on a bridge were also used to test the system. Results show that the proposed method is fairly effective in determining the AE source direction in the thick plates commonly encountered in civil engineering structures.
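The cancellation principle can be illustrated with a short sketch: when the outputs of two sensors separated by a distance d are summed, frequencies whose phase difference across the pair is an odd multiple of π cancel, so the null frequencies encode the arrival angle. This is an illustrative simplification of the paper's method; the wave speed c and the geometry below are assumed values:

```python
import numpy as np

def null_frequencies(d, theta_deg, c=3100.0, n_nulls=3):
    """Frequencies cancelled when two sensors spaced d [m] apart are
    summed, for a plane wave arriving at theta_deg from the sensor
    axis; c is an assumed plate-wave speed [m/s]."""
    tau = d * np.cos(np.radians(theta_deg)) / c  # inter-sensor delay
    if np.isclose(tau, 0.0):
        return np.array([])  # broadside arrival: no nulls
    # cancellation when 2*pi*f*tau is an odd multiple of pi
    return (2.0 * np.arange(n_nulls) + 1.0) / (2.0 * tau)

def angle_from_first_null(f0, d, c=3100.0):
    """Invert the lowest null frequency back to the arrival angle [deg]."""
    ratio = np.clip(c / (2.0 * f0 * d), -1.0, 1.0)
    return float(np.degrees(np.arccos(ratio)))
```

Reading the lowest cancelled frequency off the summed spectrum and inverting it gives the source direction, which is the essence of the two-element approach.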
A sparse equivalent source method for near-field acoustic holography.
Fernandez-Grande, Efren; Xenaki, Angeliki; Gerstoft, Peter
2017-01-01
This study examines a near-field acoustic holography method consisting of a sparse formulation of the equivalent source method, based on the compressive sensing (CS) framework. The method, denoted Compressive-Equivalent Source Method (C-ESM), encourages spatially sparse solutions (based on the superposition of few waves) that are accurate when the acoustic sources are spatially localized. The importance of obtaining a non-redundant representation, i.e., a sensing matrix with low column coherence, and the inherent ill-conditioning of near-field reconstruction problems is addressed. Numerical and experimental results on a classical guitar and on a highly reactive dipole-like source are presented. C-ESM is valid beyond the conventional sampling limits, making wide-band reconstruction possible. Spatially extended sources can also be addressed with C-ESM, although in this case the obtained solution does not recover the spatial extent of the source.
Backward semi-linear parabolic equations with time-dependent coefficients and local Lipschitz source
NASA Astrophysics Data System (ADS)
Nho Hào, Dinh; Van Duc, Nguyen; Van Thang, Nguyen
2018-05-01
Let H be a Hilbert space with inner product ⟨·, ·⟩ and norm ‖·‖, and let A(t) be a positive self-adjoint unbounded time-dependent operator on H. We establish stability estimates of Hölder type and propose a regularization method with error estimates of Hölder type for the ill-posed backward semi-linear parabolic equation with source function f satisfying a local Lipschitz condition.
ERIC Educational Resources Information Center
Loiselle, Louise H.; Dorman, Michael F.; Yost, William A.; Cook, Sarah J.; Gifford, Rene H.
2016-01-01
Purpose: To assess the role of interaural time differences and interaural level differences in (a) sound-source localization, and (b) speech understanding in a cocktail party listening environment for listeners with bilateral cochlear implants (CIs) and for listeners with hearing-preservation CIs. Methods: Eleven bilateral listeners with MED-EL…
NASA Astrophysics Data System (ADS)
Nguyen, Thinh; Potter, Thomas; Grossman, Robert; Zhang, Yingchun
2018-06-01
Objective. Neuroimaging has been employed as a promising approach to advance our understanding of brain networks in both basic and clinical neuroscience. Electroencephalography (EEG) and functional magnetic resonance imaging (fMRI) represent two neuroimaging modalities with complementary features; EEG has high temporal resolution and low spatial resolution while fMRI has high spatial resolution and low temporal resolution. Multimodal EEG inverse methods have attempted to capitalize on these properties but have been subjected to localization error. The dynamic brain transition network (DBTN) approach, a spatiotemporal fMRI constrained EEG source imaging method, has recently been developed to address these issues by solving the EEG inverse problem in a Bayesian framework, utilizing fMRI priors in a spatial and temporal variant manner. This paper presents a computer simulation study to provide a detailed characterization of the spatial and temporal accuracy of the DBTN method. Approach. Synthetic EEG data were generated in a series of computer simulations, designed to represent realistic and complex brain activity at superficial and deep sources with highly dynamical activity time-courses. The source reconstruction performance of the DBTN method was tested against the fMRI-constrained minimum norm estimates algorithm (fMRIMNE). The performances of the two inverse methods were evaluated both in terms of spatial and temporal accuracy. Main results. In comparison with the commonly used fMRIMNE method, results showed that the DBTN method produces results with increased spatial and temporal accuracy. The DBTN method also demonstrated the capability to reduce crosstalk in the reconstructed cortical time-course(s) induced by neighboring regions, mitigate depth bias and improve overall localization accuracy. Significance. The improved spatiotemporal accuracy of the reconstruction allows for an improved characterization of complex neural activity. 
This improvement can be extended to any subsequent brain connectivity analyses used to construct the associated dynamic brain networks.
NASA Astrophysics Data System (ADS)
Bowman, K. W.; Lee, M.
2015-12-01
Dramatic changes in the global distribution of emissions over the last decade have fundamentally altered source-receptor pollution impacts. A new generation of low-earth orbiting (LEO) sounders, complemented by geostationary sounders over North America, Europe, and Asia, provides a unique opportunity to quantify the current and future trajectory of emissions and their impact on global pollution. We examine the potential of this constellation of air quality sounders to quantify the role of local and non-local sources of pollution in background ozone in the US. Based upon an adjoint sensitivity method, we quantify the role of synoptic-scale transport of non-US pollution in US background ozone over months representative of different source-receptor relationships. This analysis allows us to distinguish emission trajectories from megacities, e.g. Beijing, or regions, e.g. western China, from natural trends in downwind ozone. We subsequently explore how a combination of LEO and GEO observations could help quantify the balance of local emissions against changes in distant sources. These results show how this unprecedented new international ozone observing system can monitor the changing structure of emissions and their impact on global pollution.
Three dimensional volcano-acoustic source localization at Karymsky Volcano, Kamchatka, Russia
NASA Astrophysics Data System (ADS)
Rowell, Colin
We test two methods of 3-D acoustic source localization on volcanic explosions and small-scale jetting events at Karymsky Volcano, Kamchatka, Russia. Recent infrasound studies have provided evidence that volcanic jets produce low-frequency aerodynamic sound (jet noise) similar to that from man-made jet engines. Man-made jets are known to produce sound through turbulence along the jet axis, but discrimination of sources along the axis of a volcanic jet requires a network of sufficient topographic relief to attain resolution in the vertical dimension. At Karymsky Volcano, the topography of an eroded edifice adjacent to the active cone provided a platform for the atypical deployment of five infrasound sensors with intra-network relief of ~600 m in July 2012. A novel 3-D inverse localization method, srcLoc, is tested and compared against a more common grid-search semblance technique. Simulations using synthetic signals indicate that srcLoc is capable of determining vertical source locations for this network configuration to within ±150 m or better. However, srcLoc locations for explosions and jetting at Karymsky Volcano show a persistent overestimation of source elevation and underestimation of sound speed by an average of ~330 m and ~25 m/s, respectively. The semblance method is able to produce more realistic source locations by fixing the sound speed to expected values of 335-340 m/s. The consistency of location errors for both explosions and jetting activity over a wide range of wind and temperature conditions points to the influence of topography. Explosion waveforms exhibit amplitude relationships and waveform distortion strikingly similar to those theorized by modeling studies of wave diffraction around the crater rim. We suggest the delay of signals and apparently elevated source locations are due to altered raypaths and/or crater diffraction effects.
Our results suggest the influence of topography in the vent region must be accounted for when attempting 3-D volcano acoustic source localization. Though the data presented here are insufficient to resolve noise sources for these jets, which are much smaller in scale than those of previous volcanic jet noise studies, similar techniques may be successfully applied to large volcanic jets in the future.
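The grid-search semblance technique used here for comparison can be sketched as a delay-and-stack search with a fixed sound speed. This is a minimal illustration; the network geometry and parameters below are hypothetical, and a real implementation would also account for topography along the raypaths:

```python
import numpy as np

def semblance_locate(waveforms, sensor_xyz, grid_xyz, dt, c=337.5):
    """Grid-search semblance localization with a fixed sound speed c.

    waveforms  : (n_sta, n_samp) infrasound traces
    sensor_xyz : (n_sta, 3) sensor positions [m]
    grid_xyz   : (n_grid, 3) candidate source positions [m]
    Returns the index of the best grid point and its semblance.
    """
    n_sta, n_samp = waveforms.shape
    best, best_val = 0, -np.inf
    for g, src in enumerate(grid_xyz):
        delays = np.linalg.norm(sensor_xyz - src, axis=1) / c
        shifts = np.round((delays - delays.min()) / dt).astype(int)
        n = n_samp - int(shifts.max())
        if n <= 0:
            continue
        # align traces by removing the predicted relative delays
        aligned = np.stack([waveforms[s, shifts[s]:shifts[s] + n]
                            for s in range(n_sta)])
        num = np.sum(np.sum(aligned, axis=0) ** 2)
        den = n_sta * np.sum(aligned ** 2)
        semb = num / den if den > 0 else 0.0
        if semb > best_val:
            best, best_val = g, semb
    return best, best_val
```

Semblance reaches 1 when the aligned traces are identical, so the grid point maximizing it is taken as the source.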
Mapping air quality zones for coastal urban centers.
Freeman, Brian; Gharabaghi, Bahram; Thé, Jesse; Munshed, Mohammad; Faisal, Shah; Abdullah, Meshal; Al Aseed, Athari
2017-05-01
This study presents a new method that incorporates modern air dispersion models, allowing local terrain and land-sea breeze effects to be considered along with political and natural boundaries for more accurate mapping of air quality zones (AQZs) for coastal urban centers. This method uses local coastal wind patterns and key urban air pollution sources in each zone to more accurately calculate air pollutant concentration statistics. The new approach distributes virtual air pollution sources within each small grid cell of an area of interest and runs a puff dispersion model on a full year of 1-hr prognostic weather data. The difference in wind patterns between coastal and inland areas creates significantly different skewness (S) and kurtosis (K) statistics for the annually averaged pollutant concentrations at ground-level receptor points in each grid cell. Plotting the S-K data highlights groupings of sources predominantly impacted by coastal winds versus inland winds. The application of the new method is demonstrated through a case study for the nation of Kuwait by developing new AQZs to support local air management programs. The zone boundaries established by the S-K method were validated in three ways: the MM5 and WRF prognostic meteorological data used in the air dispersion modeling were compared; a support vector machine classifier was trained to compare results with the graphical classification method; and the final zones were compared with data collected from Earth observation satellites to confirm the locations of high-exposure-risk areas. The resulting AQZs are more accurate and support efficient management strategies for air quality compliance targets affected by local coastal microclimates. A novel method to determine air quality zones in coastal urban areas is introduced using skewness (S) and kurtosis (K) statistics calculated from the grid concentration results of air dispersion models.
The method identifies land-sea breeze effects that can be used to manage local air quality in areas of similar microclimates.
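The S and K statistics at the core of the method are simple moments of the concentration series at each receptor point. A minimal sketch (the grouping of cells in the S-K plane, e.g. by a trained classifier, is a separate step):

```python
import numpy as np

def sk_stats(conc):
    """Skewness and kurtosis of a series of modeled ground-level
    concentrations for one virtual source / grid cell.

    Uses the population (biased) moment definitions; kurtosis is
    reported in the Pearson convention (normal distribution -> 3).
    """
    x = np.asarray(conc, dtype=float)
    m = x.mean()
    s = x.std()
    skew = np.mean((x - m) ** 3) / s ** 3
    kurt = np.mean((x - m) ** 4) / s ** 4
    return skew, kurt
```

Cells dominated by land-sea breezes produce concentration distributions with markedly different (S, K) pairs than inland cells, which is what makes the scatter of (S, K) values separable into zones.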
Germanium layers grown by zone thermal crystallization from a discrete liquid source
NASA Astrophysics Data System (ADS)
Yatsenko, A. N.; Chebotarev, S. N.; Lozovskii, V. N.; Mohamed, A. A. A.; Erimeev, G. A.; Goncharova, L. M.; Varnavskaya, A. A.
2017-11-01
A method for growing thin, uniform germanium layers on large silicon substrates is proposed and investigated. The technique uses hexagonally arranged local sources filled with liquid germanium. Germanium evaporates onto a very closely spaced substrate, and under these conditions the effect of the residual-gas vapor pressure is strongly reduced. It is shown that, to achieve a deposited-layer uniformity better than 97%, the critical thickness of the vacuum zone must be l_cr = 1.2 mm for a hexagonally arranged system of round local sources with a radius of r = 0.75 mm and a distance between sources of h = 0.5 mm.
NASA Astrophysics Data System (ADS)
Gai, V. E.; Polyakov, I. V.; Krasheninnikov, M. S.; Koshurina, A. A.; Dorofeev, R. A.
2017-01-01
Currently, the “Transport” scientific and educational center of NNSTU is working on the creation of a universal rescue vehicle. This vehicle is a robot intended to reduce the number of human casualties in accidents on offshore oil platforms. A pressing problem is the development of a method for determining the location of a person overboard in low-visibility conditions, when traditional vision is not effective. The acoustic sensor system is one of the most important robot sensory systems, because it is omnidirectional and does not require the acoustic source to be within the field of view. The acoustic sensor system can thus complement the capabilities of the video sensor in solving the problem of localizing a person or an event in the environment. This paper describes a method for determining the direction of an acoustic source using just one microphone. The proposed method is based on active perception theory.
Validation of Regression-Based Myogenic Correction Techniques for Scalp and Source-Localized EEG
McMenamin, Brenton W.; Shackman, Alexander J.; Maxwell, Jeffrey S.; Greischar, Lawrence L.; Davidson, Richard J.
2008-01-01
EEG and EEG source-estimation are susceptible to electromyographic artifacts (EMG) generated by the cranial muscles. EMG can mask genuine effects or masquerade as a legitimate effect - even in low frequencies, such as alpha (8–13Hz). Although regression-based correction has been used previously, only cursory attempts at validation exist and the utility for source-localized data is unknown. To address this, EEG was recorded from 17 participants while neurogenic and myogenic activity were factorially varied. We assessed the sensitivity and specificity of four regression-based techniques: between-subjects, between-subjects using difference-scores, within-subjects condition-wise, and within-subject epoch-wise on the scalp and in data modeled using the LORETA algorithm. Although within-subject epoch-wise showed superior performance on the scalp, no technique succeeded in the source-space. Aside from validating the novel epoch-wise methods on the scalp, we highlight methods requiring further development. PMID:19298626
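One of the regression-based corrections evaluated here can be sketched in a few lines: band power at an EEG electrode is regressed on power from a myogenic reference across epochs, and the EMG-predicted component is removed. This is an illustrative sketch under assumed inputs, not the authors' exact implementation:

```python
import numpy as np

def regress_out_emg(eeg_power, emg_power):
    """Epoch-wise regression correction (illustrative sketch).

    eeg_power, emg_power : (n_epochs,) band power (e.g., log alpha
    power) at one EEG electrode and at a myogenic reference channel.
    Returns EEG power with the EMG-predicted component removed.
    """
    # ordinary least squares: eeg ~ intercept + slope * emg
    X = np.column_stack([np.ones_like(emg_power), emg_power])
    beta, *_ = np.linalg.lstsq(X, eeg_power, rcond=None)
    residual = eeg_power - X @ beta
    # add back the mean so corrected values keep the original units
    return residual + eeg_power.mean()
```

Whatever variance the EMG channel predicts is removed; the sensitivity/specificity question studied in the paper is whether genuine neurogenic variance survives this subtraction.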
He, Tian; Xiao, Denghong; Pan, Qiang; Liu, Xiandong; Shan, Yingchun
2014-01-01
This paper introduces an improved acoustic emission (AE) beamforming method to localize rotor-stator rubbing faults in rotating machinery. To investigate the propagation characteristics of AE signals in the casing shell plate of rotating machinery, plate wave theory is applied to a thin plate. A simulation shows that the localization accuracy of beamforming depends on multiple wave modes, dispersion, velocity, and array dimension. To reduce the effect of these propagation characteristics on source localization, an AE signal pre-processing method combining plate wave theory and the wavelet packet transform is introduced, and a revised localization velocity that reduces the effect of array size is presented. The localization accuracy of standard beamforming and of the improved method are compared in a rubbing test carried out on a rotating machinery test rig. The results indicate that the improved method can localize the rub fault effectively.
NASA Astrophysics Data System (ADS)
Ciptadi, Gatot; Ihsan, M. Nur; Rahayu, Sri; Widjaja, D. H. K.; Mudawamah, Mudawamah
2017-11-01
The aim of this research is to study a potential source of mature (M-II) oocytes of domestic animals, using follicles isolated from prepubertal and aged Indonesian local goats and an in vitro growth (IVG) method. This IVG method could provide a new source of M-II oocytes for embryo production. In Indonesia, very few good-quality oocytes are available for research purposes, as a limited number of reproductive females are slaughtered, and these are dominated by prepubertal and aged animals. IVG culture systems could be developed as an alternative means of providing a new source of good-quality oocytes for in vitro maturation to M-II. From prepubertal and aged goats slaughtered in a local abattoir, small oocytes in preantral follicles were cultured in vitro to allow normal oocyte growth. The research design was experimental. Follicles were isolated and cultured in vitro individually for 14 days in a sticky medium containing 4% (w/v) polyvinylpyrrolidone in TCM 199 with 10% Fetal Bovine Serum supplemented with Follicle Stimulating Hormone, and then evaluated for follicle development and oocyte quality. The results showed that a minimum follicle size and oocyte diameter (>100 μm) are needed for early maturation to be achieved; meanwhile, oocytes recovered from IVG and then cultured in vitro for maturation showed a very low maturation rate. Nevertheless, IVG of preantral follicles of the Indonesian local goat could in the future be considered an alternative source of oocytes for both research purposes and in vitro embryo production.
Source localization of temporal lobe epilepsy using PCA-LORETA analysis on ictal EEG recordings.
Stern, Yaki; Neufeld, Miriam Y; Kipervasser, Svetlana; Zilberstein, Amir; Fried, Itzhak; Teicher, Mina; Adi-Japha, Esther
2009-04-01
Localizing the source of an epileptic seizure using noninvasive EEG suffers from inaccuracies produced by other generators not related to the epileptic source. The authors isolated the ictal epileptic activity and applied a source localization algorithm to identify its estimated location. Ten ictal EEG scalp recordings from five different patients were analyzed. The patients were known to have temporal lobe epilepsy with a single epileptic focus that had a concordant MRI lesion, and they had become seizure-free following partial temporal lobectomy. A midinterval (approximately 5 seconds) period of ictal activity was used for Principal Component Analysis, starting at ictal onset. The level of epileptic activity at each electrode (i.e., the eigenvector of the component that manifested epileptic characteristics) was used as input for low-resolution tomography analysis for the EEG inverse solution (Zilberstain et al., 2004). The algorithm accurately and robustly identified the epileptic focus in these patients. Principal component analysis and source localization methods can be used in the future to monitor the progression of an epileptic seizure and its expansion to other areas.
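The PCA step can be sketched as follows: the mid-ictal window (channels x samples) is decomposed, and the spatial eigenvector of the component judged to carry the epileptic activity supplies the per-electrode weights passed to the inverse solution. A minimal sketch; in practice, selecting the epileptic component requires inspection of its spectral and temporal character:

```python
import numpy as np

def ictal_component_topography(eeg, k=0):
    """Spatial weighting (eigenvector) of one principal component of
    an ictal EEG window.

    eeg : (n_channels, n_samples) scalp recording segment
    k   : index of the component to extract
    Returns the per-electrode weights (which could feed a LORETA-type
    inverse solution) and the variance of each component.
    """
    x = eeg - eeg.mean(axis=1, keepdims=True)   # remove channel means
    # columns of U are the spatial patterns (eigenvectors of the
    # channel covariance); s**2 / n_samples are their variances
    U, s, Vt = np.linalg.svd(x, full_matrices=False)
    return U[:, k], (s ** 2) / x.shape[1]
```

Using only the selected component's electrode weights, rather than the raw scalp data, is what suppresses the unrelated generators mentioned in the abstract.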
Kumar, M Kishore; Sreekanth, V; Salmon, Maëlle; Tonne, Cathryn; Marshall, Julian D
2018-08-01
This study uses spatiotemporal patterns in ambient concentrations to infer the contribution of regional versus local sources. We collected 12 months of monitoring data for outdoor fine particulate matter (PM2.5) in rural southern India. Rural India includes more than one-tenth of the global population and annually accounts for around half a million air pollution deaths, yet little is known about the relative contribution of local sources to outdoor air pollution. We measured 1-min averaged outdoor PM2.5 concentrations during June 2015-May 2016 in three villages, which varied in population size, socioeconomic status, and type and usage of domestic fuel. The daily geometric-mean PM2.5 concentration was ∼30 μg m−3 (geometric standard deviation: ∼1.5). Concentrations exceeded the Indian National Ambient Air Quality standard (60 μg m−3) during 2-5% of observation days. Average concentrations were ∼25 μg m−3 higher during winter than during monsoon and ∼8 μg m−3 higher during morning hours than the diurnal average. A moving average subtraction method based on 1-min average PM2.5 concentrations indicated that local contributions (e.g., nearby biomass combustion, brick kilns) were greater in the most populated village, and that overall the majority of ambient PM2.5 in our study was regional, implying that local air pollution control strategies alone may have limited influence on local ambient concentrations. We compared the relatively new moving average subtraction method against a more established approach. Both methods broadly agree on the relative contribution of local sources across the three sites. The moving average subtraction method has broad applicability across locations.
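A simple illustrative variant of moving average subtraction (not necessarily the exact algorithm of the study) can be sketched: estimate the slowly varying regional component of the 1-min series with a centered rolling mean and attribute the non-negative remainder, i.e. short-lived spikes from nearby combustion, to local sources:

```python
import numpy as np

def split_regional_local(pm, window=61):
    """Split a 1-min PM2.5 series into regional and local parts.

    pm     : 1-min averaged concentrations (1-D array-like)
    window : rolling-mean length in samples (odd, e.g. ~1 hour)
    Returns (regional baseline, non-negative local contribution).
    """
    pm = np.asarray(pm, dtype=float)
    kernel = np.ones(window) / window
    # pad with edge values so the baseline covers the full series
    padded = np.pad(pm, window // 2, mode="edge")
    baseline = np.convolve(padded, kernel, mode="valid")
    local = np.clip(pm - baseline, 0.0, None)
    return baseline, local
```

Averaging the `local` series over the year and comparing it with the baseline gives the kind of local-versus-regional apportionment discussed in the abstract.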
Design of laser monitoring and sound localization system
NASA Astrophysics Data System (ADS)
Liu, Yu-long; Xu, Xi-ping; Dai, Yu-ming; Qiao, Yang
2013-08-01
In this paper, a novel design for a laser monitoring and sound localization system is proposed. It uses a laser to monitor and locate the position of indoor conversation. At present, most laser monitors in China, whether laboratory setups or commercial instruments, use a photodiode or phototransistor as the detector. At the laser receivers of those devices, the light beams are adjusted so that only part of the photodiode or phototransistor window receives the beam. The reflection deviates from its original path because of the vibration of the monitored window, which shifts the imaging spot on the photodiode or phototransistor. However, such a method is limited, not only because it admits considerable stray light into the receiver but also because only a single photocurrent output can be obtained. Therefore a new method based on a quadrant detector is proposed. It uses the relation of the optical integrals among the quadrants to locate the position of the imaging spot. This method can eliminate background disturbance and acquire two-dimensional spot-vibration data. The principle of the whole system is as follows. Collimated laser beams are reflected from a window vibrating in response to the sound source, so the reflected beams are modulated by the vibration source. These optical signals are collected by quadrant detectors, processed by photoelectric converters and corresponding circuits, and the speech signals are finally reconstructed. In addition, sound source localization is implemented by detecting three different reflected light sources simultaneously. Indoor mathematical models based on the principle of Time Difference Of Arrival (TDOA) are established to calculate the two-dimensional coordinates of the sound source. Experiments showed that this system is able to monitor indoor sound sources beyond 15 meters with high-quality speech reconstruction and to locate the sound source position accurately.
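The TDOA localization step can be sketched as a brute-force search for the 2-D source position whose predicted inter-receiver delays best match the measured ones. This is a minimal illustration; the sensor layout, sound speed, and search extent below are assumed values, and the paper's closed-form indoor model would replace the grid search:

```python
import numpy as np

def tdoa_locate_2d(sensors, tdoas, c=343.0, extent=15.0, step=0.05):
    """Locate a 2-D sound source from time differences of arrival.

    sensors : (n, 2) receiver positions [m]
    tdoas   : (n-1,) delays of receivers 1..n-1 relative to receiver 0 [s]
    Brute-force search over a square of half-width `extent`.
    """
    xs = np.arange(-extent, extent + step, step)
    X, Y = np.meshgrid(xs, xs)
    pts = np.stack([X.ravel(), Y.ravel()], axis=1)
    # distance from every candidate point to every receiver
    d = np.linalg.norm(pts[:, None, :] - sensors[None, :, :], axis=2)
    pred = (d[:, 1:] - d[:, :1]) / c            # predicted TDOAs
    misfit = np.sum((pred - tdoas) ** 2, axis=1)
    return pts[np.argmin(misfit)]
```

Each measured delay constrains the source to a hyperbola; with three or more receivers the intersection of those hyperbolas (the misfit minimum) pins down the 2-D position.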
Determining the Depth of Infinite Horizontal Cylindrical Sources from Spontaneous Polarization Data
NASA Astrophysics Data System (ADS)
Cooper, G. R. J.; Stettler, E. H.
2017-03-01
Previously published semi-automatic interpretation methods that use ratios of analytic signal amplitudes of orders that differ by one to determine the distance to potential field sources are shown also to apply to self-potential (S.P.) data when the source is a horizontal cylinder. Local minima of the distance (when it becomes closest to zero) give the source depth. The method was applied to an S.P. anomaly from the Bourkes Luck potholes district in Mpumalanga Province, South Africa, and gave results that were confirmed by drilling.
Tang, Wei; Peled, Noam; Vallejo, Deborah I.; Borzello, Mia; Dougherty, Darin D.; Eskandar, Emad N.; Widge, Alik S.; Cash, Sydney S.; Stufflebeam, Steven M.
2018-01-01
Purpose Existing methods for sorting, labeling, registering, and across-subject localization of electrodes in intracranial encephalography (iEEG) may involve laborious work requiring manual inspection of radiological images. Methods We describe a new open-source software package, the interactive electrode localization utility, which presents a full pipeline for the registration, localization, and labeling of iEEG electrodes from CT and MR images. In addition, we describe a method to automatically sort and label electrodes from subdural grids of known geometry. Results We validated our software against manual inspection methods in twelve subjects undergoing iEEG for medically intractable epilepsy. Our algorithm for sorting and labeling performed correct identification on 96% of the electrodes. Conclusions The sorting and labeling methods we describe offer nearly perfect performance, and the software package we have distributed may simplify the process of registering, sorting, labeling, and localizing subdural iEEG grid electrodes compared with manual inspection. PMID:27915398
Beamspace fast fully adaptive brain source localization for limited data sequences
NASA Astrophysics Data System (ADS)
Ravan, Maryam
2017-05-01
In the electroencephalography (EEG) or magnetoencephalography (MEG) context, brain source localization methods that rely on estimating second-order statistics often fail when the observations are taken over a short time interval, especially when the number of electrodes is large. To address this issue, in a previous study we developed a multistage adaptive processing scheme called the fast fully adaptive (FFA) approach, which can significantly reduce the required sample support while still processing all available degrees of freedom (DOFs). This approach processes the observed data in stages through a decimation procedure. In this study, we introduce a new form of the FFA approach called beamspace FFA. We first divide the brain into smaller regions and transform the measured data from the source space to the beamspace in each region. The FFA approach is then applied to the beamspaced data of each region. The goal of this modification is to reduce the correlation sensitivity between sources in different brain regions. To demonstrate the performance of the beamspace FFA approach in the limited-data scenario, simulation results with multiple deep and cortical sources, as well as experimental results, are compared with the regular FFA and the widely used FINE approaches. Both simulation and experimental results demonstrate that the beamspace FFA method can localize different types of multiple correlated brain sources more accurately at low signal-to-noise ratios with limited data.
Grabowski, Krzysztof; Gawronski, Mateusz; Baran, Ireneusz; Spychalski, Wojciech; Staszewski, Wieslaw J; Uhl, Tadeusz; Kundu, Tribikram; Packo, Pawel
2016-05-01
Acoustic emission used in non-destructive testing focuses on the analysis of elastic waves propagating in mechanical structures. Information carried by the generated acoustic waves, recorded by a set of transducers, allows the integrity of these structures to be determined. Material properties and geometry strongly impact the result. In this paper a method for acoustic emission source localization in thin plates is presented. The approach is based on the Time-Distance Domain Transform, a wavenumber-frequency mapping technique for precise event localization. The major advantage of the technique is dispersion compensation through phase-shifting of the investigated waveforms to obtain the most accurate output, allowing source-sensor distance estimation using a single transducer. The accuracy and robustness of this process are also investigated, including the influence of the Young's modulus value and of numerical parameters on damage detection. By merging the Time-Distance Domain Transform with an optimal distance selection technique, an identification-localization algorithm is obtained. The method is investigated analytically, numerically and experimentally; the latter involves both laboratory and large-scale industrial tests. Copyright © 2016 Elsevier B.V. All rights reserved.
Han, Zifa; Leung, Chi Sing; So, Hing Cheung; Constantinides, Anthony George
2017-08-15
A commonly used measurement model for locating a mobile source is time-difference-of-arrival (TDOA). As each TDOA measurement defines a hyperbola, it is not straightforward to compute the mobile source position due to the nonlinear relationship in the measurements. This brief exploits the Lagrange programming neural network (LPNN), which provides a general framework to solve nonlinear constrained optimization problems, for the TDOA-based localization. The local stability of the proposed LPNN solution is also analyzed. Simulation results are included to evaluate the localization accuracy of the LPNN scheme by comparing with the state-of-the-art methods and the optimality benchmark of Cramér-Rao lower bound.
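For readers unfamiliar with the hyperbolic geometry the brief refers to, a minimal Gauss-Newton solver for the 2-D TDOA equations is sketched below. This is not the LPNN scheme of the paper but a conventional iterative baseline; the sound speed of 343 m/s and the sensor layout are illustrative assumptions.

```python
import numpy as np

def locate_tdoa(sensors, tdoas, guess, c=343.0, iters=50):
    """Gauss-Newton solution of the hyperbolic TDOA equations in 2-D.
    sensors: (M, 2) array of sensor positions;
    tdoas:   (M-1,) arrival-time differences relative to sensor 0;
    guess:   initial (x, y) estimate of the source position."""
    pos = np.asarray(guess, dtype=float)
    tdoas = np.asarray(tdoas, dtype=float)
    for _ in range(iters):
        d = np.linalg.norm(sensors - pos, axis=1)   # sensor-source ranges
        u = (pos - sensors) / d[:, None]            # unit vectors to source
        J = u[1:] - u[0]                            # Jacobian of d_i - d_0
        r = (d[1:] - d[0]) - c * tdoas              # range-difference residual
        step, *_ = np.linalg.lstsq(J, -r, rcond=None)
        pos = pos + step
        if np.linalg.norm(step) < 1e-9:
            break
    return pos
```

Each TDOA fixes one hyperbola (constant range difference); the solver finds the point where all residuals vanish in the least-squares sense.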
Wavelet-based localization of oscillatory sources from magnetoencephalography data.
Lina, J M; Chowdhury, R; Lemay, E; Kobayashi, E; Grova, C
2014-08-01
Transient brain oscillatory activities recorded with electroencephalography (EEG) or magnetoencephalography (MEG) are characteristic features of physiological and pathological processes. This study aims to describe, evaluate, and illustrate with clinical data a new method for localizing the sources of oscillatory cortical activity recorded by MEG. The method combines time-frequency representation and an entropic regularization technique in a common framework, assuming that brain activity is sparse in time and space. Spatial sparsity relies on the assumption that brain activity is organized among cortical parcels. Sparsity in time is achieved by transposing the inverse problem into the wavelet representation, for both data and sources. We propose an estimator of the wavelet coefficients of the sources based on the maximum entropy on the mean (MEM) principle. The full dynamics of the sources are obtained from the inverse wavelet transform, and principal component analysis of the reconstructed time courses is applied to extract the oscillatory components. This methodology is evaluated using realistic simulations of single-trial signals, combining fast and sudden discharges (spikes) with bursts of oscillating activity. The method is finally illustrated with a clinical application using MEG data acquired on a patient with right orbitofrontal epilepsy.
Ewald, Arne; Avarvand, Forooz Shahbazi; Nolte, Guido
2014-11-01
We introduce a novel method to estimate bivariate synchronization, i.e. interacting brain sources at a specific frequency or band, from MEG or EEG data, robust to artifacts of volume conduction. The data-driven calculation is based solely on the imaginary part of the cross-spectrum, as opposed to the imaginary part of coherency. In principle, the method quantifies how strong a synchronization between a distinct pair of brain sources is present in the data. As input to the method, all pairs of pre-defined locations inside the brain can be used, which is computationally expensive. Alternatively, reference sources identified by any source reconstruction technique in a prior analysis step can be used. We introduce different variants of the method and evaluate their performance in simulations. As a particular advantage of the proposed methodology, we demonstrate that the novel approach is capable of investigating differences in brain source interactions between experimental conditions or with respect to a certain baseline. For measured data, we first show the application on resting-state MEG data, where we find locally synchronized sources in the motor cortex based on the sensorimotor idle rhythms. Finally, we show an example on EEG motor imagery data where we contrast hand and foot movements; here, we also find local interactions in the expected brain areas. Copyright © 2014. Published by Elsevier Inc.
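A minimal sketch of the quantity at the heart of this method, the imaginary part of the cross-spectrum, is given below. The segment length and Hann window are illustrative choices, not the paper's exact estimator; the key property shown is that a purely instantaneous (zero-lag) mixture, as produced by volume conduction, yields a vanishing imaginary part.

```python
import numpy as np

def imag_cross_spectrum(x, y, fs, nfft=256):
    """Imaginary part of the cross-spectrum of x and y, averaged over
    non-overlapping Hann-windowed segments.  A non-zero value indicates
    time-lagged coupling, which volume conduction alone cannot produce."""
    nseg = len(x) // nfft
    win = np.hanning(nfft)
    acc = np.zeros(nfft // 2 + 1, dtype=complex)
    for k in range(nseg):
        xs = np.fft.rfft(win * x[k * nfft:(k + 1) * nfft])
        ys = np.fft.rfft(win * y[k * nfft:(k + 1) * nfft])
        acc += xs * np.conj(ys)          # per-segment cross-spectrum
    freqs = np.fft.rfftfreq(nfft, 1.0 / fs)
    return freqs, (acc / nseg).imag
```

Two identical (zero-lag) signals give an imaginary part of exactly zero, while a 90-degree phase lag at some frequency produces a clear peak there.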
NASA Astrophysics Data System (ADS)
Vergallo, P.; Lay-Ekuakille, A.
2013-08-01
Brain activity can be recorded by means of EEG (electroencephalogram) electrodes placed on the scalp of the patient. The EEG reflects the activity of groups of neurons located in the head, and the fundamental problem in neurophysiology is the identification of the sources responsible for brain activity, especially when a seizure occurs and must be identified. Studies conducted to formalize the relationship between the electromagnetic activity in the head and the recording of the generated external field make it possible to know the pattern of brain activity. The inverse problem, in which the underlying sources must be determined from the field sampled at the different electrodes, is more difficult because it may not have a unique solution, or the search for the solution is hampered by a low spatial resolution that may not allow activities involving sources close to each other to be distinguished. Thus, sources of interest may be obscured or not detected, and well-known source localization methods such as MUSIC (MUltiple SIgnal Classification) can fail. Many advanced source localization techniques achieve better resolution by exploiting sparsity: if the number of sources is small, the neural power versus location is sparse. In this work a solution based on the spatial sparsity of the field signal is presented and analyzed to improve the MUSIC method. For this purpose, it is necessary to set a priori information about the sparsity of the signal. The problem is formulated and solved using a regularization method such as Tikhonov's, which calculates the solution that is the best compromise between two cost functions to be minimized: one related to the fitting of the data, and the other concerning the maintenance of the sparsity of the signal. First, the method is tested on simulated EEG signals obtained by solving the forward problem.
Relative to the model considered for the head and brain sources, the result obtained represents a significant improvement over the classical MUSIC method, with a small margin of uncertainty about the exact location of the sources. In fact, the spatial sparsity constraints on the field signal concentrate power in the directions of the active sources, and consequently it is possible to calculate the position of the sources within the considered volume conductor. The method is then tested on real EEG data as well. The result is in accordance with the clinical report, even if improvements are necessary to obtain more accurate estimates of the positions of the sources.
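The classical MUSIC scan that the sparsity constraint aims to improve can be sketched as follows. The steering (lead-field) matrix and source count here are synthetic stand-ins, not an EEG forward model; the essential step is projecting candidate source topographies onto the noise subspace of the data covariance.

```python
import numpy as np

def music_spectrum(R, steering, n_sources):
    """MUSIC pseudospectrum: for each candidate column of `steering`,
    measure how nearly orthogonal it is to the noise subspace of the
    data covariance R.  Peaks mark likely source locations."""
    w, V = np.linalg.eigh(R)                 # eigenvalues ascending
    En = V[:, :R.shape[0] - n_sources]       # noise-subspace eigenvectors
    P = np.empty(steering.shape[1])
    for i in range(steering.shape[1]):
        a = steering[:, i]
        # small epsilon guards against division by ~0 at a true source
        P[i] = (a @ a) / (np.abs(a @ En @ En.T @ a) + 1e-18)
    return P
```

When the covariance is built from a few of the steering columns, the pseudospectrum peaks at exactly those columns; closely spaced or correlated sources blur these peaks, which is the failure mode the sparsity-regularized variant addresses.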
Heers, Marcel; Hirschmann, Jan; Jacobs, Julia; Dümpelmann, Matthias; Butz, Markus; von Lehe, Marec; Elger, Christian E; Schnitzler, Alfons; Wellmer, Jörg
2014-09-01
Spike-based magnetoencephalography (MEG) source localization is an established method in the presurgical evaluation of epilepsy patients. Focal cortical dysplasias (FCDs) are associated with focal epileptic discharges of variable morphologies in the beta frequency band (12-30 Hz) in addition to single epileptic spikes. Therefore, we investigated the potential diagnostic value of MEG-based localization of spike-independent beta band activity generated by epileptogenic lesions. Five patients with FCD IIB underwent MEG. In one patient, invasive EEG (iEEG) was recorded simultaneously with MEG. In two patients, iEEG succeeded MEG, and two patients had MEG only. MEG and iEEG were evaluated for epileptic spikes. Two minutes of iEEG data and MEG epochs with no spikes, as well as MEG epochs with epileptic spikes, were analyzed in the frequency domain. MEG oscillatory beta band activity was localized using Dynamic Imaging of Coherent Sources. Intralesional beta band activity was coherent between simultaneous MEG and iEEG recordings. Continuous 14 Hz beta band power correlated with the rate of interictal epileptic discharges detected in iEEG. In cases where visual MEG evaluation revealed epileptic spikes, the sources of beta band activity localized within <2 cm of the epileptogenic lesion as shown on magnetic resonance imaging. This result held even when visually marked epileptic spikes were deselected. When epileptic spikes were detectable in iEEG but not MEG, MEG beta band activity source localization failed. Source localization of beta band activity has the potential to contribute to the identification of epileptic foci in addition to source localization of visually marked epileptic spikes. Thus, this technique may assist in the localization of epileptic foci in patients with suspected FCD. Copyright © 2014 Elsevier B.V. All rights reserved.
Wang, Qin; Zhou, Xing-Yu; Guo, Guang-Can
2016-01-01
In this paper, we put forward a new approach towards realizing measurement-device-independent quantum key distribution with passive heralded single-photon sources. In this approach, both Alice and Bob prepare a parametric down-conversion source, where the heralding photons are labeled according to the different types of clicks from the local detectors, and the heralded photons can correspondingly be marked with different tags at the receiver's side. One can then obtain four sets of data using only one intensity of pump light by observing the different kinds of clicks of the local detectors. By employing the newest formulae for parameter estimation, we can achieve a very precise prediction of the two-single-photon pulse contribution. Furthermore, through corresponding numerical simulations, we compare the new method with other practical schemes of measurement-device-independent quantum key distribution. We demonstrate that our proposed passive scheme exhibits remarkable improvement over the conventional three-intensity decoy-state measurement-device-independent quantum key distribution with either heralded single-photon sources or weak coherent sources. Moreover, it does not need intensity modulation and can thus diminish the source-error defects present in several other active decoy-state methods. Therefore, when intensity-modulation errors are taken into account, our new method shows an even greater advantage. PMID:27759085
Nakajima, Midori; Wong, Simeon; Widjaja, Elysa; Baba, Shiro; Okanishi, Tohru; Takada, Lynne; Sato, Yosuke; Iwata, Hiroki; Sogabe, Maya; Morooka, Hikaru; Whitney, Robyn; Ueda, Yuki; Ito, Tomoshiro; Yagyu, Kazuyori; Ochi, Ayako; Carter Snead, O; Rutka, James T; Drake, James M; Doesburg, Sam; Takeuchi, Fumiya; Shiraishi, Hideaki; Otsubo, Hiroshi
2018-06-01
To investigate whether advanced dynamic statistical parametric mapping (AdSPM) using magnetoencephalography (MEG) can better localize focal cortical dysplasia at the bottom of a sulcus (FCDB). We analyzed 15 children with a diagnosis of FCDB in the surgical specimen and on 3 T MRI using MEG. Using AdSPM, we analyzed a ±50 ms epoch relative to each single moving dipole (SMD) and applied a summation technique to estimate the source activity. The most active area in AdSPM was defined as the location of the AdSPM spike source. We compared the spatial congruence between MRI-visible FCDB and (1) the dipole cluster in the SMD method; and (2) the AdSPM spike source. AdSPM localized FCDB in 12 (80%) of 15 children, whereas the dipole cluster localized six (40%). The AdSPM spike source was concordant with the seizure onset zone in nine (82%) of 11 children with intracranial video EEG. Eleven children who underwent resective surgery achieved seizure freedom over a follow-up period of 1.9 ± 1.5 years; ten (91%) of them had an AdSPM spike source in the resection area. AdSPM can noninvasively and neurophysiologically localize epileptogenic FCDB, whether or not it overlaps with the dipole cluster. This is the first study to localize epileptogenic FCDB using MEG. Copyright © 2018 International Federation of Clinical Neurophysiology. Published by Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Saracco, Ginette; Labazuy, Philippe; Moreau, Frédérique
2004-06-01
This study concerns the fluid flow circulation associated with magmatic intrusion during volcanic eruptions, based on electrical tomography studies. The objective is to localize and characterize the sources responsible for electrical disturbances during a time-evolution survey, between 1993 and 1999, of an active volcano, Piton de la Fournaise. We applied dipolar probability tomography and multi-scale analysis to synthetic and experimental SP data. We show the advantage of the complex continuous wavelet transform, which makes it possible to obtain directional information from the phase without a priori information on the sources. In both cases, we observe a migration of potential sources toward shallower depths, around specific faults or structural features, during the periods preceding a volcanic eruption. The set of parameters obtained (vertical and horizontal localization, multipolar degree and inclination) could be taken into account as criteria to define volcanic precursors.
NASA Astrophysics Data System (ADS)
Bahaweres, R. B.; Mokoginta, S.; Alaydrus, M.
2017-01-01
This paper describes a comparison of three methods used to locate the position of the source of deauthentication attacks on Wi-Fi using a Chanalyzer and a Wi-Spy 2.4x adapter. The three methods are wardriving, absorption and trilateration. The position of constant deauthentication attacks is more easily analyzed than that of random attacks. Signal propagation provides a relationship between signal strength and distance, which makes the position of attackers easier to locate. The results are shown in chart patterns generated from the Received Signal Strength Indicator (RSSI). It is shown that these three methods can localize the position of attackers and can be recommended for use in organizations using Wi-Fi.
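The trilateration step can be sketched as follows: an illustrative log-distance path-loss model converts RSSI to range, and a linearized least-squares solve intersects the range circles. The reference power at 1 m and the path-loss exponent are assumed values that must be calibrated for the actual environment; they are not taken from the paper.

```python
import numpy as np

def rssi_to_distance(rssi, rssi0=-40.0, n=2.0):
    """Log-distance path-loss model: range in metres from received power
    in dBm.  rssi0 (power at 1 m) and n (path-loss exponent) are
    illustrative defaults and must be calibrated per environment."""
    return 10.0 ** ((rssi0 - rssi) / (10.0 * n))

def trilaterate(anchors, dists):
    """Linearized least-squares trilateration in 2-D from >= 3 anchor
    positions (N, 2) and range estimates (N,).  Subtracting the first
    circle equation from the others yields a linear system in (x, y)."""
    A = 2.0 * (anchors[1:] - anchors[0])
    b = (dists[0] ** 2 - dists[1:] ** 2
         + np.sum(anchors[1:] ** 2, axis=1) - np.sum(anchors[0] ** 2))
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos
```

With exact ranges the solve is exact; with noisy RSSI-derived ranges the least-squares form averages the inconsistencies across anchors.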
Improvements to Passive Acoustic Tracking Methods for Marine Mammal Monitoring
2016-05-02
Keywords: Marine mammal; Passive acoustic monitoring; Localization; Tracking; Multiple source; Sparse array. The work addresses separating and associating calls from individual animals, and estimating hydrophone position and hydrophone timing offset in addition to animal position (almost all marine mammal tracking methods treat animal position as the only unknown). The animals in the test data (Workshop on Detection, Classification and Localization (DCL) of Marine Mammals) were expected to be relatively close to the surface.
Llinás, Rodolfo R.; Ustinin, Mikhail N.; Rykunov, Stanislav D.; Boyko, Anna I.; Sychev, Vyacheslav V.; Walton, Kerry D.; Rabello, Guilherme M.; Garcia, John
2015-01-01
A new method for the analysis and localization of brain activity has been developed, based on multichannel magnetic field recordings, over minutes, superimposed on the MRI of the individual. A high-resolution Fourier transform is obtained over the entire recording period, leading to a detailed multi-frequency spectrum. Further analysis implements a total decomposition of the frequency components into functionally invariant entities, each having an invariant field pattern localizable in recording space. The method, termed functional tomography, makes it possible to find the distribution of magnetic field sources in space. Here, the method is applied to the analysis of simulated data, to oscillating signals activating a physical current-dipole phantom, and to recordings of spontaneous brain activity in 10 healthy adults. In the analysis of simulated data, 61 dipoles are localized with 0.7 mm precision. For the physical phantom, the method is able to localize three simultaneously activated current dipoles with 1 mm precision. A spatial resolution of 3 mm was attained when localizing spontaneous alpha rhythm activity in 10 healthy adults, where the alpha peak was specified for each subject individually. Co-registration of the functional tomograms with each subject's head MRI localized alpha-range activity to the occipital and/or posterior parietal brain region. This is the first application of this new functional tomography to human brain activity. The method successfully provides an overall view of brain electrical activity, a detailed spectral description and, combined with MRI, the localization of sources in anatomical brain space. PMID:26528119
Time-dependent wave splitting and source separation
NASA Astrophysics Data System (ADS)
Grote, Marcus J.; Kray, Marie; Nataf, Frédéric; Assous, Franck
2017-02-01
Starting from classical absorbing boundary conditions, we propose a method for the separation of time-dependent scattered wave fields due to multiple sources or obstacles. In contrast to previous techniques, our method is local in space and time, deterministic, and avoids a priori assumptions on the frequency spectrum of the signal. Numerical examples in two space dimensions illustrate the usefulness of wave splitting for time-dependent scattering problems.
NASA Astrophysics Data System (ADS)
Mertsch, Philipp; Rameez, Mohamed; Tamborra, Irene
2017-03-01
Constraints on the number and luminosity of the sources of the cosmic neutrinos detected by IceCube have been set by targeted searches for point sources. We set complementary constraints by using the 2MASS Redshift Survey (2MRS) catalogue, which maps the matter distribution of the local Universe. Assuming that the distribution of the neutrino sources follows that of matter, we look for correlations between "warm" spots on the IceCube skymap and the 2MRS matter distribution. Through Monte Carlo simulations of the expected number of neutrino multiplets and careful modelling of the detector performance (including that of IceCube-Gen2), we demonstrate that sources with local density exceeding 10^-6 Mpc^-3 and neutrino luminosity L_ν ≲ 10^42 erg s^-1 (10^41 erg s^-1) will be efficiently revealed by our method using IceCube (IceCube-Gen2). At low luminosities such as will be probed by IceCube-Gen2, the sensitivity of this analysis is superior to requiring statistically significant direct observation of a point source.
NASA Astrophysics Data System (ADS)
Yuan, Shihao; Fuji, Nobuaki; Singh, Satish; Borisov, Dmitry
2017-06-01
We present a methodology to invert seismic data for a localized area by combining source-side wavefield injection and a receiver-side extrapolation method. Despite the high resolving power of seismic full waveform inversion, the computational cost of practical-scale elastic or viscoelastic waveform inversion remains a heavy burden. This can be much more severe for time-lapse surveys, which require real-time seismic imaging on a daily or weekly basis. Moreover, changes in the structure during time-lapse surveys are likely to occur in a small area rather than the whole region of the seismic experiment, such as an oil and gas reservoir or a CO2 injection well. We thus propose an approach that makes it possible to image, effectively and quantitatively, localized structural changes deep below both the source and receiver arrays. In our method, we perform both forward and back propagation only inside the target region. First, we look for the equivalent source expression enclosing the region of interest by using the wavefield injection method. Second, we extrapolate the wavefield from physical receivers located near the Earth's surface or on the ocean bottom to an array of virtual receivers in the subsurface by using the correlation-type representation theorem. In this study, we present various 2-D elastic numerical examples of the proposed method and quantitatively evaluate the errors in the obtained models in comparison to those of conventional full-model inversions. The results show that the proposed localized waveform inversion is not only efficient and robust but also accurate, even in the presence of errors in both the initial models and the observed data.
Aarabi, A; Grebe, R; Berquin, P; Bourel Ponchel, E; Jalin, C; Fohlen, M; Bulteau, C; Delalande, O; Gondry, C; Héberlé, C; Moullart, V; Wallois, F
2012-06-01
This case study aims to demonstrate that spatiotemporal spike discrimination and source analysis are effective for monitoring the development of sources of epileptic activity in time and space. They can therefore provide clinically useful information, allowing a better understanding of the pathophysiology of individual seizures with time- and space-resolved characteristics of the successive epileptic states, including the interictal, preictal, postictal, and ictal states. High-spatial-resolution scalp EEGs (HR-EEG) were acquired from a 2-year-old girl with refractory central epilepsy and single-focus seizures, as confirmed by intracerebral EEG recordings and ictal single-photon emission computed tomography (SPECT). Evaluation of the HR-EEG consists of three global steps: (1) creation of the initial head model, (2) automatic spike and seizure detection, and finally (3) source localization. During the source localization phase, the epileptic states are determined to allow state-based spike detection and localization of the underlying sources for each spike. In a final cluster analysis, the localization results are integrated to determine the possible sources of epileptic activity. The results were compared with the cerebral locations identified by the intracerebral EEG recordings and SPECT, and were concordant with those of MRI, SPECT and the distribution of intracerebral potentials. The dipole cluster centres found for spikes in the interictal, preictal, ictal and postictal states were situated an average of 6.3 mm from the intracerebral contacts with the highest voltage. Both the amplitude and the shape of the spikes change between states. The dispersion of the dipoles was higher in the preictal state than in the postictal state. Two clusters of spikes were identified; the centres of these clusters changed position periodically during the various epileptic states.
High-resolution surface EEG evaluated by an advanced algorithmic approach can be used to investigate the spatiotemporal characteristics of sources located in the epileptic focus. The results were validated by standard methods, ensuring good spatial resolution by MRI and SPECT and optimal temporal resolution by intracerebral EEG. Surface EEG can be used to identify different spike clusters and sources of the successive epileptic states. The method that was used in this study will provide physicians with a better understanding of the pathophysiological characteristics of epileptic activities. In particular, this method may be useful for more effective positioning of implantable intracerebral electrodes. Copyright © 2011 Elsevier Masson SAS. All rights reserved.
Truncated RAP-MUSIC (TRAP-MUSIC) for MEG and EEG source localization.
Mäkelä, Niko; Stenroos, Matti; Sarvas, Jukka; Ilmoniemi, Risto J
2018-02-15
Electrically active brain regions can be located by applying MUltiple SIgnal Classification (MUSIC) to magneto- or electroencephalographic (MEG; EEG) data. We introduce a new MUSIC method, called truncated recursively-applied-and-projected MUSIC (TRAP-MUSIC). It corrects a hidden deficiency of the conventional RAP-MUSIC algorithm, which prevents accurate estimation of the true number of brain-signal sources. The correction is done by applying a sequential dimension reduction to the signal-subspace projection. We show that TRAP-MUSIC significantly improves the performance of MUSIC-type localization; in particular, it successfully and robustly locates active brain regions and estimates their number. We compare TRAP-MUSIC and RAP-MUSIC in simulations with varying key parameters, e.g., signal-to-noise ratio, correlation between source time courses, and the initial estimate of the signal-space dimension. In addition, we validate TRAP-MUSIC with measured MEG data. We suggest that with the proposed TRAP-MUSIC method, MUSIC-type localization could become more reliable and suitable for various online and offline MEG and EEG applications. Copyright © 2017 Elsevier Inc. All rights reserved.
Ictal and interictal electric source imaging in presurgical evaluation: a prospective study.
Sharma, Praveen; Scherg, Michael; Pinborg, Lars H; Fabricius, Martin; Rubboli, Guido; Pedersen, Birthe; Leffers, Anne-Mette; Uldall, Peter; Jespersen, Bo; Brennum, Jannick; Mølby Henriksen, Otto; Beniczky, Sándor
2018-05-11
Accurate localization of the epileptic focus is essential for surgical treatment of patients with drug-resistant epilepsy. EEG source imaging (ESI) is increasingly used in presurgical evaluation; however, most previous studies analysed interictal discharges, and prospective studies comparing the feasibility and accuracy of interictal (II) and ictal (IC) ESI are lacking. We prospectively analysed long-term video EEG recordings (LTM) of patients admitted for presurgical evaluation. We performed ESI of II and IC signals using two methods: equivalent current dipole (ECD) and distributed source model (DSM). LTM recordings employed the standard 25-electrode array (including inferior temporal electrodes). An age-matched template head model was used for source analysis. Results were compared with intracranial recordings (ICR), conventional neuroimaging methods (MRI, PET, SPECT) and outcome one year after surgery. Eighty-seven consecutive patients were analysed. ECD gave a significantly higher proportion of patients with localised focal abnormalities (94%) compared to MRI (70%), PET (66%) and SPECT (64%). Agreement between the ESI methods and ICR was moderate to substantial (κ = 0.56-0.79). Fifty-four patients underwent surgery (47 of them more than one year ago) and 62% of them became seizure-free. Localization accuracy of II-ESI was 51% for DSM and 57% for ECD; for IC-ESI it was 51% (DSM) and 62% (ECD). The differences between the ESI methods were not significant, nor were the differences in localization accuracy between ESI and MRI (55%), PET (33%) and SPECT (40%). II and IC ESI of LTM data are highly feasible and their localization accuracy is similar to that of conventional neuroimaging methods. This article is protected by copyright. All rights reserved.
NASA Astrophysics Data System (ADS)
Ding, Lei; Lai, Yuan; He, Bin
2005-01-01
It is important to localize neural sources from scalp-recorded EEG. Low-resolution brain electromagnetic tomography (LORETA) has received considerable attention for localizing brain electrical sources. However, most such efforts have used spherical head models to represent the head volume conductor. Investigating the performance of LORETA in a realistic-geometry head model, as compared with the spherical model, provides useful information for interpreting data obtained with the spherical head model. The performance of LORETA was evaluated by means of computer simulations. The boundary element method was used to solve the forward problem. A three-shell realistic-geometry (RG) head model was constructed from MRI scans of a human subject. Dipole source configurations of a single dipole located in different regions of the brain, with varying depth, were used to assess the performance of LORETA across the brain. A three-sphere head model was also used to approximate the RG head model; similar simulations were performed, and the results were compared with those of RG-LORETA with reference to the locations of the simulated sources. Multi-source localizations are discussed and examples given in the RG head model. Localization errors employing the spherical LORETA, with reference to the source locations within the realistic-geometry head, were about 20-30 mm for the four brain regions evaluated: frontal, parietal, temporal and occipital. Localization errors employing the RG head model were about 10 mm over the same four regions. The present simulation results suggest that the use of the RG head model reduces the localization error of LORETA, and that the RG-head-model-based LORETA is desirable if high localization accuracy is needed.
Cross-coherent vector sensor processing for spatially distributed glider networks.
Nichols, Brendan; Sabra, Karim G
2015-09-01
Autonomous underwater gliders fitted with vector sensors can be used as a spatially distributed sensor array to passively locate underwater sources. However, to date, the positional accuracy required for robust array processing (especially coherent processing) is not achievable using dead-reckoning while the gliders remain submerged. To obtain such accuracy, the gliders can be temporarily surfaced to allow for global positioning system contact, but the acoustically active sea surface introduces locally additional sensor noise. This letter demonstrates that cross-coherent array processing, which inherently mitigates the effects of local noise, outperforms traditional incoherent processing source localization methods for this spatially distributed vector sensor network.
A second catalog of gamma ray bursts: 1978 - 1980 localizations from the interplanetary network
NASA Technical Reports Server (NTRS)
Atteia, J. L.; Barat, C.; Hurley, K.; Niel, M.; Vedrenne, G.; Evans, W. D.; Fenimore, E. E.; Klebesadel, R. W.; Laros, J. G.; Cline, T. L.
1985-01-01
Eighty-two gamma ray bursts were detected between 1978 September 14 and 1980 February 13 by the experiments of the interplanetary network (Prognoz 7, Venera 11 and 12 SIGNE experiments, Pioneer Venus Orbiter, International Sun-Earth Explorer 3, Helios 2, and Vela). Sixty-five of these events have been localized to annuli or error boxes by the method of arrival time analysis. The distribution of sources is consistent with isotropy, and there is no statistically convincing evidence for the detection of more than one burst from any source position. The localizations are compared with those of two previous catalogs.
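Arrival-time analysis localizes a burst to an annulus on the sky: the delay Δt between two widely separated spacecraft fixes the angle θ between the source direction and the baseline through cos θ = cΔt/D. A minimal sketch (hypothetical helper function, illustrative numbers):

```python
import math

C = 299_792.458  # speed of light, km/s

def annulus_half_angle(dt_s, baseline_km):
    """Half-angle (degrees) of the localization annulus about the baseline
    vector, from the burst arrival-time difference dt_s (s) between two
    spacecraft separated by baseline_km (km)."""
    cos_theta = C * dt_s / baseline_km
    if not -1.0 <= cos_theta <= 1.0:
        raise ValueError("time difference exceeds light travel time over the baseline")
    return math.degrees(math.acos(cos_theta))
```

Intersecting the annuli from several spacecraft pairs shrinks the localization to a small error box, as was done for the 65 localized events.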
iElectrodes: A Comprehensive Open-Source Toolbox for Depth and Subdural Grid Electrode Localization.
Blenkmann, Alejandro O; Phillips, Holly N; Princich, Juan P; Rowe, James B; Bekinschtein, Tristan A; Muravchik, Carlos H; Kochen, Silvia
2017-01-01
The localization of intracranial electrodes is a fundamental step in the analysis of invasive electroencephalography (EEG) recordings in research and clinical practice. The conclusions reached from the analysis of these recordings rely on the accuracy of electrode localization in relationship to brain anatomy. However, currently available techniques for localizing electrodes from magnetic resonance (MR) and/or computerized tomography (CT) images are time consuming and/or limited to particular electrode types or shapes. Here we present iElectrodes, an open-source toolbox that provides robust and accurate semi-automatic localization of both subdural grids and depth electrodes. Using pre- and post-implantation images, the method takes 2-3 min to localize the coordinates in each electrode array and automatically number the electrodes. The proposed pre-processing pipeline allows one to work in a normalized space and to automatically obtain anatomical labels of the localized electrodes without neuroimaging experts. We validated the method with data from 22 patients implanted with a total of 1,242 electrodes. We show that localization distances were within 0.56 mm of those achieved by experienced manual evaluators. iElectrodes provided additional advantages in terms of robustness (even with severe perioperative cerebral distortions), speed (less than half the operator time compared to expert manual localization), simplicity, utility across multiple electrode types (surface and depth electrodes) and all brain regions.
Ma, Junjie; Meng, Fansheng; Zhou, Yuexi; Wang, Yeyao; Shi, Ping
2018-02-16
Pollution accidents that occur in surface waters, especially in drinking water source areas, greatly threaten the urban water supply system. During water pollution source localization, there are complicated pollutant spreading conditions and pollutant concentrations vary in a wide range. This paper provides a scalable total solution, investigating a distributed localization method in wireless sensor networks equipped with mobile ultraviolet-visible (UV-visible) spectrometer probes. A wireless sensor network is defined for water quality monitoring, where unmanned surface vehicles and buoys serve as mobile and stationary nodes, respectively. Both types of nodes carry UV-visible spectrometer probes to acquire in-situ multiple water quality parameter measurements, in which a self-adaptive optical path mechanism is designed to flexibly adjust the measurement range. A novel distributed algorithm, called Dual-PSO, is proposed to search for the water pollution source, where one particle swarm optimization (PSO) procedure computes the water quality multi-parameter measurements on each node, utilizing UV-visible absorption spectra, and another one finds the global solution of the pollution source position, regarding mobile nodes as particles. Besides, this algorithm uses entropy to dynamically recognize the most sensitive parameter during searching. Experimental results demonstrate that online multi-parameter monitoring of a drinking water source area with a wide dynamic range is achieved by this wireless sensor network and water pollution sources are localized efficiently with low-cost mobile node paths.
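The outer loop of the Dual-PSO idea, in which mobile nodes act as particles climbing the measured concentration field toward the pollution source, can be sketched as follows (a synthetic Gaussian plume and generic PSO constants; the authors' algorithm additionally runs an inner PSO for spectral inversion and an entropy-based parameter selection, both omitted here):

```python
import numpy as np

rng = np.random.default_rng(2)
SOURCE = np.array([3.0, -2.0])   # hypothetical pollution source position

def concentration(p):
    """Idealized unimodal plume: concentration peaks at the source."""
    return np.exp(-np.sum((p - SOURCE) ** 2))

n, iters = 30, 200
pos = rng.uniform(-10, 10, (n, 2))       # mobile nodes scattered over the area
vel = np.zeros((n, 2))
pbest = pos.copy()
pbest_val = np.array([concentration(p) for p in pos])
gbest = pbest[pbest_val.argmax()].copy()

for _ in range(iters):
    r1, r2 = rng.random((n, 1)), rng.random((n, 1))
    # standard PSO velocity update: inertia + cognitive + social terms
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = pos + vel
    val = np.array([concentration(p) for p in pos])
    improved = val > pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], val[improved]
    gbest = pbest[pbest_val.argmax()].copy()
```

After the swarm converges, gbest is the estimated source position; in the paper this role is played by the unmanned surface vehicles, whose paths are exactly the particle trajectories.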
An experimental comparison of various methods of nearfield acoustic holography
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chelliah, Kanthasamy; Raman, Ganesh; Muehleisen, Ralph T.
2017-05-19
An experimental comparison of four different methods of nearfield acoustic holography (NAH) is presented in this study for planar acoustic sources. The four NAH methods considered in this study are based on: (1) spatial Fourier transform, (2) equivalent sources model, (3) boundary element methods and (4) statistically optimized NAH. Two dimensional measurements were obtained at different distances in front of a tonal sound source and the NAH methods were used to reconstruct the sound field at the source surface. Reconstructed particle velocity and acoustic pressure fields presented in this study showed that the equivalent sources model based algorithm along with Tikhonov regularization provided the best localization of the sources. Reconstruction errors were found to be smaller for the equivalent sources model based algorithm and the statistically optimized NAH algorithm. The effect of hologram distance on the performance of the various algorithms is discussed in detail. The study also compares the computational time required by each algorithm. Four different regularization parameter choice methods were compared. The L-curve method provided more accurate reconstructions than the generalized cross validation and the Morozov discrepancy principle. Finally, the performance of fixed parameter regularization was comparable to that of the L-curve method.
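The equivalent-source reconstruction with Tikhonov regularization and L-curve parameter choice reduces, in skeleton form, to the following (a hypothetical 1D smooth kernel stands in for the acoustic propagator, and the corner detection is a crude maximum-curvature rule):

```python
import numpy as np

rng = np.random.default_rng(3)
m = 24
x = np.linspace(0.0, 1.0, m)
# Hypothetical smooth propagation matrix from equivalent sources to hologram
# points; smoothness makes it ill-conditioned, as in NAH.
G = np.exp(-((x[:, None] - x[None, :]) ** 2) / 0.02)
q_true = np.sin(2 * np.pi * x)
p = G @ q_true + 0.01 * rng.standard_normal(m)   # noisy "hologram" pressure

lams = np.logspace(-8, 0, 30)
res, sol = [], []
for lam in lams:
    # Tikhonov solution: argmin ||G q - p||^2 + lam ||q||^2
    q = np.linalg.solve(G.T @ G + lam * np.eye(m), G.T @ p)
    res.append(np.log(np.linalg.norm(G @ q - p)))
    sol.append(np.log(np.linalg.norm(q)))
res, sol = np.array(res), np.array(sol)

# Crude L-curve corner: point of maximum curvature of the
# (log-residual, log-solution-norm) polyline
d1r, d1s = np.gradient(res), np.gradient(sol)
d2r, d2s = np.gradient(d1r), np.gradient(d1s)
curv = (d1r * d2s - d1s * d2r) / (d1r ** 2 + d1s ** 2 + 1e-30) ** 1.5
lam_corner = lams[np.argmax(curv)]
```

The corner balances the two monotone trends the L-curve trades off: residual norm grows with λ while solution norm shrinks, and over-small λ amplifies noise through the small singular values of G.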
A Two-moment Radiation Hydrodynamics Module in ATHENA Using a Godunov Method
NASA Astrophysics Data System (ADS)
Skinner, M. A.; Ostriker, E. C.
2013-04-01
We describe a module for the Athena code that solves the grey equations of radiation hydrodynamics (RHD) using a local variable Eddington tensor (VET) based on the M1 closure of the two-moment hierarchy of the transfer equation. The variables are updated via a combination of explicit Godunov methods to advance the gas and radiation variables including the non-stiff source terms, and a local implicit method to integrate the stiff source terms. We employ the reduced speed of light approximation (RSLA) with subcycling of the radiation variables in order to reduce computational costs. The streaming and diffusion limits are well-described by the M1 closure model, and our implementation shows excellent behavior for problems containing both regimes simultaneously. Our operator-split method is ideally suited for problems with a slowly-varying radiation field and dynamical gas flows, in which the effect of the RSLA is minimal.
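Central to the M1 closure named above is a scalar Eddington factor that interpolates between the diffusion and free-streaming limits. A sketch of the standard Levermore form (not the Athena source code) as a function of the reduced flux:

```python
import math

def eddington_factor(f):
    """M1 closure: scalar Eddington factor chi as a function of the reduced
    flux f = |F| / (c E). chi -> 1/3 in the diffusion limit (f -> 0) and
    chi -> 1 in the free-streaming limit (f -> 1).

    The variable Eddington tensor then follows as
        P = E * [ (1-chi)/2 * I + (3*chi-1)/2 * n n ],  n = F/|F|.
    """
    f = min(max(f, 0.0), 1.0)   # clamp to the physically admissible range
    return (3.0 + 4.0 * f * f) / (5.0 + 2.0 * math.sqrt(4.0 - 3.0 * f * f))
```

Because chi depends only on local E and F, the closure (and the stiff source-term update built on it) is purely local, which is what makes the operator-split explicit/implicit scheme described above possible.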
Bayesian multiple-source localization in an uncertain ocean environment.
Dosso, Stan E; Wilmut, Michael J
2011-06-01
This paper considers simultaneous localization of multiple acoustic sources when properties of the ocean environment (water column and seabed) are poorly known. A Bayesian formulation is developed in which the environmental parameters, noise statistics, and locations and complex strengths (amplitudes and phases) of multiple sources are considered to be unknown random variables constrained by acoustic data and prior information. Two approaches are considered for estimating source parameters. Focalization maximizes the posterior probability density (PPD) over all parameters using adaptive hybrid optimization. Marginalization integrates the PPD using efficient Markov-chain Monte Carlo methods to produce joint marginal probability distributions for source ranges and depths, from which source locations are obtained. This approach also provides quantitative uncertainty analysis for all parameters, which can aid in understanding of the inverse problem and may be of practical interest (e.g., source-strength probability distributions). In both approaches, closed-form maximum-likelihood expressions for source strengths and noise variance at each frequency allow these parameters to be sampled implicitly, substantially reducing the dimensionality and difficulty of the inversion. Examples are presented of both approaches applied to single- and multi-frequency localization of multiple sources in an uncertain shallow-water environment, and a Monte Carlo performance evaluation study is carried out. © 2011 Acoustical Society of America
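The closed-form maximum-likelihood treatment of source strength mentioned above can be sketched for a single source at one frequency (hypothetical helper; the paper generalizes to multiple sources and embeds this inside the Bayesian sampling):

```python
import numpy as np

def ml_strength_and_noise(w, d):
    """Closed-form ML estimates of the complex source strength a and the
    noise variance, for data modeled as d = a*w + noise (one source, one
    frequency); w is the replica/steering vector for a candidate location."""
    a = np.vdot(w, d) / np.vdot(w, w)   # np.vdot conjugates its first argument
    r = d - a * w
    return a, np.real(np.vdot(r, r)) / d.size

# Noise-free check: the estimator recovers the strength exactly
w = np.array([1.0 + 1.0j, 2.0, -1.0j])
a_hat, var_hat = ml_strength_and_noise(w, (2.0 - 3.0j) * w)
```

Substituting these closed forms back into the likelihood removes the strength and variance parameters from the sampling space, which is how the paper reduces the dimensionality of the inversion.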
Systematic study of target localization for bioluminescence tomography guided radiation therapy
Yu, Jingjing; Zhang, Bin; Iordachita, Iulian I.; Reyes, Juvenal; Lu, Zhihao; Brock, Malcolm V.; Patterson, Michael S.; Wong, John W.
2016-01-01
Purpose: To overcome the limitation of CT/cone-beam CT (CBCT) in guiding radiation for soft tissue targets, the authors developed a spectrally resolved bioluminescence tomography (BLT) system for the small animal radiation research platform. The authors systematically assessed the performance of the BLT system in terms of target localization and the ability to resolve two neighboring sources in simulations, a tissue-mimicking phantom, and in vivo environments. Methods: Multispectral measurements acquired in a single projection were used for the BLT reconstruction. The incomplete variables truncated conjugate gradient algorithm with an iterative permissible-region shrinking strategy was employed as the optimization scheme to reconstruct source distributions. Simulation studies were conducted for single spherical sources with sizes from 0.5 to 3 mm radius at depths of 3-12 mm. The same configuration was also applied for the double-source simulations, with source separations varying from 3 to 9 mm. Experiments were performed in a standalone BLT/CBCT system. Two self-illuminated sources with 3 and 4.7 mm separations placed inside a tissue-mimicking phantom were chosen as the test cases. Live mice implanted with a single source at 6 and 9 mm depth, two sources at 3 and 5 mm separation at a depth of 5 mm, or three sources in the abdomen were also used to illustrate the localization capability of the BLT system for multiple targets in vivo. Results: For the simulation study, approximately 1 mm accuracy can be achieved in localizing the center of mass (CoM) for single-source cases and the grouped CoM for double-source cases. For the case of a 1.5 mm radius source, a common tumor size used in preclinical studies, the simulations show that for all the source separations considered, except for the 3 mm separation at 9 and 12 mm depth, the two neighboring sources can be resolved at depths from 3 to 12 mm.
Phantom experiments illustrated that 2D bioluminescence imaging failed to distinguish two sources, but BLT can provide 3D source localization with approximately 1 mm accuracy. The in vivo results are encouraging: 1 and 1.7 mm accuracy can be attained for the single-source case at 6 and 9 mm depth, respectively. For the two-source in vivo study, both sources can be distinguished at 3 and 5 mm separations, and approximately 1 mm localization accuracy can also be achieved. Conclusions: This study demonstrated that the multispectral BLT/CBCT system could be applied to localize and resolve multiple sources over a wide range of source sizes, depths, and separations. The average accuracy of localizing the CoM for a single source and the grouped CoM for double sources is approximately 1 mm, except for deep-seated targets. The information provided in this study can be instructive in devising treatment margins for BLT-guided irradiation. These results also suggest that the 3D BLT system could guide radiation in situations with multiple targets, such as metastatic tumor models. PMID:27147371
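The CoM metric used throughout the study is simple to state: the intensity-weighted mean position of the reconstructed source voxels, compared against the known source position. A sketch with toy numbers (not the paper's reconstructions):

```python
import numpy as np

def center_of_mass(coords, weights):
    """Intensity-weighted center of mass of reconstructed source voxels (mm)."""
    w = np.asarray(weights, dtype=float)
    return (np.asarray(coords, dtype=float) * w[:, None]).sum(axis=0) / w.sum()

# Toy reconstruction: four equally bright voxels around a true source at (20, 31, 13) mm
coords = [[19, 31, 13], [21, 31, 13], [20, 30, 13], [20, 32, 13]]
com = center_of_mass(coords, [1, 1, 1, 1])
err_mm = np.linalg.norm(com - np.array([20.0, 31.0, 13.0]))
```

For double-source cases the same formula is applied per reconstructed cluster, giving the "grouped CoM" error quoted in the abstract.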
An adaptive regularization parameter choice strategy for multispectral bioluminescence tomography
DOE Office of Scientific and Technical Information (OSTI.GOV)
Feng Jinchao; Qin Chenghu; Jia Kebin
2011-11-15
Purpose: Bioluminescence tomography (BLT) provides an effective tool for monitoring physiological and pathological activities in vivo. However, the measured data in bioluminescence imaging are corrupted by noise. Therefore, regularization methods are commonly used to find a regularized solution. Nevertheless, for the quality of the reconstructed bioluminescent source obtained by regularization methods, the choice of the regularization parameters is crucial. To date, the selection of regularization parameters remains challenging. With regard to the above problems, the authors propose a BLT reconstruction algorithm with an adaptive parameter choice rule. Methods: The proposed reconstruction algorithm uses a diffusion equation for modeling the bioluminescent photon transport. The diffusion equation is solved with a finite element method. Computed tomography (CT) images provide anatomical information regarding the geometry of the small animal and its internal organs. To reduce the ill-posedness of BLT, spectral information and the optimal permissible source region are employed. Then, the relationship between the unknown source distribution and multiview, multispectral boundary measurements is established based on the finite element method and the optimal permissible source region. Since the measured data are noisy, the BLT reconstruction is formulated with an l2 data-fidelity term and a general regularization term. For choosing the regularization parameters in BLT, an efficient model function approach is proposed that does not require knowledge of the noise level. This approach requires only the computation of the residual and regularized solution norms. With this knowledge, the model function is constructed to approximate the objective function, and the regularization parameter is updated iteratively. Results: First, a micro-CT based mouse phantom was used for simulation verification. 
Simulation experiments were used to illustrate why multispectral data were used rather than monochromatic data. Furthermore, the study conducted with an adaptively chosen regularization parameter demonstrated the ability to accurately localize the bioluminescent source. With the adaptively estimated regularization parameter, the reconstructed center position of the source was (20.37, 31.05, 12.95) mm, and the distance to the real source was 0.63 mm. The results of the dual-source experiments further showed that the algorithm could localize the bioluminescent sources accurately. The authors then presented experimental evidence that the proposed algorithm is computationally more efficient than the heuristic method. The effectiveness of the new algorithm was also confirmed by comparing it with the L-curve method. Furthermore, various initial guesses for the regularization parameter were used to illustrate the convergence of the algorithm. Finally, an in vivo mouse experiment further illustrated the effectiveness of the proposed algorithm. Conclusions: Using numerical, physical phantom, and in vivo examples, the authors demonstrated that the bioluminescent sources could be reconstructed accurately with automatically chosen regularization parameters. The proposed algorithm outperformed both the heuristic regularization parameter choice method and the L-curve method in terms of computational speed and localization error.
Opendf - An Implementation of the Dual Fermion Method for Strongly Correlated Systems
NASA Astrophysics Data System (ADS)
Antipov, Andrey E.; LeBlanc, James P. F.; Gull, Emanuel
The dual fermion method is a multiscale approach for solving lattice problems of interacting strongly correlated systems. In this paper, we present the opendf code, an open-source implementation of the dual fermion method applicable to fermionic single-orbital lattice models in dimensions D = 1, 2, 3 and 4. The method is built on a dynamical mean field starting point, which neglects all non-local correlations, and perturbatively adds spatial correlations. Our code is distributed as an open-source package under the GNU General Public License version 2.
NASA Astrophysics Data System (ADS)
Xiao, Wenbin; Dong, Wencai
2016-06-01
In the framework of 3D potential flow theory, the Bessho form translating-pulsating source Green's function in the frequency domain is chosen as the integral kernel in this study, and a hybrid source-and-dipole distribution model of the boundary element method is applied to directly solve the velocity potential for an advancing ship in regular waves. Numerical characteristics of the Green function show that the contribution of local-flow components to the velocity potential is concentrated near the source point, while the wave component dominates the magnitude of the velocity potential in the far field. Two kinds of mathematical models, with or without the local-flow components taken into account, are adopted to numerically calculate the longitudinal motions of Wigley hulls, which demonstrates the applicability of the translating-pulsating source Green's function method for various ship forms. In addition, a mesh analysis of the discrete surface is carried out from the perspective of ship-form characteristics. The study shows that the longitudinal motion results from the simplified model are somewhat greater than the experimental data in the resonant zone, and the model can be used as an effective tool to predict ship seakeeping properties. However, the translating-pulsating source Green's function method is only appropriate for qualitative analysis of motion response in waves if the ship's geometrical shape fails to satisfy the slender-body assumption.
Propagation of Gaussian wave packets in complex media and application to fracture characterization
NASA Astrophysics Data System (ADS)
Ding, Yinshuai; Zheng, Yingcai; Zhou, Hua-Wei; Howell, Michael; Hu, Hao; Zhang, Yu
2017-08-01
Knowledge of the subsurface fracture networks is critical in probing the tectonic stress states and flow of fluids in reservoirs containing fractures. We propose to characterize fractures using scattered seismic data, based on the theory of local plane-wave multiple scattering in a fractured medium. We construct a localized directional wave packet using point sources on the surface and propagate it toward the targeted subsurface fractures. The wave packet behaves as a local plane wave when interacting with the fractures. The interaction produces multiple scattering of the wave packet that eventually travels up to the surface receivers. The propagation direction and amplitude of the multiply scattered wave can be used to characterize fracture density, orientation and compliance. Two key aspects in this characterization process are the spatial localization and directionality of the wave packet. Here we first show the physical behaviour of a new localized wave, known as the Gaussian Wave Packet (GWP), by examining its analytical solution originally formulated for a homogenous medium. We then use a numerical finite-difference time-domain (FDTD) method to study its propagation behaviour in heterogeneous media. We find that a GWP can still be localized and directional in space even over a large propagation distance in heterogeneous media. We then propose a method to decompose the recorded seismic wavefield into GWPs based on the reverse-time concept. This method enables us to create a virtually recorded seismic data using field shot gathers, as if the source were an incident GWP. Finally, we demonstrate the feasibility of using GWPs for fracture characterization using three numerical examples. For a medium containing fractures, we can reliably invert for the local parameters of multiple fracture sets. Differing from conventional seismic imaging such as migration methods, our fracture characterization method is less sensitive to errors in the background velocity model. 
For a layered medium containing fractures, our method can correctly recover the fracture density even with an inaccurate velocity model.
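The initial condition of a GWP, before it is propagated with FDTD, is simply a Gaussian spatial envelope modulating a plane-wave carrier, which gives the packet its localization and directionality. A 2D sketch with illustrative parameters (not those of the paper):

```python
import numpy as np

# 2D Gaussian wave packet (GWP) initial condition: Gaussian envelope centered
# at x0 with width sigma, modulating a plane-wave carrier with wavevector k.
n = 128
x = np.linspace(-1.0, 1.0, n)
X, Y = np.meshgrid(x, x, indexing="ij")

x0, sigma = (0.0, 0.0), 0.1        # packet center and spatial width
k = (40.0, 0.0)                    # carrier wavevector: propagation along +x

envelope = np.exp(-((X - x0[0]) ** 2 + (Y - x0[1]) ** 2) / (2 * sigma ** 2))
u0 = envelope * np.cos(k[0] * (X - x0[0]) + k[1] * (Y - x0[1]))
```

The envelope confines the wavefield to a neighborhood of x0 (localization), while the carrier concentrates its spectrum around k (directionality); these are the two properties the paper shows survive propagation through heterogeneous media.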
Temporal entanglement and quantum communication (Intrication temporelle et communication quantique)
NASA Astrophysics Data System (ADS)
Bussieres, Felix
Quantum communication is the art of transferring a quantum state from one place to another and the study of tasks that can be accomplished with it. This thesis is devoted to the development of tools and tasks for quantum communication in a real-world setting. These were implemented using an underground optical fibre link deployed in an urban environment. The technological and theoretical innovations presented here broaden the range of applications of time-bin entanglement through new methods of manipulating time-bin qubits, a novel model for characterizing sources of photon pairs, new ways of testing non-locality and the design and the first implementation of a new loss-tolerant quantum coin-flipping protocol. Manipulating time-bin qubits. A single photon is an excellent vehicle in which a qubit, the fundamental unit of quantum information, can be encoded. In particular, the time-bin encoding of photonic qubits is well suited for optical fibre transmission. Before this thesis, the applications of quantum communication based on the time-bin encoding were limited due to the lack of methods to implement arbitrary operations and measurements. We have removed this restriction by proposing the first methods to realize arbitrary deterministic operations on time-bin qubits as well as single qubit measurements in an arbitrary basis. We applied these propositions to the specific case of optical measurement-based quantum computing and showed how to implement the feedforward operations, which are essential to this model. This therefore opens new possibilities for creating an optical quantum computer, but also for other quantum communication tasks. Characterizing sources of photon pairs. Experimental quantum communication requires the creation of single photons and entangled photons. These two ingredients can be obtained from a source of photon pairs based on non-linear spontaneous processes. 
Several tasks in quantum communication require precise knowledge of the properties of the source being used. We developed and implemented a fast and simple method to characterize a source of photon pairs. This method is well suited to a realistic setting where experimental conditions, such as channel transmittance, may fluctuate, and where the characterization of the source has to be done in real time. Testing the non-locality of time-bin entanglement. Entanglement is a resource needed for the realization of many important tasks in quantum communication. It also allows two physical systems to be correlated in a way that cannot be explained by classical physics; this manifestation of entanglement is called non-locality. We built a source of time-bin entangled photonic qubits and characterized it with the new methods we developed for implementing arbitrary single-qubit measurements. This allowed us to reveal the non-local nature of our source of entanglement in ways that had never been implemented before. It also opens the door to studying previously untested features of non-locality using this source. These experiments were performed in a realistic setting where quantum (non-local) correlations were observed even after transmission of one of the entangled qubits over 12.4 km of an underground optical fibre. Flipping quantum coins. Quantum coin-flipping is a quantum cryptographic primitive proposed in 1984, when the very first steps of quantum communication were being taken, in which two players alternate in sending classical and quantum information in order to generate a shared random bit. The use of quantum information is such that a potential cheater cannot force the outcome to his choice with certainty. Classically, however, one of the players can always deterministically choose the outcome. 
Unfortunately, the security of all previous quantum coin-flipping protocols is seriously compromised in the presence of losses on the transmission channel, thereby making this task impractical. We found a solution to this problem and obtained the first loss-tolerant quantum coin-flipping protocol whose security is independent of the amount of the losses. We have also experimentally demonstrated our loss-tolerant protocol using our source of time-bin entanglement combined with our arbitrary single qubit measurement methods. This experiment took place in a realistic setting where qubits travelled over an underground optical fibre link. This new task thus joins quantum key distribution as a practical application of quantum communication. Keywords. quantum communication, photonics, time-bin encoding, source of photon pairs, heralded single photon source, entanglement, non-locality, time-bin entanglement, hybrid entanglement, quantum network, quantum cryptography, quantum coin-flipping, measurement-based quantum computation, telecommunication, optical fibre, nonlinear optics.
Water Vapor Tracers as Diagnostics of the Regional Hydrologic Cycle
NASA Technical Reports Server (NTRS)
Bosilovich, Michael G.; Schubert, Siegfried D.; Einaudi, Franco (Technical Monitor)
2001-01-01
Numerous studies suggest that local feedback of surface evaporation on precipitation, or recycling, is a significant source of water for precipitation. Quantitative results on the exact amount of recycling have been difficult to obtain in view of the inherent limitations of diagnostic recycling calculations. The current study describes a calculation of the amount of local and remote geographic sources of surface evaporation for precipitation, based on the implementation of three-dimensional constituent tracers of regional water vapor sources (termed water vapor tracers, WVT) in a general circulation model. The major limitation on the accuracy of the recycling estimates is the veracity of the numerically simulated hydrological cycle, though we note that this approach can also be implemented within the context of a data assimilation system. In the WVT approach, each tracer is associated with an evaporative source region for a prognostic three-dimensional variable that represents a partial amount of the total atmospheric water vapor. The physical processes that act on a WVT are determined in proportion to those that act on the model's prognostic water vapor. In this way, the local and remote sources of water for precipitation can be predicted within the model simulation, and can be validated against the model's prognostic water vapor. As a demonstration of the method, the regional hydrologic cycles for North America and India are evaluated for six summers (June, July and August) of model simulation. More than 50% of the precipitation in the Midwestern United States came from continental regional sources, and the local source was the largest of the regional tracers (14%). The Gulf of Mexico and Atlantic regions contributed 18% of the water for Midwestern precipitation, but further analysis suggests that the greater region of the Tropical Atlantic Ocean may also contribute significantly. 
In most North American continental regions, the local source of precipitation is correlated with total precipitation. There is a general positive correlation between local evaporation and local precipitation, but it can be weaker because large evaporation can occur when precipitation is inhibited. In India, the local source of precipitation is a small percentage of the precipitation owing to the dominance of the atmospheric transport of oceanic water. The southern Indian Ocean provides a key source of water for both the Indian continent and the Sahelian region.
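The WVT bookkeeping reduces to proportional accounting: every sink acting on total vapor removes from each tracer in proportion to that tracer's share of the total. A single-box sketch with hypothetical numbers (the GCM applies this per grid cell and per physical process):

```python
# Minimal single-column water-vapor-tracer bookkeeping (illustrative, not the
# GCM implementation): each tracer q_i is the vapor that evaporated from one
# source region, and precipitation removes from each tracer in proportion
# to q_i / q_total.
def step(q_tracers, evap_by_region, precip):
    q_total = sum(q_tracers.values())
    # precipitation drawn from each tracer proportionally to its share
    removed = {r: precip * q_tracers[r] / q_total for r in q_tracers}
    updated = {r: q_tracers[r] - removed[r] + evap_by_region.get(r, 0.0)
               for r in q_tracers}
    return updated, removed

q = {"local": 2.0, "remote": 6.0}          # vapor by evaporative source region
q, rained = step(q, {"local": 0.5, "remote": 0.1}, precip=1.0)
recycling = rained["local"] / sum(rained.values())  # local share of precipitation
```

Here the local tracer holds a quarter of the column vapor, so a quarter of the precipitation is attributed to local evaporation, which is precisely the recycling ratio the study diagnoses per region.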
Evaluation of Electroencephalography Source Localization Algorithms with Multiple Cortical Sources
Bradley, Allison; Yao, Jun; Dewald, Jules; Richter, Claus-Peter
2016-01-01
Background Source localization algorithms often show multiple active cortical areas as the source of electroencephalography (EEG). Yet, there is little data quantifying the accuracy of these results. In this paper, the performance of current source density source localization algorithms for the detection of multiple cortical sources of EEG data has been characterized. Methods EEG data were generated by simulating multiple cortical sources (2–4) with the same strength or two sources with relative strength ratios of 1:1 to 4:1, and adding noise. These data were used to reconstruct the cortical sources using current source density (CSD) algorithms: sLORETA, MNLS, and LORETA using a p-norm with p equal to 1, 1.5 and 2. Precision (percentage of the reconstructed activity corresponding to simulated activity) and Recall (percentage of the simulated sources reconstructed) of each of the CSD algorithms were calculated. Results While sLORETA has the best performance when only one source is present, when two or more sources are present LORETA with p equal to 1.5 performs better. When the relative strength of one of the sources is decreased, all algorithms have more difficulty reconstructing that source. However, LORETA 1.5 continues to outperform other algorithms. If only the strongest source is of interest sLORETA is recommended, while LORETA with p equal to 1.5 is recommended if two or more of the cortical sources are of interest. These results provide guidance for choosing a CSD algorithm to locate multiple cortical sources of EEG and for interpreting the results of these algorithms. PMID:26809000
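The Precision and Recall metrics defined in the abstract can be sketched as set comparisons between simulated and reconstructed active locations. The thresholding rule and the toy location labels below are assumptions for illustration; the paper works on full cortical current-density maps.

```python
# Precision: fraction of reconstructed activity that matches simulated activity.
# Recall: fraction of simulated sources that were reconstructed.

def precision_recall(simulated, reconstructed, threshold=0.0):
    """simulated, reconstructed: dicts mapping cortical location -> activity.
    A location counts as active if its activity exceeds `threshold`."""
    sim_active = {k for k, v in simulated.items() if v > threshold}
    rec_active = {k for k, v in reconstructed.items() if v > threshold}
    hits = sim_active & rec_active
    precision = len(hits) / len(rec_active) if rec_active else 0.0
    recall = len(hits) / len(sim_active) if sim_active else 0.0
    return precision, recall

# Hypothetical example: simulated activity in M1 and S1; the reconstruction
# recovers M1 but misses S1 and adds a spurious source in V1.
sim = {"M1": 1.0, "S1": 0.8, "V1": 0.0}
rec = {"M1": 0.9, "S1": 0.0, "V1": 0.3}
p, r = precision_recall(sim, rec)  # p = 0.5, r = 0.5
```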
NASA Astrophysics Data System (ADS)
Eguchi, T.; Matsubara, K.; Ishida, M.
2001-12-01
To unveil the dynamic processes associated with three-dimensional unsteady mantle convection, we carried out numerical simulations of flows passively excited by simplified local hot sources just above the CMB and by large-scale cool masses beneath smoothed subduction zones. We used our own code, developed with the finite difference method. The three basic equations are those for continuity, for motion with the Boussinesq (incompressible) approximation, and for (thermal) energy conservation. The viscosity of our model is temperature dependent. To achieve high-precision time integration, we used the Newton method. In detail, the size and thermal energy of the hot or cool sources are not uniform along latitude, because we could not select uniform local volumes for the sources within the finite difference grids throughout the mantle; our results therefore carry some latitude dependence. First, we treated the case of the hotspots, neglecting the contribution of the subduction zones. The local hot sources below the currently active hotspots were set as dynamic driving forces in the initial condition. Before starting the calculation, we assumed a statically layered mantle with zero velocity. The thermal anomalies inserted instantaneously in the initial condition excite dynamically passive flows; the type of the initial hot sources was not 'plume' but 'thermal.' The simulation results show that local upwelling flows excited directly over the initial heat sources reached the upper mantle within approximately 30 My of the calculation. Each of the direct upwellings above the hotspots has its own dynamic potential to excite concentric down- and up-welling flows, alternately, at large distances. Simultaneously, the direct upwellings interact mutually within the spherical mantle.
As an interesting feature, we numerically observed secondary upwellings in a wide region covering east Eurasia to the Bering Sea, where no hot sources were initially input. The detailed location of the secondary upwellings appears to depend partly on numerical parameters such as the radial profile of mantle viscosity, especially at the D" layer, because the secondary flows are provoked by dynamic interaction among the distributed direct upwellings just above the CMB. Our results suggest that if we assume non-zero time delays in the input of the local hot sources, as well as parameters related to differences in their historical surface flux rates, the pattern of the passively excited flows will differ from that obtained with the simultaneously set hot sources described above. Second, we incorporated simplified thermal anomaly models of both the distributed local hotspots and the global subduction zones as dynamic origins in the initial condition for the static layered mantle. In this case, the simulation shows that the pattern of secondary radial flows, different from that in the earlier case, is sensitive to the relative strength between the positive dynamic buoyancy integrated over all of the local hot sources below the hotspots and the total negative buoyancy beneath the subduction zones.
Process to create simulated lunar agglutinate particles
NASA Technical Reports Server (NTRS)
Gustafson, Robert J. (Inventor); Gustafson, Marty A. (Inventor); White, Brant C. (Inventor)
2011-01-01
A method of creating simulated agglutinate particles by applying a heat source sufficient to partially melt a raw material is provided. The raw material is preferably any lunar soil simulant, crushed mineral, mixture of crushed minerals, or similar material, and the heat source creates localized heating of the raw material.
NASA Astrophysics Data System (ADS)
Podhorský, Dušan; Fabo, Peter
2016-12-01
The article deals with a method of acquiring the temporal and spatial distribution of local precipitation from measurements of the performance characteristics of local sources of high-frequency electromagnetic radiation in the 1–3 GHz range in the lower layers of the troposphere, up to 100 m. The method was experimentally proven by monitoring the GSM G2 base stations of cell phone providers in the frequency range of 920–960 MHz, using frequency- and space-diversity reception. A modification of the SART method for localization of precipitation was also proposed. The achieved results allow us to obtain the time evolution of the intensity of local precipitation in the observed area with a temporal resolution of 10 s. A spatial accuracy of 100 m in localizing precipitation is expected once a network of receivers is built. The acquired data can be used as one of the inputs for meteorological forecasting models; in agriculture and hydrology as a supplementary method to ombrograph stations and weather radar network measurements; in transportation as part of a warning system; and in many other areas.
Localized Energy-Based Normalization of Medical Images: Application to Chest Radiography.
Philipsen, R H H M; Maduskar, P; Hogeweg, L; Melendez, J; Sánchez, C I; van Ginneken, B
2015-09-01
Automated quantitative analysis systems for medical images often lack the capability to successfully process images from multiple sources. Normalization of such images prior to further analysis is a possible solution to this limitation. This work presents a general method to normalize medical images and thoroughly investigates its effectiveness for chest radiography (CXR). The method starts with an energy decomposition of the image in different bands. Next, each band's localized energy is scaled to a reference value and the image is reconstructed. We investigate iterative and local application of this technique. The normalization is applied iteratively to the lung fields on six datasets from different sources, each comprising 50 normal CXRs and 50 abnormal CXRs. The method is evaluated in three supervised computer-aided detection tasks related to CXR analysis and compared to two reference normalization methods. In the first task, automatic lung segmentation, the average Jaccard overlap significantly increased from 0.72±0.30 and 0.87±0.11 for both reference methods to with normalization. The second experiment was aimed at segmentation of the clavicles. The reference methods had an average Jaccard index of 0.57±0.26 and 0.53±0.26; with normalization this significantly increased to . The third experiment was detection of tuberculosis related abnormalities in the lung fields. The average area under the Receiver Operating Curve increased significantly from 0.72±0.14 and 0.79±0.06 using the reference methods to with normalization. We conclude that the normalization can be successfully applied in chest radiography and makes supervised systems more generally applicable to data from different sources.
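The core normalization step, scaling each band's localized energy to a reference value and reconstructing, can be sketched in one dimension. This is an illustrative stand-in, not the paper's 2-D, iterative, lung-field-restricted implementation: the band edges, reference energy, and FFT-based band split below are assumptions.

```python
import numpy as np

# 1-D sketch of energy-based normalization: split the signal into
# frequency bands, scale each band's RMS energy to a reference value,
# and reconstruct from the rescaled bands.

def normalize_bands(signal, band_edges, ref_energy):
    spec = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(signal.size)
    out = np.zeros_like(spec)
    for lo, hi in band_edges:
        mask = (freqs >= lo) & (freqs < hi)
        band = np.where(mask, spec, 0)
        energy = np.sqrt(np.mean(np.fft.irfft(band, signal.size) ** 2))
        if energy > 0:
            out += band * (ref_energy / energy)  # scale band to reference
    return np.fft.irfft(out, signal.size)

# Two sinusoids with very different energies end up with equal band energy.
n = np.arange(256)
x = np.sin(2 * np.pi * 0.0625 * n) + 0.2 * np.sin(2 * np.pi * 0.25 * n)
y = normalize_bands(x, [(0.0, 0.15), (0.15, 0.5)], ref_energy=1.0)
```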
NASA Astrophysics Data System (ADS)
Diamantopoulou, Marianna; Skyllakou, Ksakousti; Pandis, Spyros N.
2016-06-01
The Particulate Matter Source Apportionment Technology (PSAT) algorithm is used together with PMCAMx, a regional chemical transport model, to develop a simple observation-based method (OBM) for the estimation of local and regional contributions of sources of primary and secondary pollutants in urban areas. We test the hypothesis that the minimum of the diurnal average concentration profile of the pollutant is a good estimate of the average contribution of long range transport levels. We use PMCAMx to generate "pseudo-observations" for four different European cities (Paris, London, Milan, and Dusseldorf) and PSAT to estimate the corresponding "true" local and regional contributions. The predictions of the proposed OBM are compared to the "true" values for different definitions of the source area. During winter, the estimates by the OBM for the local contributions to the concentrations of total PM2.5, primary pollutants, and sulfate are within 25% of the "true" contributions of the urban area sources. For secondary organic aerosol the OBM overestimates the importance of the local sources and it actually estimates the contributions of sources within 200 km from the receptor. During summer for primary pollutants and cities with low nearby emissions (ratio of emissions in an area extending 100 km from the city over local emissions lower than 10) the OBM estimates correspond to the city emissions within 25% or so. For cities with relatively high nearby emissions the OBM estimates correspond to emissions within 100 km from the receptor. For secondary PM2.5 components like sulfate and secondary organic aerosol the OBM's estimates correspond to sources within 200 km from the receptor. Finally, for total PM2.5 the OBM provides approximately the contribution of city emissions during the winter and the contribution of sources within 100 km from the receptor during the summer.
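The central hypothesis of the OBM lends itself to a short sketch: take the minimum of the average diurnal concentration profile as the long-range-transport background and attribute the remainder to local sources. The synthetic hourly PM2.5 data below are hypothetical, standing in for the PMCAMx "pseudo-observations".

```python
import numpy as np

# Observation-based method (OBM) sketch: the minimum of the diurnal
# average profile estimates the regional background; the rest is local.

def obm_split(hourly_conc):
    """hourly_conc: array of shape (n_days, 24) of pollutant concentrations."""
    diurnal = hourly_conc.mean(axis=0)   # average diurnal profile
    regional = diurnal.min()             # long-range-transport estimate
    local = diurnal.mean() - regional    # local-source contribution
    return local, regional

# Hypothetical city: flat regional background of ~8 plus a daytime local peak.
rng = np.random.default_rng(0)
background = 8.0 + rng.normal(0, 0.3, size=(30, 24))
local_peak = 5.0 * np.sin(np.linspace(0, np.pi, 24))   # zero at night
local, regional = obm_split(background + local_peak)
```

On this synthetic case the recovered background is close to 8 and the local share close to the mean of the added peak, which is the behavior the paper tests against PSAT's "true" contributions.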
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hunter, J. L.; Sutton, T. M.
2013-07-01
In Monte Carlo iterated-fission-source calculations, relative uncertainties on local tallies tend to be larger in lower-power regions and smaller in higher-power regions. Reducing the largest uncertainties to an acceptable level simply by running a larger number of neutron histories is often prohibitively expensive. The uniform fission site method has been developed to yield a more spatially uniform distribution of relative uncertainties. This is accomplished by biasing the density of fission neutron source sites while not biasing the solution. The method is integrated into the source iteration process, and does not require any auxiliary forward or adjoint calculations. For a given amount of computational effort, the use of the method results in a reduction of the largest uncertainties relative to the standard algorithm. Two variants of the method have been implemented and tested. Both have been shown to be effective. (authors)
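The biasing idea can be sketched as follows: give every region the same number of source sites regardless of its power, and compensate with statistical weights so the weighted source remains unbiased. This is a toy two-region illustration with hypothetical powers, not the production Monte Carlo implementation.

```python
# Uniform-fission-site sketch: equal site counts per region, with per-site
# weights chosen so the weighted source still matches the power distribution.

def uniform_fission_sites(region_power, sites_per_region):
    """region_power: region -> relative fission power.
    Returns region -> list of per-site statistical weights."""
    total_power = sum(region_power.values())
    n_regions = len(region_power)
    # Analog sampling would give a region N * p/total sites of weight 1;
    # with equal counts, each site's weight is (n_regions * p / total).
    weights = {r: n_regions * p / total_power for r, p in region_power.items()}
    return {r: [w] * sites_per_region for r, w in weights.items()}

power = {"core_center": 9.0, "core_edge": 1.0}   # hypothetical powers
sites = uniform_fission_sites(power, sites_per_region=100)
# The low-power edge now has as many (low-weight) sites as the center,
# evening out tally statistics without biasing the weighted source.
```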
Survey on the Performance of Source Localization Algorithms.
Fresno, José Manuel; Robles, Guillermo; Martínez-Tarifa, Juan Manuel; Stewart, Brian G
2017-11-18
The localization of emitters using an array of sensors or antennas is a prevalent issue approached in several applications. There exist different techniques for source localization, which can be classified into multilateration, received signal strength (RSS) and proximity methods. The performance of multilateration techniques relies on measured time variables: the time of flight (ToF) of the emission from the emitter to the sensor, the time differences of arrival (TDoA) of the emission between sensors and the pseudo-time of flight (pToF) of the emission to the sensors. The multilateration algorithms presented and compared in this paper can be classified as iterative and non-iterative methods. Both standard least squares (SLS) and hyperbolic least squares (HLS) are iterative and based on the Newton-Raphson technique to solve the non-linear equation system. The metaheuristic technique particle swarm optimization (PSO) used for source localisation is also studied. This optimization technique estimates the source position as the optimum of an objective function based on HLS and is also iterative in nature. Three non-iterative algorithms, namely the hyperbolic positioning algorithms (HPA), the maximum likelihood estimator (MLE) and Bancroft algorithm, are also presented. A non-iterative combined algorithm, MLE-HLS, based on MLE and HLS, is further proposed in this paper. The performance of all algorithms is analysed and compared in terms of accuracy in the localization of the position of the emitter and in terms of computational time. The analysis is also undertaken with three different sensor layouts since the positions of the sensors affect the localization; several source positions are also evaluated to make the comparison more robust. The analysis is carried out using theoretical time differences, as well as including errors due to the effect of digital sampling of the time variables. 
It is shown that the most balanced algorithm, yielding better results than the other algorithms in terms of accuracy and short computational time, is the combined MLE-HLS algorithm.
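The iterative hyperbolic least squares (HLS) idea can be sketched with a Gauss-Newton solver, a close relative of the Newton-Raphson scheme named in the abstract: minimize the misfit between measured range differences and those predicted from a candidate position. The sensor layout, source position, and sound speed below are hypothetical.

```python
import numpy as np

def hls_locate(sensors, tdoa, c=343.0, x0=None, iters=50):
    """Gauss-Newton sketch of hyperbolic least squares (HLS) for TDoA.

    sensors: (n, 2) array of positions; tdoa[i-1] is the arrival-time
    difference between sensor i and reference sensor 0."""
    x = np.array(x0, float) if x0 is not None else sensors.mean(axis=0)
    meas = c * np.asarray(tdoa)                  # measured range differences
    for _ in range(iters):
        d = np.linalg.norm(sensors - x, axis=1)
        r = (d[1:] - d[0]) - meas                # hyperbolic residuals
        g = (x - sensors) / d[:, None]           # gradient of each range
        J = g[1:] - g[0]                         # Jacobian of the residuals
        step, *_ = np.linalg.lstsq(J, -r, rcond=None)
        x = x + step
        if np.linalg.norm(step) < 1e-10:
            break
    return x

# Hypothetical square array with a source inside it; noise-free TDoAs.
sensors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
src = np.array([3.0, 4.0])
d = np.linalg.norm(sensors - src, axis=1)
tdoa = (d[1:] - d[0]) / 343.0
est = hls_locate(sensors, tdoa)
```

With exact time differences the iteration recovers the source position; the paper's comparison concerns how such iterative schemes degrade under sampling errors relative to non-iterative ones like MLE and Bancroft.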
Evaluation of the site effect with Heuristic Methods
NASA Astrophysics Data System (ADS)
Torres, N. N.; Ortiz-Aleman, C.
2017-12-01
The seismic site response in an area depends mainly on the local geological and topographical conditions. Estimation of variations in ground motion can contribute significantly to seismic hazard assessment, helping to reduce human and economic losses. Site response estimation can be posed as a parameterized inversion that separates source and path effects. Generalized inversion (Field and Jacob, 1995) is one alternative method for estimating the local seismic response, and it involves solving a strongly nonlinear multiparametric problem. In this work, the local seismic response was estimated using global optimization methods (genetic algorithms and simulated annealing), which allowed us to explore a wider range of solutions in a nonlinear search than conventional linear methods. Using VEOX Network velocity records collected from August 2007 to March 2009, we estimate the source, path, and site parameters corresponding to the S-wave amplitude spectra of the velocity records. The parameters resulting from this simultaneous inversion show excellent agreement, both in the fit between observed and calculated spectra and in comparison with previous work by several authors.
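A generic simulated-annealing loop of the kind used for such nonlinear searches can be sketched as below. The misfit function here is a toy quadratic stand-in for the fit between observed and modelled S-wave spectra, and all tuning parameters (step size, schedule, iteration count) are assumptions.

```python
import math
import random

# Simulated-annealing sketch: random perturbations, always accept
# improvements, accept worsenings with probability exp(-delta / T).

def anneal(misfit, x0, step=0.5, t0=1.0, cooling=0.995, iters=4000, seed=0):
    rng = random.Random(seed)
    x, fx, t = list(x0), misfit(x0), t0
    for _ in range(iters):
        cand = [xi + rng.uniform(-step, step) for xi in x]
        fc = misfit(cand)
        if fc < fx or rng.random() < math.exp((fx - fc) / t):
            x, fx = cand, fc
        t *= cooling                 # geometric cooling schedule
    return x, fx

# Toy misfit with minimum at (1, -2); a real run would compare spectra.
best, val = anneal(lambda p: (p[0] - 1) ** 2 + (p[1] + 2) ** 2, [5.0, 5.0])
```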
NASA Astrophysics Data System (ADS)
Sirota, Dmitry; Ivanov, Vadim
2017-11-01
Mining operations affect the stability of natural and technogenic rock massifs and give rise to sources of mechanical stress differences. These sources generate a quasi-stationary electric field with a Newtonian potential. The paper reviews a method of determining the shape and size of a flat source of a field with this kind of potential. This common problem arises in many areas of mining: geological exploration of mineral resources, ore deposits, control of underground mining, locating sources of coal self-heating, localization of rock-crack sources, and other applied problems of practical physics. These problems are inverse and ill-posed, and are solved by conversion to a Fredholm-Urysohn integral equation of the first kind, which is in turn solved by A. N. Tikhonov's regularization method.
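The final step, Tikhonov regularization of a discretized first-kind integral equation, can be sketched on a generic smoothing kernel. The kernel, the source profile, and the regularization parameter below are illustrative assumptions, not the paper's Newtonian-potential kernel.

```python
import numpy as np

# Tikhonov-regularized solution of a discretized Fredholm equation of the
# first kind, K x = b: minimize ||K x - b||^2 + lam * ||x||^2.

def tikhonov_solve(K, b, lam):
    n = K.shape[1]
    return np.linalg.solve(K.T @ K + lam * np.eye(n), K.T @ b)

n = 80
t = np.linspace(0.0, 1.0, n)
K = np.exp(-40.0 * (t[:, None] - t[None, :]) ** 2) / n   # smoothing kernel
x_true = np.exp(-((t - 0.5) / 0.1) ** 2)                 # assumed source shape
b = K @ x_true + 1e-4 * np.random.default_rng(1).normal(size=n)
x_est = tikhonov_solve(K, b, lam=1e-5)
```

The choice of `lam` trades fidelity to the data against stability: without the `lam * np.eye(n)` term, the ill-posedness of the first-kind equation amplifies the measurement noise and the naive solution is useless.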
NASA Astrophysics Data System (ADS)
Fang, Jinsheng; Bao, Lijun; Li, Xu; van Zijl, Peter C. M.; Chen, Zhong
2017-08-01
Background field removal is an important MR phase preprocessing step for quantitative susceptibility mapping (QSM). It separates the local field induced by tissue magnetic susceptibility sources from the background field generated by sources outside a region of interest (e.g. the brain), such as the air-tissue interface. In the vicinity of air-tissue boundaries, e.g. the skull and paranasal sinuses, where large susceptibility variations exist, present background field removal methods are usually insufficient, and these regions often need to be excluded by brain mask erosion at the expense of losing the local field, and thus susceptibility measures, in those regions. In this paper, we propose an extension to the variable-kernel sophisticated harmonic artifact reduction for phase data (V-SHARP) background field removal method using a region-adaptive kernel (R-SHARP), in which a scalable spherical Gaussian kernel (SGK) is employed with its kernel radius and weights adjustable according to an energy functional reflecting the magnitude of field variation. Such an energy functional is defined in terms of a contour and two fitting functions incorporating regularization terms, from which a curve evolution model in level set formulation is derived for energy minimization. We utilize it to detect regions with a large field gradient caused by strong susceptibility variation. In such regions, the SGK will have a small radius and high weight at the sphere center, adapting to the voxel energy of the field perturbation. Using the proposed method, the background field generated by external sources can be effectively removed to obtain a more accurate estimate of the local field, and thus of the QSM dipole inversion used to map local tissue susceptibility sources.
Numerical simulation, phantom and in vivo human brain data demonstrate improved performance of R-SHARP compared to V-SHARP and RESHARP (regularization enabled SHARP) methods, even when the whole paranasal sinus regions are preserved in the brain mask. Shadow artifacts due to strong susceptibility variations in the derived QSM maps could also be largely eliminated using the R-SHARP method, leading to more accurate QSM reconstruction.
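The region-adaptive principle, a small kernel where the field varies strongly and a large one where it is smooth, can be illustrated in one dimension. This is only a stand-in: the mapping from gradient energy to radius below is an assumption, not the paper's level-set energy functional, and the real kernel is a 3-D spherical Gaussian.

```python
import numpy as np

# Sketch of the adaptive-kernel idea in R-SHARP: shrink the kernel radius
# where the local field gradient (a proxy for susceptibility variation,
# e.g. near air-tissue boundaries) is large, and enlarge it elsewhere.

def adaptive_radius(field, r_min=1, r_max=6):
    grad_energy = np.abs(np.gradient(field))
    norm = grad_energy / (grad_energy.max() + 1e-12)
    # High gradient energy -> radius near r_min; smooth -> near r_max.
    return np.round(r_max - (r_max - r_min) * norm).astype(int)

# Toy field: flat, then a sharp ramp (a stand-in for a boundary), then flat.
field = np.concatenate([np.zeros(50), np.linspace(0, 5, 10), 5 * np.ones(50)])
radii = adaptive_radius(field)
```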
Matched field localization based on CS-MUSIC algorithm
NASA Astrophysics Data System (ADS)
Guo, Shuangle; Tang, Ruichun; Peng, Linhui; Ji, Xiaopeng
2016-04-01
The problems caused by too few or too many snapshots and by coherent sources in underwater acoustic positioning are considered. A matched field localization algorithm based on CS-MUSIC (Compressive Sensing Multiple Signal Classification) is proposed, built on a sparse mathematical model of underwater positioning. The signal matrix is calculated through the SVD (Singular Value Decomposition) of the observation matrix. The observation matrix in the sparse mathematical model is replaced by the signal matrix, and a new, more concise sparse mathematical model is obtained, which reduces both the scale of the localization problem and the noise level; the new sparse model is then solved by the CS-MUSIC algorithm, a combination of the CS (Compressive Sensing) and MUSIC (Multiple Signal Classification) methods. The algorithm proposed in this paper can effectively overcome the difficulties caused by correlated sources and a shortage of snapshots, and it can also reduce the time complexity and noise level of the localization problem by using the SVD of the observation matrix when the number of snapshots is large, as proved in this paper.
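The SVD step that replaces the observation matrix by a compact signal matrix can be sketched as follows. The toy mixing matrix, snapshot count, and noise level are hypothetical; the point is only that the dominant singular components span the same subspace as the sources while shrinking the data from many snapshots to a few columns.

```python
import numpy as np

# Build a rank-k "signal matrix" from the k dominant singular components
# of the observations, reducing problem size and suppressing noise before
# a MUSIC-style stage.

def signal_matrix(Y, k):
    """Y: (n_sensors, n_snapshots) observations; returns (n_sensors, k)."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    return U[:, :k] * s[:k]          # dominant subspace, scaled by energy

rng = np.random.default_rng(2)
A = rng.normal(size=(16, 2))          # toy mixing: 2 sources, 16 sensors
S = rng.normal(size=(2, 200))         # source waveforms over 200 snapshots
Y = A @ S + 0.01 * rng.normal(size=(16, 200))
Ys = signal_matrix(Y, k=2)            # 16 x 2 instead of 16 x 200
```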
Real-Time Distributed Embedded Oscillator Operating Frequency Monitoring
NASA Technical Reports Server (NTRS)
Pollock, Julie; Oliver, Brett; Brickner, Christopher
2012-01-01
A document discusses the utilization of embedded clocks inside operating network data links as an auxiliary clock source to satisfy local oscillator monitoring requirements. Modern network interfaces, typically serial network links, often contain embedded clocking information of very tight precision used to recover data from the link. This embedded clocking data can be utilized by the receiving device to monitor the local oscillator for tolerance to required specifications, which is often important in high-integrity fault-tolerant applications. A device can use a single link's received embedded clock to determine whether the local or the remote device is out of tolerance. With two or more active links, and assuming a single-fault model, the local device can determine if it itself is failing. Network fabric components, containing many operational links, can potentially identify faulty remote or local devices even in the presence of multiple faults. Two methods of implementation are described. In the first, a recovered clock is used directly to monitor the local clock as a direct replacement for an external local oscillator; this is consistent with a general clock monitoring function in which two clock sources drive two counters that are compared over a fixed interval of time. In the second, overflow/underflow conditions are used to detect clock relationships. Network interfaces often provide clock compensation circuitry to transfer data from the received (network) clock domain to the internal clock domain; this circuit could be modified to detect overflow/underflow of the required buffering and report a fast or slow receive clock, respectively.
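The first monitoring scheme, two counters gated over a fixed interval and compared against a tolerance, reduces to a one-line check. The clock rates, gate time, and tolerance below are hypothetical; real implementations would do this in hardware.

```python
# Counter-comparison sketch: clock one counter from the local oscillator
# and one from the clock recovered off the network link, gate both for the
# same fixed interval, then compare counts against a ppm tolerance.

def clock_within_tolerance(local_count, recovered_count, ppm_tolerance):
    """True if the local oscillator agrees with the link-recovered clock."""
    if recovered_count == 0:
        return False
    ppm_error = abs(local_count - recovered_count) / recovered_count * 1e6
    return ppm_error <= ppm_tolerance

# Hypothetical 100 MHz clocks gated for 10 ms -> 1,000,000 counts expected.
ok = clock_within_tolerance(1_000_050, 1_000_000, ppm_tolerance=100)   # 50 ppm
bad = clock_within_tolerance(1_000_500, 1_000_000, ppm_tolerance=100)  # 500 ppm
```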
Systematic study of target localization for bioluminescence tomography guided radiation therapy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yu, Jingjing; Zhang, Bin; Reyes, Juvenal
Purpose: To overcome the limitation of CT/cone-beam CT (CBCT) in guiding radiation for soft tissue targets, the authors developed a spectrally resolved bioluminescence tomography (BLT) system for the small animal radiation research platform. The authors systematically assessed the performance of the BLT system in terms of target localization and the ability to resolve two neighboring sources in simulations, a tissue-mimicking phantom, and in vivo environments. Methods: Multispectral measurements acquired in a single projection were used for the BLT reconstruction. The incomplete variables truncated conjugate gradient algorithm with an iterative permissible-region shrinking strategy was employed as the optimization scheme to reconstruct source distributions. Simulation studies were conducted for single spherical sources with sizes from 0.5 to 3 mm radius at depths of 3-12 mm. The same configuration was also applied for the double-source simulations, with source separations varying from 3 to 9 mm. Experiments were performed in a standalone BLT/CBCT system. Two self-illuminated sources with 3 and 4.7 mm separations placed inside a tissue-mimicking phantom were chosen as the test cases. Live mice implanted with a single source at 6 and 9 mm depth, two sources at 3 and 5 mm separation at a depth of 5 mm, or three sources in the abdomen were also used to illustrate the localization capability of the BLT system for multiple targets in vivo. Results: For the simulation study, approximately 1 mm accuracy can be achieved in localizing the center of mass (CoM) for single sources and the grouped CoM for double-source cases. For the case of a 1.5 mm radius source, a common tumor size in preclinical studies, the simulations show that for all source separations considered, except the 3 mm separation at 9 and 12 mm depth, the two neighboring sources can be resolved at depths from 3 to 12 mm.
Phantom experiments illustrated that 2D bioluminescence imaging failed to distinguish two sources, but BLT can provide 3D source localization with approximately 1 mm accuracy. The in vivo results are encouraging: 1 and 1.7 mm accuracy can be attained for the single-source case at 6 and 9 mm depth, respectively. For the two-source in vivo study, both sources can be distinguished at 3 and 5 mm separations, and approximately 1 mm localization accuracy can also be achieved. Conclusions: This study demonstrated that the multispectral BLT/CBCT system could be applied to localize and resolve multiple sources over a wide range of source sizes, depths, and separations. The average accuracy of localizing the CoM for single sources and the grouped CoM for double sources is approximately 1 mm, except for deep-seated targets. The information provided in this study can be instructive in devising treatment margins for BLT-guided irradiation. These results also suggest that the 3D BLT system could guide radiation in situations with multiple targets, such as metastatic tumor models.
Scalp and Source Power Topography in Sleepwalking and Sleep Terrors: A High-Density EEG Study
Castelnovo, Anna; Riedner, Brady A.; Smith, Richard F.; Tononi, Giulio; Boly, Melanie; Benca, Ruth M.
2016-01-01
Study Objectives: To examine scalp and source power topography in sleep arousal disorders (SADs) using high-density EEG (hdEEG). Methods: Fifteen adult subjects with sleep arousal disorders (SADs) and 15 age- and gender-matched good-sleeping healthy controls were recorded in a sleep laboratory setting using a 256-channel EEG system. Results: Scalp EEG analysis of all-night NREM sleep revealed a localized decrease in slow wave activity (SWA) power (1-4 Hz) over centro-parietal regions relative to the rest of the brain in SADs compared to good-sleeping healthy controls. Source modelling analysis of 5-minute segments taken from N3 during the first half of the night revealed that the local decrease in SWA power was prominent at the level of the cingulate, motor, and sensori-motor associative cortices. Similar patterns were also evident during REM sleep and wake. These differences in local sleep were present in the absence of any detectable clinical or electrophysiological sign of arousal. Conclusions: Overall, results suggest the presence of local sleep differences in the brain of SADs patients during nights without clinical episodes. The persistence of similar topographical changes in local EEG power during REM sleep and wakefulness points to trait-like functional changes that cross the boundaries of NREM sleep. The regions identified by source imaging are consistent with the current neurophysiological understanding of SADs as a disorder caused by local arousals in motor and cingulate cortices. Persistent localized changes in neuronal excitability may predispose affected subjects to clinical episodes. Citation: Castelnovo A, Riedner BA, Smith RF, Tononi G, Boly M, Benca RM. Scalp and source power topography in sleepwalking and sleep terrors: a high-density EEG study. SLEEP 2016;39(10):1815-1825. PMID:27568805
NASA Astrophysics Data System (ADS)
Hao, J.; Zhang, J. H.; Yao, Z. X.
2017-12-01
We developed a method to restore clipped seismic waveforms near the epicenter using the projection onto convex sets method (Zhang et al., 2016). This method was applied to rescue the local clipped waveforms of the 2013 Mw 6.6 Lushan earthquake. We restored 88 of 93 clipped waveforms from 38 broadband seismic stations of the China Earthquake Networks (CEN). The epicentral distance of the nearest station whose record we can faithfully restore is only about 32 km. To investigate whether the source parameters of the earthquake can be determined accurately from the restored data, the restored waveforms are used to obtain the mechanism of the Lushan earthquake. We apply the generalized reflection-transmission coefficient matrix method to calculate the synthetic seismic records and the simulated annealing method in the inversion (Yao and Harkrider, 1983; Hao et al., 2012). We select five CEN stations at epicentral distances of about 200 km whose records are not clipped, and use their three-component velocity records. The result shows that the strike, dip, and rake angles of the Lushan earthquake are 200°, 51°, and 87°, respectively, hereinafter the "standard result". The clipped and restored seismic waveforms are then applied in turn. The strike, dip, and rake angles from the clipped waveforms are 184°, 53°, and 72°, respectively; the largest angle misfit is 16°. In contrast, the strike, dip, and rake angles from the restored waveforms are 198°, 51°, and 87°, very close to the "standard result". We also study the rupture history of the Lushan earthquake constrained by the restored local broadband and teleseismic waves using the finite fault method (Hao et al., 2013). The result is consistent with that constrained by the strong motion and teleseismic waves (Hao et al., 2013), especially the location of the patch with larger slip. In real-time seismology, determining the source parameters as soon as possible is important.
This method will help determine earthquake mechanisms from local clipped waveforms. Because strong-motion stations in China do not yet have good coverage, it will also help in investigating the rupture history of large earthquakes in China using the local clipped data of broadband stations.
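The projection-onto-convex-sets idea behind the restoration can be sketched as follows. This is a minimal illustration, not the authors' implementation (Zhang et al., 2016): the band-limit projection, the `keep_frac` and `n_iter` parameters, and the clip-rail handling are all illustrative assumptions.

```python
import numpy as np

def pocs_declip(y, clip_level, keep_frac=0.2, n_iter=200):
    """Restore a clipped 1-D waveform by projection onto convex sets (POCS).

    Alternates between (a) a band-limit projection that keeps only the
    lowest `keep_frac` of Fourier coefficients and (b) a data-consistency
    projection: unclipped samples are reset to their observed values and
    clipped samples are forced to lie beyond the clip level.
    """
    y = np.asarray(y, dtype=float)
    n = y.size
    pos = y >= clip_level          # samples clipped at the positive rail
    neg = y <= -clip_level         # samples clipped at the negative rail
    ok = ~(pos | neg)              # samples observed without clipping
    x = y.copy()
    k = max(1, int(keep_frac * n))
    for _ in range(n_iter):
        # (a) projection onto the band-limited set
        X = np.fft.rfft(x)
        X[k:] = 0.0
        x = np.fft.irfft(X, n)
        # (b) projection onto the observation-consistent set
        x[ok] = y[ok]
        x[pos] = np.maximum(x[pos], clip_level)
        x[neg] = np.minimum(x[neg], -clip_level)
    return x
```

Because both constraint sets are convex and the true band-limited signal lies in their intersection, the alternating projections pull the clipped record toward a consistent restoration.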
NASA Astrophysics Data System (ADS)
Fisher, R. E.; Lowry, D.; France, J.; Lanoiselle, M.; Zazzeri, G.; Nisbet, E. G.
2012-12-01
Different methane sources have different δ13CCH4 and δDCH4 signatures, which potentially provides a powerful constraint on models of methane emission budgets. However, source signatures remain poorly known and need to be studied in more detail if isotopic measurements of ambient air are to be used to constrain regional and global emissions. The Keeling plot method (plotting δ13CCH4 or δDCH4 against 1/CH4 concentration in samples of ambient air in the close vicinity of known sources) directly assesses the source signature of the methane that is actually emitted to the air. This contrasts with chamber studies, which measure air within a chamber where local micro-meteorological and microbiological processes are occurring. Keeling plot methods have been applied to a wide variety of settings in this study. The selection of appropriate background measurements for Keeling plot analysis is also considered. The method has been used on a local scale to identify the source signature of summer emissions from subarctic wetlands in Fennoscandia. Samples are collected from low heights (0.3-3 m) over the wetlands during 24-hour periods, to capture daily emission maxima (warm late afternoons), inversion maxima (at the coldest time of the 24-hour daylight, usually earliest morning), and ambient minima when mixing occurs (often mid-afternoon). Some results are comparable to parallel chamber studies, but in other cases there are small but significant shifts between CH4 in chamber air and CH4 dispersing in the above-ground air. On a regional to continental scale, the isotopic signature of bulk emission sources can be identified using Keeling plots. The methodology is well suited to urban and urban-rural settings. For example, the winter SE monsoon sweeps from inland central Asia over China to Hong Kong.
Application of back-trajectory analysis and Keeling plot methods implies that coal emissions may be a significant Chinese source of methane in January, although in other months biological sources dominate. Similarly, in London the method has been used to test the London methane emission inventory.
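The Keeling-plot regression itself is a short calculation. The sketch below assumes simple two-component (background plus source) mixing; the concentrations and isotopic values in the usage example are illustrative, not data from this study.

```python
import numpy as np

def keeling_intercept(ch4_ppm, delta13c):
    """Estimate a methane source's δ13C signature with a Keeling plot.

    Regresses δ13C against 1/[CH4]; the intercept at 1/[CH4] -> 0 is the
    isotopic signature of the source mixing into background air.
    """
    x = 1.0 / np.asarray(ch4_ppm, dtype=float)
    y = np.asarray(delta13c, dtype=float)
    slope, intercept = np.polyfit(x, y, 1)
    return intercept
```

For exact two-component mixing the relation is exactly linear in 1/[CH4], so the fitted intercept recovers the source signature; with field data the scatter of points about the line indicates how well a single source dominates the excess.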
Computationally Efficient Radio Frequency Source Localization for Radio Interferometric Arrays
NASA Astrophysics Data System (ADS)
Steeb, J.-W.; Davidson, David B.; Wijnholds, Stefan J.
2018-03-01
Radio frequency interference (RFI) is an ever-increasing problem for remote sensing and radio astronomy, with radio telescope arrays especially vulnerable to RFI. Localizing the RFI source is the first step to dealing with the culprit system. In this paper, a new localization algorithm for interferometric arrays with low array beam sidelobes is presented. The algorithm has been adapted to work both in the near field and the far field (only the direction of arrival can be recovered when the source is in the far field). In the near field, the computational complexity of the algorithm is linear in the search grid size, compared with the cubic scaling of the state-of-the-art 3-D MUltiple SIgnal Classification (MUSIC) method. The new method is as accurate as 3-D MUSIC. The trade-off is that the proposed algorithm requires a one-off a priori calculation and storage of weighting matrices. The accuracy of the algorithm is validated using data generated by a low-frequency array while a hexacopter flew around it broadcasting a continuous-wave signal. For the flight, the mean distance between the differential GPS positions and the corresponding estimated positions of the hexacopter is 2 m at a wavelength of 6.7 m.
NASA Astrophysics Data System (ADS)
Antony Chen, L.-W.; Doddridge, Bruce G.; Dickerson, Russell R.; Chow, Judith C.; Henry, Ronald C.
Chemically speciated fine particulate matter (PM2.5) and trace gases (including NH3, HNO3, CO, SO2, NOy) have been sampled at Fort Meade (FME: 39.10°N, 76.74°W; elevation 46 m MSL), Maryland, since July 1999. FME is suburban, located in the middle of the Baltimore-Washington corridor, and generally downwind of the highly industrialized Midwest. The PM2.5 at FME is expected to reflect both local and regional sources. Measurements over a 2-year period include eight seasonally representative months. The PM2.5 shows an annual mean of 13 μg m⁻³ and primarily consists of sulfate, nitrate, ammonium, and carbonaceous material. Day-to-day and seasonal variations in the PM2.5 chemical composition reflect changing contributions from various sources. UNMIX, an innovative receptor model, is used to retrieve potential sources of the PM2.5. A six-factor model, including regional sulfate, local sulfate, wood smoke, copper/iron processing industry, mobile, and secondary nitrate, is constructed and compared with reported source emission profiles. The six factors are studied further using an ensemble back-trajectory method to identify possible source locations. Sources of local sulfate, mobile, and secondary nitrate are more localized around the receptor than those of other factors. Regional sulfate and wood smoke are more regional and associated with westerly and southerly transport, respectively. This study suggests that the local contribution to PM2.5 mass can vary from <30% in summer to >60% in winter.
NASA Astrophysics Data System (ADS)
Shairsingh, Kerolyn K.; Jeong, Cheol-Heon; Wang, Jonathan M.; Evans, Greg J.
2018-06-01
Vehicle emissions represent a major source of air pollution in urban districts, producing highly variable concentrations of some pollutants within cities. The main goal of this study was to identify a deconvolving method so as to characterize variability in local, neighbourhood and regional background concentration signals. This method was validated by examining how traffic-related and non-traffic-related sources influenced the different signals. Sampling with a mobile monitoring platform was conducted across the Greater Toronto Area over a seven-day period during summer 2015. This mobile monitoring platform was equipped with instruments for measuring a wide range of pollutants at time resolutions of 1 s (ultrafine particles, black carbon) to 20 s (nitric oxide, nitrogen oxides). The monitored neighbourhoods were selected based on their land use categories (e.g. industrial, commercial, parks and residential areas). The high time-resolution data allowed pollutant concentrations to be separated into signals representing background and local concentrations. The background signals were determined using a spline of minimums; local signals were derived by subtracting the background concentration from the total concentration. Our study showed that temporal scales of 500 s and 2400 s were associated with the neighbourhood and regional background signals respectively. The percent contribution of the pollutant concentration that was attributed to local signals was highest for nitric oxide (NO) (37-95%) and lowest for ultrafine particles (9-58%); the ultrafine particles were predominantly regional (32-87%) in origin on these days. Local concentrations showed stronger associations than total concentrations with traffic intensity in a 100 m buffer (ρ:0.21-0.44). The neighbourhood scale signal also showed stronger associations with industrial facilities than the total concentrations. 
That the different signals show stronger associations with different land uses suggests that resolving ambient concentrations in this way differentiates which emission sources drive the variability in each signal. A further benefit of this deconvolution method is that it may reduce exposure misclassification when coupled with predictive models.
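The background/local split described above can be sketched as follows. This is a simplified stand-in: a rolling minimum with light smoothing replaces the spline-of-minimums, and the window length is an illustrative assumption rather than the 500 s / 2400 s scales identified in the study.

```python
import numpy as np

def split_signal(conc, window):
    """Split a pollutant time series into background and local parts.

    The background is approximated by a rolling minimum over `window`
    samples, lightly smoothed; the local signal is the remainder.
    """
    conc = np.asarray(conc, dtype=float)
    n = conc.size
    bg = np.array([conc[max(0, i - window // 2):min(n, i + window // 2 + 1)].min()
                   for i in range(n)])
    kernel = np.ones(window) / window
    bg = np.convolve(bg, kernel, mode="same")  # smooth the minimum envelope
    local = conc - bg
    return bg, local
```

Short-lived plumes from nearby sources barely move the rolling minimum, so they end up almost entirely in the local component, while slowly varying regional concentrations define the background.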
Forecasting the Revenues of Local Public Health Departments in the Shadows of the ‘Great Recession’
Reschovsky, Andrew; Zahner, Susan J.
2015-01-01
Context: The ability of local health departments (LHDs) to provide core public health services depends on a reliable stream of revenue from federal, state, and local governments. This study investigates the impact of the "Great Recession" on major sources of LHD revenues and develops a fiscal forecasting model to predict revenues to LHDs in one state over the period 2012 to 2014. Economic forecasting offers a new financial planning tool for LHD administrators and local government policy-makers, and this study represents a novel research application for these econometric methods. Methods: Detailed data on revenues by source for each LHD in Wisconsin were taken from annual surveys conducted by the Wisconsin Department of Health Services over an eight-year period (2002-2009). A forecasting strategy appropriate for each revenue source was developed, resulting in "base case" estimates. The sensitivity of these revenue forecasts to a set of alternative fiscal policies by the federal, state, and local governments was analyzed. Findings: The model forecasts total LHD revenues in 2012 of $170.5 million (in 2010 dollars). By 2014, inflation-adjusted revenues are forecast to decline by $8 million, a reduction of 4.7 percent. Because of population growth, per capita real revenues of LHDs are forecast to decline by 6.6 percent between 2012 and 2014. There is a great deal of uncertainty about the future of federal funding in support of local public health: a doubling of the reductions in federal grants scheduled under current law would result in an additional $4.4 million decline in LHD revenues in 2014. Conclusions: The impact of the Great Recession continues to haunt LHDs. Multi-year revenue forecasting offers a new financial tool to help LHDs better plan for an environment of declining resources. New revenue sources are needed if sharp drops in public health service delivery are to be avoided. PMID:23531611
A detection method for X-ray images based on wavelet transforms: the case of the ROSAT PSPC.
NASA Astrophysics Data System (ADS)
Damiani, F.; Maggio, A.; Micela, G.; Sciortino, S.
1996-02-01
The authors have developed a method based on wavelet transforms (WT) to efficiently detect sources in PSPC X-ray images. The multiscale approach typical of WT can be used to detect sources over a large range of sizes, and to estimate their size and count rate. Significance thresholds for candidate detections (found as local WT maxima) have been derived from a detailed study of the probability distribution of the WT of a locally uniform background. The use of the exposure map allows good detection efficiency to be retained even near PSPC ribs and edges. The algorithm may also be used to obtain upper limits on the count rate of undetected objects. Simulations of realistic PSPC images containing either pure background or background plus sources were used to test the overall algorithm performance, and to assess the frequency of spurious detections (vs. detection threshold) and the algorithm's sensitivity. Actual PSPC images of galaxies and star clusters show the algorithm to perform well even for extended sources and crowded fields.
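The detection scheme (wavelet correlation, then thresholded local maxima) can be sketched in one dimension; the 2-D image case is analogous. The Mexican-hat kernel, the scale, and the robust n-sigma threshold below are illustrative assumptions, not the significance thresholds derived in the paper.

```python
import numpy as np

def ricker(width, n):
    """Mexican-hat (Ricker) wavelet kernel of scale `width` on n points."""
    t = np.arange(n) - n // 2
    a = (t / width) ** 2
    k = (1 - a) * np.exp(-a / 2)
    return k - k.mean()   # zero-sum: a flat background maps to ~0

def detect_sources(signal, width, n_sigma=4.0):
    """Find candidate sources as local maxima of the wavelet transform.

    Correlates a 1-D counts profile with a Mexican-hat kernel and keeps
    local maxima exceeding n_sigma times the WT scatter of the (assumed
    locally uniform) background, estimated robustly from the median.
    """
    w = np.convolve(signal, ricker(width, 8 * width + 1), mode="same")
    thresh = n_sigma * np.median(np.abs(w)) / 0.6745  # robust sigma estimate
    peaks = [i for i in range(1, len(w) - 1)
             if w[i] > thresh and w[i] >= w[i - 1] and w[i] >= w[i + 1]]
    return peaks
```

Because the kernel has zero sum, a uniform background contributes only noise to the transform, so the threshold can be calibrated from the background's WT distribution, which is the core of the multiscale approach.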
Infrared and visible image fusion method based on saliency detection in sparse domain
NASA Astrophysics Data System (ADS)
Liu, C. H.; Qi, Y.; Ding, W. R.
2017-06-01
Infrared and visible image fusion is a key problem in the field of multi-sensor image fusion. To better preserve the significant information of the infrared and visible images in the final fused image, saliency maps of the source images are introduced into the fusion procedure. First, under the framework of the joint sparse representation (JSR) model, global and local saliency maps of the source images are obtained from the sparse coefficients. Then, a saliency detection model is proposed that combines the global and local saliency maps to generate an integrated saliency map. Finally, a weighted fusion algorithm based on the integrated saliency map is developed to carry out the fusion process. The experimental results show that our method is superior to state-of-the-art methods in terms of several universal quality evaluation indexes, as well as in visual quality.
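The final weighted-fusion step can be sketched as follows. This is a generic per-pixel weighting, assuming the saliency maps are already computed and normalized; it stands in for, but does not reproduce, the paper's JSR-based saliency detection.

```python
import numpy as np

def fuse(ir, vis, sal_ir, sal_vis):
    """Weighted fusion of co-registered infrared and visible images.

    Per-pixel weights come from (assumed precomputed) saliency maps:
    wherever the infrared saliency dominates, the fused pixel follows
    the infrared image, and vice versa.
    """
    w = sal_ir / (sal_ir + sal_vis + 1e-12)   # avoid division by zero
    return w * ir + (1.0 - w) * vis
```

The normalization makes the two weights sum to one at every pixel, so the fused image interpolates between the sources rather than amplifying either.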
NASA Astrophysics Data System (ADS)
Gerardy, I.; Rodenas, J.; Van Dycke, M.; Gallardo, S.; Tondeur, F.
2008-02-01
Brachytherapy is a radiotherapy treatment in which encapsulated radioactive sources are introduced within a patient. Depending on the technique used, such sources can produce high, medium or low local dose rates. The Monte Carlo method is a powerful tool for simulating sources and devices in order to help physicists in treatment planning. In multiple types of gynaecological cancer, intracavitary brachytherapy (HDR Ir-192 source) is used in combination with other therapy treatments to give an additional local dose to the tumour. Different types of applicators are used in order to increase the dose imparted to the tumour and to limit the effect on healthy surrounding tissues. The aim of this work is to model both the applicator and the HDR source in order to evaluate the dose at a reference point, as well as the effect of the materials constituting the applicators on the near-field dose. The MCNP5 code, based on the Monte Carlo method, has been used for the simulation. Dose calculations have been performed with the *F8 energy deposition tally, taking into account photons and electrons. Results from the simulation have been compared with experimental in-phantom dose measurements. Differences between calculations and measurements are lower than 5%. The importance of the source position has also been underlined.
Neuromorphic audio-visual sensor fusion on a sound-localizing robot.
Chan, Vincent Yue-Sek; Jin, Craig T; van Schaik, André
2012-01-01
This paper presents the first robotic system featuring audio-visual (AV) sensor fusion with neuromorphic sensors. We combine a pair of silicon cochleae and a silicon retina on a robotic platform to allow the robot to learn sound localization through self-motion and visual feedback, using an adaptive ITD-based sound localization algorithm. After training, the robot can localize sound sources (white or pink noise) in a reverberant environment with an RMS error of 4-5° in azimuth. We also investigate the AV source binding problem, and an experiment is conducted to test the effectiveness of matching an audio event with a corresponding visual event based on their onset times. Despite the simplicity of this method and a large number of false visual events in the background, a correct match was made 75% of the time during the experiment.
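The core of ITD-based localization can be sketched as follows. This is a generic cross-correlation estimator, not the robot's adaptive algorithm; the 20 cm ear spacing and the far-field relation ITD = (d/c)·sin(azimuth) are illustrative assumptions.

```python
import numpy as np

def itd_azimuth(left, right, fs, ear_dist=0.2, c=343.0):
    """Estimate sound azimuth from the interaural time difference (ITD).

    Cross-correlates the two ear signals, converts the lag of the peak
    to a time delay, and inverts the far-field relation
    ITD = (d / c) * sin(azimuth). Positive azimuth means the source is
    toward the right ear (the left signal lags the right).
    """
    corr = np.correlate(left, right, mode="full")
    lag = np.argmax(corr) - (len(right) - 1)   # + lag: left lags right
    itd = lag / fs
    s = np.clip(itd * c / ear_dist, -1.0, 1.0)
    return np.degrees(np.arcsin(s))
```

In a reverberant room the correlation peak is blurred by reflections, which is why a learned, adaptive mapping such as the one the robot acquires can outperform this fixed geometric inversion.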
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, B; Reyes, J; Wong, J
Purpose: To overcome the limitation of CT/CBCT in guiding radiation for soft tissue targets, we developed a bioluminescence tomography (BLT) system for preclinical radiation research. We systematically assessed the system's performance in target localization and its ability to resolve two sources in simulations, phantom and in vivo environments. Methods: Multispectral images acquired in a single projection were used for the BLT reconstruction. Simulation studies were conducted for a single spherical source of radius 0.5 to 3 mm at depths of 3 to 12 mm. The same configuration was also applied for the double-source simulation, with source separations varying from 3 to 9 mm. Experiments were performed in a standalone BLT/CBCT system. Two sources with 3 and 4.7 mm separations placed inside a tissue-mimicking phantom were chosen as the test cases. Live mice implanted with a single source at 6 and 9 mm depth, 2 sources with 3 and 5 mm separation at a depth of 5 mm, or 3 sources in the abdomen were also used to illustrate the in vivo localization capability of the BLT system. Results: Simulation and phantom results illustrate that our BLT can provide 3D source localization with approximately 1 mm accuracy. The in vivo results are encouraging: 1 and 1.7 mm accuracy can be attained for the single-source case at 6 and 9 mm depth, respectively. For the 2-source study, both sources can be distinguished at 3 and 5 mm separations with approximately 1 mm accuracy using 3D BLT, but not with the 2D bioluminescence image. Conclusion: Our BLT/CBCT system can potentially be applied to localize and resolve targets over a wide range of target sizes, depths and separations. The information provided in this study can be instructive in devising margins for BLT-guided irradiation and suggests that BLT could guide radiation for multiple targets, such as metastases. Drs. John W. Wong and Iulian I. Iordachita receive royalty payment from a licensing agreement between Xstrahl Ltd and Johns Hopkins University.
Quantitative estimation of source complexity in tsunami-source inversion
NASA Astrophysics Data System (ADS)
Dettmer, Jan; Cummins, Phil R.; Hawkins, Rhys; Jakir Hossen, M.
2016-04-01
This work analyses tsunami waveforms to infer the spatiotemporal evolution of sea-surface displacement (the tsunami source) caused by earthquakes or other sources. Since the method considers sea-surface displacement directly, no assumptions about the fault or seafloor deformation are required. While this approach has no ability to study seismic aspects of rupture, it greatly simplifies the tsunami source estimation, making it much less dependent on subjective fault and deformation assumptions. This results in a more accurate sea-surface displacement evolution in the source region. The spatial discretization is by wavelet decomposition represented by a trans-D Bayesian tree structure. Wavelet coefficients are sampled by a reversible jump algorithm and additional coefficients are only included when required by the data. Therefore, source complexity is consistent with data information (parsimonious) and the method can adapt locally in both time and space. Since the source complexity is unknown and locally adapts, no regularization is required, resulting in more meaningful displacement magnitudes. By estimating displacement uncertainties in a Bayesian framework we can study the effect of parametrization choice on the source estimate. Uncertainty arises from observation errors and limitations in the parametrization to fully explain the observations. As a result, parametrization choice is closely related to uncertainty estimation and profoundly affects inversion results. Therefore, parametrization selection should be included in the inference process. Our inversion method is based on Bayesian model selection, a process which includes the choice of parametrization in the inference process and makes it data driven. A trans-dimensional (trans-D) model for the spatio-temporal discretization is applied here to include model selection naturally and efficiently in the inference by sampling probabilistically over parameterizations. 
The trans-D process results in better uncertainty estimates since the parametrization adapts parsimoniously (in both time and space) according to the local data resolving power and the uncertainty about the parametrization choice is included in the uncertainty estimates. We apply the method to the tsunami waveforms recorded for the great 2011 Japan tsunami. All data are recorded on high-quality sensors (ocean-bottom pressure sensors, GPS gauges, and DART buoys). The sea-surface Green's functions are computed by JAGURS and include linear dispersion effects. By treating the noise level at each gauge as unknown, individual gauge contributions to the source estimate are appropriately and objectively weighted. The results show previously unreported detail of the source, quantify uncertainty spatially, and produce excellent data fits. The source estimate shows an elongated peak trench-ward from the hypocentre that closely follows the trench, indicating significant sea-floor deformation near the trench. Also notable is a bi-modal (negative to positive) displacement feature in the northern part of the source near the trench. The feature has ~2 m amplitude and is clearly resolved by the data with low uncertainties.
Liu, Hesheng; Gao, Xiaorong; Schimpf, Paul H; Yang, Fusheng; Gao, Shangkai
2004-10-01
Estimation of intracranial electric activity from the scalp electroencephalogram (EEG) requires a solution to the EEG inverse problem, which is known as an ill-conditioned problem. In order to yield a unique solution, weighted minimum norm least square (MNLS) inverse methods are generally used. This paper proposes a recursive algorithm, termed Shrinking LORETA-FOCUSS, which combines and expands upon the central features of two well-known weighted MNLS methods: LORETA and FOCUSS. This recursive algorithm makes iterative adjustments to the solution space as well as the weighting matrix, thereby dramatically reducing the computation load, and increasing local source resolution. Simulations are conducted on a 3-shell spherical head model registered to the Talairach human brain atlas. A comparative study of four different inverse methods, standard Weighted Minimum Norm, L1-norm, LORETA-FOCUSS and Shrinking LORETA-FOCUSS are presented. The results demonstrate that Shrinking LORETA-FOCUSS is able to reconstruct a three-dimensional source distribution with smaller localization and energy errors compared to the other methods.
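The weighted minimum-norm building block that LORETA-FOCUSS-style methods iterate can be sketched as follows; the closed form below is the standard regularized weighted MNLS solution, and the regularization level and identity weighting in the usage example are illustrative assumptions, not the paper's shrinking scheme.

```python
import numpy as np

def weighted_mnls(K, b, W, alpha=1e-2):
    """Weighted minimum-norm least-squares EEG inverse (one FOCUSS step).

    Solves the underdetermined system K x ≈ b (K: leadfield, b: scalp
    measurements) with minimum weighted norm via the regularized closed
    form x = W W^T K^T (K W W^T K^T + alpha I)^{-1} b. FOCUSS-type
    algorithms iterate this, re-building W from the previous estimate so
    the solution progressively focuses onto a few active sources.
    """
    C = W @ W.T
    G = K @ C @ K.T                      # small m x m Gram matrix
    x = C @ K.T @ np.linalg.solve(G + alpha * np.eye(K.shape[0]), b)
    return x
```

Shrinking the solution space between iterations, as the paper proposes, keeps this Gram system small and concentrates the re-weighting on the surviving sources, which is where the speed-up and the sharper localization come from.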
NASA Astrophysics Data System (ADS)
Luo, L.; Cheng, Z.
2017-12-01
Secondary inorganic aerosols (SNA), i.e., sulfate, nitrate and ammonium, account for over 50% of fine particulate matter (PM2.5) during heavy haze episodes over the Yangtze River Delta (YRD) region of China. Understanding the origin and transport of SNA is crucial for alleviating haze pollution over the YRD. Long-range transport from outer-YRD regions has a significant influence on SNA during haze episodes over the YRD, especially in winter. However, previous studies using only a single domain for source analysis are limited in quantifying local and transported sources at the province scale together. In this study, the Integrated Source Apportionment Method (ISAM), based on the Weather Research and Forecasting and Community Multi-scale Air Quality (WRF-CMAQ) models, was applied to two nested domains, one covering eastern China and the other the YRD, for source apportionment of SNA in the YRD during January 2015. The results indicated that outer-YRD transport, mainly from the upwind northwestern provinces Shandong and Henan, was the dominant contributor, accounting for 36.2% of sulfate during pollution episodes. For nitrate, inner-YRD and outer-YRD transport were two evenly matched major regional sources, together contributing 51.9% of nitrate during hazes. For ammonium, however, local accumulation was the leading contributor, accounting for 73.9%. The long formation times of sulfate and nitrate caused a conspicuous wind-driven transport effect when adjacent regions were under severe pollution. Although the combined effects of long- and short-distance transport played a major role in the levels of sulfate and nitrate, the contribution from local accumulation was comparable, and at the province scale even larger. Industry, followed by power plants, was the principal source of sulfate for all three types of regional contribution. The main sectoral sources of nitrate were industry and transport for local accumulation, with power plants contributing in addition for inner-YRD and outer-YRD transport. For ammonium, volatile sources were the major origin for local accumulation, while agriculture dominated inner-YRD transport. These results demonstrate the importance of outer-YRD controls on sulfate and nitrate during haze episodes, and of local emission controls on ammonium in the YRD.
Revenue and Expenditure Projections for the Albuquerque Public Schools. Final Report.
ERIC Educational Resources Information Center
Pleyte, Parrie S.; Kohl, Bruce R.
This report is part of a 10-city national study of revenues and expenditures shared by a local government. The purpose of the study is to project operating revenues and expenditures of the Albuquerque public schools through 1975. The revenue projection includes all sources and uses various methods for estimating Federal, State, and local revenue.…
Localization and cooperative communication methods for cognitive radio
NASA Astrophysics Data System (ADS)
Duval, Olivier
We study localization of nearby nodes and cooperative communication for cognitive radios. Cognitive radios sensing their environment to estimate the channel gain between nodes can cooperate and adapt their transmission power to maximize the capacity of the communication between two nodes. We study the end-to-end capacity of a cooperative relaying scheme using orthogonal frequency-division multiplexing (OFDM), under power constraints for both the base station and the relay station. The relay uses amplify-and-forward and decode-and-forward cooperative relaying techniques to retransmit messages on a subset of the available subcarriers. The power used in the base station and relay station transmitters is allocated to maximize the overall system capacity. The subcarrier selection and power allocation are obtained from convex optimization formulations and an iterative algorithm. Additionally, decode-and-forward relaying schemes are allowed to pair source and relayed subcarriers to further increase the capacity of the system. The proposed techniques outperform non-selective relaying schemes over a range of relay power budgets. Cognitive radios can be used for opportunistic access to the radio spectrum by detecting spectrum holes left unused by licensed primary users. We introduce a spectrum-hole detection approach that combines blind modulation classification, angle-of-arrival estimation and number-of-sources detection. We perform eigenspace analysis to determine the number of sources and estimate their angles of arrival (AOA). In addition, we classify detected sources as primary or secondary users by their distinct second-order one-conjugate cyclostationarity features. Extensive simulations indicate that the proposed system identifies and locates individual sources correctly, even at -4 dB signal-to-noise ratio (SNR).
In environments with a high density of scatterers, several wireless channels experience non-line-of-sight (NLOS) conditions, increasing the localization error even when the AOA estimate is accurate. We present a real-time localization solver (RTLS) for time-of-arrival (TOA) estimates using ray-tracing methods on a map of the wall geometry, and compare its performance with classical TOA trilateration methods. Extensive simulations and field trials in indoor environments show that our method increases the coverage area from 1.9% of the floor to 82.3% and the accuracy by a factor of 10 compared with trilateration. We implemented our ray-tracing model in C++ using the CGAL computational geometry algorithms library. We illustrate the real-time property of our RTLS, which performs most ray-tracing tasks in a preprocessing phase, with time and space complexity analyses and profiling of our software.
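The classical TOA trilateration baseline the RTLS is compared against has a standard linearized least-squares form, sketched below; the anchor layout in the usage example is illustrative.

```python
import numpy as np

def trilaterate(anchors, dists):
    """Least-squares position from TOA range estimates to known anchors.

    Linearizes the range equations ||x - a_i||^2 = d_i^2 by subtracting
    the last anchor's equation, which cancels the quadratic term in x and
    leaves a linear system solved in the least-squares sense.
    """
    anchors = np.asarray(anchors, dtype=float)
    dists = np.asarray(dists, dtype=float)
    ref, dref = anchors[-1], dists[-1]
    A = 2.0 * (anchors[:-1] - ref)
    b = (dref ** 2 - dists[:-1] ** 2
         + np.sum(anchors[:-1] ** 2, axis=1) - np.sum(ref ** 2))
    return np.linalg.lstsq(A, b, rcond=None)[0]
```

Under NLOS conditions the measured ranges are biased long, which corrupts this linear system and motivates the map-based ray-tracing solver the paper proposes.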
Kuwada, Shigeyuki; Bishop, Brian; Kim, Duck O.
2012-01-01
The major functions of the auditory system are recognition (what is the sound) and localization (where is the sound). Although each of these has received considerable attention, rarely are they studied in combination. Furthermore, the stimuli used in the bulk of studies did not represent sound locations in real environments and ignored the effects of reverberation. Another neglected dimension is the distance of a sound source. Finally, there is a scarcity of studies conducted in unanesthetized animals. We illustrate a set of efficient methods that overcome these shortcomings. We use the virtual auditory space (VAS) method to efficiently present sounds at different azimuths, different distances and in different environments. Additionally, this method allows for efficient switching between binaural and monaural stimulation and alteration of acoustic cues singly or in combination to elucidate the neural mechanisms underlying localization and recognition. Such procedures cannot be performed with real sound-field stimulation. Our research is designed to address the following questions: Are IC neurons specialized to process what and where auditory information? How do reverberation and distance of the sound source affect this processing? How do IC neurons represent sound source distance? Are the neural mechanisms underlying envelope processing binaural or monaural? PMID:22754505
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vu, Cung Khac; Skelt, Christopher; Nihei, Kurt
A system and method for investigating rock formations outside a borehole are provided. The method includes generating a first compressional acoustic wave at a first frequency by a first acoustic source; and generating a second compressional acoustic wave at a second frequency by a second acoustic source. The first and the second acoustic sources are arranged within a localized area of the borehole. The first and the second acoustic waves intersect in an intersection volume outside the borehole. The method further includes receiving, at a receiver arranged in the borehole, a third shear acoustic wave at a third frequency, the third shear acoustic wave returning to the borehole due to a non-linear mixing process in a non-linear mixing zone within the intersection volume. The third frequency is equal to the difference between the first frequency and the second frequency.
Evaluation of coded aperture radiation detectors using a Bayesian approach
NASA Astrophysics Data System (ADS)
Miller, Kyle; Huggins, Peter; Labov, Simon; Nelson, Karl; Dubrawski, Artur
2016-12-01
We investigate tradeoffs arising from the use of coded aperture gamma-ray spectrometry to detect and localize sources of harmful radiation in the presence of noisy background. Using an example application scenario of area monitoring and search, we empirically evaluate weakly supervised spectral, spatial, and hybrid spatio-spectral algorithms for scoring individual observations, and two alternative methods of fusing evidence obtained from multiple observations. Results of our experiments confirm the intuition that the directional information provided by spectrometers masked with a coded aperture enables gains in source localization accuracy, but at the expense of reduced probability of detection. Losses in detection performance can, however, be substantially reclaimed by using our new spatial and spatio-spectral scoring methods, which rely on realistic assumptions regarding masking and its impact on measured photon distributions.
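One generic way to fuse evidence from multiple observations is to treat the per-observation scores as independent log-likelihoods and accumulate them in a Bayesian posterior. This sketch is an assumption about the general form of such a fusion step, not the paper's specific scoring functions or fusion methods.

```python
import numpy as np

def fuse_observations(logliks_source, logliks_bg, prior=0.5):
    """Fuse per-observation evidence for 'source present' vs background.

    Treats observations as independent, sums their log-likelihoods under
    each hypothesis, and returns the posterior probability of a source.
    The max-subtraction keeps the exponentials numerically stable.
    """
    ls = np.sum(logliks_source) + np.log(prior)
    lb = np.sum(logliks_bg) + np.log(1.0 - prior)
    m = max(ls, lb)
    return np.exp(ls - m) / (np.exp(ls - m) + np.exp(lb - m))
```

Because evidence accumulates additively in log space, many weakly informative observations (e.g. from a moving detector) can together yield a confident detection even when no single observation is decisive.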
NASA Astrophysics Data System (ADS)
Kong, Xiangzhen; He, Wei; Qin, Ning; He, Qishuang; Yang, Bin; Ouyang, Huiling; Wang, Qingmei; Xu, Fuliu
2013-03-01
Trajectory cluster analysis, including the two-stage cluster method based on Euclidean metrics and the one-stage clustering method based on Mahalanobis metrics and self-organizing maps (SOM), was applied and compared to identify the transport pathways of PM10 for the cities of Chaohu and Hefei, both located near Lake Chaohu in China. The two-stage cluster method was modified to further investigate the long trajectories in the second stage in order to eliminate the observed disaggregation among them. Twelve trajectory clusters were identified for both cities. The one-stage clustering method based on Mahalanobis metrics performed best in terms of within-cluster variance. The results showed that local PM10 emission was one of the most important sources in both cities and that local emission in Hefei was higher than in Chaohu. In addition, Chaohu was more affected by the eastern region (Yangtze River Delta, YRD) than Hefei. On the other hand, long-range transport along the northwestern pathway had a greater influence on the PM10 level in Hefei. Receptor models, including the potential source contribution function (PSCF) and residence time weighted concentrations (RTWC), were utilized to identify the potential source locations of PM10 for both cities. The combined PSCF and RTWC results for the two cities, however, provided PM10 source locations that were more consistent with the transport pathway results and the total anthropogenic PM10 emission inventory. This indicates that the combined method's ability to identify source regions is superior to that of the individual PSCF or RTWC methods. Henan and Shanxi Provinces and the YRD were important PM10 source regions for the two cities, but the Henan and Shanxi area was more important for Hefei than for Chaohu, while the YRD region was less important.
In addition, the PSCF, RTWC and the combined results all had higher correlation coefficients with PM10 emission from traffic than from industry, electricity generation or residential sources, suggesting the relatively higher contribution of traffic emissions to the PM10 pollution in Lake Chaohu.
Legal and financial methods for reducing low emission sources: Options for incentives
DOE Office of Scientific and Technical Information (OSTI.GOV)
Samitowski, W.
1995-12-31
There are two types of so-called low emission sources in Cracow: over 1,000 local boiler houses and several thousand solid fuel-fired stoves. The accomplishment of each of the 5 sub-projects offered under the American-Polish program entails solving technical, financial, legal and public relations-related problems. The elimination of low emission sources therefore requires a joint effort of the following parties: (a) local authorities, (b) investors, (c) owners and users of low emission sources, and (d) inhabitants involved in particular projects. The results of the studies developed by POLINVEST indicate that the accomplishment of the projects for the elimination of low emission sources will require financial incentives. Bearing in mind today's resources available from the community budget, this process may last as long as a dozen or so years. The task of the authorities of Cracow City is to develop a long-range operational strategy enabling the reduction of low emission sources in Cracow.
Time domain localization technique with sparsity constraint for imaging acoustic sources
NASA Astrophysics Data System (ADS)
Padois, Thomas; Doutres, Olivier; Sgard, Franck; Berry, Alain
2017-09-01
This paper addresses a time-domain source localization technique for broadband acoustic sources. The objective is to accurately and quickly detect the position and amplitude of noise sources in workplaces in order to propose adequate noise control options and prevent worker hearing loss and safety risks. First, the generalized cross-correlation associated with a spherical microphone array is used to generate an initial noise source map. Then a linear inverse problem is defined to improve this initial map. Commonly, the linear inverse problem is solved with an l2-regularization. In this study, two sparsity constraints are used to solve the inverse problem: orthogonal matching pursuit and the truncated Newton interior-point method. Synthetic data are used to highlight the performance of the technique. High-resolution imaging is achieved for various acoustic source configurations. Moreover, the amplitudes of the acoustic sources are correctly estimated. A comparison of computation times shows that the technique is compatible with quasi real-time generation of noise source maps. Finally, the technique is tested with real data.
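The sparse inverse step can be illustrated with orthogonal matching pursuit, the first of the two sparsity solvers named above. This is a minimal numpy sketch, not the authors' implementation: the dictionary `A` is a random stand-in for the array transfer matrix relating candidate grid positions to microphone signals.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear inverse problem y = A x: columns of A are the (here random)
# array responses of candidate grid positions; x holds source amplitudes.
n_mics, n_grid = 32, 120
A = rng.standard_normal((n_mics, n_grid))
A /= np.linalg.norm(A, axis=0)              # unit-norm dictionary columns

x_true = np.zeros(n_grid)
x_true[[7, 55]] = [1.0, 0.6]                # two active sources on the grid
y = A @ x_true

def omp(A, y, n_sources):
    """Orthogonal matching pursuit: greedily pick the dictionary column
    most correlated with the residual, then refit on the support."""
    residual, support = y.copy(), []
    for _ in range(n_sources):
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

x_hat = omp(A, y, n_sources=2)
```

For noiseless data the greedy search recovers both source positions and amplitudes exactly; with noise, the stopping criterion (here a fixed source count) becomes the main tuning choice.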
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mertsch, Philipp; Rameez, Mohamed; Tamborra, Irene, E-mail: mertsch@nbi.ku.dk, E-mail: mohamed.rameez@nbi.ku.dk, E-mail: tamborra@nbi.ku.dk
Constraints on the number and luminosity of the sources of the cosmic neutrinos detected by IceCube have been set by targeted searches for point sources. We set complementary constraints by using the 2MASS Redshift Survey (2MRS) catalogue, which maps the matter distribution of the local Universe. Assuming that the distribution of the neutrino sources follows that of matter, we look for correlations between "warm" spots on the IceCube skymap and the 2MRS matter distribution. Through Monte Carlo simulations of the expected number of neutrino multiplets and careful modelling of the detector performance (including that of IceCube-Gen2), we demonstrate that sources with local density exceeding 10^-6 Mpc^-3 and neutrino luminosity L_ν ≲ 10^42 erg s^-1 (10^41 erg s^-1) will be efficiently revealed by our method using IceCube (IceCube-Gen2). At low luminosities such as will be probed by IceCube-Gen2, the sensitivity of this analysis is superior to requiring statistically significant direct observation of a point source.
Directional Hearing and Sound Source Localization in Fishes.
Sisneros, Joseph A; Rogers, Peter H
2016-01-01
Evidence suggests that the capacity for sound source localization is common to mammals, birds, reptiles, and amphibians, but surprisingly it is not known whether fish locate sound sources in the same manner (e.g., by combining binaural and monaural cues) or what computational strategies they use for successful source localization. Directional hearing and sound source localization in fishes continue to be important topics in neuroethology and in the hearing sciences, but the empirical and theoretical work on these topics has been contradictory and obscure for decades. This chapter reviews the previous behavioral work on directional hearing and sound source localization in fishes, including the most recent experiments on sound source localization by the plainfin midshipman fish (Porichthys notatus), which has proven to be an exceptional species for fish studies of sound localization. In addition, the theoretical models of directional hearing and sound source localization for fishes are reviewed, including a new model that uses a time-averaged intensity approach for source localization and has wide applicability with regard to source type, acoustic environment, and time waveform.
An exact noniterative linear method for locating sources based on measuring receiver arrival times.
Militello, C; Buenafuente, S R
2007-06-01
In this paper an exact, linear solution to the source localization problem based on the time of arrival at the receivers is presented. The method is unique in that the source's position can be obtained by solving a system of linear equations: three for a plane and four for a volume. The price of this simplification is one receiver more than the mathematical minimum (3+1 in two dimensions and 4+1 in three dimensions). The equations are easily worked out for any receiver configuration and their geometrical interpretation is straightforward. Unlike other methods, the system of reference used to describe the receivers' positions is completely arbitrary. The relationship between this method and previously published ones is discussed, showing how the present, more general, method overcomes nonlinearity and unknown-dependency issues.
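The linearization idea can be sketched with standard squared-range algebra (not necessarily the paper's exact formulation): subtracting one receiver's squared-range equation from the others cancels the quadratic unknowns, leaving three equations that are linear in the 2D source position and emission time, obtained from 3+1 receivers.

```python
import numpy as np

c = 343.0                                   # propagation speed (m/s)
R = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
s_true, t0_true = np.array([3.0, 7.0]), 0.25
t = t0_true + np.linalg.norm(R - s_true, axis=1) / c   # arrival times

# Squared-range equation at receiver i:  ||s - R_i||^2 = c^2 (t_i - t0)^2.
# Subtracting the i = 0 equation from the others cancels ||s||^2 and t0^2,
# leaving a system linear in (x, y, t0): three equations for the plane.
A = np.column_stack([-2.0 * (R[1:] - R[0]),
                     2.0 * c**2 * (t[1:] - t[0])])
b = (c**2 * (t[1:]**2 - t[0]**2)
     - np.sum(R[1:]**2, axis=1) + np.sum(R[0]**2))
x, y, t0 = np.linalg.solve(A, b)
```

With four receivers in the plane the system is square; additional receivers simply turn it into an overdetermined least-squares problem, in any coordinate frame.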
Mideksa, K G; Singh, A; Hoogenboom, N; Hellriegel, H; Krause, H; Schnitzler, A; Deuschl, G; Raethjen, J; Schmidt, G; Muthuraman, M
2016-08-01
One of the most commonly used therapies to treat patients with Parkinson's disease (PD) is deep brain stimulation (DBS) of the subthalamic nucleus (STN). Identifying the optimal target area for the placement of the DBS electrodes has become an area of intensive research. In this study, the first aim is to investigate the capabilities of different source-analysis techniques in detecting deep sources located at the sub-cortical level, validated against the a priori information about the location of the source, that is, the STN. Secondly, we aim to investigate whether EEG or MEG is better suited to mapping DBS-induced brain activity. To do this, simultaneous EEG and MEG measurements were used to record the DBS-induced electromagnetic potentials and fields. The boundary-element method (BEM) was used to solve the forward problem. The position of the DBS electrodes was then estimated using dipole (moving, rotating, and fixed MUSIC) and current-density-reconstruction (CDR) (minimum-norm and sLORETA) approaches. The source-localization results from the dipole approaches demonstrated that the fixed MUSIC algorithm best localizes deep focal sources, whereas the moving dipole detects not only the region of interest but also neighboring regions that are affected by stimulating the STN. The results from the CDR approaches validated the capability of sLORETA in detecting the STN compared to minimum-norm. Moreover, the source-localization results using the EEG modality outperformed those of the MEG by locating the DBS-induced activity in the STN.
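A minimal sketch of the fixed-dipole MUSIC scan idea follows, with a random synthetic gain matrix standing in for the BEM lead field (an assumption; a real study computes the lead field from the head model and scans dipole orientations as well):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical gain matrix: column j maps a unit source at grid point j to
# the sensors (a real study would compute this with the BEM forward model).
n_sensors, n_grid, n_samples = 64, 200, 500
G = rng.standard_normal((n_sensors, n_grid))
G /= np.linalg.norm(G, axis=0)

true_src = 42
activity = rng.standard_normal(n_samples)            # source time course
data = np.outer(G[:, true_src], activity)
data += 0.05 * rng.standard_normal(data.shape)       # sensor noise

# MUSIC scan: project each candidate topography onto the signal subspace
# of the data covariance; the true source maximizes the subspace fit.
cov = data @ data.T / n_samples
eigvals, eigvecs = np.linalg.eigh(cov)
Us = eigvecs[:, -1:]                 # rank-1 signal subspace (one source)
score = np.linalg.norm(Us.T @ G, axis=0)   # columns of G are unit norm
est_src = int(np.argmax(score))
```

The subspace dimension (here 1) must match the assumed number of active sources; over- or under-estimating it is the usual failure mode of MUSIC-type scans.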
COMPARISON OF NONLINEAR DYNAMICS OPTIMIZATION METHODS FOR APS-U
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sun, Y.; Borland, Michael
Many different objectives and genetic algorithms have been proposed for storage ring nonlinear dynamics performance optimization. These optimization objectives include nonlinear chromaticities and driving/detuning terms, on-momentum and off-momentum dynamic acceptance, chromatic detuning, local momentum acceptance, variation of transverse invariant, Touschek lifetime, etc. In this paper, the effectiveness of several different optimization methods and objectives is compared for the nonlinear beam dynamics optimization of the Advanced Photon Source upgrade (APS-U) lattice. The optimized solutions from these different methods are preliminarily compared in terms of dynamic acceptance, local momentum acceptance, chromatic detuning, and other performance measures.
MTSAT: Full Disk - NOAA GOES Geostationary Satellite Server
SEISMIC SOURCE SCALING AND DISCRIMINATION IN DIVERSE TECTONIC ENVIRONMENTS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Abercrombie, R E; Mayeda, K; Walter, W R
2007-07-10
The objectives of this study are to improve low-magnitude regional seismic discrimination by performing a thorough investigation of earthquake source scaling using diverse, high-quality datasets from varied tectonic regions. Local-to-regional high-frequency discrimination requires an estimate of how earthquakes scale with size. Walter and Taylor (2002) developed the MDAC (Magnitude and Distance Amplitude Corrections) method to empirically account for these effects through regional calibration. The accuracy of these corrections has a direct impact on our ability to identify clandestine explosions in the broad regional areas characterized by low seismicity. Unfortunately, our knowledge of source scaling at small magnitudes (i.e., m_b < ~4.0) is poorly resolved. It is not clear whether different studies obtain contradictory results because they analyze different earthquakes, or because they use different methods. Even in regions that are well studied, such as test sites or areas of high seismicity, we still rely on empirical scaling relations derived from studies taken from half-way around the world at inter-plate regions. We investigate earthquake sources and scaling from different tectonic settings, comparing direct and coda wave analysis methods. We begin by developing and improving the two different methods, and then in future years we will apply them both to each set of earthquakes. Analysis of locally recorded, direct waves from events is intuitively the simplest way of obtaining accurate source parameters, as these waves have been least affected by travel through the earth. But there are only a limited number of earthquakes that are recorded locally, by sufficient stations to give good azimuthal coverage, and have very closely located smaller earthquakes that can be used as an empirical Green's function (EGF) to remove path effects.
In contrast, coda waves average radiation from all directions, so single-station records should be adequate, and previous work suggests that the requirements for the EGF event are much less stringent. We can study more earthquakes using the coda-wave methods, while using direct-wave methods for the best recorded subset of events so as to investigate any differences between the results of the two approaches. Finding 'perfect' EGF events for direct wave analysis is difficult, as is ascertaining the quality of a particular EGF event. We develop a multi-taper method to obtain time-domain source-time functions by frequency division. If an earthquake and EGF event pair produce a clear, time-domain source pulse, then we accept the EGF event. We then model the spectral (amplitude) ratio to determine source parameters from both direct P and S waves. We use the well-recorded sequence of aftershocks of the M5 Au Sable Forks, NY, earthquake to test the method and also to obtain some of the first accurate source parameters for small earthquakes in eastern North America. We find that the stress drops are high, confirming previous work suggesting that intraplate continental earthquakes have higher stress drops than events at plate boundaries. We simplify and improve the coda wave analysis method by calculating spectral ratios between different sized earthquakes. We first compare spectral ratio performance between local and near-regional S and coda waves in the San Francisco Bay region for moderate-sized events. The average spectral ratio standard deviations using coda are ~0.05 to 0.12, roughly a factor of 3 smaller than direct S-waves for 0.2 < f < 15.0 Hz. Also, direct wave analysis requires collocated pairs of earthquakes, whereas the event pairs (Green's function and target events) can be separated by ~25 km for coda amplitudes without any appreciable degradation.
We then apply the coda spectral ratio method to the 1999 Hector Mine mainshock (M_w 7.0, Mojave Desert) and its larger aftershocks. We observe a clear departure from self-similarity, consistent with previous studies using similar regional datasets.
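The spectral-ratio modelling step can be sketched on synthetic data, assuming omega-square (Brune) source spectra; the brute-force grid search below is illustrative only, not the study's actual inversion:

```python
import numpy as np

def brune(f, moment, fc):
    """Omega-square (Brune) source spectrum."""
    return moment / (1.0 + (f / fc) ** 2)

f = np.logspace(-1, 1.5, 200)                 # 0.1 to ~32 Hz
obs = np.log(brune(f, 1e17, 0.8) / brune(f, 1e15, 5.0))   # log spectral ratio

# Grid-search the two corner frequencies; for each pair, the log moment
# ratio is a free intercept fitted in closed form as the mean residual.
grid = np.arange(0.2, 8.01, 0.2)
best = (np.inf, None, None, None)
for fc1 in grid:                              # corner of the larger event
    for fc2 in grid:                          # corner of the EGF event
        shape = np.log1p((f / fc2) ** 2) - np.log1p((f / fc1) ** 2)
        intercept = np.mean(obs - shape)
        misfit = np.sum((obs - shape - intercept) ** 2)
        if misfit < best[0]:
            best = (misfit, fc1, fc2, np.exp(intercept))

_, fc_big, fc_small, moment_ratio = best
```

Fitting the ratio rather than a single spectrum is what removes the common path and site terms; departures from self-similarity then show up as corner frequencies that do not scale with the moment ratio as expected.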
Design and optimization of a brachytherapy robot
NASA Astrophysics Data System (ADS)
Meltsner, Michael A.
Trans-rectal ultrasound guided (TRUS) low dose rate (LDR) interstitial brachytherapy has become a popular procedure for the treatment of prostate cancer, the most common type of non-skin cancer among men. The current TRUS technique of LDR implantation may result in less than ideal coverage of the tumor, with increased risk of negative responses such as rectal toxicity and urinary retention. This technique is limited by the skill of the physician performing the implant, the accuracy of needle localization, and the inherent weaknesses of the procedure itself. The treatment may require 100 or more sources and 25 needles, compounding the inaccuracy of the needle localization procedure. A robot designed for prostate brachytherapy may increase the accuracy of needle placement while minimizing the effect of physician technique in the TRUS procedure. Furthermore, a robot may reduce the associated toxicities by utilizing angled insertions and freeing implantations from the constraints applied by the 0.5 cm-spaced template used in the TRUS method. Within our group, Lin et al. have designed a new type of LDR source. The "directional" source is a seed designed to be partially shielded. Thus, a directional, or anisotropic, source does not emit radiation in all directions. The source can be oriented to irradiate cancerous tissues while sparing normal ones. This type of source necessitates a new, highly accurate method for localization in 6 degrees of freedom. A robot is the best way to accomplish this task accurately. The following presentation of work describes the invention and optimization of a new prostate brachytherapy robot that fulfills these goals. Furthermore, some research has been dedicated to the use of the robot to perform needle insertion tasks (brachytherapy, biopsy, RF ablation, etc.) in nearly any other soft tissue in the body. This can be accomplished with the robot combined with automatic, magnetic tracking.
Treatment of Atrial Fibrillation By The Ablation Of Localized Sources
Narayan, Sanjiv M.; Krummen, David E.; Shivkumar, Kalyanam; Clopton, Paul; Rappel, Wouter-Jan; Miller, John M.
2012-01-01
Objectives We hypothesized that human atrial fibrillation (AF) may be sustained by localized sources (electrical rotors and focal impulses), whose elimination (Focal Impulse and Rotor Modulation, FIRM) may improve the outcome of AF ablation. Background Catheter ablation for AF is a promising therapy whose success is limited in part by uncertainty in the mechanisms that sustain AF. We developed a computational approach to map whether AF is sustained by several meandering waves (the prevailing hypothesis) or by localized sources, then prospectively tested whether targeting patient-specific mechanisms revealed by mapping would improve AF ablation outcome. Methods We recruited 92 individuals during 107 consecutive ablation procedures for paroxysmal or persistent (72%) AF. Cases were prospectively treated, in a 2-arm 1:2 design, by ablation at sources (FIRM-Guided) followed by conventional ablation (n=36), or by conventional ablation alone (n=71; FIRM-Blinded). Results Localized rotors or focal impulses were detected in 98 (97%) of 101 cases with sustained AF, each exhibiting 2.1±1.0 sources. The acute endpoint (AF termination or consistent slowing) was achieved in 86% of FIRM-Guided versus 20% of FIRM-Blinded cases (p<0.001). FIRM ablation alone at the primary source terminated AF in a median of 2.5 minutes (IQR 1.0–3.1). Total ablation time did not differ between groups (57.8±22.8 versus 52.1±17.8 minutes, p=0.16). During 273 days (median; IQR 132–681 days) of rigorous, often implanted, ECG monitoring after a single procedure, FIRM-Guided cases had higher freedom from AF than FIRM-Blinded cases (82.4% versus 44.9%; p<0.001). Adverse events did not differ between groups. Conclusions Localized electrical rotors and focal impulse sources are prevalent sustaining mechanisms for human AF. FIRM ablation at patient-specific sources acutely terminated or slowed AF and improved outcome.
These results offer a novel mechanistic framework and treatment paradigm for AF. (ClinicalTrials.gov number, NCT01008722) PMID:22818076
Suspended-sediment sources in an urban watershed, Northeast Branch Anacostia River, Maryland
Devereux, Olivia H.; Prestegaard, Karen L.; Needelman, Brian A.; Gellis, Allen C.
2010-01-01
Fine sediment sources were characterized by chemical composition in an urban watershed, the Northeast Branch Anacostia River, which drains to the Chesapeake Bay. Concentrations of 63 elements and two radionuclides were measured in possible land-based sediment sources and in suspended sediment collected from the water column at the watershed outlet during storm events. These tracer concentrations were used to determine the relative quantity of suspended sediment contributed by each source. Although this is an urbanized watershed, there was not a distinct urban signature that could be evaluated, except for the contributions from road surfaces. We identified the sources of fine sediment both by physiographic province (Piedmont and Coastal Plain) and by source locale (streambanks, upland and street residue) by using different sets of elemental tracers. The Piedmont contributed the majority of the fine sediment for seven of the eight measured storms. The streambanks contributed the greatest quantity of fine sediment when evaluated by source locale. Street residue contributed 13% of the total suspended sediment on average and was the source most concentrated in anthropogenically enriched elements. Combining results from the source locale and physiographic province analyses, most fine sediment in the Northeast Branch watershed is derived from streambanks that contain sediment eroded from the Piedmont physiographic province of the watershed. Sediment fingerprinting analyses are most useful when longer-term evaluations of sediment erosion and storage are also available from streambank-erosion measurements, sediment budgets and other methods.
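The tracer-based apportionment described above is commonly posed as a constrained unmixing problem: find non-negative source proportions, summing to one, whose mixture of source tracer signatures best matches the outlet sample. A sketch with hypothetical tracer concentrations (all numbers invented for illustration, not the study's data):

```python
import numpy as np
from scipy.optimize import nnls

# Hypothetical tracer means (rows: tracers; columns: streambank, upland,
# street residue) and a mixture "measured" at the watershed outlet.
sources = np.array([[12.0,  3.0, 45.0],    # element A (mg/kg)
                    [ 5.0, 20.0,  8.0],    # element B (mg/kg)
                    [ 0.7,  0.2,  2.5]])   # radionuclide activity
true_p = np.array([0.60, 0.27, 0.13])
mixture = sources @ true_p

# Non-negative least squares, with a heavily weighted extra row forcing
# the source proportions to sum to one (a common unmixing formulation).
w = 100.0
A = np.vstack([sources, w * np.ones(3)])
b = np.append(mixture, w)
p, _ = nnls(A, b)
```

In practice tracer concentrations are normalized and screened for conservativeness first, and uncertainty is propagated by resampling the source means.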
Treatment of internal sources in the finite-volume ELLAM
Healy, R.W.
2000-01-01
The finite-volume Eulerian-Lagrangian localized adjoint method (FVELLAM) is a mass-conservative approach for solving the advection-dispersion equation. The method has been shown to be accurate and efficient for solving advection-dominated problems of solute transport in ground water in 1, 2, and 3 dimensions. Previous implementations of FVELLAM have had difficulty in representing internal sources because the standard assumption of a lowest-order Raviart-Thomas velocity field does not hold for source cells. Therefore, tracking of particles within source cells is problematic. A new approach has been developed to account for internal sources in FVELLAM. It is assumed that the source is uniformly distributed across a grid cell and that instantaneous mixing takes place within the cell, such that concentration is uniform across the cell at any time. Sub-time steps are used in the time-integration scheme to track mass outflow from the edges of the source cell. This avoids the need for tracking within the source cell. We describe the new method and compare results for a test problem with a wide range of cell Peclet numbers.
MHODE: a local-homogeneity theory for improved source-parameter estimation of potential fields
NASA Astrophysics Data System (ADS)
Fedi, Maurizio; Florio, Giovanni; Paoletti, Valeria
2015-08-01
We describe a multihomogeneity theory for source-parameter estimation of potential fields. Similar to what happens for random source models, where the monofractal scaling law has been generalized into a multifractal law, we propose to generalize the homogeneity law into a multihomogeneity law. This allows a theoretically correct approach to studying real-world potential fields, which are inhomogeneous and so do not show scale invariance, except in the asymptotic regions (very near to or very far from their sources). Since the scaling properties of inhomogeneous fields change with the scale of observation, we show that they may be better studied at a set of scales than at a single scale and that a multihomogeneous model is needed to explain their complex scaling behaviour. In order to perform this task, we first introduce fractional-degree homogeneous fields, to show that: (i) homogeneous potential fields may have fractional or integer degree; (ii) the source distributions for a fractional degree are not confined to a bounded region, similarly to some integer-degree models, such as the infinite line mass; and (iii) differently from the integer-degree case, the fractional-degree source distributions are no longer uniform density functions. Using this enlarged set of homogeneous fields, real-world anomaly fields are studied at different scales by a simple search, in any local window W, for the best homogeneous field of either integer or fractional degree, yielding a multiscale set of local homogeneity degrees and depth estimates that we call a multihomogeneous model. This defines a new source-parameter estimation technique (Multi-HOmogeneity Depth Estimation, MHODE) that permits retrieval of the parameters of complex sources. We test the method with inhomogeneous fields of finite sources, such as faults or cylinders, and show its effectiveness in a real-case example as well.
These applications show the usefulness of the new concepts of multihomogeneity and fractional homogeneity degree for obtaining valid estimates of the source parameters in a consistent theoretical framework, thus overcoming the limitations imposed by global homogeneity on widespread methods such as Euler deconvolution.
Autonomous Modelling of X-ray Spectra Using Robust Global Optimization Methods
NASA Astrophysics Data System (ADS)
Rogers, Adam; Safi-Harb, Samar; Fiege, Jason
2015-08-01
The standard approach to model fitting in X-ray astronomy is by means of local optimization methods. However, these local optimizers suffer from a number of problems, such as a tendency for the fit parameters to become trapped in local minima, and can require an involved process of detailed user intervention to guide them through the optimization process. In this work we introduce a general GUI-driven global optimization method for fitting models to X-ray data, written in MATLAB, which searches for optimal models with minimal user interaction. We directly interface with the commonly used XSPEC libraries to access the full complement of pre-existing spectral models that describe a wide range of physics appropriate for modelling astrophysical sources, including supernova remnants and compact objects. Our algorithm is powered by the Ferret genetic algorithm and Locust particle swarm optimizer from the Qubist Global Optimization Toolbox, which are robust at finding families of solutions and identifying degeneracies. This technique will be particularly instrumental for multi-parameter models and high-fidelity data. In this presentation, we provide details of the code and use our techniques to analyze X-ray data obtained from a variety of astrophysical sources.
NASA Astrophysics Data System (ADS)
Qin, Xinqiang; Hu, Gang; Hu, Kai
2018-01-01
The decomposition of multiple source images using bidimensional empirical mode decomposition (BEMD) often produces bidimensional intrinsic mode functions that are mismatched in either number or frequency, making image fusion difficult. A solution to this problem is proposed using a fixed number of iterations and a union operation in the sifting process. An image fusion method has been developed that combines the local regional features of the images. First, the source images are decomposed using the proposed BEMD to produce the first intrinsic mode function (IMF) and the residue component. Second, for the IMF component, a selection and weighted-average strategy based on local area energy is used to obtain a high-frequency fusion component. Third, for the residue component, a selection and weighted-average strategy based on local average gray difference is used to obtain a low-frequency fusion component. Finally, the fused image is obtained by applying the inverse BEMD transform. Experimental results show that the proposed algorithm provides superior performance over methods based on the wavelet transform, line- and column-based EMD, and complex empirical mode decomposition, in terms of both visual quality and objective evaluation criteria.
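The high-frequency rule (select the coefficient with the larger local-area energy, or take a weighted average when the two energies are similar) can be sketched as follows; the 3x3 window and 0.6 similarity threshold are illustrative assumptions, not values from the paper:

```python
import numpy as np
from scipy.ndimage import uniform_filter

rng = np.random.default_rng(2)
imf_a = rng.standard_normal((64, 64))     # first IMF of source image A
imf_b = rng.standard_normal((64, 64))     # first IMF of source image B

# Local-area energy in a 3x3 window drives the high-frequency rule: take
# the coefficient from the image with more local detail, but switch to a
# weighted average where the two energies are similar.
e_a = uniform_filter(imf_a ** 2, size=3)
e_b = uniform_filter(imf_b ** 2, size=3)
select_a = e_a >= e_b
ratio = np.minimum(e_a, e_b) / np.maximum(e_a, e_b)   # energy similarity
fused = np.where(select_a, imf_a, imf_b)              # selection branch
avg = (e_a * imf_a + e_b * imf_b) / (e_a + e_b)       # weighted average
fused = np.where(ratio > 0.6, avg, fused)
```

The analogous low-frequency rule would replace local energy with the local average gray difference computed on the residue components.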
NASA Astrophysics Data System (ADS)
Hadi, Nik Azran Ab; Rashid, Wan Norhisyam Abd; Hashim, Nik Mohd Zarifie; Mohamad, Najmiah Radiah; Kadmin, Ahmad Fauzan
2017-10-01
Electricity is the most widely used energy source in the world. Engineers and technologists have cooperated to develop new low-cost, carbon-free technologies, since carbon emissions are now a major concern due to global warming. Renewable energy sources such as hydro, wind and wave power are becoming widespread as a means of reducing carbon emissions; on the other hand, this effort requires several novel methods, techniques and technologies compared to coal-based power. The power quality of renewable sources needs in-depth research and continued study to improve renewable energy technologies. The aim of this project is to investigate the impact of a renewable electric generator on its local distribution system. The power farm was designed to connect to the local distribution system, and it is investigated and analyzed to make sure that the energy supplied to customers is clean. MATLAB tools are used to simulate the overall analysis. At the end of the project, a summary identifying various sources of voltage fluctuation is presented in terms of voltage flicker. An analysis of the impact of wave power generation on the local distribution system is also presented for the development of wave generator farms.
Improvements to Passive Acoustic Tracking Methods for Marine Mammal Monitoring
2016-05-02
Subject terms: marine mammal; passive acoustic monitoring; localization; tracking; multiple source; sparse array. The project extends methods [e.g. Thode 2005; Nosal 2007] to localize animals in situations where the straight-line propagation assumptions made by conventional marine mammal tracking fail. Objective 1: invert for sound speed profiles, hydrophone positions and hydrophone timing offsets in addition to animal positions.
A Study of Regional Wave Source Time Functions of Central Asian Earthquakes
NASA Astrophysics Data System (ADS)
Xie, J.; Perry, M. R.; Schult, F. R.; Wood, J.
2014-12-01
Despite the extensive use of seismic regional waves in seismic event identification and attenuation tomography, very little is known about how seismic sources radiate energy into these waves. For example, whether the regional Lg wave has the same source spectrum as the local S wave was questioned by Harr et al. and Frenkel et al. three decades ago; many current investigators assume that the source spectra in Lg, Sn, Pg, Pn and Lg coda waves have either the same or very similar corner frequencies, in contrast to local P and S spectra, whose corner frequencies differ. The most complete information on how finite source ruptures radiate energy into regional waves is contained in the time-domain source time functions (STFs). To estimate the STFs of regional waves using the empirical Green's function (EGF) method, we have been substantially modifying a semi-automated computer procedure to cope with the increasingly diverse and inconsistent naming patterns of new data files from the IRIS DMC. We are applying the modified procedure to many earthquakes in central Asia to study the STFs of various regional waves, to see whether they have the same durations and pulse shapes and how frequently source directivity occurs. When applicable, we also examine the differences between STFs of local P and S waves and those of regional waves. The results of these analyses will be presented at the meeting.
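The EGF idea, deconvolving a small co-located event's record from a larger event's record so that the common path response cancels and the STF remains, can be sketched with water-level spectral division on toy signals (an assumed deconvolution scheme for illustration; the abstract does not specify the authors' exact procedure):

```python
import numpy as np

n, dt = 256, 0.01
t = np.arange(n) * dt                            # 100 Hz sampling

# Toy records: a broadband EGF pulse, and a "mainshock" equal to the EGF
# convolved with a triangular source time function (STF) of 0.3 s duration.
egf = t * np.exp(-t / 0.05)
stf = np.interp(t, [0.0, 0.15, 0.3], [0.0, 1.0, 0.0])
big = np.convolve(egf, stf)[:n]

# Water-level spectral division: divide the spectra, clipping very small
# EGF amplitudes so the deconvolution stays numerically stable.
EGF, BIG = np.fft.rfft(egf), np.fft.rfft(big)
level = 1e-4 * np.abs(EGF).max()
denom = np.where(np.abs(EGF) < level,
                 level * np.exp(1j * np.angle(EGF)), EGF)
stf_rec = np.fft.irfft(BIG / denom, n)
```

The recovered pulse's duration and asymmetry are what carry the corner-frequency and directivity information discussed in the abstract.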
NASA Astrophysics Data System (ADS)
Xie, Jun; Ni, Sidao; Chu, Risheng; Xia, Yingjie
2018-01-01
Accurate seismometer clocks play an important role in seismological studies, including earthquake location and tomography. However, some seismic stations may have clock drifts larger than 1 s (e.g. GSC in 1992), especially in the early days of global seismic networks. The 26 s Persistent Localized (PL) microseism event in the Gulf of Guinea sometimes excites strong and coherent signals, and can be used as a repeating source for assessing the stability of seismometer clocks. Taking stations GSC, PAS and PFO in the TERRAscope network as an example, the 26 s PL signal can be easily observed in the ambient noise cross-correlation functions between these stations and the remote station OBN, at an interstation distance of about 9700 km. The travel-time variation of this 26 s signal in the ambient noise cross-correlation function is used to infer clock error. A drastic clock error is detected during June 1992 for station GSC, but not for stations PAS and PFO. This short-term clock error, with a magnitude of 25 s, is confirmed by both teleseismic and local earthquake records. Averaged over the three stations, the accuracy of the ambient noise cross-correlation method with the 26 s source is about 0.3-0.5 s. Using this PL source, clocks can be validated for historical records of sparsely distributed stations, where the usual cross-correlation of short-period (<20 s) ambient noise might be less effective because of its attenuation over long interstation distances. However, this method suffers from a cycle-skipping problem and should be verified with teleseismic/local P waves. Further studies are also needed to investigate whether the 26 s source moves spatially, and the effects of such movement on clock drift detection.
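The travel-time measurement underlying the clock check can be sketched as a cross-correlation lag estimate on a synthetic narrowband 26 s arrival; the numbers (80 s envelope, 25 s drift, 600 s arrival time) are illustrative only:

```python
import numpy as np

fs = 1.0                                        # 1 sample per second
t = np.arange(2048) / fs

def arrival(delay):
    """Narrowband 26 s wave packet centred 600 s into the correlation."""
    return (np.exp(-((t - 600 - delay) / 80) ** 2)
            * np.cos(2 * np.pi * (t - delay) / 26))

ref = arrival(0.0)            # correlation function with a correct clock
cur = arrival(25.0)           # same path, but a 25 s clock error

# The lag that maximizes the cross-correlation of the two functions
# estimates the clock drift; because the envelope shifts along with the
# carrier, this toy case escapes the 26 s cycle ambiguity.
lags = np.arange(-len(t) + 1, len(t))
xc = np.correlate(cur, ref, mode="full")
drift = lags[np.argmax(xc)] / fs
```

With a real narrowband signal the envelope is far less sharp, which is exactly why the abstract recommends verifying the measured drift against teleseismic or local P arrivals.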
Numerical modeling of local scour around hydraulic structure in sandy beds by dynamic mesh method
NASA Astrophysics Data System (ADS)
Fan, Fei; Liang, Bingchen; Bai, Yuchuan; Zhu, Zhixia; Zhu, Yanjun
2017-10-01
Local scour, a non-negligible factor in hydraulic engineering, endangers the safety of hydraulic structures. In this work, a numerical model for simulating local scour was constructed, based on the open-source computational fluid dynamics code OpenFOAM. We consider both bedload and suspended-load sediment transport in the scour model and adopt the dynamic mesh method to simulate the evolution of the bed elevation. We use the finite area method to project data between the three-dimensional flow model and the two-dimensional (2D) scour model. We also improved the 2D sand slide method and added it to the scour model to correct the bed bathymetry when the bed slope angle exceeds the angle of repose. Moreover, to validate our scour model, we conducted three experiments and compared their results with those of the developed model. The validation results show that our developed model can reliably simulate local scour.
NASA Astrophysics Data System (ADS)
Soraisam, Monika D.; Gilfanov, Marat; Kupfer, Thomas; Masci, Frank; Shafter, Allen W.; Prince, Thomas A.; Kulkarni, Shrinivas R.; Ofek, Eran O.; Bellm, Eric
2017-03-01
Context. In the present era of large-scale surveys in the time domain, the processing of data, from procurement up to the detection of sources, is generally automated. One of the main challenges in the astrophysical analysis of their output is contamination by artifacts, especially in regions of high surface brightness of unresolved emission. Aims: We present a novel method for identifying candidate variable and transient sources in the outputs of optical time-domain survey data pipelines. We use the method to conduct a systematic search for novae in the intermediate Palomar Transient Factory (iPTF) observations of the bulge of M 31 during the second half of 2013. Methods: We demonstrate that a significant fraction of the artifacts produced by the iPTF pipeline form a locally uniform background of false detections approximately obeying Poissonian statistics, whereas genuine variable and transient sources, as well as artifacts associated with bright stars, result in clusters of detections whose spread is determined by the source localization accuracy. This makes the problem analogous to source detection in images produced by grazing-incidence X-ray telescopes, enabling one to utilize the arsenal of powerful tools developed in X-ray astronomy. In particular, we use a wavelet-based source detection algorithm from the Chandra data analysis package CIAO. Results: Starting from 2.5 × 10⁵ raw detections made by the iPTF data pipeline, we obtain approximately 4000 unique source candidates. Cross-matching these candidates with the source catalog of a deep reference image of the same field, we find counterparts for 90% of them. These sources are either artifacts due to imperfect PSF matching or genuine variable sources. The remaining approximately 400 detections are transient sources. We identify novae among these candidates by applying selection cuts to their light curves based on the expected properties of novae.
Thus, we recovered all 12 known novae (not counting one that erupted toward the end of the survey) registered during the time span of the survey and discovered three nova candidates. Our method is generic and can be applied to mining any target out of the artifacts in optical time-domain data. As it is fully automated, its incompleteness can be accurately computed and corrected for.
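The key statistical idea above, genuine sources form detection clusters that stand out against an approximately Poissonian background of false detections, can be illustrated with a toy detection grid (a sketch only; the paper uses the wavelet-based detector from CIAO, not this simple thresholding):

```python
import numpy as np
from scipy.stats import poisson

def flag_clusters(counts, mu, p_false=1e-6):
    """Flag grid cells whose detection count is inconsistent with a uniform
    Poissonian background of mean mu per cell."""
    threshold = poisson.isf(p_false, mu)  # counts above this are improbable
    return counts > threshold

rng = np.random.default_rng(3)
counts = rng.poisson(2.0, size=(50, 50)).astype(float)  # false-detection background
counts[10, 20] += 40                                    # a genuine variable source
hits = flag_clusters(counts, mu=2.0)
print(np.argwhere(hits))  # expected: just the injected source near [10, 20]
```

Real pipelines must additionally estimate the (spatially varying) background mean and localize sub-cell cluster positions, which is what the wavelet machinery provides.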
Spectral spatiotemporal imaging of cortical oscillations and interactions in the human brain
Lin, Fa-Hsuan; Witzel, Thomas; Hämäläinen, Matti S.; Dale, Anders M.; Belliveau, John W.; Stufflebeam, Steven M.
2010-01-01
This paper presents a computationally efficient source estimation algorithm that localizes cortical oscillations and their phase relationships. The proposed method employs wavelet-transformed magnetoencephalography (MEG) data and uses anatomical MRI to constrain the current locations to the cortical mantle. In addition, the locations of the sources can be further confined with the help of functional MRI (fMRI) data. As a result, we obtain spatiotemporal maps of spectral power and phase relationships. As an example, we show how the phase locking value (PLV), that is, the trial-by-trial phase relationship between the stimulus and response, can be imaged on the cortex. We apply the method to spontaneous, evoked, and driven cortical oscillations measured with MEG. We test the method of combining MEG, structural MRI, and fMRI using simulated cortical oscillations along Heschl’s gyrus (HG). We also analyze sustained auditory gamma-band neuromagnetic fields from MEG and fMRI measurements. Our results show that combining the MEG recording with fMRI improves source localization for the non-noise-normalized wavelet power. In contrast, noise-normalized spectral power or PLV localization may not benefit from the fMRI constraint. We show that if the thresholds are not properly chosen, noise-normalized spectral power or PLV estimates may contain false (phantom) sources, independent of the inclusion of the fMRI prior information. The proposed algorithm can be used for evoked MEG/EEG and block-designed or event-related fMRI paradigms, or for spontaneous MEG data sets. Spectral spatiotemporal imaging of cortical oscillations and interactions in the human brain can provide further understanding of large-scale neural activity and communication between different brain regions. PMID:15488408
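The phase locking value mentioned above has a compact standard definition, the magnitude of the trial-averaged unit phasor of the phase difference; a minimal sketch (illustrative, not the paper's wavelet-based pipeline):

```python
import numpy as np

def phase_locking_value(phases_a, phases_b):
    """PLV across trials: 1.0 for a perfectly consistent phase difference,
    near 0 for random phase differences."""
    diff = np.asarray(phases_a) - np.asarray(phases_b)
    return np.abs(np.mean(np.exp(1j * diff)))

rng = np.random.default_rng(0)
locked = rng.uniform(0, 2 * np.pi, 1000)
plv_locked = phase_locking_value(locked, locked - 0.3)  # fixed 0.3 rad lag
plv_random = phase_locking_value(locked, rng.uniform(0, 2 * np.pi, 1000))
print(plv_locked, plv_random)  # ~1.0 versus near 0
```

Note that PLV is insensitive to the size of the phase lag, only to its consistency across trials, which is why thresholding matters: finite trial counts give small but nonzero PLV even for unrelated signals.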
The local density of optical states of a metasurface
NASA Astrophysics Data System (ADS)
Lunnemann, Per; Koenderink, A. Femius
2016-02-01
While metamaterials are often desirable for near-field functions such as perfect lensing or cloaking, they are usually quantified by their response to plane waves from the far field. Here, we present a theoretical analysis of the local density of states near lattices of discrete magnetic scatterers, i.e., the response to near-field excitation by a point source. Based on a point-dipole theory using Ewald summation and an array scanning method, we can swiftly and semi-analytically evaluate the local density of states (LDOS) for magnetoelectric point sources in front of an infinite two-dimensional (2D) lattice composed of arbitrary magnetoelectric dipole scatterers. The method takes into account radiation damping as well as all retarded electrodynamic interactions in a self-consistent manner. We show that a lattice of magnetic scatterers exhibits characteristic Drexhage oscillations. However, the oscillations are phase shifted relative to those of an electrically scattering lattice, consistent with the difference expected for reflection off homogeneous magnetic and electric mirrors, respectively. Furthermore, we identify in which source-surface separation regimes the metasurface may be treated as a homogeneous interface, and in which homogenization fails. A strong frequency and in-plane position dependence of the LDOS close to the lattice reveals coupling to guided modes supported by the lattice.
Mantini, D; Franciotti, R; Romani, G L; Pizzella, V
2008-03-01
The major limitation for the acquisition of high-quality magnetoencephalography (MEG) recordings is the presence of disturbances of physiological and technical origin: eye movements, cardiac signals, muscular contractions, and environmental noise are serious problems for MEG signal analysis. In recent years, multi-channel MEG systems have undergone rapid technological development in terms of noise reduction, and many processing methods have been proposed for artifact rejection. Independent component analysis (ICA) has already been shown to be an effective and generally applicable technique for concurrently removing artifacts and noise from MEG recordings. However, no standardized automated system based on ICA has become available so far, because of the intrinsic difficulty of reliably categorizing the source signals obtained with this technique. In this work, approximate entropy (ApEn), a measure of data regularity, is successfully used to classify the signals produced by ICA, allowing for automated artifact rejection. The proposed method has been tested using MEG data sets collected during somatosensory, auditory, and visual stimulation. It was demonstrated to be effective in attenuating both biological artifacts and environmental noise, reconstructing clean signals that can be used to improve brain source localization.
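Approximate entropy itself is a standard statistic; below is a compact sketch of its textbook definition (m-length template matching with tolerance r, commonly r = 0.2·std), not the authors' exact implementation. Regular artifacts such as cardiac components score low; irregular noise scores high:

```python
import numpy as np

def approximate_entropy(x, m=2, r=None):
    """Textbook ApEn(m, r): low for regular series, higher for irregular ones.
    r defaults to 0.2 * std(x), a common choice."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    if r is None:
        r = 0.2 * np.std(x)

    def phi(m):
        templates = np.array([x[i:i + m] for i in range(n - m + 1)])
        # Chebyshev distance between every pair of m-length templates
        dist = np.max(np.abs(templates[:, None, :] - templates[None, :, :]), axis=2)
        c = np.mean(dist <= r, axis=1)  # fraction of templates matching each one
        return np.mean(np.log(c))

    return phi(m) - phi(m + 1)

t = np.arange(300)
ap_regular = approximate_entropy(np.sin(0.5 * t))                          # periodic
ap_noisy = approximate_entropy(np.random.default_rng(1).normal(size=300))  # random
print(ap_regular < ap_noisy)  # True: the regular series scores lower
```

An automated classifier along the lines of the paper would compute ApEn per ICA component and reject components below an empirically chosen regularity threshold.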
Source localization of non-stationary acoustic data using time-frequency analysis
NASA Astrophysics Data System (ADS)
Stoughton, Jack; Edmonson, William
2005-04-01
An improvement in the temporal locality of the generalized cross-correlation (GCC) for angle of arrival (AOA) estimation can be achieved by employing 2-D cross-correlation of infrasonic sensor data transformed to its time-frequency (TF) representation. Intermediate to the AOA evaluation is the time delay between pairs of sensors. The signal class of interest includes far-field sources that are partially coherent across the array, nonstationary, and wideband. In addition, signals can occur as multiple short bursts, for which TF representations may be more appropriate for time delay estimation; the GCC tends to smooth out such temporal energy bursts. Simulation and experimental results will demonstrate the improvement of a TF-based GCC, using the Cohen class, over the classic GCC method. Comparative demonstration of the methods will be performed on data captured on an infrasonic sensor array located at NASA Langley Research Center (LaRC). The infrasonic data sources include Delta IV and Space Shuttle launches from Kennedy Space Center, which belong to the stated signal class. Of further interest is applying this method to the AOA estimation of atmospheric turbulence. [Work supported by NASA LaRC Creativity and Innovation project: Infrasonic Detection of Clear Air Turbulence and Severe Storms.]
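For reference, the classic GCC baseline that the TF method is compared against can be sketched as a frequency-domain cross-correlation; the PHAT weighting shown here is one common choice of GCC weighting (the paper's Cohen-class TF variant is more involved):

```python
import numpy as np

def gcc_phat(sig, ref, fs):
    """Delay (s) of `sig` relative to `ref` via GCC with PHAT weighting."""
    n = len(sig) + len(ref)
    cross = np.fft.rfft(sig, n=n) * np.conj(np.fft.rfft(ref, n=n))
    cross /= np.abs(cross) + 1e-12  # PHAT: keep phase, discard magnitude
    cc = np.fft.irfft(cross, n=n)
    max_shift = n // 2
    cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
    return (np.argmax(np.abs(cc)) - max_shift) / fs

fs = 1000.0
burst = np.random.default_rng(2).normal(size=200)
ref = np.concatenate((burst, np.zeros(300)))
sig = np.concatenate((np.zeros(30), burst, np.zeros(270)))  # 30-sample delay
print(gcc_phat(sig, ref, fs))  # 0.030 s
```

The estimated inter-sensor delay, divided by the sensor spacing and sound speed, yields the AOA; the GCC's single global peak per window is precisely the temporal smoothing the TF approach is designed to avoid for multi-burst signals.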
McGary, John E; Xiong, Zubiao; Chen, Ji
2013-07-01
TomoTherapy systems lack real-time tumor tracking. A possible solution is to use electromagnetic markers; however, the eddy-current magnetic fields generated in response to a magnetic source can be comparable to the signal, degrading the localization accuracy. The tracking system must therefore be designed to account for the eddy fields created along the inner-bore conducting surfaces. The aim of this work is to investigate localization accuracy using magnetic field gradients to determine feasibility for TomoTherapy applications. Electromagnetic models are used to simulate the magnetic fields created by a source and its simultaneous generation of eddy currents within a conducting cylinder. The source position is calculated from a least-squares fit of simulated sensor data, with the dipole equation as the model equation. To account for field gradients across the sensor area (≈25 cm²), an iterative method is used to estimate the magnetic field at the sensor center. Spatial gradients are calculated with two arrays of uniaxial, paired sensors that form a gradiometer array, where the sensors are considered ideal. Experimental measurements of magnetic fields within the TomoTherapy bore are shown to be 1%-10% less than calculated with the electromagnetic model. Localization results using a 5 × 5 array of gradiometers are, in general, 2-4 times more accurate than those of a planar array of sensors, depending on the solenoid orientation and position. Simulation results show that the localization accuracy using a gradiometer array is within 1.3 mm over a distance of 20 cm from the array plane. In comparison, localization errors using a single planar array are within 5 mm. The results indicate that the gradiometer method merits further study given the accuracy achieved with ideal sensors.
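The least-squares dipole fit described above can be sketched in a few lines (free-space dipole model only, with normalized units and an illustrative 5 × 5 sensor grid; the paper additionally models the eddy-current fields, which are omitted here):

```python
import numpy as np
from scipy.optimize import least_squares

def dipole_field(src, moment, sensors):
    """Free-space dipole field at each sensor (arbitrary units, constant
    prefactor dropped; no eddy-current terms)."""
    r = sensors - src
    d = np.linalg.norm(r, axis=1, keepdims=True)
    rhat = r / d
    return (3 * rhat * (rhat @ moment)[:, None] - moment) / d**3

# Hypothetical 5 x 5 planar sensor grid at z = 0, 5 cm pitch
g = np.linspace(-0.1, 0.1, 5)
sensors = np.array([(xi, yi, 0.0) for xi in g for yi in g])
moment = np.array([0.0, 0.0, 1.0])  # orientation assumed known

true_src = np.array([0.02, -0.03, 0.15])
data = dipole_field(true_src, moment, sensors)

fit = least_squares(
    lambda p: (dipole_field(p, moment, sensors) - data).ravel(),
    x0=np.array([0.0, 0.0, 0.10]))
print(fit.x)  # ~ [0.02, -0.03, 0.15]
```

In the bore, the residual function would include the modeled eddy-field contribution, and gradiometer differences rather than raw field values would form the data vector.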
Future studies should include realistic sensor models and extensive numerical studies to estimate the expected magnetic tracking accuracy within a TomoTherapy system before proceeding with prototype development.
Oxygen, Neon, and Iron X-Ray Absorption in the Local Interstellar Medium
NASA Technical Reports Server (NTRS)
Gatuzz, Efrain; Garcia, Javier; Kallman, Timothy R.; Mendoza, Claudio
2016-01-01
We present a detailed study of X-ray absorption in the local interstellar medium by analyzing the X-ray spectra of 24 galactic sources obtained with the Chandra High Energy Transmission Grating Spectrometer and the XMM-Newton Reflection Grating Spectrometer. Methods. By modeling the continuum with a simple broken power law and by implementing the new ISMabs X-ray absorption model, we have estimated the total H, O, Ne, and Fe column densities towards the observed sources. Results. We have determined the distribution of the absorbing material as a function of source distance and galactic latitude and longitude. Conclusions. Direct estimates of the fractions of neutral, singly, and doubly ionized species of O, Ne, and Fe reveal the dominance of the cold component, indicating an overall low degree of ionization. Our results are expected to be sensitive to the model used to describe the continuum in all sources.
Locating arbitrarily time-dependent sound sources in three dimensional space in real time.
Wu, Sean F; Zhu, Na
2010-08-01
This paper presents a method for locating arbitrarily time-dependent acoustic sources in a free field in real time by using only four microphones. The method can handle a wide variety of acoustic signals, including broadband, narrowband, impulsive, and continuous sound over the entire audible frequency range, produced by multiple sources in three-dimensional (3D) space. Locations of acoustic sources are given in Cartesian coordinates. The underlying principle is a hybrid approach consisting of modeling the acoustic radiation from a point source in a free field, triangulation, and de-noising to enhance the signal-to-noise ratio (SNR). Numerical simulations are conducted to study the impacts of SNR, microphone spacing, source distance, and frequency on the spatial resolution and accuracy of source localization. Based on these results, a simple device is fabricated that consists of four microphones mounted on three mutually orthogonal axes at an optimal distance, a four-channel signal conditioner, and a camera. Experiments are conducted in different environments to assess its effectiveness in locating sources that produce arbitrarily time-dependent acoustic signals, regardless of whether a sound source is stationary or moving in space, even when it moves behind the measurement microphones. Practical limitations of the method are discussed.
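One way to see why four microphones suffice: with one microphone at the origin and three on orthogonal axes, the measured time-delay differences determine the source position in closed form. The sketch below is a generic triangulation under that assumed geometry, not the authors' full hybrid algorithm with de-noising:

```python
import numpy as np

C = 343.0  # speed of sound in air, m/s

def locate(tdoa, d):
    """Source position from TDOAs (s) between an origin mic and mics at
    (d,0,0), (0,d,0), (0,0,d). tdoa[i] = t_axis_mic_i - t_origin_mic.
    Writing tau_i = C*tdoa[i], each coordinate is linear in the unknown
    range r: s_i = a_i + b_i*r, and sum(s_i^2) = r^2 gives a quadratic in r."""
    tau = C * np.asarray(tdoa)
    a = (d**2 - tau**2) / (2 * d)
    b = -tau / d
    A = np.dot(b, b) - 1.0
    B = 2 * np.dot(a, b)
    Cq = np.dot(a, a)
    roots = np.roots([A, B, Cq])
    r = max(root.real for root in roots if abs(root.imag) < 1e-9 and root.real > 0)
    return a + b * r

# Forward-simulate a source, then invert
mics = np.array([[0.2, 0.0, 0.0], [0.0, 0.2, 0.0], [0.0, 0.0, 0.2]])
src = np.array([1.0, 2.0, 0.5])
tdoa = (np.linalg.norm(src - mics, axis=1) - np.linalg.norm(src)) / C
est = locate(tdoa, 0.2)
print(est)  # ~ [1.0, 2.0, 0.5]
```

In practice the TDOAs come from cross-correlation of de-noised microphone signals, and measurement noise makes the optimal microphone spacing a trade-off, as the simulations in the abstract investigate.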
Sekihara, Kensuke; Adachi, Yoshiaki; Kubota, Hiroshi K; Cai, Chang; Nagarajan, Srikantan S
2018-06-01
Magnetoencephalography (MEG) has a well-recognized weakness at detecting deeper brain activities. This paper proposes a novel algorithm for selective detection of deep sources by suppressing interference signals from superficial sources in MEG measurements. The proposed algorithm combines the beamspace preprocessing method with the dual signal space projection (DSSP) interference suppression method. A prerequisite of the proposed algorithm is prior knowledge of the location of the deep sources. The proposed algorithm first derives the basis vectors that span a local region just covering the locations of the deep sources. It then estimates the time-domain signal subspace of the superficial sources by using the projector composed of these basis vectors. Signals from the deep sources are extracted by projecting the row space of the data matrix onto the direction orthogonal to the signal subspace of the superficial sources. Compared with the previously proposed beamspace signal space separation (SSS) method, the proposed algorithm is capable of suppressing much stronger interference from superficial sources. This capability is demonstrated in our computer simulation as well as experiments using phantom data. The proposed bDSSP algorithm can be a powerful tool in studies of physiological functions of midbrain and deep brain structures.
Multimodal Spatial Calibration for Accurately Registering EEG Sensor Positions
Chen, Shengyong; Xiao, Gang; Li, Xiaoli
2014-01-01
This paper proposes a fast and accurate calibration method for multiple multimodal sensors, using a novel photogrammetry system for fast localization of EEG sensors. The EEG sensors are placed on a human head, and multimodal sensors are installed around the head to obtain all EEG sensor positions simultaneously. A multiple-view calibration process is implemented to obtain the transformations between views. We first develop an efficient local repair algorithm to improve the depth map, and then design a special calibration body. Based on these, accurate and robust calibration results can be achieved. We evaluate the proposed method using the corners of a chessboard calibration plate. Experimental results demonstrate that the proposed method achieves good performance and can be further applied to EEG source localization on the human brain. PMID:24803954
The Developmental Trajectory of Spatial Listening Skills in Normal-Hearing Children
ERIC Educational Resources Information Center
Lovett, Rosemary Elizabeth Susan; Kitterick, Padraig Thomas; Huang, Shan; Summerfield, Arthur Quentin
2012-01-01
Purpose: To establish the age at which children can complete tests of spatial listening and to measure the normative relationship between age and performance. Method: Fifty-six normal-hearing children, ages 1.5-7.9 years, attempted tests of the ability to discriminate a sound source on the left from one on the right, to localize a source, to track…
Local surface curvature analysis based on reflection estimation
NASA Astrophysics Data System (ADS)
Lu, Qinglin; Laligant, Olivier; Fauvet, Eric; Zakharova, Anastasia
2015-07-01
In this paper, we propose a novel reflection-based method to estimate the local orientation of a specular surface. For a calibrated scene with a fixed light band, the band is reflected by the surface onto the image plane of a camera, and the local geometry between the surface and the reflected band is estimated. First, to find the relationship linking the object position, the object surface orientation, and the band reflection, we study the fundamental geometry between a specular mirror surface and a band source. We then extend our approach to spherical surfaces of arbitrary curvature. Experiments are conducted with a mirror surface and a spherical surface. Results show that our method can obtain the local surface orientation merely by measuring the displacement and the form of the reflection.
Choi, Young-Chul; Park, Jin-Ho; Choi, Kyoung-Sik
2011-01-01
In a nuclear power plant, a loose part monitoring system (LPMS) provides information on the location and mass of a loosened or detached metal part impacting the inner surface of the primary pressure boundary. Typically, accelerometers are mounted on the surface of the reactor vessel to localize impacts of metallic substances on the reactor system. However, in some cases, the number of accelerometers is not sufficient to estimate the impact location precisely. In such a case, one useful approach is to utilize other types of sensors that can measure the vibration of the reactor structure. For example, acoustic emission (AE) sensors are installed on the reactor structure to detect leakage or cracks in the primary pressure boundary. However, accelerometers and AE sensors cover different frequency ranges: the frequencies of interest for AE sensors are higher than those for accelerometers. In this paper, we propose a method of impact source localization that uses accelerometer signals and AE signals simultaneously. The main concept of impact location estimation is the arrival time difference of the impact stress wave between different sensor locations. It is difficult to find this arrival time difference directly, because the primary frequency ranges of accelerometers and AE sensors differ. To overcome the problem, we used phase delays of the envelopes of the impact signals, because the impact signals from the accelerometer and the AE sensor are similar in overall shape (envelope). To verify the proposed method, we performed experiments on a reactor mock-up model and in a real nuclear power plant. The experimental results demonstrate enhanced reliability and precision of the impact source localization. Therefore, if the proposed method is applied to a nuclear power plant, it can provide the effect of additionally installed sensors. Crown Copyright © 2010. Published by Elsevier Ltd. All rights reserved.
A comprehensive dose assessment of irradiated hand by iridium-192 source in industrial radiography.
Hosseini Pooya, S M; Dashtipour, M R; Paydar, R; Mianji, F; Pourshahab, B
2017-09-01
Among the various incidents in industrial radiography, inadvertent handling of sources by hand is one of the most frequent, in which some parts of the hands may be locally exposed to high doses. An accurate assessment of extremity dose assists medical doctors in selecting appropriate treatments and preventing expansion of the injury. In this study, a phantom was designed to simulate the fisted hand of a radiographer holding a radioactive source. The local doses were measured using TLDs implanted in the phantom at different distances from the source. Furthermore, the skin dose distribution was measured with Gafchromic films in the palm region of the phantom. The reliability of the measurements was studied via analytical as well as Monte Carlo simulation methods. The results showed that the new phantom design can be used reliably in extremity dose assessments, particularly at points next to the source.
High dose rate brachytherapy source measurement intercomparison.
Poder, Joel; Smith, Ryan L; Shelton, Nikki; Whitaker, May; Butler, Duncan; Haworth, Annette
2017-06-01
This work presents a comparison of air kerma rate (AKR) measurements performed by multiple radiotherapy centres for a single HDR ¹⁹²Ir source. Two separate groups (consisting of 15 centres in total) performed AKR measurements at one of two host centres in Australia. Each group travelled to one of the host centres and measured the AKR of a single ¹⁹²Ir source using their own equipment and local protocols. Results were compared to the ¹⁹²Ir source calibration certificate provided by the manufacturer by means of a ratio of measured to certified AKR. The comparisons showed remarkably consistent results, with the maximum deviation of a measurement from the decay-corrected source certificate value being 1.1%. The maximum percentage difference between any two measurements was less than 2%. The comparisons demonstrated the consistency of well chambers used for ¹⁹²Ir AKR measurements in Australia, despite the lack of a local calibration service, and served as a valuable focal point for the exchange of ideas and dosimetry methods.
Method and apparatus for imparting strength to a material using sliding loads
Hughes, Darcy Anne; Dawson, Daniel B.; Korellis, John S.
1999-01-01
A method of enhancing the strength of metals by affecting subsurface zones developed during the application of large sliding loads. Stresses which develop locally within the near surface zone can be many times larger than those predicted from the applied load and the friction coefficient. These stress concentrations arise from two sources: 1) asperity interactions and 2) local and momentary bonding between the two surfaces. By controlling these parameters more desirable strength characteristics can be developed in weaker metals to provide much greater strength to rival that of steel, for example.
NASA Astrophysics Data System (ADS)
Chen, L. A.; Doddridge, B. G.; Dickerson, R. R.
2001-12-01
As the primary field experiment of the Maryland Aerosol Research and CHaracterization (MARCH-Atlantic) study, chemically speciated PM2.5 has been sampled at Fort Meade (FME, 39.10° N, 76.74° W) since July 1999. FME is suburban, located in the middle of the bustling Baltimore-Washington corridor, which is generally downwind of the highly industrialized Midwest. Because of this sampling location, the PM2.5 observed at FME is expected to originate from both local and regional sources, with relative contributions varying temporally. This variation, believed to be largely controlled by meteorology, influences the day-to-day and seasonal profiles of PM2.5 mass concentration and chemical composition. Air parcel back trajectories, which describe the path of air parcels traveling backward in time from the site (receptor), reflect changes in the synoptic meteorological conditions. In this paper, an ensemble back trajectory method is employed to study the meteorology associated with each high/low PM2.5 episode in different seasons. For every sampling day, the residence time of air parcels within the eastern US at a 1° x 1° x 500 m geographic resolution can be estimated in order to resolve the areas likely dominating the production of various PM2.5 components. Local sources are found to be more dominant in winter than in summer. "Factor analysis" is based on a mass balance approach and provides useful insights into air pollution data. Here, a newly developed factor analysis model (UNMIX) is used to extract source profiles and contributions from the speciated PM2.5 data. Combining the model results with the ensemble back trajectory method improves the understanding of the source regions and helps partition the contributions from local and more distant areas. http://www.meto.umd.edu/~bruce/MARCH-Atl.html
Source apportionment of ambient PM10 and PM2.5 in Haikou, China
NASA Astrophysics Data System (ADS)
Fang, Xiaozhen; Bi, Xiaohui; Xu, Hong; Wu, Jianhui; Zhang, Yufen; Feng, Yinchang
2017-07-01
In order to identify the sources of PM10 and PM2.5 in Haikou, 60 ambient air samples were collected in winter and spring, respectively. Fifteen elements (Na, Mg, Al, Si, K, Ca, Ti, V, Cr, Mn, Fe, Ni, Cu, Zn and Pb), water-soluble ions (SO₄²⁻ and NO₃⁻), and organic carbon (OC) and elemental carbon (EC) were analyzed. The concentration of particulate matter was clearly higher in winter than in spring, and the PM2.5/PM10 ratio was > 0.6. Moreover, the proportions of TC, ions, Na, Al, Si and Ca were higher in PM10 and PM2.5. The SOC concentration was estimated by the minimum OC/EC ratio method and deducted from the particulate matter composition when running the CMB model. According to the results of the CMB model, resuspended dust (17.5-35.0%), vehicle exhaust (14.9-23.6%) and secondary particulates (20.4-28.8%) were the major source categories of ambient particulate matter. Additionally, sea salt also made a partial contribution (3-8%). Back trajectory analysis showed that particulate matter was greatly affected by regional sources in winter, but less so in spring. Thus, particulate matter in coastal cities is affected not only by local sources but also by sea salt and regional sources. Further research could focus on establishing actual secondary particle profiles and on identifying the local and regional sources of PM simultaneously with a single model or analysis method.
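The minimum OC/EC ratio method reduces to one line of arithmetic: SOC = OC − EC·(OC/EC)min, with the minimum ratio taken over the sample set. A sketch with made-up concentrations (not the Haikou data):

```python
import numpy as np

def soc_min_ratio(oc, ec):
    """Secondary organic carbon: SOC = OC - EC * (OC/EC)_min."""
    oc, ec = np.asarray(oc, float), np.asarray(ec, float)
    return oc - ec * np.min(oc / ec)

# Illustrative OC and EC concentrations (ug/m^3), not the study's data
oc = [12.0, 9.0, 15.0, 8.0]
ec = [4.0, 4.5, 5.0, 2.0]
soc = soc_min_ratio(oc, ec)
print(soc)  # [4. 0. 5. 4.] -- the minimum-ratio sample gets SOC = 0
```

The method assumes the minimum-ratio sample is dominated by primary carbon, so the minimum ratio approximates the primary OC/EC emission ratio; any excess OC above that line is attributed to secondary formation.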
Breaking the acoustic diffraction barrier with localization optoacoustic tomography
NASA Astrophysics Data System (ADS)
Deán-Ben, X. Luís.; Razansky, Daniel
2018-02-01
Diffraction causes blurring of high-resolution features in images and has traditionally been associated with the resolution limit in light microscopy and other imaging modalities. The resolution of an imaging system can generally be assessed via its point spread function, corresponding to the image acquired from a point source. However, the precision in determining the position of an isolated source can greatly exceed the diffraction limit. By combining the estimated positions of multiple sources, localization-based imaging has resulted in groundbreaking methods such as super-resolution fluorescence optical microscopy and has also enabled ultrasound imaging of microvascular structures with unprecedented spatial resolution in deep tissues. Herein, we introduce localization optoacoustic tomography (LOT) and discuss the prospects of using localization imaging principles in optoacoustic imaging. LOT was experimentally implemented by real-time 3D imaging of flowing particles with a recently developed volumetric optoacoustic tomography system. Provided the particles were separated by a distance larger than the diffraction-limited resolution, their individual locations could be accurately determined in each frame of the acquired image sequence, and the localization image was formed by superimposing a set of points corresponding to the localized positions of the absorbers. The presented results demonstrate that LOT can significantly enhance the well-established advantages of optoacoustic imaging by breaking the acoustic diffraction barrier in deep tissues and mitigating artifacts due to limited-view tomographic acquisitions.
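The localization principle, that the position of an isolated source can be estimated far more precisely than the width of its diffraction-limited spot, can be demonstrated with a simple intensity-weighted centroid (a generic illustration, not the LOT reconstruction itself):

```python
import numpy as np

def localize_centroid(img, x, y):
    """Intensity-weighted centroid of an isolated blurred source; its
    precision can beat the spot width when the SNR is high."""
    w = img / img.sum()
    return float((w * x).sum()), float((w * y).sum())

# Diffraction-limited blob (Gaussian PSF, sigma = 3 px) centred off-grid
x, y = np.meshgrid(np.arange(64.0), np.arange(64.0))
true_pos = (30.37, 21.84)
img = np.exp(-((x - true_pos[0]) ** 2 + (y - true_pos[1]) ** 2) / (2 * 3.0**2))
cx, cy = localize_centroid(img, x, y)
print(cx, cy)  # ~ (30.37, 21.84): error far below 1 px
```

A super-resolved image is then built, frame by frame, by marking each localized position as a point, exactly the superposition step the abstract describes for the flowing particles.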
Guo, Lili; Qi, Junwei; Xue, Wei
2018-01-01
This article proposes a novel active localization method based on the mixed polarization multiple signal classification (MP-MUSIC) algorithm for positioning a metal or insulator target in the underwater environment using a uniform circular antenna (UCA). The boundary element method (BEM) is introduced to analyze the boundary of the target by use of a matrix equation. In this method, an electric dipole source, part of the locating system, is set perpendicular to the plane of the UCA. As a result, the UCA receives only the induction field of the target. The potential of each electrode of the UCA is used as spatial-temporal localization data; unlike the conventional fields-based localization method, the field component in each direction need not be obtained, so the approach can be easily implemented in practical engineering applications. A simulation model and a physical experiment are constructed. The simulation and experiment results show accurate positioning performance, verifying the effectiveness of the proposed localization method for underwater target locating. PMID:29439495
Dimitriou, Konstantinos; Kassomenos, Pavlos
2015-01-01
Three years of hourly O3 concentration measurements from a metropolitan and a medium-scale urban area in Greece (Athens and Ioannina, respectively) were analyzed in conjunction with hourly wind speed/direction data and air mass trajectories, aiming to reveal local and regional contributions, respectively. The Conditional Probability Function was used to indicate associations between distinct wind directions and extreme O3 episodes. Backward trajectory clusters were processed with the Potential Source Contribution Function on a grid of 0.5°×0.5° resolution, in order to localize potential exogenous sources of O3 and its precursors. In Athens, an increased likelihood of extreme O3 events at the northern suburbs was associated with the influence of the SSW-SW sea breeze from the Saronikos Gulf, due to O3 transport from the city center. In Ioannina, the impact of O3 conveyance from the city center to the suburban monitoring site was weaker. Potential O3 transboundary sources for Athens were mainly localized over the Balkan Peninsula, Greece, and the Aegean Sea. Potential Source Contribution Function hotspots were isolated over the industrialized area of the Ptolemaida basin and above the region of Thessaloniki. Potential regional O3 sources for Ioannina were indicated across northern Greece and the Balkan Peninsula, whereas peak Potential Source Contribution Function values were observed in particular over the urban area of Sofia in Bulgaria. The implemented methods revealed local and potential transboundary source areas of O3 influencing Athens and Ioannina. Differences between the two cities were highlighted, and the role of topography emerged. These findings can be used to reduce emissions of O3 precursors. Copyright © 2014 Elsevier B.V. All rights reserved.
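The Potential Source Contribution Function used above has a simple cell-wise definition: PSCF(i,j) = m_ij / n_ij, where n_ij counts trajectory endpoints falling in grid cell (i,j) and m_ij counts those belonging to trajectories from high-concentration days. A minimal sketch (toy trajectories, not the study's data):

```python
import numpy as np

def pscf(trajectories, conc, threshold, res=0.5):
    """PSCF on a lat/lon grid: per cell, (endpoints from high-concentration
    trajectories) / (all endpoints). trajectories: list of (lats, lons)."""
    n, m = {}, {}
    for (lats, lons), c in zip(trajectories, conc):
        for lat, lon in zip(lats, lons):
            cell = (int(lat // res), int(lon // res))
            n[cell] = n.get(cell, 0) + 1
            if c > threshold:
                m[cell] = m.get(cell, 0) + 1
    return {cell: m.get(cell, 0) / n[cell] for cell in n}

# Two toy trajectories sharing one cell; only the first is a "polluted" day
traj1 = (np.array([42.1, 42.3]), np.array([23.2, 23.3]))
traj2 = (np.array([42.2, 40.1]), np.array([23.4, 21.0]))
scores = pscf([traj1, traj2], conc=[95.0, 40.0], threshold=80.0)
print(scores[(84, 46)])  # 2/3: two of the three endpoints here were "polluted"
```

Cells repeatedly crossed by polluted-day trajectories score near 1 and are flagged as potential source areas; operational studies additionally down-weight cells with few endpoints to suppress spurious hotspots.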
Localizing the sources of two independent noises: Role of time varying amplitude differences
Yost, William A.; Brown, Christopher A.
2013-01-01
Listeners localized the free-field sources of either one or two simultaneous and independently generated noise bursts. Listeners' localization performance was better when localizing one rather than two sound sources. With two sound sources, localization performance was better when the listener was provided prior information about the location of one of them. Listeners also localized two simultaneous noise bursts that had sinusoidal amplitude modulation (AM) applied, in which the modulation envelope was in-phase across the two source locations or was 180° out-of-phase. The AM was employed to investigate a hypothesis as to what process listeners might use to localize multiple sound sources. The results supported the hypothesis that localization of two sound sources might be based on temporal-spectral regions of the combined waveform in which the sound from one source was more intense than that from the other source. The interaural information extracted from such temporal-spectral regions might provide reliable estimates of the sound source location that produced the more intense sound in that temporal-spectral region. PMID:23556597
Power throttling of collections of computing elements
Bellofatto, Ralph E [Ridgefield, CT]; Coteus, Paul W [Yorktown Heights, NY]; Crumley, Paul G [Yorktown Heights, NY]; Gara, Alan G [Mount Kisco, NY]; Giampapa, Mark E [Irvington, NY]; Gooding, Thomas M [Rochester, MN]; Haring, Rudolf A [Cortlandt Manor, NY]; Megerian, Mark G [Rochester, MN]; Ohmacht, Martin [Yorktown Heights, NY]; Reed, Don D [Mantorville, MN]; Swetz, Richard A [Mahopac, NY]; Takken, Todd [Brewster, NY]
2011-08-16
An apparatus and method for controlling power usage in a computer includes a plurality of computers communicating with a local control device, and a power source supplying power to the local control device and the computer. A plurality of sensors communicate with the computer for ascertaining power usage of the computer, and a system control device communicates with the computer for controlling power usage of the computer.
Mobile indoor localization using Kalman filter and trilateration technique
NASA Astrophysics Data System (ADS)
Wahid, Abdul; Kim, Su Mi; Choi, Jaeho
2015-12-01
In this paper, an indoor localization method based on Kalman-filtered RSSI is presented. The indoor communications environment, however, is harsh for mobile devices, since a substantial number of objects distort the RSSI signals; fading and interference are the main sources of this distortion. Here, a Kalman filter is adopted to filter the RSSI signals, and the trilateration method is applied to obtain robust and accurate coordinates of the mobile station. From indoor experiments using WiFi stations, we have found that the proposed algorithm can provide higher accuracy with relatively lower power consumption in comparison to a conventional method.
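The pipeline described above (Kalman smoothing of the raw RSSI, conversion to range via a path-loss model, then trilateration) can be sketched as follows. This is a minimal illustration, not the paper's implementation; the path-loss parameters (`tx_power`, `n`) and the process/measurement variances (`q`, `r`) are assumed values.

```python
import math

def kalman_smooth(rssi_samples, q=0.008, r=4.0):
    """Scalar Kalman filter with a random-walk signal model: smooths noisy RSSI."""
    x, p = rssi_samples[0], 1.0
    out = []
    for z in rssi_samples:
        p += q                  # predict: state unchanged, variance grows
        k = p / (p + r)         # Kalman gain
        x += k * (z - x)        # update with new measurement
        p *= (1.0 - k)
        out.append(x)
    return out

def rssi_to_distance(rssi, tx_power=-40.0, n=2.0):
    """Log-distance path-loss model: rssi = tx_power - 10*n*log10(d)."""
    return 10 ** ((tx_power - rssi) / (10 * n))

def trilaterate(anchors, dists):
    """Solve for (x, y) from 3 anchors by subtracting the first circle
    equation from the other two, which leaves a 2x2 linear system."""
    (x1, y1), (x2, y2), (x3, y3) = anchors
    r1, r2, r3 = dists
    a11, a12 = 2 * (x2 - x1), 2 * (y2 - y1)
    a21, a22 = 2 * (x3 - x1), 2 * (y3 - y1)
    b1 = r1**2 - r2**2 + x2**2 - x1**2 + y2**2 - y1**2
    b2 = r1**2 - r3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a11 * a22 - a12 * a21
    return ((b1 * a22 - b2 * a12) / det, (a11 * b2 - a21 * b1) / det)
```

With noise-free ranges from three non-collinear anchors, `trilaterate` recovers the mobile's coordinates exactly; in practice the Kalman-smoothed RSSI feeds `rssi_to_distance` first.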
Lo, Kam W; Ferguson, Brian G
2012-11-01
The accurate localization of small arms fire using fixed acoustic sensors is considered. First, the conventional wavefront-curvature passive ranging method, which requires only differential time-of-arrival (DTOA) measurements of the muzzle blast wave to estimate the source position, is modified to account for sensor positions that are not strictly collinear (bowed array). Second, an existing single-sensor-node ballistic model-based localization method, which requires both DTOA and differential angle-of-arrival (DAOA) measurements of the muzzle blast wave and ballistic shock wave, is improved by replacing the basic external ballistics model (which describes the bullet's deceleration along its trajectory) with a more rigorous model and replacing the look-up table ranging procedure with a nonlinear (or polynomial) equation-based ranging procedure. Third, a new multiple-sensor-node ballistic model-based localization method, which requires only DTOA measurements of the ballistic shock wave to localize the point of fire, is formulated. The first method is applicable to situations when only the muzzle blast wave is received, whereas the third method applies when only the ballistic shock wave is received. The effectiveness of each of these methods is verified using an extensive set of real data recorded during a 7 day field experiment.
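The DTOA-only localization idea can be illustrated with a brute-force sketch: search a grid of candidate source positions for the one whose predicted differential times of arrival best match the measurements. This is a generic illustration, not the paper's estimator; the sensor layout, sound speed, and grid bounds below are assumptions.

```python
import math

def dtoa_localize(sensors, dtoas, c=343.0,
                  xlim=(-20.0, 20.0), ylim=(-20.0, 20.0), step=0.25):
    """Grid search for the source position that best explains the measured
    differential times of arrival (each DTOA is relative to sensor 0).
    Works for bowed (not strictly collinear) arrays since no linear-array
    geometry is assumed."""
    best, best_cost = None, float("inf")
    nx = int((xlim[1] - xlim[0]) / step) + 1
    ny = int((ylim[1] - ylim[0]) / step) + 1
    for i in range(nx):
        x = xlim[0] + i * step
        for j in range(ny):
            y = ylim[0] + j * step
            d0 = math.dist((x, y), sensors[0])
            cost = 0.0
            for s, tau in zip(sensors[1:], dtoas):
                pred = (math.dist((x, y), s) - d0) / c  # predicted DTOA
                cost += (pred - tau) ** 2
            if cost < best_cost:
                best, best_cost = (x, y), cost
    return best
```

Restricting `ylim` to one side of the array is a common way to resolve the front-back ambiguity of a nearly collinear array.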
NASA Astrophysics Data System (ADS)
Verlinden, Christopher M.
Controlled acoustic sources have typically been used for imaging the ocean. These sources can either be used to locate objects or characterize the ocean environment. The processing involves signal extraction in the presence of ambient noise, with shipping being a major component of the latter. With the advent of the Automatic Identification System (AIS) which provides accurate locations of all large commercial vessels, these major noise sources can be converted from nuisance to beacons or sources of opportunity for the purpose of studying the ocean. The source localization method presented here is similar to traditional matched field processing, but differs in that libraries of data-derived measured replicas are used in place of modeled replicas. In order to account for differing source spectra between library and target vessels, cross-correlation functions are compared instead of comparing acoustic signals directly. The library of measured cross-correlation function replicas is extrapolated using waveguide invariant theory to fill gaps between ship tracks, fully populating the search grid with estimated replicas allowing for continuous tracking. In addition to source localization, two ocean sensing techniques are discussed in this dissertation. The feasibility of estimating ocean sound speed and temperature structure, using ship noise across a drifting volumetric array of hydrophones suspended beneath buoys, in a shallow water marine environment is investigated. Using the attenuation of acoustic energy along eigenray paths to invert for ocean properties such as temperature, salinity, and pH is also explored. In each of these cases, the theory is developed, tested using numerical simulations, and validated with data from acoustic field experiments.
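The core matching step, comparing a target ship's cross-correlation function against a library of measured replicas at known positions, can be sketched as below. The library contents and positions are hypothetical; the real method also extrapolates replicas between ship tracks with waveguide invariant theory, which is omitted here.

```python
import math

def normalized_correlation(a, b):
    """Normalized inner product between two cross-correlation functions;
    comparing CCFs rather than raw signals suppresses the dependence on
    each ship's source spectrum."""
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return sum(x * y for x, y in zip(a, b)) / (na * nb)

def localize(target_ccf, replica_library):
    """replica_library maps known source positions to measured CCF replicas;
    the estimate is the position whose replica matches best."""
    return max(replica_library,
               key=lambda pos: normalized_correlation(target_ccf,
                                                      replica_library[pos]))
```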
Conley, Stephen; Faloona, Ian; Mehrotra, Shobhit; ...
2017-09-13
Airborne estimates of greenhouse gas emissions are becoming more prevalent with the advent of rapid commercial development of trace gas instrumentation featuring increased measurement accuracy, precision, and frequency, and the swelling interest in the verification of current emission inventories. Multiple airborne studies have indicated that emission inventories may underestimate some hydrocarbon emission sources in US oil- and gas-producing basins. Consequently, a proper assessment of the accuracy of these airborne methods is crucial to interpreting the meaning of such discrepancies. We present a new method of sampling surface sources of any trace gas for which fast and precise measurements can be made and apply it to methane, ethane, and carbon dioxide on spatial scales of ~1000 m, where consecutive loops are flown around a targeted source region at multiple altitudes. Using Reynolds decomposition for the scalar concentrations, along with Gauss's theorem, we show that the method accurately accounts for the smaller-scale turbulent dispersion of the local plume, which is often ignored in other average mass balance methods. With the help of large eddy simulations (LES) we further show how the circling radius can be optimized for the micrometeorological conditions encountered during any flight. Furthermore, by sampling controlled releases of methane and ethane on the ground we can ascertain that the accuracy of the method, in appropriate meteorological conditions, is often better than 10 %, with limits of detection below 5 kg h-1 for both methane and ethane. Because of the FAA-mandated minimum flight safe altitude of 150 m, placement of the aircraft is critical to preventing a large portion of the emission plume from flowing underneath the lowest aircraft sampling altitude, which is generally the leading source of uncertainty in these measurements. Finally, we show how the accuracy of the method is strongly dependent on the number of sampling loops and/or time spent sampling the source plume.
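The divergence-theorem bookkeeping behind such airborne mass-balance estimates can be sketched numerically: the net outward flux of the excess concentration through a virtual cylinder enclosing the source equals the emission rate. The single-layer loop, variable names, and sample values below are illustrative assumptions, not the paper's data.

```python
def loop_emission(samples, background, dz):
    """Emission rate [kg/s] from samples on a closed horizontal loop:
    sum of (c - c_bg) * (wind . outward unit normal) * ds * dz.
    Each sample: (conc [kg/m^3], u, v [m/s], nx, ny (unit normal), ds [m]);
    dz is the depth of the sampled layer [m]."""
    flux = 0.0
    for conc, u, v, nx, ny, ds in samples:
        flux += (conc - background) * (u * nx + v * ny) * ds * dz
    return flux
```

Because upwind (background) segments contribute zero excess flux, only the plume crossing the downwind face adds to the total; real flights stack many such loops in altitude.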
NASA Astrophysics Data System (ADS)
Conley, Stephen; Faloona, Ian; Mehrotra, Shobhit; Suard, Maxime; Lenschow, Donald H.; Sweeney, Colm; Herndon, Scott; Schwietzke, Stefan; Pétron, Gabrielle; Pifer, Justin; Kort, Eric A.; Schnell, Russell
2017-09-01
Airborne estimates of greenhouse gas emissions are becoming more prevalent with the advent of rapid commercial development of trace gas instrumentation featuring increased measurement accuracy, precision, and frequency, and the swelling interest in the verification of current emission inventories. Multiple airborne studies have indicated that emission inventories may underestimate some hydrocarbon emission sources in US oil- and gas-producing basins. Consequently, a proper assessment of the accuracy of these airborne methods is crucial to interpreting the meaning of such discrepancies. We present a new method of sampling surface sources of any trace gas for which fast and precise measurements can be made and apply it to methane, ethane, and carbon dioxide on spatial scales of ~1000 m, where consecutive loops are flown around a targeted source region at multiple altitudes. Using Reynolds decomposition for the scalar concentrations, along with Gauss's theorem, we show that the method accurately accounts for the smaller-scale turbulent dispersion of the local plume, which is often ignored in other average mass balance methods. With the help of large eddy simulations (LES) we further show how the circling radius can be optimized for the micrometeorological conditions encountered during any flight. Furthermore, by sampling controlled releases of methane and ethane on the ground we can ascertain that the accuracy of the method, in appropriate meteorological conditions, is often better than 10 %, with limits of detection below 5 kg h-1 for both methane and ethane. Because of the FAA-mandated minimum flight safe altitude of 150 m, placement of the aircraft is critical to preventing a large portion of the emission plume from flowing underneath the lowest aircraft sampling altitude, which is generally the leading source of uncertainty in these measurements. Finally, we show how the accuracy of the method is strongly dependent on the number of sampling loops and/or time spent sampling the source plume.
NASA Astrophysics Data System (ADS)
Zhang, Yubo; Deng, Muhan; Yang, Rui; Jin, Feixiang
2017-09-01
The acoustic emission (AE) source location technique for deformation damage of 16Mn steel in a high-temperature environment is studied using the linear time-difference-of-arrival (TDOA) location method. The distribution characteristics of strain-induced acoustic emission source signals of tensile specimens at 20°C and 400°C were investigated. It is found that the located signals cluster near the eventual fracture, so the clustering can be used to identify the stress concentration that leads to fracture.
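For a specimen instrumented with two sensors, linear TDOA location reduces to a one-line formula: if the source sits at distance x from sensor 1 on a line of length L, then dt = t2 - t1 = ((L - x) - x) / v, so x = (L - v*dt) / 2. The sensor spacing and wave speed below are illustrative assumptions.

```python
def linear_tdoa_location(dt, sensor_spacing, wave_speed):
    """1-D AE source location between two sensors a distance
    `sensor_spacing` apart: dt = t2 - t1 is the measured arrival-time
    difference, `wave_speed` the AE propagation speed in the specimen;
    the returned position x is measured from sensor 1."""
    return (sensor_spacing - wave_speed * dt) / 2.0
```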
Water Vapor Tracers as Diagnostics of the Regional Hydrologic Cycle
NASA Technical Reports Server (NTRS)
Bosilovich, Michael G.; Schubert, Siegfried; Einaudi, Franco (Technical Monitor)
2001-01-01
Numerous studies suggest that local feedback of evaporation on precipitation, or recycling, is a significant source of water for precipitation. Quantitative results on the exact amount of recycling have been difficult to obtain in view of the inherent limitations of diagnostic recycling calculations. The current study describes a calculation of the amount of local and remote sources of water for precipitation, based on the implementation of passive constituent tracers of water vapor (termed water vapor tracers, WVT) in a general circulation model. In this case, the major limitation on the accuracy of the recycling estimates is the veracity of the numerically simulated hydrological cycle, though we note that this approach can also be implemented within the context of a data assimilation system. In this approach, each WVT is associated with an evaporative source region, and tracks the water until it precipitates from the atmosphere. By assuming that the regional water is well mixed with water from other sources, the physical processes that act on the WVT are determined in proportion to those that act on the model's prognostic water vapor. In this way, the local and remote sources of water for precipitation can be computed within the model simulation, and can be validated against the model's prognostic water vapor. Furthermore, estimates of precipitation recycling can be compared with bulk diagnostic approaches. As a demonstration of the method, the regional hydrologic cycles for North America and India are evaluated for six summers (June, July and August) of model simulation. More than 50% of the precipitation in the Midwestern United States came from continental regional tracers, and the local source was the largest of the regional tracers (14%). The Gulf of Mexico and Atlantic regions contributed 18% of the water for Midwestern precipitation, but further analysis suggests that the greater region of the Tropical Atlantic Ocean may also contribute significantly.
In general, most North American land regions showed a positive correlation between evaporation and recycling ratio (except the Southeast United States) and negative correlations of recycling ratio with precipitation and moisture transport (except the Southwestern United States). The Midwestern local source is positively correlated with local evaporation, but it is not correlated with water vapor transport. This is contrary to bulk diagnostic estimates of precipitation recycling. In India, the local source of precipitation is a small percentage of the precipitation owing to the dominance of the atmospheric transport of oceanic water. The southern Indian Ocean provides a key source of water for both the Indian continent and the Sahelian region.
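The WVT bookkeeping described above can be sketched in a few lines: each tracer is tagged by its evaporative source region, loses mass in proportion to the total column water when precipitation occurs (the well-mixed assumption), and the recycling ratio is the locally sourced share of precipitation. The region names and masses are illustrative, not model output.

```python
def precipitate(tracers, precip_fraction):
    """Well-mixed assumption: each tagged water-vapor tracer loses the same
    fraction of its mass as the total column water does when it rains.
    Returns (remaining tracer masses, precipitated tracer masses)."""
    removed = {region: m * precip_fraction for region, m in tracers.items()}
    remaining = {region: m * (1.0 - precip_fraction)
                 for region, m in tracers.items()}
    return remaining, removed

def recycling_ratio(precip_by_source, local_region):
    """Fraction of precipitation supplied by locally evaporated water."""
    return precip_by_source[local_region] / sum(precip_by_source.values())
```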
A new method of tree xylem water extraction for isotopic analysis
NASA Astrophysics Data System (ADS)
Gierke, C.; Newton, B. T.
2011-12-01
The Sacramento Mountain Watershed Study in the southern Sacramento Mountains of New Mexico is designed to assess the forest restoration technique of tree thinning in mountain watersheds as an effective method of increasing local and regional groundwater recharge. The project is using a soil water balance approach to quantify the partitioning of local precipitation within this watershed before and after thinning trees. Understanding what sources trees extract their water from (e.g. shallow groundwater, unsaturated fractured bedrock, and soils) is difficult due to a complex hydrologic system and heterogeneous distribution of soil thicknesses. However, in order to accurately quantify the soil water balance and to assess how thinning trees will affect this water balance, it is important to determine the sources from which trees extract their water. We plan to use oxygen and hydrogen stable isotopic analysis of various end member waters to identify these different sources. We are in the process of developing a new method of determining the isotopic composition of tree water that has several advantages over conventional methods. Within the tree there is the xylem, which transports water from the roots to the leaves, and the phloem, which transports starches and sugars in a water medium throughout the tree. Previous studies have shown that the isotopic composition of xylem water accurately reflects that of source water, while phloem water has undergone isotopic fractionation during photosynthesis and metabolism. The distillation of water from twigs, which is often used to extract tree water for isotopic analysis, is very labor-intensive. Other disadvantages of distillation methods include possible fractionation due to phase changes and the possible extraction of fractionated phloem waters.
Employing a new mixing method, the composition of the twig water (TW) can be determined by putting twigs of unknown isotopic water composition into waters of known compositions, or initial waters (IW), allowing diffusive processes to proceed to equilibrium, measuring the composition of the resulting mixture or final water (FW), and then solving a simple mixing equation. To evaluate this method, we collected several twig samples from Douglas Firs in the Sacramento Mountains. Twig water was prepared for isotopic analysis both by cryogenic distillation and by the mixing method. Soil in close proximity to these trees was also sampled, and water was extracted by cryogenic distillation. Preliminary results show that the isotopic compositions of distilled twig water and soil waters plot to the right of the local meteoric water line (LMWL), suggesting that trees are extracting shallow evaporated soil water. Twig water obtained from the mixing method plots near the LMWL within the range expected for local snow melt, suggesting a possibly deeper non-evaporated source. In general, distillation values are approximately 4‰ heavier with respect to δ18O than waters obtained from the mixing method. It is possible that this difference is due to the contribution of the fractionated water of the twig phloem that is released during the distillation process. This difference is quite significant and can lead to very different interpretations. These results are being addressed with additional experiments.
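The "simple mixing equation" is a two-component isotope mass balance that can be rearranged for the unknown twig-water composition; the delta values and water masses below are illustrative assumptions, not the study's measurements.

```python
def twig_water_delta(delta_iw, delta_fw, m_iw, m_tw):
    """Two-component isotope mass balance,
        m_iw * delta_iw + m_tw * delta_tw = (m_iw + m_tw) * delta_fw,
    solved for the unknown twig-water composition delta_tw.
    Deltas in per mil (e.g. delta-18O), masses in grams of water."""
    return ((m_iw + m_tw) * delta_fw - m_iw * delta_iw) / m_tw
```

In practice IW and FW are measured directly, and the twig-water mass is obtained from the twig's wet and dry weights.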
L1-norm locally linear representation regularization multi-source adaptation learning.
Tao, Jianwen; Wen, Shiting; Hu, Wenjun
2015-09-01
In most supervised domain adaptation learning (DAL) tasks, one has access only to a small number of labeled examples from the target domain. Therefore the success of supervised DAL in this "small sample" regime requires effective utilization of the large amounts of unlabeled data to extract information that is useful for generalization. Toward this end, we here use the geometric intuition of the manifold assumption to extend the established frameworks in existing model-based DAL methods for function learning by incorporating additional information about the target geometric structure of the marginal distribution. We would like to ensure that the solution is smooth with respect to both the ambient space and the target marginal distribution. In doing this, we propose a novel L1-norm locally linear representation regularization multi-source adaptation learning framework which exploits the geometry of the probability distribution and comprises two techniques. First, an L1-norm locally linear representation method is presented for robust graph construction by replacing the L2-norm reconstruction measure in LLE with an L1-norm one, termed L1-LLR for short. Second, considering robust graph regularization, we replace the traditional graph Laplacian regularization with our new L1-LLR graph Laplacian regularization and thereby construct a new graph-based semi-supervised learning framework with a multi-source adaptation constraint, coined the L1-MSAL method. Moreover, to deal with the nonlinear learning problem, we also generalize the L1-MSAL method by mapping the input data points from the input space to a high-dimensional reproducing kernel Hilbert space (RKHS) via a nonlinear mapping. Promising experimental results have been obtained on several real-world datasets such as face, visual video and object. Copyright © 2015 Elsevier Ltd. All rights reserved.
Code of Federal Regulations, 2012 CFR
2012-07-01
.... Significant Impact Levels (SILs) 18 AAC 50.220. Enforceable Test Methods (effective 10/01/2004) 18 AAC 50.225.../76) Rule 104 Reporting of Source Test Data and Analyses (Adopted 01/9/76) Rule 108 Alternative...) Rule 47 Source Test, Emission Monitor, and Call-Back Fees (Adopted 06/22/99) Rule 50 Opacity (Adopted...
NASA Astrophysics Data System (ADS)
Sutherland, Michael Stephen
2010-12-01
The Galactic magnetic field is poorly understood. Essentially the only reliable measurements of its properties are the local orientation and field strength. Its behavior at galactic scales is unknown. Historically, magnetic field measurements have been performed using radio astronomy techniques which are sensitive to certain regions of the Galaxy and rely upon models of the distribution of gas and dust within the disk. However, the deflection of trajectories of ultra high energy cosmic rays arriving from extragalactic sources depends only on the properties of the magnetic field. In this work, a method is developed for determining acceptable global models of the Galactic magnetic field by backtracking cosmic rays through the field model. This method constrains the parameter space of magnetic field models by comparing a test statistic between backtracked cosmic rays and isotropic expectations for assumed cosmic ray source and composition hypotheses. Constraints on Galactic magnetic field models are established using data from the southern site of the Pierre Auger Observatory under various source distribution and cosmic ray composition hypotheses. Field models possessing structure similar to the stellar spiral arms are found to be inconsistent with hypotheses of an iron cosmic ray composition and sources selected from catalogs tracing the local matter distribution in the universe. These field models are consistent with hypothesis combinations of proton composition and sources tracing the local matter distribution. In particular, strong constraints are found on the parameter space of bisymmetric magnetic field models scanned under hypotheses of proton composition and sources selected from the 2MRS-VS, Swift 39-month, and VCV catalogs. Assuming that the Galactic magnetic field is well-described by a bisymmetric model under these hypotheses, the magnetic field strength near the Sun is less than 3–4 μG and the magnetic pitch angle is less than -8°.
These results comprise the first measurements of the Galactic magnetic field using ultra-high energy cosmic rays and supplement existing radio astronomical measurements of the Galactic magnetic field.
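The backtracking step can be sketched as integrating an ultra-relativistic charged particle's direction through a field model: flipping the sign of the charge turns forward propagation from a source into backtracking from Earth. The uniform toy field, step size, and the rounded Larmor-radius constant (about 1.08 kpc for a 10^18 eV proton in 1 μG) are illustrative assumptions, not the dissertation's field models.

```python
import math

R_L_KPC = 1.08  # Larmor radius [kpc] for a 10^18 eV proton in a 1 uG field

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def backtrack(pos, vhat, energy_EeV, Z, field, ds, n_steps):
    """Integrate dv/ds = (1/r_g) v x b_hat with fixed path step ds [kpc];
    field(pos) returns B in microgauss; r_g is the local gyroradius.
    Reversing the charge sign (done implicitly here) makes this the
    time-reversed, Earth-to-source trajectory."""
    for _ in range(n_steps):
        B = field(pos)
        Bmag = math.sqrt(B[0] ** 2 + B[1] ** 2 + B[2] ** 2)
        if Bmag > 0.0:
            rg = R_L_KPC * energy_EeV / (Z * Bmag)  # local gyroradius [kpc]
            bhat = tuple(c / Bmag for c in B)
            dv = cross(vhat, bhat)
            vhat = tuple(v + (ds / rg) * d for v, d in zip(vhat, dv))
            n = math.sqrt(sum(v * v for v in vhat))
            vhat = tuple(v / n for v in vhat)  # re-normalize (|v| = c)
        pos = tuple(p + ds * v for p, v in zip(pos, vhat))
    return pos, vhat
```

In a uniform field the trajectory is a circle of radius r_g, which gives a simple sanity check: after half a gyro-orbit the direction reverses.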
Stanaćević, Milutin; Li, Shuo; Cauwenberghs, Gert
2016-07-01
A parallel micro-power mixed-signal VLSI implementation of independent component analysis (ICA) with reconfigurable outer-product learning rules is presented. With gradient sensing of the acoustic field over a miniature microphone array as a pre-processing method, the proposed ICA implementation can separate and localize up to 3 sources in a mildly reverberant environment. The ICA processor is implemented in 0.5 µm CMOS technology and occupies a 3 mm × 3 mm area. At a 16 kHz sampling rate, the ASIC consumes 195 µW from a 3 V supply. The outer-product implementation of the natural gradient and Herault-Jutten ICA update rules demonstrates comparable performance to the benchmark FastICA algorithm in ideal conditions and more robust performance in noisy and reverberant environments. Experiments demonstrate perceptually clear separation and precise localization over a wide range of separation angles of two speech sources presented through speakers positioned at 1.5 m from the array on a conference room table. The presented ASIC yields an extremely small form factor, low power consumption microsystem for the source separation and localization required in applications like intelligent hearing aids and wireless distributed acoustic sensor arrays.
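The outer-product natural-gradient ICA update the chip implements has a compact software form: W <- W + lr * (I - f(y) y^T) W with y = W x, where f(y) y^T is exactly the outer product the hardware computes. This is a generic two-channel software sketch, not the ASIC's fixed-point implementation; the learning rate, tanh nonlinearity (suited to super-Gaussian sources such as speech), and mixing matrix are assumptions.

```python
import math, random

def natural_gradient_ica(X, lr=0.01, epochs=10):
    """Online natural-gradient ICA for 2 mixed channels:
    W <- W + lr * (I - f(y) y^T) W,  y = W x,  f = tanh."""
    W = [[1.0, 0.0], [0.0, 1.0]]
    for _ in range(epochs):
        for x in X:
            y = [W[0][0] * x[0] + W[0][1] * x[1],
                 W[1][0] * x[0] + W[1][1] * x[1]]
            f = [math.tanh(v) for v in y]
            # G = I - f(y) y^T  (the outer-product term)
            G = [[1.0 - f[0] * y[0], -f[0] * y[1]],
                 [-f[1] * y[0], 1.0 - f[1] * y[1]]]
            W = [[W[i][j] + lr * (G[i][0] * W[0][j] + G[i][1] * W[1][j])
                  for j in range(2)] for i in range(2)]
    return W
```

After training, W·A should approach a scaled permutation matrix, i.e. each output is dominated by one source.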
NASA Technical Reports Server (NTRS)
Hisamoto, Chuck (Inventor); Arzoumanian, Zaven (Inventor); Sheikh, Suneel I. (Inventor)
2015-01-01
A method and system for spacecraft navigation using distant celestial gamma-ray bursts which offer detectable, bright, high-energy events that provide well-defined characteristics conducive to accurate time-alignment among spatially separated spacecraft. Utilizing assemblages of photons from distant gamma-ray bursts, relative range between two spacecraft can be accurately computed along the direction to each burst's source based upon the difference in arrival time of the burst emission at each spacecraft's location. Correlation methods used to time-align the high-energy burst profiles are provided. The spacecraft navigation may be carried out autonomously or in a central control mode of operation.
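The time-alignment step can be sketched as a discrete cross-correlation of the binned photon-count profiles recorded at two spacecraft: the lag that maximizes the correlation gives the arrival-time offset, and c times that offset is the relative range projected along the burst direction. The profile shape, bin width, and lag window are illustrative assumptions.

```python
C_KM_S = 299792.458  # speed of light [km/s]

def best_lag(a, b, max_lag):
    """Lag of profile b relative to profile a (in bins) that maximizes the
    discrete cross-correlation of the two photon-count time series."""
    def corr(lag):
        return sum(a[i] * b[i + lag]
                   for i in range(len(a)) if 0 <= i + lag < len(b))
    return max(range(-max_lag, max_lag + 1), key=corr)

def relative_range_km(profile1, profile2, bin_s, max_lag):
    """Projected inter-spacecraft range along the burst direction, from the
    arrival-time offset of the same burst at the two spacecraft."""
    dt = best_lag(profile1, profile2, max_lag) * bin_s
    return C_KM_S * dt
```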
10 CFR 434.603 - Determination of the design energy use.
Code of Federal Regulations, 2011 CFR
2011-01-01
... Design using the same methods, assumptions, climate data, and simulation tool as were used to establish... reliable local source. During rapid changes in fuel prices it is recommended that an average fuel price for...
10 CFR 434.603 - Determination of the design energy use.
Code of Federal Regulations, 2013 CFR
2013-01-01
... Design using the same methods, assumptions, climate data, and simulation tool as were used to establish... reliable local source. During rapid changes in fuel prices it is recommended that an average fuel price for...
10 CFR 434.603 - Determination of the design energy use.
Code of Federal Regulations, 2010 CFR
2010-01-01
... Design using the same methods, assumptions, climate data, and simulation tool as were used to establish... reliable local source. During rapid changes in fuel prices it is recommended that an average fuel price for...
Atmospheric inverse modeling via sparse reconstruction
NASA Astrophysics Data System (ADS)
Hase, Nils; Miller, Scot M.; Maaß, Peter; Notholt, Justus; Palm, Mathias; Warneke, Thorsten
2017-10-01
Many applications in atmospheric science involve ill-posed inverse problems. A crucial component of many inverse problems is the proper formulation of a priori knowledge about the unknown parameters. In most cases, this knowledge is expressed as a Gaussian prior. This formulation often performs well at capturing smoothed, large-scale processes but is often ill equipped to capture localized structures like large point sources or localized hot spots. Over the last decade, scientists from a diverse array of applied mathematics and engineering fields have developed sparse reconstruction techniques to identify localized structures. In this study, we present a new regularization approach for ill-posed inverse problems in atmospheric science. It is based on Tikhonov regularization with sparsity constraint and allows bounds on the parameters. We enforce sparsity using a dictionary representation system. We analyze its performance in an atmospheric inverse modeling scenario by estimating anthropogenic US methane (CH4) emissions from simulated atmospheric measurements. Different measures indicate that our sparse reconstruction approach is better able to capture large point sources or localized hot spots than other methods commonly used in atmospheric inversions. It captures the overall signal equally well but adds details on the grid scale. This feature can be of value for any inverse problem with point or spatially discrete sources. We show an example for source estimation of synthetic methane emissions from the Barnett shale formation.
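The kind of sparsity-constrained Tikhonov scheme described above can be sketched with plain iterative shrinkage-thresholding (ISTA): a gradient step on the data misfit followed by a soft threshold that enforces the L1 penalty, plus the paper's bound on the parameters rendered here as simple non-negativity. This is a generic illustration with an identity dictionary and toy sizes, not the authors' dictionary-based implementation.

```python
import math

def ista(A, y, lam, step, iters, nonneg=True):
    """ISTA for the sparsity-regularized inverse problem
        min_x 0.5 * ||A x - y||^2 + lam * ||x||_1,
    optionally with a non-negativity bound (emissions cannot be negative).
    `step` must be below 1/L, where L is the largest eigenvalue of A^T A."""
    m, n = len(A), len(A[0])
    x = [0.0] * n
    for _ in range(iters):
        r = [sum(A[i][j] * x[j] for j in range(n)) - y[i] for i in range(m)]
        g = [sum(A[i][j] * r[i] for i in range(m)) for j in range(n)]  # A^T r
        for j in range(n):
            v = x[j] - step * g[j]                                # gradient step
            v = math.copysign(max(abs(v) - step * lam, 0.0), v)   # soft threshold
            x[j] = max(v, 0.0) if nonneg else v
    return x
```

The soft threshold is what zeroes out weak, diffuse components while preserving large point-source-like coefficients.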
Ligand Binding Site Detection by Local Structure Alignment and Its Performance Complementarity
Lee, Hui Sun; Im, Wonpil
2013-01-01
Accurate determination of potential ligand binding sites (BS) is a key step for protein function characterization and structure-based drug design. Despite promising results of template-based BS prediction methods using global structure alignment (GSA), there is a room to improve the performance by properly incorporating local structure alignment (LSA) because BS are local structures and often similar for proteins with dissimilar global folds. We present a template-based ligand BS prediction method using G-LoSA, our LSA tool. A large benchmark set validation shows that G-LoSA predicts drug-like ligands’ positions in single-chain protein targets more precisely than TM-align, a GSA-based method, while the overall success rate of TM-align is better. G-LoSA is particularly efficient for accurate detection of local structures conserved across proteins with diverse global topologies. Recognizing the performance complementarity of G-LoSA to TM-align and a non-template geometry-based method, fpocket, a robust consensus scoring method, CMCS-BSP (Complementary Methods and Consensus Scoring for ligand Binding Site Prediction), is developed and shows improvement on prediction accuracy. The G-LoSA source code is freely available at http://im.bioinformatics.ku.edu/GLoSA. PMID:23957286
Princich, Juan Pablo; Wassermann, Demian; Latini, Facundo; Oddo, Silvia; Blenkmann, Alejandro Omar; Seifer, Gustavo; Kochen, Silvia
2013-01-01
Depth intracranial electrode (IE) placement is one of the most widely used procedures to identify the epileptogenic zone (EZ) in the surgical treatment of drug-resistant epilepsy, which affects about 20–30% of epilepsy patients. IE localization is therefore a critical issue in defining the EZ and its relation to eloquent functional areas. That information is then used to target the resective surgery and has great potential to affect outcome. We designed a methodological procedure intended to avoid the need for highly specialized medical resources and reduce the time to identify the anatomical location of IEs during the first instances of intracranial EEG recordings. This workflow is based on established open-source software, 3D Slicer and Freesurfer, and uses MRI and post-implant CT fusion for the localization of IEs and their relation to the automatically labeled surrounding cortex. To test this approach, we assessed the time elapsed between the surgical implantation process and the final anatomical localization of IEs by means of our proposed method compared against traditional visual analysis of raw post-implant imaging in two groups of patients. All IEs were identified in the first 24 h (6–24 h) of implantation using our method in the 4 patients of the first group. In the control group (7 patients: 3 additional patients implanted with IEs and the same 4 patients from the first group), all IEs were identified by experts within 36 h to 3 days using traditional visual analysis. Time to localization in this group was constrained by the availability of specialized personnel and image quality. To validate our method, we trained two inexperienced operators to assess the position of IE contacts in four patients (5 IEs) using the proposed method. We quantified the discrepancies between operators and also assessed the efficiency of our method in defining the EZ by comparing the findings against the results of traditional analysis. PMID:24427112
Reducing Sensor Noise in MEG and EEG Recordings Using Oversampled Temporal Projection.
Larson, Eric; Taulu, Samu
2018-05-01
Here, we review the theory of suppression of spatially uncorrelated, sensor-specific noise in electro- and magnetoencephalography (EEG and MEG) arrays, and introduce a novel method for its suppression. Our method requires only that the signals of interest are spatially oversampled, which is a reasonable assumption for many EEG and MEG systems. Our method is based on a leave-one-out procedure using overlapping temporal windows in a mathematical framework to project out spatially uncorrelated noise in the temporal domain. This method, termed "oversampled temporal projection" (OTP), has four advantages over existing methods. First, sparse channel-specific artifacts are suppressed while limiting mixing with other channels, whereas existing linear, time-invariant spatial operators can spread such artifacts to other channels with a spatial distribution that can be mistaken for one produced by an electrophysiological source. Second, OTP minimizes distortion of the spatial configuration of the data. During source localization (e.g., dipole fitting), many spatial methods require corresponding modification of the forward model to avoid bias, while OTP does not. Third, noise suppression factors at the sensor level are maintained during source localization, whereas bias compensation removes the denoising benefit for spatial methods that require such compensation. Fourth, OTP uses a time-window duration parameter to control the tradeoff between noise suppression and adaptation to time-varying sensor characteristics. OTP efficiently optimizes noise suppression performance while controlling for spatial bias of the signal of interest. This is important in applications where sensor noise significantly limits the signal-to-noise ratio, such as high-frequency brain oscillations.
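The leave-one-out projection at the heart of OTP can be sketched for a single window: each channel is reconstructed from the temporal basis of all other channels, so sensor-specific noise, which is uncorrelated across channels, cannot project onto itself. This is a simplified illustration of the idea, not the published OTP implementation; the window length, channel count, and noise level are arbitrary:

```python
import numpy as np

def otp_window(X):
    """Leave-one-out temporal projection on one window.
    X: (n_channels, n_times). Each channel is projected onto the
    temporal subspace spanned by the remaining channels."""
    n_ch, _ = X.shape
    Y = np.empty_like(X, dtype=float)
    for k in range(n_ch):
        others = np.delete(X, k, axis=0)
        # Orthonormal temporal basis of the remaining channels
        Q, _ = np.linalg.qr(others.T, mode='reduced')   # (n_times, n_ch - 1)
        Y[k] = Q @ (Q.T @ X[k])                         # project channel k
    return Y

# Shared signal seen by all sensors (spatial oversampling) + independent sensor noise
rng = np.random.default_rng(1)
t = np.linspace(0, 1, 200)
signal = np.sin(2 * np.pi * 10 * t)
X = np.vstack([signal] * 8) + 0.5 * rng.normal(size=(8, 200))
Y = otp_window(X)
```

Because the shared signal lies (approximately) in the span of the other channels while each channel's own noise does not, the residual with respect to the shared signal shrinks after projection.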
MFAM: Multiple Frequency Adaptive Model-Based Indoor Localization Method.
Tuta, Jure; Juric, Matjaz B
2018-03-24
This paper presents MFAM (Multiple Frequency Adaptive Model-based localization method), a novel model-based indoor localization method that is capable of using multiple wireless signal frequencies simultaneously. It utilizes an indoor architectural model and the physical properties of wireless signal propagation through objects and space. The motivation for developing a multiple-frequency localization method lies in future Wi-Fi standards (e.g., 802.11ah) and the growing number of wireless signals present in buildings (e.g., Wi-Fi, Bluetooth, ZigBee, etc.). Current indoor localization methods mostly rely on a single wireless signal type and often require many devices to achieve the necessary accuracy. MFAM utilizes multiple wireless signal types and improves localization accuracy over using a single frequency. It continuously monitors signal propagation through space and adapts the model according to changes indoors. Using multiple signal sources lowers the required number of access points for a specific signal type while utilizing signals already present indoors. Due to the unavailability of 802.11ah hardware, we evaluated the proposed method with similar signals: 2.4 GHz Wi-Fi and 868 MHz HomeMatic home automation signals. We performed the evaluation in a modern two-bedroom apartment and measured a mean localization error of 2.0 to 2.3 m and a median error of 2.0 to 2.2 m. Based on our evaluation results, using two different signals improves the localization accuracy by 18% in comparison to a 2.4 GHz Wi-Fi-only approach. Additional signals would improve the accuracy even further. We have shown that MFAM provides better accuracy than competing methods, while having several advantages for real-world usage.
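A minimal sketch of the model-based RSS localization that MFAM generalizes: predictions from a log-distance path-loss model are matched to measurements over a candidate grid, and extra frequencies simply contribute more (anchor, RSS) pairs with their own model parameters. The anchor positions, path-loss exponent, and transmit power below are invented for illustration; the real method's architectural model and online adaptation are omitted:

```python
import numpy as np

def locate(aps, rss, grid, n=2.5, tx=-30.0):
    """Model-based RSS localization: predict RSS at every candidate grid
    point with a log-distance path-loss model and pick the point whose
    predictions best match the measurements.
    aps: (m, 2) anchor positions; rss: (m,) dBm; grid: (g, 2) candidates."""
    d = np.linalg.norm(grid[:, None, :] - aps[None, :, :], axis=2)  # (g, m)
    pred = tx - 10 * n * np.log10(np.maximum(d, 0.1))               # predicted dBm
    cost = np.sum((pred - rss) ** 2, axis=1)
    return grid[np.argmin(cost)]

# Three anchors, receiver at (4, 3); measurements generated from the same model
aps = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
true = np.array([4.0, 3.0])
rss = -30.0 - 25.0 * np.log10(np.linalg.norm(aps - true, axis=1))
xs = np.linspace(0, 10, 101)
grid = np.array([[x, y] for x in xs for y in xs])
est = locate(aps, rss, grid)
```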
Geometric k-nearest neighbor estimation of entropy and mutual information
NASA Astrophysics Data System (ADS)
Lord, Warren M.; Sun, Jie; Bollt, Erik M.
2018-03-01
Nonparametric estimation of mutual information is used in a wide range of scientific problems to quantify dependence between variables. The k-nearest neighbor (knn) methods are consistent, and therefore expected to work well for a large sample size. These methods use geometrically regular local volume elements. This practice allows maximum localization of the volume elements, but can also induce a bias due to a poor description of the local geometry of the underlying probability measure. We introduce a new class of knn estimators that we call geometric knn estimators (g-knn), which use more complex local volume elements to better model the local geometry of the probability measures. As an example of this class of estimators, we develop a g-knn estimator of entropy and mutual information based on elliptical volume elements, capturing the local stretching and compression common to a wide range of dynamical system attractors. A series of numerical examples in which the thickness of the underlying distribution and the sample sizes are varied suggest that local geometry is a source of problems for knn methods such as the Kraskov-Stögbauer-Grassberger estimator when local geometric effects cannot be removed by global preprocessing of the data. The g-knn method performs well despite the manipulation of the local geometry. In addition, the examples suggest that the g-knn estimators can be of particular relevance to applications in which the system is large, but the data size is limited.
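For reference, the geometrically regular baseline that g-knn generalizes is the Kozachenko-Leonenko knn entropy estimator, whose spherical unit-ball volume element appears explicitly in the formula; the g-knn variant described above would replace that term with locally fitted ellipsoids. A compact version is sketched below (sample size and k are arbitrary):

```python
import numpy as np
from scipy.spatial import cKDTree
from scipy.special import digamma, gamma

def kl_entropy(x, k=3):
    """Kozachenko-Leonenko knn estimate of differential entropy (nats),
    using spherical (geometrically regular) local volume elements."""
    x = np.atleast_2d(x)
    n, d = x.shape
    tree = cKDTree(x)
    # Distance to the k-th neighbor (query returns the point itself first)
    eps, _ = tree.query(x, k=k + 1)
    eps = eps[:, -1]
    log_c_d = (d / 2) * np.log(np.pi) - np.log(gamma(d / 2 + 1))  # log unit-ball volume
    return digamma(n) - digamma(k) + log_c_d + d * np.mean(np.log(eps))

# 1-D standard normal: true differential entropy is 0.5 * log(2*pi*e)
rng = np.random.default_rng(2)
h = kl_entropy(rng.normal(size=(5000, 1)), k=5)
```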
Ambient Seismic Source Inversion in a Heterogeneous Earth: Theory and Application to the Earth's Hum
NASA Astrophysics Data System (ADS)
Ermert, Laura; Sager, Korbinian; Afanasiev, Michael; Boehm, Christian; Fichtner, Andreas
2017-11-01
The sources of ambient seismic noise are extensively studied both to better understand their influence on ambient noise tomography and related techniques, and to infer constraints on their excitation mechanisms. Here we develop a gradient-based inversion method to infer the space-dependent and time-varying source power spectral density of the Earth's hum from cross correlations of continuous seismic data. The precomputation of wavefields using spectral elements allows us to account for both finite-frequency sensitivity and three-dimensional Earth structure. Although similar methods have been proposed previously, they have not yet been applied to data to the best of our knowledge. We apply this method to image the seasonally varying sources of Earth's hum during Northern and Southern Hemisphere winter. The resulting models suggest that hum sources are localized, persistent features that occur at Pacific coasts or shelves and in the North Atlantic during Northern Hemisphere winter, as well as at South Pacific coasts and several distinct locations in the Southern Ocean during Southern Hemisphere winter. The contribution of pelagic sources from the central North Pacific cannot be constrained. Besides improving the accuracy of noise source locations through the incorporation of finite-frequency effects and 3-D Earth structure, this method may be used in future cross-correlation waveform inversion studies to provide initial source models and source model updates.
Microseismic source locations with deconvolution migration
NASA Astrophysics Data System (ADS)
Wu, Shaojiang; Wang, Yibo; Zheng, Yikang; Chang, Xu
2018-03-01
Identifying and locating microseismic events are critical problems in hydraulic fracturing monitoring for unconventional resource exploration. In contrast to active seismic data, microseismic data are usually recorded with unknown source excitation time and source location. In this study, we introduce deconvolution migration by combining deconvolution interferometry with interferometric cross-correlation migration (CCM). This method avoids the need for the source excitation time and enhances both the spatial resolution and robustness by eliminating the squared source-wavelet term from CCM. The proposed algorithm is divided into the following three steps: (1) generate the virtual gathers by deconvolving the master trace with all other traces in the microseismic gather to remove the unknown excitation time; (2) migrate the virtual gather to obtain a single image of the source location; and (3) stack all of these images together to obtain the final image of the source location. We test the proposed method on complex synthetic and field data sets from surface hydraulic fracturing monitoring, and compare the results with those obtained by interferometric CCM. The results demonstrate that the proposed method can obtain a 50 per cent higher spatial-resolution image of the source location and a more robust estimate with smaller localization errors, especially in the presence of velocity model errors. This method is also beneficial for source mechanism inversion and global seismology applications.
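Step (1), deconvolving a master trace from the other traces so the unknown excitation time cancels, can be sketched in the frequency domain with water-level regularization. This is an illustrative toy on a circularly shifted wavelet, not the authors' code; the wavelet, delays, and water level are invented:

```python
import numpy as np

def deconvolve_gather(gather, master_idx=0, eps=1e-3):
    """Deconvolution interferometry: deconvolve every trace by a master
    trace in the frequency domain, cancelling the unknown excitation
    time and the squared source wavelet. gather: (n_traces, n_samples)."""
    G = np.fft.rfft(gather, axis=1)
    M = G[master_idx]
    denom = np.abs(M) ** 2
    denom = np.maximum(denom, eps * denom.max())   # water-level stabilization
    V = G * np.conj(M) / denom                     # virtual gather spectra
    return np.fft.irfft(V, n=gather.shape[1], axis=1)

# Two traces: the same wavelet with different (unknown) absolute delays
n = 256
w = np.zeros(n); w[0:8] = np.hanning(8)            # simple wavelet
t0, t1 = 40, 55                                    # arrivals incl. excitation time
tr0 = np.roll(w, t0); tr1 = np.roll(w, t1)
virt = deconvolve_gather(np.vstack([tr0, tr1]))
```

The peak of the virtual trace sits at the differential delay t1 - t0, independent of the absolute excitation time, which is exactly what makes the subsequent migration possible without knowing when the event occurred.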
[Magnetoencephalography in the presurgical evaluation of patients with drug-resistant epilepsy].
Koptelova, A M; Arkhipova, N A; Golovteev, A L; Chadaev, V A; Grinenko, O A; Kozlova, A B; Novikova, S I; Stepanenko, A Iu; Melikian, A G; Stroganova, T A
2013-01-01
Magnetoencephalography (MEG) in combination with structural MRI (magnetic source imaging, MSI) plays an increasingly important role as one of the tools for presurgical evaluation of medically intractable focal epilepsy. The aim of the study was to compare MSI and the commonly used video EEG monitoring method (vEEG) in their sensitivity to interictal epileptic discharges (IED) in 22 patients with drug-resistant epilepsy. Furthermore, the detection and localization results obtained by both methods were verified using the data of electrocorticography (ECoG) and postsurgical outcome in 13 patients who underwent invasive EEG monitoring and surgery. The results showed that MSI was superior to vEEG in terms of sensitivity to IED, with a difference in sensitivity of 22%. The data also suggested that MSI superiority to vEEG in detecting epileptic discharges might, at least partly, arise from better MEG responsiveness to epileptic events coming from the medial, opercular and basal aspects of the cortical lobes. MSI localization estimates were in the same cortical lobe and at the same lobar aspects as the epileptic foci detected by ECoG in all patients. Thus, magnetic source imaging can provide critical localization information that is not available when other noninvasive methods, such as vEEG and MRI, are used.
Classification of event location using matched filters via on-floor accelerometers
NASA Astrophysics Data System (ADS)
Woolard, Americo G.; Malladi, V. V. N. Sriram; Alajlouni, Sa'ed; Tarazaga, Pablo A.
2017-04-01
Recent years have shown prolific advancements in smart infrastructure, allowing buildings of the modern world to interact with their occupants. One of the sought-after attributes of smart buildings is the ability to provide unobtrusive, indoor localization of occupants. The ability to locate occupants indoors can provide a broad range of benefits in areas such as security, emergency response, and resource management. Recent research has shown promising results in occupant building localization, although there is still significant room for improvement. This study presents a passive, small-scale localization system using accelerometers placed around the edges of a small area in an active building environment. The area is discretized into a grid of small squares, and vibration measurements are processed using a pattern matching approach that estimates the location of the source. Vibration measurements are produced with ball-drops, hammer-strikes, and footsteps as the sources of the floor excitation. The developed approach uses matched filters based on a reference data set, and the location is classified using a nearest-neighbor search. This approach detects the correct location of impact-like sources, i.e., ball-drops and hammer-strikes, with 100% accuracy. However, this accuracy drops to 56% for footsteps, with the average localization result being within 0.6 m (α = 0.05) of the true source location. While requiring a reference data set can make this method difficult to implement on a large scale, it may be used to provide accurate localization abilities in areas where training data is readily obtainable. This exploratory work seeks to examine the feasibility of the matched filter and nearest-neighbor search approach for footstep and event localization in a small, instrumented area within a multi-story building.
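The matched-filter-plus-nearest-template classification can be sketched as follows: one reference waveform per grid cell, and the measured event is assigned to the cell whose template yields the largest peak cross-correlation. The grid labels, impact signatures, and noise level are invented for illustration, not taken from the study:

```python
import numpy as np

def classify_location(event, templates):
    """Matched-filter location classification: correlate the measured
    waveform against one reference template per grid cell and return
    the cell with the largest peak correlation.
    templates: dict {cell_label: reference waveform}."""
    scores = {}
    for cell, ref in templates.items():
        r = ref / np.linalg.norm(ref)                     # unit-energy template
        scores[cell] = np.max(np.correlate(event, r, mode='full'))
    return max(scores, key=scores.get)

# Toy floor grid: each cell's "impact signature" is a decaying tone
t = np.linspace(0, 0.2, 400)
sig = {c: np.exp(-30 * t) * np.sin(2 * np.pi * f * t)
       for c, f in [('A1', 80), ('A2', 120), ('B1', 200)]}
rng = np.random.default_rng(3)
event = sig['A2'] + 0.2 * rng.normal(size=t.size)         # noisy impact in cell A2
```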
Chen, Sheng-Po; Wang, Chieh-Heng; Lin, Wen-Dian; Tong, Yu-Huei; Chen, Yu-Chun; Chiu, Ching-Jui; Chiang, Hung-Chi; Fan, Chen-Lun; Wang, Jia-Lin; Chang, Julius S
2018-05-01
The present study combines high-resolution measurements at various distances from a world-class gigantic petrochemical complex with model simulations to test a method to assess industrial emissions and their effect on local air quality. Due to the complexity of the highly seasonal wind conditions, the dominant wind flow patterns in the coastal region of interest were classified into three types, namely northeast monsoonal (NEM) flows, southwest monsoonal (SWM) flows and local circulation (LC), based on six years of monitoring data. Sulfur dioxide (SO2) was chosen as an indicative pollutant for prominent industrial emissions. A high-density monitoring network of 12 air-quality stations distributed within a 20-km radius surrounding the petrochemical complex provided hourly measurements of SO2 and wind parameters. The SO2 emissions from major industrial sources registered by the monitoring network were then used to validate model simulations and to illustrate the transport of the SO2 plumes under the three typical wind patterns. It was found that the coupling of observations and modeling was able to successfully explain the transport of the industrial plumes. Although the petrochemical complex was seemingly the only major source to affect local air quality, multiple prominent sources from afar also played a significant role in local air quality. As a result, we found that a more complete and balanced assessment of the local air quality can be achieved only after taking into account the wind characteristics and emission factors of a much larger spatial scale than the initial (20 km by 20 km) study domain.
NASA Astrophysics Data System (ADS)
Nisyawati, Aini, R. N.; Silalahi, M.; Purba, E. C.; Avifah, N.
2017-07-01
Research on the local knowledge of food plants used by the Karo ethnic group in Semangat Gunung Village, North Sumatra, has been conducted. The aim of this study is to reveal the plant species used as food by the Karo people. We used an ethnobotanical approach which included open-ended and semi-structured interviews and an exploration method. One village elder, 2 traditional healers, and 30 respondents were selected as sources of information. Descriptive statistics were used to analyze the gathered data. A total of 109 species belonging to 83 genera and 45 families are known to be used as food sources by the Karo people. Four families have the highest numbers of food plant species: Solanaceae (8 species), Poaceae (7 species), Fabaceae (6 species), and Zingiberaceae (6 species). All of these families are found in the village, both wild and cultivated. Solanaceae is used as a source of fruits, vegetables, and spices. Poaceae is used as a source of the staple food, alternative food sources, snacks, spices, and traditional foods. Fabaceae is used as a source of vegetables and traditional foods. Zingiberaceae is used as a source of spices.
Senar, J.C.; Conroy, M.J.; Carrascal, L.M.; Domenech, J.; Mozetich, I.; Uribe, F.
1999-01-01
Heterogeneous capture probabilities are a common problem in many capture-recapture studies. Several methods of detecting the presence of such heterogeneity are currently available, and stratification of data has been suggested as the standard method to avoid its effects. However, few studies have tried to identify sources of heterogeneity, or whether there are interactions among sources. The aim of this paper is to suggest an analytical procedure to identify sources of capture heterogeneity. We use data on the sex and age of Great Tits captured in baited funnel traps at two localities differing in average temperature. We additionally use 'recapture' data obtained by videotaping at a feeder (with no associated trap), where tits ringed with different colours were recorded. This allowed us to test whether individuals in different classes (age, sex and condition) are not trapped because of trap shyness or because of a reduced use of the bait. We used logistic regression analysis of the capture probabilities to test for the effects of age, sex, condition, location and 'recapture' method. The results showed a higher recapture probability in the colder locality. Yearling birds (either males or females) had the highest recapture probabilities, followed by adult males, while adult females had the lowest recapture probabilities. There was no effect of the method of 'recapture' (trap or videotape), which suggests that adult females are less often captured in traps not because of trap-shyness but because of less dependence on supplementary food. The potential use of this methodological approach in other studies is discussed.
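The core analysis, logistic regression of capture probability on covariates such as age, sex, and locality, can be sketched with a small Newton (IRLS) fit. The single binary covariate and effect size below are invented for illustration and do not reproduce the study's data:

```python
import numpy as np

def logistic_fit(X, y, n_iter=25):
    """Logistic regression via Newton/IRLS: model P(capture) as a
    function of covariates. X: (n, p) without intercept; y: (n,) in {0, 1}."""
    X = np.column_stack([np.ones(len(X)), X])   # add intercept column
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-X @ beta))     # fitted probabilities
        W = p * (1 - p)                         # IRLS weights
        H = X.T @ (X * W[:, None])              # Hessian of the log-likelihood
        beta += np.linalg.solve(H, X.T @ (y - p))
    return beta

# Toy data: class x=1 (say, "adult female") recaptured less often than x=0
rng = np.random.default_rng(5)
x = rng.integers(0, 2, 500)
p_true = np.where(x == 1, 0.2, 0.6)
y = (rng.random(500) < p_true).astype(float)
b0, b1 = logistic_fit(x.reshape(-1, 1), y)
```

A clearly negative slope coefficient then corresponds to lower capture odds for that class, which is the kind of effect the paper tests for.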
Optically coupled methods for microwave impedance microscopy
NASA Astrophysics Data System (ADS)
Johnston, Scott R.; Ma, Eric Yue; Shen, Zhi-Xun
2018-04-01
Scanning Microwave Impedance Microscopy (MIM) measurement of photoconductivity with 50 nm resolution is demonstrated using a modulated optical source. The use of a modulated source allows for the measurement of photoconductivity in a single scan without a reference region on the sample, as well as removing most topographical artifacts and enhancing the signal-to-noise ratio compared with unmodulated measurement. A broadband light source with a tunable monochromator is then used to measure energy-resolved photoconductivity with the same methodology. Finally, a pulsed optical source is used to measure local photo-carrier lifetimes via MIM, using the same 50 nm resolution tip.
Acoustic Source Localization in Aircraft Interiors Using Microphone Array Technologies
NASA Technical Reports Server (NTRS)
Sklanka, Bernard J.; Tuss, Joel R.; Buehrle, Ralph D.; Klos, Jacob; Williams, Earl G.; Valdivia, Nicolas
2006-01-01
Using three microphone array configurations at two aircraft body stations on a Boeing 777-300ER flight test, the acoustic radiation characteristics of the sidewall and outboard floor system are investigated by experimental measurement. Analysis of the experimental data is performed using sound intensity calculations for closely spaced microphones, PATCH Inverse Boundary Element Nearfield Acoustic Holography, and Spherical Nearfield Acoustic Holography. The methods are compared by assessing their strengths and weaknesses, evaluating source identification capability for both broadband and narrowband sources, evaluating sources under transient and steady-state conditions, and quantifying field reconstruction continuity using multiple array positions.
[The underwater and airborne horizontal localization of sound by the northern fur seal].
Babushina, E S; Poliakov, M A
2004-01-01
The accuracy of the underwater and airborne horizontal localization of different acoustic signals by the northern fur seal was investigated by the method of instrumental conditioned reflexes with food reinforcement. For pure-tone pulsed signals in the frequency range of 0.5-25 kHz, the minimum angles of sound localization at 75% correct responses corresponded to a sound transducer azimuth of 6.5-7.5 degrees +/- 0.1-0.4 degrees underwater (at impulse durations of 3-90 ms) and of 3.5-5.5 degrees +/- 0.05-0.5 degrees in air (at impulse durations of 3-160 ms). The source of pulsed noise signals (of 3-ms duration) was localized with an accuracy of 3.0 degrees +/- 0.2 degrees underwater. The source of continuous (1-s duration) narrowband (10% of the center frequency) noise signals was localized in air with an accuracy of 2-5 degrees +/- 0.02-0.4 degrees, and of continuous broadband (1-20 kHz) noise with an accuracy of 4.5 degrees +/- 0.2 degrees.
Quantitative evaluation of software packages for single-molecule localization microscopy.
Sage, Daniel; Kirshner, Hagai; Pengo, Thomas; Stuurman, Nico; Min, Junhong; Manley, Suliana; Unser, Michael
2015-08-01
The quality of super-resolution images obtained by single-molecule localization microscopy (SMLM) depends largely on the software used to detect and accurately localize point sources. In this work, we focus on the computational aspects of super-resolution microscopy and present a comprehensive evaluation of localization software packages. Our philosophy is to evaluate each package as a whole, thus maintaining the integrity of the software. We prepared synthetic data that represent three-dimensional structures modeled after biological components, taking excitation parameters, noise sources, point-spread functions and pixelation into account. We then asked developers to run their software on our data; most responded favorably, allowing us to present a broad picture of the methods available. We evaluated their results using quantitative and user-interpretable criteria: detection rate, accuracy, quality of image reconstruction, resolution, software usability and computational resources. These metrics reflect the various tradeoffs of SMLM software packages and help users to choose the software that fits their needs.
NASA Astrophysics Data System (ADS)
Chandra, Rohit; Balasingham, Ilangko
2015-05-01
Localization of a wireless capsule endoscope finds many clinical applications from diagnostics to therapy. There are potentially two approaches to electromagnetic-wave-based localization: a) signal propagation model based localization using a priori information about the person's dielectric channels, and b) recently developed microwave imaging based localization without using any a priori information about the person's dielectric channels. In this paper, we study the second approach in terms of a variety of frequencies and signal-to-noise ratios for localization accuracy. To this end, we select a 2-D anatomically realistic numerical phantom for microwave imaging at different frequencies. The selected frequencies are 13.56 MHz, 431.5 MHz, 920 MHz, and 2380 MHz, which are typically considered for medical applications. Microwave imaging of a phantom will provide us with an electromagnetic model with the electrical properties (relative permittivity and conductivity) of the internal parts of the body and can be useful as a foundation for localization of an in-body RF source. Low-frequency imaging at 13.56 MHz provides a low-resolution image with high contrast in the dielectric properties. However, at high frequencies, the imaging algorithm is able to image only the outer boundaries of the tissues due to low penetration depth, as higher frequency means higher attenuation. Furthermore, the recently developed localization method based on microwave imaging is used for estimating the localization accuracy at different frequencies and signal-to-noise ratios. Statistical evaluation of the localization error is performed using the cumulative distribution function (CDF). Based on our results, we conclude that the localization accuracy is minimally affected by the frequency or the noise. However, the choice of the frequency will become critical if the purpose of the method is to image the internal parts of the body for tumor and/or cancer detection.
The presentation will describe measurement and modeling activities to study the dispersion of air pollution from transit emissions (highway, rail, port) and evaluation of barriers as a mitigation method.
Seismo-volcano source localization with triaxial broad-band seismic array
NASA Astrophysics Data System (ADS)
Inza, L. A.; Mars, J. I.; Métaxian, J. P.; O'Brien, G. S.; Macedo, O.
2011-10-01
Seismo-volcano source localization is essential to improve our understanding of eruptive dynamics and of magmatic systems. The lack of clear seismic wave phases prohibits the use of classical location methods. Seismic antennas composed of one-component (1C) seismometers provide a good estimate of the backazimuth of the wavefield. The depth estimation, on the other hand, is difficult or impossible to determine. As in classical seismology, the use of three-component (3C) seismometers is now common in volcano studies. To determine the source location parameters (backazimuth and depth), we extend the 1C seismic antenna approach to 3Cs. This paper discusses a high-resolution location method using a 3C array survey (3C-MUSIC algorithm) with data from two seismic antennas installed on an andesitic volcano in Peru (Ubinas volcano). One of the main scientific questions related to the eruptive process of Ubinas volcano is the relationship between the magmatic explosions and long-period (LP) swarms. After introducing the 3C array theory, we evaluate the robustness of the location method on a full-wavefield 3-D synthetic data set generated using a digital elevation model of Ubinas volcano and a homogeneous velocity model. Results show that the backazimuth determined using the 3C array has a smaller error than a 1C array. Only the 3C method allows the recovery of the source depths. Finally, we applied the 3C approach to two seismic events recorded in 2009. Crossing the estimated backazimuths and incidence angles, we find sources located 1000 ± 660 m and 3000 ± 730 m below the bottom of the active crater for the explosion and the LP event, respectively. Therefore, extending 1C arrays to 3C arrays in volcano monitoring allows a more accurate determination of the source epicentre and, now, an estimate of the depth.
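The subspace idea behind MUSIC, which 3C-MUSIC extends to three-component data, can be sketched for a simple one-component uniform linear array: the sample covariance is split into signal and noise subspaces, and directions where the steering vector is orthogonal to the noise subspace produce peaks in a pseudo-spectrum. The array geometry, SNR, and source direction below are illustrative, not the volcano configuration:

```python
import numpy as np

def music_spectrum(X, steering, n_sources=1):
    """Narrowband MUSIC. X: (n_sensors, n_snapshots) complex data;
    steering: (n_angles, n_sensors). Returns the pseudo-spectrum."""
    R = X @ X.conj().T / X.shape[1]             # sample covariance
    w, v = np.linalg.eigh(R)                    # eigenvalues ascending
    En = v[:, : X.shape[0] - n_sources]         # noise subspace
    # a^H (En En^H) a for every steering vector a
    denom = np.einsum('as,sk,ak->a', steering.conj(), En @ En.conj().T, steering).real
    return 1.0 / denom

# Uniform linear array, half-wavelength spacing, one source at 30 degrees
m, snaps = 8, 200
angles = np.deg2rad(np.arange(-90, 91))
A = np.exp(1j * np.pi * np.outer(np.arange(m), np.sin(angles))).T   # (n_angles, m)
rng = np.random.default_rng(4)
src = np.exp(1j * np.pi * np.arange(m)[:, None] * np.sin(np.deg2rad(30)))
s = rng.normal(size=snaps) + 1j * rng.normal(size=snaps)
X = src * s + 0.1 * (rng.normal(size=(m, snaps)) + 1j * rng.normal(size=(m, snaps)))
P = music_spectrum(X, A)
est = np.rad2deg(angles[np.argmax(P)])
```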
Method and apparatus for imparting strength to a material using sliding loads
Hughes, D.A.; Dawson, D.B.; Korellis, J.S.
1999-03-16
A method of enhancing the strength of metals by affecting subsurface zones developed during the application of large sliding loads is disclosed. Stresses which develop locally within the near surface zone can be many times larger than those predicted from the applied load and the friction coefficient. These stress concentrations arise from two sources: (1) asperity interactions and (2) local and momentary bonding between the two surfaces. By controlling these parameters more desirable strength characteristics can be developed in weaker metals to provide much greater strength to rival that of steel, for example. 11 figs.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wagner, John C; Peplow, Douglas E.; Mosher, Scott W
2011-01-01
This paper provides a review of the hybrid (Monte Carlo/deterministic) radiation transport methods and codes used at the Oak Ridge National Laboratory and examples of their application for increasing the efficiency of real-world, fixed-source Monte Carlo analyses. The two principal hybrid methods are (1) Consistent Adjoint Driven Importance Sampling (CADIS) for optimization of a localized detector (tally) region (e.g., flux, dose, or reaction rate at a particular location) and (2) Forward Weighted CADIS (FW-CADIS) for optimizing distributions (e.g., mesh tallies over all or part of the problem space) or multiple localized detector regions (e.g., simultaneous optimization of two or more localized tally regions). The two methods have been implemented and automated in both the MAVRIC sequence of SCALE 6 and ADVANTG, a code that works with the MCNP code. As implemented, the methods utilize the results of approximate, fast-running 3-D discrete ordinates transport calculations (with the Denovo code) to generate consistent space- and energy-dependent source and transport (weight windows) biasing parameters. These methods and codes have been applied to many relevant and challenging problems, including calculations of PWR ex-core thermal detector response, dose rates throughout an entire PWR facility, site boundary dose from arrays of commercial spent fuel storage casks, radiation fields for criticality accident alarm system placement, and detector response for special nuclear material detection scenarios and nuclear well-logging tools. Substantial computational speed-ups, generally O(10^2-10^4), have been realized for all applications to date. This paper provides a brief review of the methods, their implementation, results of their application, and current development activities, as well as a considerable list of references for readers seeking more information about the methods and/or their applications.
Code of Federal Regulations, 2014 CFR
2014-07-01
.... Significant Impact Levels (SILs) 18 AAC 50.220. Enforceable Test Methods (effective 10/01/2004) 18 AAC 50.225... (Adopted 01/9/76) Rule 104 Reporting of Source Test Data and Analyses (Adopted 01/9/76) Rule 108....2Asbestos Removal Fees (Adopted 08/04/92) Rule 47Source Test, Emission Monitor, and Call-Back Fees (Adopted...
Liang, Shidong; Jia, Haifeng; Yang, Cong; Melching, Charles; Yuan, Yongping
2015-11-15
An environmental capacity management (ECM) system was developed to help practically implement a Total Maximum Daily Load (TMDL) for a key bay in a highly eutrophic lake in China. The ECM system consists of a simulation platform for pollutant load calculation and a pollutant load hierarchical allocation (PLHA) system. The simulation platform was developed by linking the Environmental Fluid Dynamics Code (EFDC) and Water Quality Analysis Simulation Program (WASP). In the PLHA, pollutant loads were allocated top-down in several levels based on characteristics of the pollutant sources. Different allocation methods could be used for the different levels with the advantages of each method combined over the entire allocation. Zhushan Bay of Taihu Lake, one of the most eutrophic lakes in China, was selected as a case study. The allowable loads of total nitrogen, total phosphorus, ammonia, and chemical oxygen demand were found to be 2122.2, 94.9, 1230.4, and 5260.0 t·yr(-1), respectively. The PLHA for the case study consists of 5 levels. At level 0, loads are allocated to those from the lakeshore direct drainage, atmospheric deposition, internal release, and tributary inflows. At level 1 the loads allocated to tributary inflows are allocated to the 3 tributaries. At level 2, the loads allocated to one inflow tributary are allocated to upstream areas and local sources along the tributary. At level 3, the loads allocated to local sources are allocated to the point and non-point sources from different towns. At level 4, the loads allocated to non-point sources in each town are allocated to different villages. Compared with traditional forms of pollutant load allocation methods, PLHA can combine the advantages of different methods which put different priority weights on equity and efficiency, and the PLHA is easy to understand for stakeholders and more flexible to adjust when applied in practical cases. Copyright © 2015 Elsevier B.V. All rights reserved.
Chowdhury, Rasheda Arman; Zerouali, Younes; Hedrich, Tanguy; Heers, Marcel; Kobayashi, Eliane; Lina, Jean-Marc; Grova, Christophe
2015-11-01
The purpose of this study is to develop and quantitatively assess whether fusion of EEG and MEG (MEEG) data within the maximum entropy on the mean (MEM) framework increases the spatial accuracy of source localization, by yielding better recovery of the spatial extent and propagation pathway of the underlying generators of inter-ictal epileptic discharges (IEDs). The key element in this study is the integration of the complementary information from EEG and MEG data within the MEM framework. MEEG was compared with EEG and MEG when localizing single transient IEDs. The fusion approach was evaluated using realistic simulation models involving one or two spatially extended sources mimicking propagation patterns of IEDs. We also assessed the impact of the number of EEG electrodes required for an efficient EEG-MEG fusion. MEM was compared with minimum norm estimate, dynamic statistical parametric mapping, and standardized low-resolution electromagnetic tomography. The fusion approach was finally assessed on real epileptic data recorded from two patients showing IEDs simultaneously in EEG and MEG. Overall the localization of MEEG data using MEM provided better recovery of the source spatial extent, more sensitivity to the source depth and more accurate detection of the onset and propagation of IEDs than EEG or MEG alone. MEM was more accurate than the other methods. MEEG proved more robust than EEG and MEG for single IED localization in low signal-to-noise ratio conditions. We also showed that only few EEG electrodes are required to bring additional relevant information to MEG during MEM fusion.
Fast neutron counting in a mobile, trailer-based search platform
NASA Astrophysics Data System (ADS)
Hayward, Jason P.; Sparger, John; Fabris, Lorenzo; Newby, Robert J.
2017-12-01
Trailer-based search platforms for detection of radiological and nuclear threats are often based upon coded aperture gamma-ray imaging, because this method can be rendered insensitive to local variations in gamma background while still localizing the source well. Since gamma source emissions are rather easily shielded, in this work we consider the addition of fast neutron counting to a mobile platform for detection of sources containing Pu. A proof-of-concept system capable of combined gamma and neutron coded-aperture imaging was built inside of a trailer and used to detect a 252Cf source while driving along a roadway. Neutron detector types employed included EJ-309 in a detector plane and EJ-299-33 in a front mask plane. While the 252Cf gamma emissions were not readily detectable while driving by at 16.9 m standoff, the neutron emissions can be detected while moving. Mobile detection performance for this system and a scaled-up system design are presented, along with implications for threat sensing.
A Method for Assessing Auditory Spatial Analysis in Reverberant Multitalker Environments.
Weller, Tobias; Best, Virginia; Buchholz, Jörg M; Young, Taegan
2016-07-01
Deficits in spatial hearing can have a negative impact on listeners' ability to orient in their environment and follow conversations in noisy backgrounds and may exacerbate the experience of hearing loss as a handicap. However, there are no good tools available for reliably capturing the spatial hearing abilities of listeners in complex acoustic environments containing multiple sounds of interest. The purpose of this study was to explore a new method to measure auditory spatial analysis in a reverberant multitalker scenario. This study was a descriptive case control study. Ten listeners with normal hearing (NH) aged 20-31 yr and 16 listeners with hearing impairment (HI) aged 52-85 yr participated in the study. The latter group had symmetrical sensorineural hearing losses with a four-frequency average hearing loss of 29.7 dB HL. A large reverberant room was simulated using a loudspeaker array in an anechoic chamber. In this simulated room, 96 scenes comprising between one and six concurrent talkers at different locations were generated. Listeners were presented with 45-sec samples of each scene, and were required to count, locate, and identify the gender of all talkers, using a graphical user interface on an iPad. Performance was evaluated in terms of correctly counting the sources and accuracy in localizing their direction. Listeners with NH were able to reliably analyze scenes with up to four simultaneous talkers, while most listeners with hearing loss demonstrated errors even with two talkers at a time. Localization performance decreased in both groups with increasing number of talkers and was significantly poorer in listeners with HI. Overall performance was significantly correlated with hearing loss. This new method appears to be useful for estimating spatial abilities in realistic multitalker scenes. The method is sensitive to the number of sources in the scene, and to effects of sensorineural hearing loss. 
Further work will be needed to compare this method to more traditional single-source localization tests.
Wavelet filter analysis of local atmospheric pressure effects in the long-period tidal bands
NASA Astrophysics Data System (ADS)
Hu, X.-G.; Liu, L. T.; Ducarme, B.; Hsu, H. T.; Sun, H.-P.
2006-11-01
It is well known that local atmospheric pressure variations noticeably affect the observation of short-period Earth tides, such as diurnal, semi-diurnal and ter-diurnal tides, but local atmospheric pressure effects on the long-period Earth tides have not been studied in detail. This is because the local atmospheric pressure is believed not to be sufficient for an effective pressure correction in long-period tidal bands, and there are no efficient methods to investigate local atmospheric effects in these bands. The usual tidal analysis software packages, such as ETERNA, Baytap-G and VAV, cannot provide detailed pressure admittances for long-period tidal bands. We propose a wavelet method to investigate local atmospheric effects on gravity variations in long-period tidal bands. This method constructs an efficient orthogonal filter bank with Daubechies wavelets of high vanishing moments. The main advantage of the wavelet filter bank is that it has an excellent low-frequency response and efficiently suppresses instrumental drift of superconducting gravimeters (SGs) without using any mathematical model. Applying the wavelet method to the 13-year continuous gravity observations from SG T003 in Brussels, Belgium, we filtered 12 long-period tidal groups into eight narrow frequency bands. The wavelet method demonstrates that local atmospheric pressure fluctuations are highly correlated with the noise of SG measurements in the period band 4-40 days, with correlation coefficients higher than 0.95, and that local atmospheric pressure variations are the main error source for the determination of the tidal parameters in these bands. We show the significant improvement of long-period tidal parameters provided by the wavelet method in terms of precision.
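The band-splitting idea behind an orthogonal wavelet filter bank can be illustrated with the simplest Daubechies member (Haar, db1); the paper uses Daubechies wavelets with many more vanishing moments, but the analysis/synthesis structure is the same, and each decomposition level halves the frequency band. This numpy-only sketch is not the authors' filter bank.

```python
import numpy as np

# Minimal orthogonal (Haar) wavelet filter bank: each level splits the signal
# into a low-pass approximation and a high-pass detail band, with perfect
# reconstruction on synthesis.

def haar_analysis(x, levels):
    """Split x into an approximation band plus one detail band per level."""
    details = []
    a = np.asarray(x, dtype=float)
    for _ in range(levels):
        even, odd = a[0::2], a[1::2]
        details.append((even - odd) / np.sqrt(2))   # high-pass (detail) band
        a = (even + odd) / np.sqrt(2)               # low-pass (approximation)
    return a, details

def haar_synthesis(a, details):
    """Inverse transform: exactly reconstructs the analyzed signal."""
    for d in reversed(details):
        up = np.empty(2 * a.size)
        up[0::2] = (a + d) / np.sqrt(2)
        up[1::2] = (a - d) / np.sqrt(2)
        a = up
    return a

x = np.sin(2 * np.pi * np.arange(64) / 8.0)   # a stand-in mono-frequency signal
a, dets = haar_analysis(x, levels=3)
x_rec = haar_synthesis(a, dets)
```

Because the transform is orthogonal, selected bands can be zeroed and the remainder resynthesized without distorting the retained bands, which is the mechanism used to isolate narrow tidal bands and suppress drift.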
A precedence effect resolves phantom sound source illusions in the parasitoid fly Ormia ochracea
Lee, Norman; Elias, Damian O.; Mason, Andrew C.
2009-01-01
Localizing individual sound sources under reverberant environmental conditions can be a challenge when the original source and its acoustic reflections arrive at the ears simultaneously from different paths that convey ambiguous directional information. The acoustic parasitoid fly Ormia ochracea (Diptera: Tachinidae) relies on a pair of ears exquisitely sensitive to sound direction to localize the 5-kHz tone pulsatile calling song of their host crickets. In nature, flies are expected to encounter a complex sound field with multiple sources and their reflections from acoustic clutter potentially masking temporal information relevant to source recognition and localization. In field experiments, O. ochracea were lured onto a test arena and subjected to small random acoustic asymmetries between 2 simultaneous sources. Most flies successfully localize a single source but some localize a ‘phantom’ source that is a summed effect of both source locations. Such misdirected phonotaxis can be elicited reliably in laboratory experiments that present symmetric acoustic stimulation. By varying onset delay between 2 sources, we test whether hyperacute directional hearing in O. ochracea can function to exploit small time differences to determine source location. Selective localization depends on both the relative timing and location of competing sources. Flies preferred phonotaxis to a forward source. With small onset disparities within a 10-ms temporal window of attention, flies selectively localize the leading source while the lagging source has minimal influence on orientation. These results demonstrate the precedence effect as a mechanism to overcome phantom source illusions that arise from acoustic reflections or competing sources. PMID:19332794
NASA Astrophysics Data System (ADS)
Park, Dubok; Han, David K.; Ko, Hanseok
2017-05-01
Optical imaging systems are often degraded by scattering due to atmospheric particles, such as haze, fog, and mist. Imaging under nighttime haze conditions may suffer especially from the glows near active light sources as well as scattering. We present a methodology for nighttime image dehazing based on an optical imaging model which accounts for varying light sources and their glow. First, glow effects are decomposed using relative smoothness. Atmospheric light is then estimated by assessing global and local atmospheric light using a local atmospheric selection rule. The transmission of light is then estimated by maximizing an objective function designed on the basis of weighted entropy. Finally, haze is removed using two estimated parameters, namely, atmospheric light and transmission. The visual and quantitative comparison of the experimental results with the results of existing state-of-the-art methods demonstrates the significance of the proposed approach.
Berger, Christopher C; Gonzalez-Franco, Mar; Tajadura-Jiménez, Ana; Florencio, Dinei; Zhang, Zhengyou
2018-01-01
Auditory spatial localization in humans is performed using a combination of interaural time differences, interaural level differences, as well as spectral cues provided by the geometry of the ear. To render spatialized sounds within a virtual reality (VR) headset, either individualized or generic Head Related Transfer Functions (HRTFs) are usually employed. The former require arduous calibrations, but enable accurate auditory source localization, which may lead to a heightened sense of presence within VR. The latter obviate the need for individualized calibrations, but result in less accurate auditory source localization. Previous research on auditory source localization in the real world suggests that our representation of acoustic space is highly plastic. In light of these findings, we investigated whether auditory source localization could be improved for users of generic HRTFs via cross-modal learning. The results show that pairing a dynamic auditory stimulus, with a spatio-temporally aligned visual counterpart, enabled users of generic HRTFs to improve subsequent auditory source localization. Exposure to the auditory stimulus alone or to asynchronous audiovisual stimuli did not improve auditory source localization. These findings have important implications for human perception as well as the development of VR systems as they indicate that generic HRTFs may be enough to enable good auditory source localization in VR.
Local sources of black walnut recommended for planting in Maryland
Silas Little; Calvin F. Bey; Daniel McConaughy
1974-01-01
After 5 years, local black walnut seedlings were taller than those of 12 out-of-state sources in a Maryland planting. Seedlings from south-of-local sources outgrew trees from northern sources. Genetic influence on height was expressed early--with little change in ranking of sources after the third year.
Transported vs. local contributions from secondary and biomass burning sources to PM2.5
NASA Astrophysics Data System (ADS)
Kim, Bong Mann; Seo, Jihoon; Kim, Jin Young; Lee, Ji Yi; Kim, Yumi
2016-11-01
The concentration of fine particulates in Seoul, Korea has been lowered over the past 10 years, as a result of the city's efforts in implementing environmental control measures. Yet, the particulate concentration level in Seoul remains high as compared to other urban areas globally. In order to further improve fine particulate air quality in the Korea region and design a more effective control strategy, enhanced understanding of the sources and contribution of fine particulates along with their chemical compositions is necessary. In turn, relative contributions from local and transported sources on Seoul need to be established, as this city is particularly influenced by sources from upwind geographic areas. In this study, PM2.5 monitoring was conducted in Seoul from October 2012 to September 2013. PM2.5 mass concentrations, ions, metals, organic carbon (OC), elemental carbon (EC), water soluble OC (WSOC), humic-like substances of carbon (HULIS-C), and 85 organic compounds were chemically analyzed. The multivariate receptor model SMP was applied to the PM2.5 data, which then identified nine sources and estimated their source compositions as well as source contributions. Prior studies have identified and quantified the transported and local sources. However, no prior studies have distinguished contributions of an individual source between transported contribution and locally produced contribution. We differentiated transported secondary and biomass burning sources from the locally produced secondary and biomass burning sources, which was supported with potential source contribution function (PSCF) analysis. Of the total secondary source contribution, 32% was attributed to transported secondary sources, and 68% was attributed to locally formed secondary sources. Meanwhile, the contribution from the transported biomass burning source was revealed as 59% of the total biomass burning contribution, which was 1.5 times higher than that of the local biomass burning source. 
Four-season average source contributions from the transported and the local sources were 28% and 72%, respectively.
Efficient image enhancement using sparse source separation in the Retinex theory
NASA Astrophysics Data System (ADS)
Yoon, Jongsu; Choi, Jangwon; Choe, Yoonsik
2017-11-01
Color constancy is the feature of the human vision system (HVS) that ensures the relative constancy of the perceived color of objects under varying illumination conditions. The Retinex theory of machine vision systems is based on the HVS. Among Retinex algorithms, the physics-based algorithms are efficient; however, they generally do not satisfy the local characteristics of the original Retinex theory because they eliminate global illumination from their optimization. We apply the sparse source separation technique to the Retinex theory to present a physics-based algorithm that satisfies the locality characteristic of the original Retinex theory. Previous Retinex algorithms have limited use in image enhancement because the total variation Retinex results in an overly enhanced image and the sparse source separation Retinex cannot completely restore the original image. In contrast, our proposed method preserves the image edge and can very nearly replicate the original image without any special operation.
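For context on the Retinex decomposition the abstract builds on, the classic single-scale Retinex (NOT the paper's sparse-source-separation variant) estimates illumination with a Gaussian blur and recovers reflectance in the log domain. The image and sigma below are invented for illustration.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Classic single-scale Retinex: illumination is approximated by a smoothed
# copy of the image; reflectance is the log-ratio of image to illumination.

def single_scale_retinex(image, sigma=30.0):
    image = np.asarray(image, dtype=float) + 1.0       # avoid log(0)
    illumination = gaussian_filter(image, sigma)        # smooth lighting estimate
    reflectance = np.log(image) - np.log(illumination)  # Retinex log-ratio
    return reflectance

img = np.outer(np.linspace(1, 100, 64), np.ones(64))    # smooth gradient "scene"
r = single_scale_retinex(img, sigma=8.0)
```

The physics-based algorithms criticized in the abstract effectively replace this local smoothing with a global illumination estimate, which is why they lose the locality of the original theory.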
Adaptive Sparse Representation for Source Localization with Gain/Phase Errors
Sun, Ke; Liu, Yimin; Meng, Huadong; Wang, Xiqin
2011-01-01
Sparse representation (SR) algorithms can be implemented for high-resolution direction of arrival (DOA) estimation. Additionally, SR can effectively separate the coherent signal sources because the spectrum estimation is based on the optimization technique, such as the L1 norm minimization, but not on subspace orthogonality. However, in the actual source localization scenario, an unknown gain/phase error between the array sensors is inevitable. Due to this nonideal factor, the predefined overcomplete basis mismatches the actual array manifold so that the estimation performance is degraded in SR. In this paper, an adaptive SR algorithm is proposed to improve the robustness with respect to the gain/phase error, where the overcomplete basis is dynamically adjusted using multiple snapshots and the sparse solution is adaptively acquired to match with the actual scenario. The simulation results demonstrate the estimation robustness to the gain/phase error using the proposed method. PMID:22163875
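The fixed-basis SR baseline that the paper improves on can be sketched as L1-regularized recovery over a grid of steering vectors (here via plain iterative soft thresholding). This sketch assumes an error-free manifold; the paper's contribution is precisely to adapt this basis when gain/phase errors are present. Array geometry, grid, and regularization values are invented.

```python
import numpy as np

# Sparse-representation DOA on a uniform linear array: the spatial spectrum is
# the solution of an L1-regularized least-squares problem over an overcomplete
# dictionary of steering vectors, solved here with ISTA.

def steering_matrix(n_sensors, angles_deg, spacing=0.5):
    k = 2 * np.pi * spacing * np.sin(np.deg2rad(angles_deg))
    return np.exp(1j * np.outer(np.arange(n_sensors), k))

def ista_doa(y, A, lam=0.1, n_iter=1000):
    """Minimize ||y - A x||^2 + lam*||x||_1 by iterative soft thresholding."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1], dtype=complex)
    for _ in range(n_iter):
        g = x + step * (A.conj().T @ (y - A @ x))
        x = np.maximum(np.abs(g) - step * lam, 0) * np.exp(1j * np.angle(g))
    return x

grid = np.arange(-90.0, 91.0, 5.0)            # candidate DOAs (deg)
A = steering_matrix(8, grid)
y = A[:, 14] + 0.8 * A[:, 24]                 # two sources: -20 and +30 deg
x = ista_doa(y, A, lam=0.1)
```

Because the optimization has no subspace-orthogonality assumption, the same machinery handles coherent sources, as the abstract notes.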
A Radio-Map Automatic Construction Algorithm Based on Crowdsourcing
Yu, Ning; Xiao, Chenxian; Wu, Yinfeng; Feng, Renjian
2016-01-01
Traditional radio-map-based localization methods need to sample a large number of location fingerprints offline, which requires huge amount of human and material resources. To solve the high sampling cost problem, an automatic radio-map construction algorithm based on crowdsourcing is proposed. The algorithm employs the crowd-sourced information provided by a large number of users when they are walking in the buildings as the source of location fingerprint data. Through the variation characteristics of users’ smartphone sensors, the indoor anchors (doors) are identified and their locations are regarded as reference positions of the whole radio-map. The AP-Cluster method is used to cluster the crowdsourced fingerprints to acquire the representative fingerprints. According to the reference positions and the similarity between fingerprints, the representative fingerprints are linked to their corresponding physical locations and the radio-map is generated. Experimental results demonstrate that the proposed algorithm reduces the cost of fingerprint sampling and radio-map construction and guarantees the localization accuracy. The proposed method does not require users’ explicit participation, which effectively solves the resource-consumption problem when a location fingerprint database is established. PMID:27070623
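The "AP-Cluster" step above is affinity propagation over fingerprint similarities; representative fingerprints emerge as exemplars of the message passing. Below is a numpy-only sketch of textbook affinity propagation, not the authors' AP-Cluster code; the RSSI fingerprints are invented for illustration.

```python
import numpy as np

# Affinity propagation: exemplars emerge by exchanging "responsibility" and
# "availability" messages over a similarity matrix S (diagonal = preference).

def affinity_propagation(S, damping=0.5, n_iter=200):
    n = S.shape[0]
    R = np.zeros((n, n))
    A = np.zeros((n, n))
    for _ in range(n_iter):
        # responsibilities: evidence that k should be the exemplar for i
        M = A + S
        idx = np.argmax(M, axis=1)
        first = M[np.arange(n), idx].copy()
        M[np.arange(n), idx] = -np.inf
        second = np.max(M, axis=1)
        Rnew = S - first[:, None]
        Rnew[np.arange(n), idx] = S[np.arange(n), idx] - second
        R = damping * R + (1 - damping) * Rnew
        # availabilities: accumulated evidence that k is a good exemplar
        Rp = np.maximum(R, 0)
        np.fill_diagonal(Rp, R.diagonal())
        Anew = Rp.sum(axis=0)[None, :] - Rp
        diag = Anew.diagonal().copy()
        Anew = np.minimum(0, Anew)
        np.fill_diagonal(Anew, diag)
        A = damping * A + (1 - damping) * Anew
    return np.argmax(A + R, axis=1)        # exemplar index for each point

# two groups of similar RSSI fingerprints (dBm seen from two access points)
fp = np.array([[-40, -70], [-42, -71], [-41, -69],
               [-75, -40], [-74, -42], [-76, -41]], dtype=float)
S = -((fp[:, None, :] - fp[None, :, :]) ** 2).sum(axis=-1)
np.fill_diagonal(S, np.median(S))          # preference controls cluster count
labels = affinity_propagation(S)
```

Each exemplar's fingerprint would then be linked to a reference position (an identified door) to populate the radio map.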
[Fluoride in drinking water in Cuba and its association with geological and geographical variables].
Luna, Liliam Cuéllar; Melián, Maricel García
2003-11-01
To determine the association between different concentrations of the fluoride ion in drinking water and some geological and geographical variables in Cuba, by using a geographic information system. From November 1998 to October 1999 we studied the fluoride concentration in the sources of drinking water for 753 Cuban localities that had at least 1 000 inhabitants. For the information analysis we utilized the MapInfo Professional version 5.5 geographic information system, using the overlaying method. The study variables were the concentration of the fluoride ion in the water sources, the geological characteristics of the area, the alignments (geological characteristics that were found together), the types of water sources, and whether an area was a plain or mountainous. The results were grouped by locality and municipality. In 83.1% of the localities, the water samples were collected from wells and springs, and the remaining 16.9% came from dams and rivers. Of the 753 localities studied, 675 of them (89.6%) had low or medium fluoride concentrations (under 0.7 mg/L). The eastern region of the country was the one most affected by high fluoride concentrations in the waters, followed by the central region of the country. The majority of the localities with high natural fluoride concentrations were in areas located on Cretaceous volcanic arc rocks. The presence of fluoride in the drinking waters was related to the alignments with the earth's crust, in rock complexes of volcanic-sedimentary origin and of intrusive origin and also in carbonate rocks. However, the highest fluoride concentrations generally coincided with rock complexes of volcanic-sedimentary origin and of intrusive origin. All the localities with high fluoride concentrations in the water were associated with wells. The fluoride concentration is low or medium in the drinking water sources for 89.6% of the Cuban localities with at least 1 000 inhabitants. 
Geological and geographical characteristics can help identify areas with optimal or high concentrations of the fluoride ion in the drinking water.
Sekihara, K; Poeppel, D; Marantz, A; Koizumi, H; Miyashita, Y
1997-09-01
This paper proposes a method of localizing multiple current dipoles from spatio-temporal biomagnetic data. The method is based on the multiple signal classification (MUSIC) algorithm and is tolerant of the influence of background brain activity. In this method, the noise covariance matrix is estimated using a portion of the data that contains noise, but does not contain any signal information. Then, a modified noise subspace projector is formed using the generalized eigenvectors of the noise and measured-data covariance matrices. The MUSIC localizer is calculated using this noise subspace projector and the noise covariance matrix. The results from a computer simulation have verified the effectiveness of the method. The method was then applied to source estimation for auditory-evoked fields elicited by syllable speech sounds. The results strongly suggest the method's effectiveness in removing the influence of background activity.
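The baseline this method extends is the textbook MUSIC pseudospectrum. The paper's refinement (a noise-subspace projector built from the generalized eigendecomposition of the noise and measured-data covariance matrices) reduces to the plain form below when the background noise is white, which is what this invented sensor-array simulation assumes.

```python
import numpy as np

# Textbook narrowband MUSIC on an 8-element uniform linear array:
# project candidate steering vectors onto the noise subspace of the sample
# covariance; true directions appear as sharp peaks of the pseudospectrum.

rng = np.random.default_rng(0)
n_sensors, n_snap = 8, 400
grid = np.arange(-90.0, 91.0, 1.0)
A = np.exp(1j * np.pi * np.outer(np.arange(n_sensors),
                                 np.sin(np.deg2rad(grid))))
X = (A[:, [60, 115]] @ (rng.standard_normal((2, n_snap))
                        + 1j * rng.standard_normal((2, n_snap)))
     + 0.1 * (rng.standard_normal((n_sensors, n_snap))
              + 1j * rng.standard_normal((n_sensors, n_snap))))
Rxx = X @ X.conj().T / n_snap                  # sample covariance
_, V = np.linalg.eigh(Rxx)                     # eigenvalues in ascending order
En = V[:, :-2]                                 # noise subspace (2 sources assumed)
P = 1.0 / np.linalg.norm(En.conj().T @ A, axis=0) ** 2   # MUSIC pseudospectrum
```

The simulated sources sit at -30 and +25 degrees (grid indices 60 and 115); in the biomagnetic setting the steering vectors are replaced by dipole lead fields.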
Wang, Bao-Zhen; Chen, Zhi
2013-01-01
This article presents a GIS-based multi-source and multi-box modeling approach (GMSMB) to predict the spatial concentration distributions of airborne pollutants on local and regional scales. In this method, an extended multi-box model combined with a multi-source and multi-grid Gaussian model is developed within the GIS framework to examine the contributions from both point- and area-source emissions. By using GIS, a large amount of data, including emission sources, air quality monitoring, meteorological data, and spatial location information required for air quality modeling, is brought into an integrated modeling environment. This allows finer spatial variations in source distributions and meteorological conditions to be analyzed quantitatively. The developed modeling approach has been examined to predict the spatial concentration distribution of four air pollutants (CO, NO2, SO2 and PM2.5) for the State of California. The modeling results are compared with the monitoring data. Good agreement is acquired, which demonstrates that the developed modeling approach can deliver an effective air pollution assessment on both regional and local scales to support air pollution control and management planning.
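The building block such a multi-source Gaussian model sums over every source and receptor is the standard Gaussian plume formula. The sketch below uses crude linear dispersion coefficients as stand-ins for the stability-class curves; all parameter values are invented.

```python
import numpy as np

# Standard Gaussian plume for a single point source with ground reflection:
# concentration at downwind distance x, crosswind offset y, height z, for
# emission rate q, wind speed u, and effective stack height h.

def gaussian_plume(q, u, x, y, z, h, sy_a=0.08, sz_a=0.06):
    sy = sy_a * x            # crude linear growth of plume spread with distance
    sz = sz_a * x
    return (q / (2 * np.pi * u * sy * sz)
            * np.exp(-y ** 2 / (2 * sy ** 2))
            * (np.exp(-(z - h) ** 2 / (2 * sz ** 2))
               + np.exp(-(z + h) ** 2 / (2 * sz ** 2))))  # ground reflection

# ground-level centerline concentration 500 m downwind of a 20 m stack
c = gaussian_plume(q=10.0, u=3.0, x=500.0, y=0.0, z=0.0, h=20.0)
```

A multi-source run simply evaluates this kernel for every source at every grid cell and sums, with the multi-box component handling the well-mixed area-source background.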
Hybrid Weighted Minimum Norm Method A new method based LORETA to solve EEG inverse problem.
Song, C; Zhuang, T; Wu, Q
2005-01-01
This Paper brings forward a new method to solve EEG inverse problem. Based on following physiological characteristic of neural electrical activity source: first, the neighboring neurons are prone to active synchronously; second, the distribution of source space is sparse; third, the active intensity of the sources are high centralized, we take these prior knowledge as prerequisite condition to develop the inverse solution of EEG, and not assume other characteristic of inverse solution to realize the most commonly 3D EEG reconstruction map. The proposed algorithm takes advantage of LORETA's low resolution method which emphasizes particularly on 'localization' and FOCUSS's high resolution method which emphasizes particularly on 'separability'. The method is still under the frame of the weighted minimum norm method. The keystone is to construct a weighted matrix which takes reference from the existing smoothness operator, competition mechanism and study algorithm. The basic processing is to obtain an initial solution's estimation firstly, then construct a new estimation using the initial solution's information, repeat this process until the solutions under last two estimate processing is keeping unchanged.
NASA Astrophysics Data System (ADS)
Beskardes, G. D.; Hole, J. A.; Wang, K.; Wu, Q.; Chapman, M. C.; Davenport, K. K.; Michaelides, M.; Brown, L. D.; Quiros, D. A.
2016-12-01
Back-projection imaging has recently become a practical method for local earthquake detection and location due to the deployment of densely sampled, continuously recorded, local seismograph arrays. Back-projection is scalable to earthquakes with a wide range of magnitudes from very tiny to very large. Local dense arrays provide the opportunity to capture very tiny events for a range of applications, such as tectonic microseismicity, source scaling studies, wastewater injection-induced seismicity, hydraulic fracturing, CO2 injection monitoring, volcano studies, and mining safety. While back-projection sometimes utilizes the full seismic waveform, the waveforms are often pre-processed to overcome imaging issues. We compare the performance of back-projection using four previously used data pre-processing methods: full waveform, envelope, short-term averaging / long-term averaging (STA/LTA), and kurtosis. The goal is to identify an optimized strategy for an entirely automated imaging process that is robust in the presence of real-data issues, has the lowest signal-to-noise thresholds for detection and for location, has the best spatial resolution of the energy imaged at the source, preserves magnitude information, and considers computational cost. Real data issues include aliased station spacing, low signal-to-noise ratio (to <1), large noise bursts and spatially varying waveform polarity. For evaluation, the four imaging methods were applied to the aftershock sequence of the 2011 Virginia earthquake as recorded by the AIDA array with 200-400 m station spacing. These data include earthquake magnitudes from -2 to 3 with highly variable signal to noise, spatially aliased noise, and large noise bursts: realistic issues in many environments. Each of the four back-projection methods has advantages and disadvantages, and a combined multi-pass method achieves the best of all criteria. Preliminary imaging results from the 2011 Virginia dataset will be presented.
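Of the four pre-processing options, STA/LTA is the easiest to sketch: the ratio of a short-term to a long-term average of absolute amplitude spikes at an emergent arrival. The window lengths and synthetic trace below are invented; both running windows here end at the current sample.

```python
import numpy as np

# STA/LTA characteristic function via cumulative sums (O(n) overall).

def sta_lta(x, n_sta, n_lta):
    e = np.abs(np.asarray(x, dtype=float))
    csum = np.concatenate(([0.0], np.cumsum(e)))
    sta = (csum[n_sta:] - csum[:-n_sta]) / n_sta     # short-term mean amplitude
    lta = (csum[n_lta:] - csum[:-n_lta]) / n_lta     # long-term mean amplitude
    m = min(sta.size, lta.size)                      # align both window ends
    return sta[-m:] / (lta[-m:] + 1e-12)

rng = np.random.default_rng(1)
x = rng.standard_normal(2000)
x[1200:1400] += 8 * rng.standard_normal(200)         # synthetic "arrival"
cf = sta_lta(x, n_sta=20, n_lta=200)
peak_sample = np.argmax(cf) + 199                    # index i maps to sample 199+i
```

In back-projection, such characteristic functions replace the raw waveforms before stacking, which removes the polarity and noise-burst sensitivity mentioned in the abstract at the cost of some resolution and magnitude fidelity.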
Liu, P.; Archuleta, R.J.; Hartzell, S.H.
2006-01-01
We present a new method for calculating broadband time histories of ground motion based on a hybrid low-frequency/high-frequency approach with correlated source parameters. Using a finite-difference method we calculate low- frequency synthetics (< ∼1 Hz) in a 3D velocity structure. We also compute broadband synthetics in a 1D velocity model using a frequency-wavenumber method. The low frequencies from the 3D calculation are combined with the high frequencies from the 1D calculation by using matched filtering at a crossover frequency of 1 Hz. The source description, common to both the 1D and 3D synthetics, is based on correlated random distributions for the slip amplitude, rupture velocity, and rise time on the fault. This source description allows for the specification of source parameters independent of any a priori inversion results. In our broadband modeling we include correlation between slip amplitude, rupture velocity, and rise time, as suggested by dynamic fault modeling. The method of using correlated random source parameters is flexible and can be easily modified to adjust to our changing understanding of earthquake ruptures. A realistic attenuation model is common to both the 3D and 1D calculations that form the low- and high-frequency components of the broadband synthetics. The value of Q is a function of the local shear-wave velocity. To produce more accurate high-frequency amplitudes and durations, the 1D synthetics are corrected with a randomized, frequency-dependent radiation pattern. The 1D synthetics are further corrected for local site and nonlinear soil effects by using a 1D nonlinear propagation code and generic velocity structure appropriate for the site’s National Earthquake Hazards Reduction Program (NEHRP) site classification. The entire procedure is validated by comparison with the 1994 Northridge, California, strong ground motion data set. 
The bias and error found here for response spectral acceleration are similar to the best results that have been published by others for the Northridge rupture.
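The crossover step in the abstract above merges low frequencies from the 3D synthetic with high frequencies from the 1D synthetic at 1 Hz. A common way to realize it, sketched here with invented signals, is a matched pair of zero-phase Butterworth filters (a same-order Butterworth low/high pair applied with filtfilt is power-complementary, so the two branches sum back to the input).

```python
import numpy as np
from scipy.signal import butter, filtfilt

fs, f_c = 100.0, 1.0                     # sample rate and crossover (Hz)
b_lo, a_lo = butter(4, f_c / (fs / 2), btype='low')
b_hi, a_hi = butter(4, f_c / (fs / 2), btype='high')

t = np.arange(0, 40, 1 / fs)
syn_3d = np.sin(2 * np.pi * 0.3 * t)     # stand-in low-frequency 3D synthetic
syn_1d = np.sin(2 * np.pi * 0.3 * t) + 0.3 * np.sin(2 * np.pi * 5.0 * t)

# low band from the 3D run + high band from the 1D run
broadband = filtfilt(b_lo, a_lo, syn_3d) + filtfilt(b_hi, a_hi, syn_1d)
```

Because both synthetics share the same source description, the two bands are mutually consistent at the crossover, which is the point of using a common rupture model in the 1D and 3D calculations.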
Shivanandan, Arun; Unnikrishnan, Jayakrishnan; Radenovic, Aleksandra
2015-01-01
Single Molecule Localization Microscopy techniques like PhotoActivated Localization Microscopy, with their sub-diffraction limit spatial resolution, have been popularly used to characterize the spatial organization of membrane proteins, by means of quantitative cluster analysis. However, such quantitative studies remain challenged by the techniques’ inherent sources of errors such as a limited detection efficiency of less than 60%, due to incomplete photo-conversion, and a limited localization precision in the range of 10-30 nm, varying across the detected molecules, mainly depending on the number of photons collected from each. We provide analytical methods to estimate the effect of these errors in cluster analysis and to correct for them. These methods, based on the Ripley’s L(r) – r or Pair Correlation Function popularly used by the community, can facilitate potentially breakthrough results in quantitative biology by providing a more accurate and precise quantification of protein spatial organization. PMID:25794150
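The Ripley's L(r) - r statistic underlying these correction methods is straightforward to compute for a 2-D point pattern. The sketch below ignores edge correction (so values near the window boundary are biased low) and uses invented point patterns.

```python
import numpy as np

# Ripley's L(r) - r: zero for complete spatial randomness (CSR), positive for
# clustering at scale r. K(r) counts neighbor pairs within r, normalized by
# the intensity; L is its variance-stabilized transform.

def ripley_l_minus_r(points, radii, area):
    pts = np.asarray(points, dtype=float)
    n = len(pts)
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)                     # exclude self-pairs
    lam = n / area                                  # point intensity
    K = np.array([(d < r).sum() / (n * lam) for r in radii])
    return np.sqrt(K / np.pi) - radii

rng = np.random.default_rng(2)
csr = rng.uniform(0, 10, size=(500, 2))             # CSR reference pattern
clustered = (rng.uniform(0, 10, size=(50, 2)).repeat(10, axis=0)
             + rng.normal(0, 0.05, size=(500, 2)))  # 50 tight clusters
radii = np.array([0.2, 0.5, 1.0])
l_csr = ripley_l_minus_r(csr, radii, area=100.0)
l_clu = ripley_l_minus_r(clustered, radii, area=100.0)
```

The paper's contribution is to model how limited detection efficiency and per-molecule localization uncertainty distort exactly this curve, and to invert that distortion.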
2017-01-01
Background Influenza is a viral respiratory disease capable of causing epidemics that represent a threat to communities worldwide. The rapidly growing availability of electronic “big data” from diagnostic and prediagnostic sources in health care and public health settings permits advance of a new generation of methods for local detection and prediction of winter influenza seasons and influenza pandemics. Objective The aim of this study was to present a method for integrated detection and prediction of influenza virus activity in local settings using electronically available surveillance data and to evaluate its performance by retrospective application on authentic data from a Swedish county. Methods An integrated detection and prediction method was formally defined based on a design rationale for influenza detection and prediction methods adapted for local surveillance. The novel method was retrospectively applied on data from the winter influenza season 2008-09 in a Swedish county (population 445,000). Outcome data represented individuals who met a clinical case definition for influenza (based on International Classification of Diseases version 10 [ICD-10] codes) from an electronic health data repository. Information from calls to a telenursing service in the county was used as syndromic data source. Results The novel integrated detection and prediction method is based on nonmechanistic statistical models and is designed for integration in local health information systems. The method is divided into separate modules for detection and prediction of local influenza virus activity. The function of the detection module is to alert for an upcoming period of increased load of influenza cases on local health care (using influenza-diagnosis data), whereas the function of the prediction module is to predict the timing of the activity peak (using syndromic data) and its intensity (using influenza-diagnosis data). 
For detection modeling, exponential regression was used, based on the assumption that the beginning of a winter influenza season shows exponential growth in the number of infected individuals. For prediction modeling, linear regression was applied to 7-day periods at a time in order to find the peak timing, whereas a derivative of a normal distribution density function was used to find the peak intensity. We found that the integrated detection and prediction method detected the 2008-09 winter influenza season on its starting day (optimal timeliness 0 days), whereas the predicted peak was estimated to occur 7 days ahead of the factual peak and the predicted peak intensity was estimated to be 26% lower than the factual intensity (6.3 compared with 8.5 influenza-diagnosis cases/100,000). Conclusions Our detection and prediction method is one of the first integrated methods specifically designed for local application on influenza data electronically available for surveillance. The performance of the method in a retrospective study indicates that further prospective evaluations of the method are justified. PMID:28619700
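The detection step described above, exponential regression on the rising limb of the season, can be sketched with a log-linear fit; the window length and alert thresholds below are illustrative assumptions, not the study's calibrated values:

```python
import numpy as np

def detect_exponential_onset(daily_cases, window=14, min_growth=0.05, min_r2=0.8):
    """Sketch of the detection idea: fit an exponential growth model
    c(t) = a * exp(b * t) to the most recent `window` days of
    influenza-diagnosis counts via log-linear least squares, and alert
    when the fitted growth rate b exceeds `min_growth` with a good fit.
    """
    y = np.asarray(daily_cases[-window:], dtype=float) + 1.0  # avoid log(0)
    t = np.arange(window)
    b, log_a = np.polyfit(t, np.log(y), 1)
    pred = b * t + log_a
    ss_res = np.sum((np.log(y) - pred) ** 2)
    ss_tot = np.sum((np.log(y) - np.log(y).mean()) ** 2)
    r2 = 1.0 - ss_res / ss_tot if ss_tot > 0 else 0.0
    return bool(b > min_growth and r2 > min_r2)
```

A flat series of counts yields no alert, while a series growing exponentially over the window triggers one; in a production module the thresholds would be calibrated against historical seasons.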
Single camera photogrammetry system for EEG electrode identification and localization.
Baysal, Uğur; Sengül, Gökhan
2010-04-01
In this study, photogrammetric coordinate measurement and color-based identification of EEG electrode positions on the human head are implemented simultaneously. A rotating 2 MP digital camera positioned about 20 cm above the subject's head is used, and images are acquired at predefined stop points separated azimuthally by equal angular displacements. In order to achieve full automation, the electrodes are labeled with colored circular markers, and an electrode recognition algorithm has been developed. The proposed method has been tested using a plastic head phantom carrying 25 electrode markers. Electrode locations have been determined using three different methods: (i) the proposed photogrammetric method, (ii) a conventional 3D radiofrequency (RF) digitizer, and (iii) a coordinate measurement machine with an accuracy of about 6.5 μm. It is found that the proposed system automatically identifies electrodes and localizes them with a maximum error of 0.77 mm. It is suggested that this method may be used in EEG source localization applications in the human brain.
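The abstract does not spell out its reconstruction algorithm; as background, recovering a marker's 3D coordinates from several calibrated views is classically done by linear (DLT) triangulation, sketched here under the assumption that 3x4 camera projection matrices are known from calibration:

```python
import numpy as np

def triangulate(projections, pixels):
    """Linear (DLT) triangulation of one 3D point from N camera views.

    `projections`: list of 3x4 camera projection matrices P_i;
    `pixels`: list of (u, v) image coordinates of the same marker.
    Stacks the constraints u*P[2] - P[0] = 0 and v*P[2] - P[1] = 0
    and solves A X = 0 for the homogeneous point X by SVD. This is a
    generic multi-view sketch, not the paper's exact pipeline.
    """
    rows = []
    for P, (u, v) in zip(projections, pixels):
        rows.append(u * P[2] - P[0])
        rows.append(v * P[2] - P[1])
    _, _, vt = np.linalg.svd(np.asarray(rows))
    X = vt[-1]                # null-space vector = homogeneous solution
    return X[:3] / X[3]       # dehomogenize
```

With noise-free projections from two views the original point is recovered exactly; with real images, more views and a nonlinear refinement step reduce the error.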
NASA Astrophysics Data System (ADS)
Pavlov, Volodymyr S.; Bezsmernyi, Yurii O.; Zlepko, Sergey M.; Bezsmertna, Halyna V.
2017-08-01
This paper analyzes principles of the interaction with and analysis of optical radiation reflected from biotissue during assessment of the regional hemodynamic state in patients with local hypertensive-ischemic pain syndrome of lower-extremity amputation stumps, applying the method of photoplethysmography. The purpose is to evaluate the diagnostic value of laser photoplethysmography (LPPG) in the examination of patients with chronic ischemia of the lower extremities. A photonic device was developed to determine the level of peripheral blood circulation; it measures the basic parameters of peripheral blood circulation and the saturation level. The device consists of two sensors: an infrared sensor, containing an infrared laser radiation source and a photodetector, and a red sensor, containing a red radiation source and a photodetector. The LPPG method makes it possible to determine the pulsatility of blood flow in different areas of the foot and lower leg, the degree of compensation, and the prospects for limb preservation. Surgical treatment of local hypertensive-ischemic pain syndrome of lower-extremity amputation stumps by means of semiclosed fasciotomy combined with revascularizing osteotrepanation considerably improved regional hemodynamics in the tissues of the stump and decreased pain and hypostatic disorders.
Combination of surface and borehole seismic data for robust target-oriented imaging
NASA Astrophysics Data System (ADS)
Liu, Yi; van der Neut, Joost; Arntsen, Børge; Wapenaar, Kees
2016-05-01
A novel application of seismic interferometry (SI) and Marchenko imaging using both surface and borehole data is presented. A series of redatuming schemes is proposed to combine both data sets for robust deep local imaging in the presence of velocity uncertainties. The redatuming schemes create a virtual acquisition geometry where both sources and receivers lie at the horizontal borehole level, thus only a local velocity model near the borehole is needed for imaging, and erroneous velocities in the shallow area have no effect on imaging around the borehole level. By joining the advantages of SI and Marchenko imaging, a macrovelocity model is no longer required and the proposed schemes use only single-component data. Furthermore, the schemes result in a set of virtual data that have fewer spurious events and internal multiples than previous virtual source redatuming methods. Two numerical examples are shown to illustrate the workflow and to demonstrate the benefits of the method. One is a synthetic model and the other is a realistic model of a field in the North Sea. In both tests, improved local images near the boreholes are obtained using the redatumed data without accurate velocities, because the redatumed data are close to the target.
NASA Astrophysics Data System (ADS)
Xie, J.; Ni, S.; Chu, R.; Xia, Y.
2017-12-01
An accurate seismometer clock plays an important role in seismological studies, including earthquake location and tomography. However, some seismic stations may have clock drifts larger than 1 second, especially in the early days of the global seismic network. The 26 s Persistent Localized (PL) microseism source in the Gulf of Guinea sometimes excites strong and coherent signals, and can be used as a repeating source for assessing the stability of seismometer clocks. Taking station GSC/TS in southern California, USA as an example, the 26 s PL signal can be easily observed in the ambient Noise Cross-correlation Function (NCF) between GSC/TS and a remote station. The variation of the travel time of this 26 s signal in the NCF is used to infer clock error. A drastic clock error is detected during June 1992. This short-term clock error is confirmed by both teleseismic and local earthquake records, with a magnitude of ±25 s. Using the 26 s PL source, the clock can be validated for historical records of sparsely distributed stations, where the usual NCF of short-period microseisms (<20 s) might be less effective because of attenuation over long interstation distances. However, this method suffers from a cycling (cycle-skipping) ambiguity and should be verified against teleseismic/local P waves. A location change of the 26 s PL source may also influence the measured clock drift; using regional stations with stable clocks, we estimate the possible location change of the source.
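The core measurement here is the lag that best aligns the 26 s signal in a station's NCF with a reference NCF. A minimal sketch, assuming the band-pass filtering and windowing around the 26 s arrival have already been done:

```python
import numpy as np

def clock_drift(ncf, reference, dt):
    """Estimate the apparent clock error (in seconds) as the lag that
    best aligns an NCF segment with a reference NCF, via full
    cross-correlation. A positive result means `ncf` is delayed
    relative to the reference. `dt` is the sample interval in seconds.
    """
    cc = np.correlate(ncf - ncf.mean(), reference - reference.mean(), mode="full")
    lag = np.argmax(cc) - (len(reference) - 1)  # shift of the peak from zero lag
    return lag * dt
```

Because the 26 s signal is nearly monochromatic, the correlation peak can jump by whole cycles; this is exactly the cycling ambiguity noted above, which is why the estimate must be cross-checked against teleseismic or local P-wave arrival times.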
A Mobile Sensing Approach for Regional Surveillance of ...
This paper discusses path-planning and automation concepts for inspections that localize individual leaks and quantify their source rates over regions with active oil and gas well pads and pipelines. This is a peer-reviewed journal article submission that advances GMAP OTM 33 mobile measurement and fixed-place fence line NGAM topics. This paper is focused on methods development and does not contain new or unpublished source emissions results regarding oil and gas.
Huang, Ming-Xiong; Huang, Charles W; Robb, Ashley; Angeles, AnneMarie; Nichols, Sharon L; Baker, Dewleen G; Song, Tao; Harrington, Deborah L; Theilmann, Rebecca J; Srinivasan, Ramesh; Heister, David; Diwakar, Mithun; Canive, Jose M; Edgar, J Christopher; Chen, Yu-Han; Ji, Zhengwei; Shen, Max; El-Gabalawy, Fady; Levy, Michael; McLay, Robert; Webb-Murphy, Jennifer; Liu, Thomas T; Drake, Angela; Lee, Roland R
2014-01-01
The present study developed a fast MEG source imaging technique based on Fast Vector-based Spatio-Temporal Analysis using a L1-minimum-norm (Fast-VESTAL) and then used the method to obtain the source amplitude images of resting-state magnetoencephalography (MEG) signals for different frequency bands. The Fast-VESTAL technique consists of two steps. First, L1-minimum-norm MEG source images were obtained for the dominant spatial modes of the sensor-waveform covariance matrix. Next, accurate source time-courses with millisecond temporal resolution were obtained using an inverse operator constructed from the spatial source images of Step 1. Using simulations, Fast-VESTAL's performance was assessed for its 1) ability to localize multiple correlated sources; 2) ability to faithfully recover source time-courses; 3) robustness to different SNR conditions including SNR with negative dB levels; 4) capability to handle correlated brain noise; and 5) statistical maps of MEG source images. An objective pre-whitening method was also developed and integrated with Fast-VESTAL to remove correlated brain noise. Fast-VESTAL's performance was then examined in the analysis of human median-nerve MEG responses. The results demonstrated that this method easily distinguished sources in the entire somatosensory network. Next, Fast-VESTAL was applied to obtain the first whole-head MEG source-amplitude images from resting-state signals in 41 healthy control subjects, for all standard frequency bands. Comparisons between resting-state MEG source images and known neurophysiology were provided. Additionally, in simulations and cases with MEG human responses, the results obtained using the conventional beamformer technique were compared with those from Fast-VESTAL, which highlighted the beamformer's problems of signal leaking and distorted source time-courses. © 2013.
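The L1-minimum-norm step that Fast-VESTAL builds on can be illustrated with a generic iterative soft-thresholding (ISTA) solver. This is a textbook sketch of a sparsity-promoting inverse with an arbitrary leadfield, not the published Fast-VESTAL algorithm:

```python
import numpy as np

def l1_min_norm(leadfield, data, lam=0.1, n_iter=500):
    """Generic ISTA sketch of an L1-minimum-norm source estimate:
    minimize 0.5 * ||data - leadfield @ s||_2^2 + lam * ||s||_1.
    The L1 penalty drives most source amplitudes to exactly zero,
    yielding a sparse (focal) source image.
    """
    L = leadfield
    step = 1.0 / np.linalg.norm(L, 2) ** 2  # 1 / Lipschitz constant of the gradient
    s = np.zeros(L.shape[1])
    for _ in range(n_iter):
        grad = L.T @ (L @ s - data)          # gradient of the quadratic term
        z = s - step * grad
        s = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft-threshold
    return s
```

With far fewer sensors than candidate sources, the L1 penalty still recovers a sparse ground truth that an unregularized least-squares solution would smear across the whole grid.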
"Closing the Loop": Overcoming barriers to locally sourcing food in Fort Collins, Colorado
NASA Astrophysics Data System (ADS)
DeMets, C. M.
2012-12-01
Environmental sustainability has become a focal point for many communities in recent years, and restaurants are seeking creative ways to become more sustainable. As many chefs realize, sourcing food locally is an important step towards sustainability and towards building a healthy, resilient community. Review of literature on sustainability in restaurants and the local food movement revealed that chefs face many barriers to sourcing their food locally, but that there are also many solutions for overcoming these barriers that chefs are in the early stages of exploring. Therefore, the purpose of this research is to identify barriers to local sourcing and investigate how some restaurants are working to overcome those barriers in the city of Fort Collins, Colorado. To do this, interviews were conducted with four subjects who guide purchasing decisions for restaurants in Fort Collins. Two of these restaurants have created successful solutions and are able to source most of their food locally. The other two are interested in and working towards sourcing locally but have not yet been able to overcome barriers, and therefore only source a few local items. Findings show that there are four barriers and nine solutions commonly identified by each of the subjects. The research found differences between those who source most of their food locally and those who have not made as much progress in local sourcing. Based on these results, two solution flowcharts were created, one for primary barriers and one for secondary barriers, for restaurants to assess where they are in the local food chain and how they can more successfully source food locally. As there are few explicit connections between this research question and climate change, it is important to consider the implicit connections that motivate and justify this research. 
The question of whether or not greenhouse gas emissions are lower for locally sourced food is a topic of much debate, and while there are major developments for quantitatively determining a generalized answer, it is "currently impossible to state categorically whether or not local food systems emit fewer greenhouse gases than non-local food systems" (Edwards-Jones et al., 2008). Even so, numerous researchers have shown that "83 percent of emissions occur before food even leaves the farm gate" (Weber and Matthews, Garnett, cited in DeWeerdt, 2011); while this doesn't provide any information in terms of local vs. non-local, it is significant when viewed in light of the fact that local farmers tend to have much greater transparency and accountability in their agricultural practices. In other words, "a farmer who sells in the local food economy might be more likely to adopt or continue sustainable practices in order to meet…customer demand" (DeWeerdt, 2011), among other reasons such as environmental concern and desire to support the local economy (DeWeerdt, 2009). In identifying solutions to barriers to locally sourcing food, this research will enable restaurants to overcome these barriers and source their food locally, thereby supporting farmers and their ability to maintain sustainable practices.
Source Localization Using Wireless Sensor Networks
2006-06-01
performance of the hybrid SI/ML estimation method. A wireless sensor network is simulated in NS-2 to study the network throughput, delay and jitter...indicate that the wireless sensor network has low delay and can support fast information exchange needed in counter-sniper applications.
NASA Astrophysics Data System (ADS)
McKay, L. D.; Layton, A.; Gentry, R.
2004-12-01
A multi-disciplinary group of researchers at the University of Tennessee is developing and testing a series of microbial assay methods based on real-time PCR to detect fecal bacterial concentrations and host sources in water samples. Real-time PCR is an enumeration technique based on unique and conserved nucleic acid sequences present in all organisms. The first research task was development of an assay (AllBac) to detect the total amount of Bacteroides, which represents up to 30 percent of fecal mass. Subsequent assays were developed to detect Bacteroides from cattle (BoBac) and humans (HuBac) using 16S rRNA genes based on DNA sequences in the national GenBank, as well as sequences from local fecal samples. The assays potentially have significant advantages over conventional bacterial source tracking methods because: 1. unlike traditional enumeration methods, they do not require bacterial cultivation; 2. there are no known non-fecal sources of Bacteroides; 3. the assays are quantitative, with results for the total concentration and for each species expressed in mg/l; and 4. they show little regional variation within host species, meaning that they do not require development of extensive local gene libraries. The AllBac and BoBac assays have been used in a study of fecal contamination in a small rural watershed (Stock Creek) near Knoxville, TN, and have proven useful in identifying areas where cattle represent a significant fecal input and in developing BMPs. It is expected that these types of assays (and future assays for birds, hogs, etc.) could have broad applications in monitoring fecal impacts from Animal Feeding Operations, as well as from wildlife and human sources.
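Quantitative real-time PCR assays such as these are typically read out through a log-linear standard curve relating cycle threshold (Ct) to template concentration. A generic sketch follows; the coefficients are illustrative, not values from the AllBac/BoBac/HuBac assays:

```python
import numpy as np

def fit_standard_curve(log10_conc, ct_values):
    """Fit Ct = slope * log10(conc) + intercept by linear regression.
    An ideal assay has slope ~ -3.32, corresponding to ~100%
    amplification efficiency (template doubles each cycle).
    """
    slope, intercept = np.polyfit(log10_conc, ct_values, 1)
    efficiency = 10 ** (-1.0 / slope) - 1.0  # 1.0 means perfect doubling
    return slope, intercept, efficiency

def quantify(ct, slope, intercept):
    """Invert the standard curve to estimate concentration from a Ct."""
    return 10 ** ((ct - intercept) / slope)
```

In practice the curve is fit to a dilution series of standards run alongside the samples, and the slope is checked against the ideal -3.32 as a quality control on the assay.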
Passive acoustic source localization using sources of opportunity.
Verlinden, Christopher M A; Sarkar, J; Hodgkiss, W S; Kuperman, W A; Sabra, K G
2015-07-01
The feasibility of using data-derived replicas from ships of opportunity for implementing matched field processing is demonstrated. The Automatic Identification System (AIS) is used to provide the coordinates for the replica library, and a correlation-based processing procedure is used to overcome the impediment that the replica library is constructed from sources with different spectra and will further be used to locate another source with its own unique spectral structure. The method is illustrated with simulation and then verified using acoustic data from a 2009 experiment for which AIS information was retrieved from the United States Coast Guard Navigation Center Nationwide AIS database.
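The correlation-based comparison at the heart of matched field processing can be sketched as a normalized inner product between a measured array snapshot and each data-derived replica; the names and shapes below are illustrative, and the published procedure handles the differing source spectra more carefully than this sketch does:

```python
import numpy as np

def localize(measured, replica_library):
    """Correlation-based matched-field sketch: score each candidate
    location by the magnitude-squared normalized inner product between
    the measured complex array snapshot (shape (n_sensors,)) and a
    data-derived replica vector, and return the best-scoring location.
    """
    m = measured / np.linalg.norm(measured)
    scores = {
        loc: abs(np.vdot(r / np.linalg.norm(r), m)) ** 2
        for loc, r in replica_library.items()
    }
    return max(scores, key=scores.get)
```

Normalizing both vectors makes the score insensitive to an overall complex amplitude factor, which is one reason replicas recorded from a ship with one spectrum can still help locate a different source.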
NASA Astrophysics Data System (ADS)
Saikia, C. K.; Roman-nieves, J. I.; Woods, M. T.
2013-12-01
Source parameters of nuclear and chemical explosions are often estimated by matching either the corner frequency and spectral level of a single event or the spectral ratio when spectra from two events are available and the source parameters of one are known. In this study, we propose an alternative method in which waveforms from two or more events can be simultaneously equalized by setting the differential of the processed seismograms at one station from any two individual events to zero. The method involves convolving the equivalent Mueller-Murphy displacement source time function (MMDSTF) of one event with the seismogram of the second event and vice versa, and then computing their difference seismogram. The MMDSTF is computed at the elastic radius, including both near- and far-field terms. For this method to yield accurate source parameters, an inherent assumption is that the Green's functions for any paired events from the source to a receiver are the same. In the frequency limit of the seismic data, this is a reasonable assumption, a conclusion based on the comparison of Green's functions computed for flat-earth models at various source depths ranging from 100 m to 1 km. Frequency-domain analysis of the initial P wave is, however, sensitive to the depth-phase interaction, and if tracked meticulously it can help estimate the event depth. We applied this method to the local waveforms recorded from the three SPE shots and precisely determined their yields. These high-frequency seismograms exhibit significant lateral path effects in spectrogram analysis and 3D numerical computations, but the source equalization technique is independent of any such variation as long as the instrument characteristics are well preserved. We are currently estimating the uncertainty in the derived source parameters, treating the yields of the SPE shots as unknown. We also collected regional waveforms from 95 NTS explosions at regional stations ALQ, ANMO, CMB, COR, JAS, LON, PAS, PFO and RSSD.
We are currently employing a station-based analysis using the equalization technique to estimate the depths and yields of many events relative to those of the announced explosions, and to develop their relationship with Mw and Mo for the NTS explosions.
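The equalization condition described above follows from the commutativity of convolution: if both events share the same Green's function g, then seis_i = stf_i * g, and swapping source time functions between the two seismograms must give identical traces. A synthetic sketch with illustrative array sizes:

```python
import numpy as np

def equalization_residual(seis1, seis2, stf1, stf2):
    """Source-equalization sketch: if seis_i = stf_i * g for a shared
    Green's function g, then stf2 * seis1 - stf1 * seis2 = 0 because
    convolution commutes. The residual norm measures how well a trial
    pair of source time functions explains two co-located events.
    """
    a = np.convolve(stf2, seis1)
    b = np.convolve(stf1, seis2)
    n = max(len(a), len(b))
    a = np.pad(a, (0, n - len(a)))  # zero-pad to a common length
    b = np.pad(b, (0, n - len(b)))
    return np.linalg.norm(a - b)
```

Minimizing this residual over trial source parameters is the spirit of the method; the residual vanishes (to floating-point precision) only when the trial source time functions are consistent with a shared propagation path.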
Real-time Neuroimaging and Cognitive Monitoring Using Wearable Dry EEG
Mullen, Tim R.; Kothe, Christian A.E.; Chi, Mike; Ojeda, Alejandro; Kerth, Trevor; Makeig, Scott; Jung, Tzyy-Ping; Cauwenberghs, Gert
2015-01-01
Goal We present and evaluate a wearable high-density dry electrode EEG system and an open-source software framework for online neuroimaging and state classification. Methods The system integrates a 64-channel dry EEG form-factor with wireless data streaming for online analysis. A real-time software framework is applied, including adaptive artifact rejection, cortical source localization, multivariate effective connectivity inference, data visualization, and cognitive state classification from connectivity features using a constrained logistic regression approach (ProxConn). We evaluate the system identification methods on simulated 64-channel EEG data. Then we evaluate system performance, using ProxConn and a benchmark ERP method, in classifying response errors in 9 subjects using the dry EEG system. Results Simulations yielded high accuracy (AUC=0.97±0.021) for real-time cortical connectivity estimation. Response error classification using cortical effective connectivity (sdDTF) was significantly above chance with similar performance (AUC) for cLORETA (0.74±0.09) and LCMV (0.72±0.08) source localization. Cortical ERP-based classification was equivalent to ProxConn for cLORETA (0.74±0.16) but significantly better for LCMV (0.82±0.12). Conclusion We demonstrated the feasibility for real-time cortical connectivity analysis and cognitive state classification from high-density wearable dry EEG. Significance This paper is the first validated application of these methods to 64-channel dry EEG. The work addresses a need for robust real-time measurement and interpretation of complex brain activity in the dynamic environment of the wearable setting. Such advances can have broad impact in research, medicine, and brain-computer interfaces. The pipelines are made freely available in the open-source SIFT and BCILAB toolboxes. PMID:26415149
Spreco, Armin; Eriksson, Olle; Dahlström, Örjan; Cowling, Benjamin John; Timpka, Toomas
2017-06-15
Influenza is a viral respiratory disease capable of causing epidemics that represent a threat to communities worldwide. The rapidly growing availability of electronic "big data" from diagnostic and prediagnostic sources in health care and public health settings permits advance of a new generation of methods for local detection and prediction of winter influenza seasons and influenza pandemics. The aim of this study was to present a method for integrated detection and prediction of influenza virus activity in local settings using electronically available surveillance data and to evaluate its performance by retrospective application on authentic data from a Swedish county. An integrated detection and prediction method was formally defined based on a design rationale for influenza detection and prediction methods adapted for local surveillance. The novel method was retrospectively applied on data from the winter influenza season 2008-09 in a Swedish county (population 445,000). Outcome data represented individuals who met a clinical case definition for influenza (based on International Classification of Diseases version 10 [ICD-10] codes) from an electronic health data repository. Information from calls to a telenursing service in the county was used as syndromic data source. The novel integrated detection and prediction method is based on nonmechanistic statistical models and is designed for integration in local health information systems. The method is divided into separate modules for detection and prediction of local influenza virus activity. The function of the detection module is to alert for an upcoming period of increased load of influenza cases on local health care (using influenza-diagnosis data), whereas the function of the prediction module is to predict the timing of the activity peak (using syndromic data) and its intensity (using influenza-diagnosis data). 
For detection modeling, exponential regression was used, based on the assumption that the beginning of a winter influenza season shows exponential growth in the number of infected individuals. For prediction modeling, linear regression was applied to 7-day periods at a time in order to find the peak timing, whereas a derivative of a normal distribution density function was used to find the peak intensity. We found that the integrated detection and prediction method detected the 2008-09 winter influenza season on its starting day (optimal timeliness 0 days), whereas the predicted peak was estimated to occur 7 days ahead of the factual peak and the predicted peak intensity was estimated to be 26% lower than the factual intensity (6.3 compared with 8.5 influenza-diagnosis cases/100,000). Our detection and prediction method is one of the first integrated methods specifically designed for local application on influenza data electronically available for surveillance. The performance of the method in a retrospective study indicates that further prospective evaluations of the method are justified. ©Armin Spreco, Olle Eriksson, Örjan Dahlström, Benjamin John Cowling, Toomas Timpka. Originally published in the Journal of Medical Internet Research (http://www.jmir.org), 15.06.2017.
45 CFR 2551.92 - What are project funding requirements?
Code of Federal Regulations, 2010 CFR
2010-10-01
... local funding sources during the first three years of operations; or (2) An economic downturn, the... sources of local funding support; or (3) The unexpected discontinuation of local support from one or more... local funding sources during the first three years of operations; (ii) An economic downturn, the...
45 CFR 2552.92 - What are project funding requirements?
Code of Federal Regulations, 2010 CFR
2010-10-01
... local funding sources during the first three years of operations; or (2) An economic downturn, the... sources of local funding support; or (3) The unexpected discontinuation of local support from one or more... the development of local funding sources during the first three years of operations; or (ii) An...
NASA Astrophysics Data System (ADS)
Migliorelli, Carolina; Alonso, Joan F.; Romero, Sergio; Mañanas, Miguel A.; Nowak, Rafał; Russi, Antonio
2016-04-01
Objective. Medically intractable epilepsy is a common condition affecting 40% of epileptic patients, who generally have to undergo resective surgery. Magnetoencephalography (MEG) has been increasingly used to identify the epileptogenic foci through equivalent current dipole (ECD) modeling, one of the most accepted methods for obtaining an accurate localization of interictal epileptiform discharges (IEDs). Modeling requires that MEG signals are adequately preprocessed to reduce interferences, a task that has been greatly improved by the use of blind source separation (BSS) methods. MEG recordings are highly sensitive to metallic interferences originating inside the head from implanted intracranial electrodes, dental prostheses, etc, and also coming from external sources such as pacemakers or vagal stimulators. To reduce these artifacts, a BSS-based fully automatic procedure was recently developed and validated, showing an effective reduction of metallic artifacts in simulated and real signals (Migliorelli et al 2015 J. Neural Eng. 12 046001). The main objective of this study was to evaluate its effects on the detection of IEDs and on ECD modeling in patients with focal epilepsy and metallic interference. Approach. A comparison between the resulting positions of ECDs was performed: without removing metallic interference; rejecting only channels with large metallic artifacts; and after BSS-based reduction. Measures of dispersion and distance of ECDs were defined to analyze the results. Main results. The relationship between the artifact-to-signal ratio and ECD fitting showed that higher levels of metallic interference produced highly scattered dipoles. Results revealed a significant reduction in dispersion using the BSS-based reduction procedure, yielding feasible locations of ECDs in contrast to the other two approaches. Significance.
The automatic BSS-based method can be applied to MEG datasets affected by metallic artifacts as a processing step to improve the localization of epileptic foci.
Leak localization and quantification with a small unmanned aerial system
NASA Astrophysics Data System (ADS)
Golston, L.; Zondlo, M. A.; Frish, M. B.; Aubut, N. F.; Yang, S.; Talbot, R. W.
2017-12-01
Methane emissions from oil and gas facilities are a recognized source of greenhouse gas emissions, requiring cost-effective and reliable monitoring systems to support leak detection and repair programs. We describe a set of methods for locating and quantifying natural gas leaks using a small unmanned aerial system (sUAS) equipped with a path-integrated methane sensor along with ground-based wind measurements. The algorithms are developed as part of a system for continuous well pad scale (100 m² area) monitoring, supported by a series of over 200 methane release trials covering multiple release locations and flow rates. Test measurements include data obtained on a rotating boom platform as well as flight tests on a sUAS. The system is found throughout the trials to reliably distinguish between cases with and without a methane release down to 6 scfh (0.032 g/s). Among several methods evaluated for horizontal localization, the location corresponding to the maximum integrated methane reading has performed best, with a median error of ±1 m if two or more flights are averaged, or ±1.2 m for individual flights. Additionally, a method of rotating the data around the estimated leak location is developed, with the leak magnitude calculated as the average crosswind integrated flux in the region near the source location. Validation of these methods will be presented, including blind test results. Sources of error, including GPS uncertainty, meteorological variables, and flight pattern coverage, will be discussed.
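The best-performing localization rule above, taking the position of the maximum path-integrated reading and averaging it over flights, can be sketched directly; the data layout is an assumption for illustration:

```python
import numpy as np

def locate_leak(positions, integrated_ch4, flights=None):
    """Locate a leak as the position of the maximum path-integrated
    methane reading. If per-flight indices are given, the per-flight
    maxima are averaged, which is the variant reported to reach ~1 m
    median error with two or more flights.

    `positions`: (n, 2) horizontal coordinates; `integrated_ch4`: (n,)
    path-integrated readings; `flights`: optional (n,) flight indices.
    """
    positions = np.asarray(positions, dtype=float)
    integrated_ch4 = np.asarray(integrated_ch4, dtype=float)
    if flights is None:
        return positions[np.argmax(integrated_ch4)]
    flights = np.asarray(flights)
    maxima = [
        positions[flights == f][np.argmax(integrated_ch4[flights == f])]
        for f in np.unique(flights)
    ]
    return np.mean(maxima, axis=0)
```

The quantification step (average crosswind-integrated flux near the estimated location) would build on this position estimate together with the ground-based wind data.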
Psychophysical Evaluation of Three-Dimensional Auditory Displays
NASA Technical Reports Server (NTRS)
Wightman, Frederic L.
1996-01-01
This report describes the progress made during the second year of a three-year Cooperative Research Agreement. The CRA proposed a program of applied psychophysical research designed to determine the requirements and limitations of three-dimensional (3-D) auditory display systems. These displays present synthesized stimuli to a pilot or virtual workstation operator that evoke auditory images at predetermined positions in space. The images can be either stationary or moving. In previous years, we completed a number of studies that provided data on listeners' abilities to localize stationary sound sources with 3-D displays. The current focus is on the use of 3-D displays in 'natural' listening conditions, which include listeners' head movements, moving sources, multiple sources and 'echoic' sources. The results of our research on one of these topics, the localization of multiple sources, were reported in the most recent Semi-Annual Progress Report (Appendix A). That same progress report described work on two related topics, the influence of a listener's a-priori knowledge of source characteristics and the discriminability of real and virtual sources. In the period since the last Progress Report we have conducted several new studies to evaluate the effectiveness of a new and simpler method for measuring the HRTFs that are used to synthesize virtual sources and have expanded our studies of multiple sources. The results of this research are described below.
Warm Homes, Healthy People Fund 2011/12: a mixed methods evaluation.
Madden, V; Carmichael, C; Petrokofsky, C; Murray, V
2014-03-01
To assess how the Warm Homes Healthy People Fund 2011/12 was used by English local authorities and their partners to tackle excess winter mortality. Mixed-methods evaluation. Three sources of data were used: an online survey to local authority leads, document analysis of local evaluation reports and telephone interviews of local leads. These were analysed to provide numerical estimates, key themes and case studies. There was universal approval of the fund, with all survey respondents requesting the fund to continue. An estimated 130,000 to 200,000 people in England (62% of them elderly) received a wide range of interventions, including structural interventions (such as loft insulation), provision of warm goods and income maximization. Raising awareness was another component, with all survey respondents launching a local media campaign. Strong local partnerships helped to facilitate the implementation of projects. The speed of delivery may have resulted in less strategic targeting of the most vulnerable residents. The Fund was popular and achieved much in winter 2011/2012, although its impact on cold-related morbidity and mortality is unknown. Crown Copyright © 2013. Published by Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Hautot, Felix; Dubart, Philippe; Bacri, Charles-Olivier; Chagneau, Benjamin; Abou-Khalil, Roger
2017-09-01
New developments in the fields of robotics and computer vision make it possible to merge sensors so that radiological measurements can be localized in space in real time, with near-real-time identification and characterization of radioactive sources. These capabilities make nuclear investigations more efficient for operators' dosimetry evaluation, intervention scenarios, and risk mitigation and simulation, for example in accidents in unknown, potentially contaminated areas or during dismantling operations.
Akdeniz, Gülsüm
2016-01-01
Background: Few studies have compared electrical source localization (ESL) results obtained by analyzing ictal patterns in scalp electroencephalography (EEG) with the brain areas found to be responsible for seizures using other brain imaging techniques. Additionally, adequate studies have not been performed to confirm the accuracy of ESL methods. Materials and Methods: In this study, ESL was conducted using LORETA (Low Resolution Brain Electromagnetic Tomography) in 9 patients with lesions apparent on magnetic resonance imaging (MRI) and in 6 patients who did not exhibit lesions on their MRIs. EEGs of patients who underwent surgery for epilepsy and had follow-ups for at least 1 year after their operations were analyzed for ictal spike, rhythmic, paroxysmal fast, and obscured EEG activities. Epileptogenic zones identified in postoperative MRIs were then compared with the localizations obtained by the LORETA model we employed. Results: We found that the brain areas determined via ESL were in concordance with the resected brain areas for 13 of the 15 patients evaluated, and those 13 patients were post-operatively determined to be seizure-free. Conclusion: ESL, which is a noninvasive technique, may contribute to the correct delineation of epileptogenic zones in patients who will eventually undergo surgery to treat epilepsy, regardless of neuroimaging status. Moreover, ESL may aid in deciding on the number and localization of intracranial electrodes to be used in patients who are candidates for invasive recording. PMID:27011626
Andersson, Camilla M; Bjärås, Gunilla E M; Tillgren, Per; Ostenson, Claes-Göran
2003-09-01
This article presents an instrument to study the annual reporting of health promotion activities in local governments within the three intervention municipalities of the Stockholm Diabetes Prevention Program (SDPP). The content of the health promotion activities is described, and the strengths, weaknesses and relevance of the method for health promotion are discussed. A content analysis of local governmental reports from 1995-2000 in three Swedish municipalities was performed. A matrix with WHO's 38 'Health for All' (HFA) targets from 1991 was used when coding the local health promotion activities. There are many public health initiatives within the local governmental structure, even if they are not always labelled as health promotion. The main focuses in the local governmental reports were environmental issues, unemployment, social care and welfare. Local governmental reports were found to be a useful source of information that could provide knowledge about the priorities and organizational capacities for health promotion within local authorities. Additionally, the HFA targets were an effective tool to identify and systematically categorize local health promotion activities in the annual reports of local governments. Identifying local health promotion initiatives by local authorities may ease the development of a health perspective and joint actions within the existing political and administrative structure. This paper provides a complementary method of attaining and structuring information about the local community for developments in health promotion.
Imaging of heart acoustic based on the sub-space methods using a microphone array.
Moghaddasi, Hanie; Almasganj, Farshad; Zoroufian, Arezoo
2017-07-01
Heart disease is one of the leading causes of death around the world. The phonocardiogram (PCG) is an important bio-signal which represents the acoustic activity of the heart, typically without any spatiotemporal information about the involved acoustic sources. The aim of this study is to analyze the PCG by employing a microphone array, by which the heart's internal sound sources can also be localized. In this paper, we propose a modality by which the locations of the active sources in the heart can be investigated during a cardiac cycle. To this end, a microphone array with six microphones is employed as the recording set-up, placed on the human chest. The Group Delay MUSIC algorithm, a sub-space based localization method, is then used to estimate the location of the heart sources in different phases of the PCG. We achieved a 0.14 cm mean error for the sources of the first heart sound (S1) simulator and a 0.21 cm mean error for the sources of the second heart sound (S2) simulator with the Group Delay MUSIC algorithm. The acoustical diagrams created for human subjects show distinct patterns in various phases of the cardiac cycle, such as the first and second heart sounds. Moreover, the evaluated source locations for the heart valves match those obtained via 4-dimensional (4D) echocardiography applied to a real human case. Imaging of the heart acoustic map offers a new way to characterize the acoustic properties of the cardiovascular system and disorders of its valves and could thereby, in the future, be used as a new diagnostic tool. Copyright © 2017. Published by Elsevier B.V.
Sound source localization identification accuracy: Envelope dependencies.
Yost, William A
2017-07-01
Sound source localization accuracy as measured in an identification procedure in a front azimuth sound field was studied for click trains, modulated noises, and a modulated tonal carrier. Sound source localization accuracy was determined as a function of the number of clicks in a 64 Hz click train and click rate for a 500 ms duration click train. The clicks were either broadband or high-pass filtered. Sound source localization accuracy was also measured for a single broadband filtered click and compared to a similar broadband filtered, short-duration noise. Sound source localization accuracy was determined as a function of sinusoidal amplitude modulation and the "transposed" process of modulation of filtered noises and a 4 kHz tone. Different rates (16 to 512 Hz) of modulation (including unmodulated conditions) were used. Providing modulation for filtered click stimuli, filtered noises, and the 4 kHz tone had, at most, a very small effect on sound source localization accuracy. These data suggest that amplitude modulation, while providing information about interaural time differences in headphone studies, does not have much influence on sound source localization accuracy in a sound field.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mosher, J.C.; Leahy, R.M.
A new method for source localization is described that is based on a modification of the well-known multiple signal classification (MUSIC) algorithm. In classical MUSIC, the array manifold vector is projected onto an estimate of the signal subspace, but errors in the estimate can make location of multiple sources difficult. Recursively applied and projected (RAP) MUSIC uses each successively located source to form an intermediate array gain matrix, and projects both the array manifold and the signal subspace estimate into its orthogonal complement. The MUSIC projection is then performed in this reduced subspace. Using the metric of principal angles, the authors describe a general form of the RAP-MUSIC algorithm for the case of diversely polarized sources. Through a uniform linear array simulation, the authors demonstrate the improved Monte Carlo performance of RAP-MUSIC relative to MUSIC and two other sequential subspace methods, S- and IES-MUSIC.
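The recursion described in this abstract — locate a source, project both the manifold and the subspace estimate into the orthogonal complement of the found gain vectors, and scan again — can be sketched in a few lines of linear algebra. This is an illustrative reimplementation, not the authors' code; the grid representation, tolerance, and variable names are assumptions.

```python
import numpy as np

def rap_music(A, Us, n_sources):
    """Recursively Applied and Projected (RAP) MUSIC, as a sketch.

    A         : (m, g) array manifold; column k is the gain vector of
                candidate source location k on a search grid
    Us        : (m, r) orthonormal basis of the estimated signal subspace
    n_sources : number of sources to locate

    Returns the grid indices of the located sources.
    """
    m = A.shape[0]
    found = []
    for _ in range(n_sources):
        if found:
            G = A[:, found]                          # intermediate gain matrix
            P = np.eye(m) - G @ np.linalg.pinv(G)    # orthogonal-complement projector
        else:
            P = np.eye(m)
        Ap = P @ A
        U, s, _ = np.linalg.svd(P @ Us, full_matrices=False)
        Up = U[:, s > 1e-8 * s[0]]                   # projected, re-orthonormalized subspace
        # Subspace correlation: cosine of the smallest principal angle between
        # each projected manifold column and the projected signal subspace.
        corr = np.linalg.norm(Up.T @ Ap, axis=0) / (np.linalg.norm(Ap, axis=0) + 1e-12)
        corr[found] = -np.inf                        # exclude already-found locations
        found.append(int(np.argmax(corr)))
    return found
```

With a noise-free subspace spanned by two manifold columns, the recursion recovers both columns even though a single MUSIC scan peaks equally at each of them.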
Abdallah, Chifaou; Maillard, Louis G; Rikir, Estelle; Jonas, Jacques; Thiriaux, Anne; Gavaret, Martine; Bartolomei, Fabrice; Colnat-Coulbois, Sophie; Vignal, Jean-Pierre; Koessler, Laurent
2017-01-01
We aimed to prospectively assess the anatomical concordance of electric source localizations of interictal discharges with the epileptogenic zone (EZ) estimated by stereo-electroencephalography (SEEG) according to different subgroups: the type of epilepsy, the presence of a structural MRI lesion, the aetiology and the depth of the EZ. In a prospective multicentric observational study, we enrolled 85 consecutive patients undergoing pre-surgical SEEG investigation for focal drug-resistant epilepsy. Electric source imaging (ESI) was performed before SEEG. Source localizations were obtained from dipolar and distributed source methods. Anatomical concordance between ESI and EZ was defined according to 36 predefined sublobar regions. ESI was interpreted blinded to, and subsequently compared with, the SEEG-estimated EZ. 74 patients were finally analyzed. 38 patients had temporal and 36 extra-temporal lobe epilepsy. MRI was positive in 52 patients. 41 patients had malformation of cortical development (MCD), 33 had another or an unknown aetiology. The EZ was medial in 27, lateral in 13, and medio-lateral in 34. In the overall cohort, ESI completely or partly localized the EZ in 85% of cases: full concordance in 13 cases and partial concordance in 50 cases. The rate of ESI full concordance with the EZ was significantly higher in (i) frontal lobe epilepsy (46%; p = 0.05), (ii) cases of negative MRI (36%; p = 0.01) and (iii) MCD (27%; p = 0.03). The rate of ESI full concordance with the EZ did not differ statistically according to the depth of the EZ. We prospectively demonstrated that ESI more accurately estimated the EZ in the subgroups of patients who are often the most difficult cases in epilepsy surgery: frontal lobe epilepsy, negative MRI and the presence of MCD.
Donnelly, Aoife; Naughton, Owen; Misstear, Bruce; Broderick, Brian
2016-10-14
This article describes a new methodology for increasing the spatial representativeness of individual monitoring sites. Air pollution levels at a given point are influenced by emission sources in the immediate vicinity. Since emission sources are rarely uniformly distributed around a site, concentration levels will inevitably be most affected by the sources in the prevailing upwind direction. The methodology provides a means of capturing this effect and providing additional information regarding source/pollution relationships. The methodology allows for the division of the air quality data from a given monitoring site into a number of sectors or wedges based on wind direction and estimation of annual mean values for each sector, thus optimising the information that can be obtained from a single monitoring station. The method corrects short-term data for diurnal and seasonal variations in concentrations (which can produce uneven weighting of data within each sector) and for the uneven frequency of wind directions. Significant improvements in correlations between the air quality data and the spatial air quality indicators were obtained after application of the correction factors. This suggests the application of these techniques would be of significant benefit in land-use regression modelling studies. Furthermore, the method was found to be very useful for estimating long-term mean values and wind direction sector values using only short-term monitoring data. The methods presented in this article can result in cost savings through minimising the number of monitoring sites required for air quality studies while also capturing a greater degree of variability in spatial characteristics. In this way, more reliable but also more expensive monitoring techniques can be used in preference to a larger number of low-cost but less reliable techniques.
The methods described in this article have applications in local air quality management, source receptor analysis, land-use regression mapping and modelling and population exposure studies.
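The sector-division idea can be illustrated as below. The diurnal correction shown is one plausible reading of the scheme (rescaling so each hour of day contributes at the same expected level, to avoid "uneven weighting of data within each sector"); the authors' exact correction factors are not given in the abstract.

```python
import numpy as np

def sector_means(conc, wind_dir, hour, n_sectors=8):
    """Sector-wise mean concentrations from a single monitoring site (sketch).

    conc     : pollutant concentrations, one per observation
    wind_dir : wind direction in degrees (0-360) per observation
    hour     : hour of day (0-23) per observation

    Observations are first rescaled so that every hour of day contributes
    at the same expected level (a simple diurnal correction), then split
    into equal-width wind-direction sectors and averaged per sector.
    """
    conc = np.asarray(conc, float)
    hour = np.asarray(hour)
    overall = conc.mean()
    hourly = np.array([conc[hour == h].mean() for h in range(24)])
    corrected = conc * overall / hourly[hour]              # diurnal correction
    sector = ((np.asarray(wind_dir) % 360) / (360.0 / n_sectors)).astype(int)
    return np.array([corrected[sector == s].mean() for s in range(n_sectors)])
```

Averaging per sector rather than over all data lets a single station report different means toward different upwind source regions, which is what the correlation improvement in the abstract exploits.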
Hepburn, Emily; Northway, Anne; Bekele, Dawit; Liu, Gang-Jun; Currell, Matthew
2018-06-11
Determining sources of heavy metals in soils, sediments and groundwater is important for understanding their fate and transport and for mitigating human and environmental exposures. Artificially imported fill, natural sediments and groundwater from 240 ha of reclaimed land at Fishermans Bend in Australia were analysed for heavy metals and other parameters to determine the relative contributions from different possible sources. Fishermans Bend is Australia's largest urban re-development project; however, its complicated land-use history, geology and multiple contamination sources pose challenges to successful re-development. We developed a method for heavy metal source separation in groundwater using statistical categorisation of the data, analysis of soil leaching values and fill/sediment XRF profiling. The method identified two major sources of heavy metals in groundwater: 1. point sources from local or up-gradient groundwater contaminated by industrial activities and/or legacy landfills; and 2. contaminated fill, where leaching of Cu, Mn, Pb and Zn was observed. Across the precinct, metals were most commonly sourced from a combination of these sources; however, eight locations indicated at least one metal sourced solely from fill leaching, and 23 locations indicated at least one metal sourced solely from impacted groundwater. Concentrations of heavy metals in groundwater ranged over 0.0001-0.003 mg/L (Cd), 0.001-0.1 mg/L (Cr), 0.001-0.2 mg/L (Cu), 0.001-0.5 mg/L (Ni), 0.001-0.01 mg/L (Pb), and 0.005-1.2 mg/L (Zn). Our method can determine the likely contribution of different metal sources to groundwater, helping inform more detailed contamination assessments and precinct-wide management and remediation strategies. Copyright © 2018 Elsevier Ltd. All rights reserved.
Unstructured Adaptive Meshes: Bad for Your Memory?
NASA Technical Reports Server (NTRS)
Biswas, Rupak; Feng, Hui-Yu; VanderWijngaart, Rob
2003-01-01
This viewgraph presentation explores the need for a NASA Advanced Supercomputing (NAS) parallel benchmark for problems with irregular dynamical memory access. This benchmark is important and necessary because: 1) Problems with localized error source benefit from adaptive nonuniform meshes; 2) Certain machines perform poorly on such problems; 3) Parallel implementation may provide further performance improvement but is difficult. Some examples of problems which use irregular dynamical memory access include: 1) Heat transfer problem; 2) Heat source term; 3) Spectral element method; 4) Base functions; 5) Elemental discrete equations; 6) Global discrete equations. Nonconforming Mesh and Mortar Element Method are covered in greater detail in this presentation.
NASA Astrophysics Data System (ADS)
Demirci, İsmail; Dikmen, Ünal; Candansayar, M. Emin
2018-02-01
Joint inversion of data sets collected with several geophysical exploration methods has gained importance, and associated algorithms have been developed. To explore deep subsurface structures, magnetotelluric (MT) and local earthquake tomography algorithms are generally used individually. Because both methods rely on natural sources, data quality and the resolution of model parameters cannot be increased at will; for this reason, deep structures cannot be fully resolved by either method on its own. In this paper, we first examine the effects of both MT and local earthquake data sets on the solution of deep structures and discuss the results in terms of the resolving power of each method. The presence of deep-focus seismic sources increases the resolution of deep structures, while the conductivity distribution of relatively shallow structures can be resolved with high resolution by the MT algorithm. We therefore developed a new joint inversion algorithm based on the cross-gradient function to jointly invert MT and local earthquake data sets. We added a new regularization parameter to the second term of the parameter correction vector of Gallardo and Meju (2003); this parameter enhances the stability of the algorithm and controls the contribution of the cross-gradient term to the solution. The results show that even where resistivity and velocity boundaries differ, the two methods influence each other positively. In addition, regions with common structural boundaries are mapped more clearly than in the original models, and deep structures are identified satisfactorily even with the minimum number of seismic sources. As a basis for future studies, we discuss the joint inversion of MT and local earthquake data sets only in two-dimensional space. 
In light of these results, and with the acceleration of three-dimensional modelling and inversion algorithms, it should become easier to identify underground structures with high resolution.
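The cross-gradient function that couples the two models in such joint inversions has a compact discrete form. A sketch on a regular 2-D grid follows; the grid spacings and variable names are chosen for illustration.

```python
import numpy as np

def cross_gradient(m1, m2, dx=1.0, dz=1.0):
    """2-D cross-gradient function of Gallardo and Meju:
    t = (dm1/dx)(dm2/dz) - (dm1/dz)(dm2/dx).

    m1, m2 : model grids of shape (nz, nx), e.g. log-resistivity and
             seismic velocity sampled on the same mesh
    t vanishes wherever the two models' gradients are parallel, i.e.
    wherever their structural boundaries coincide; minimizing |t|
    therefore pulls the two inversions toward common structure.
    """
    g1z, g1x = np.gradient(m1, dz, dx)   # derivatives along z (rows) and x (cols)
    g2z, g2x = np.gradient(m2, dz, dx)
    return g1x * g2z - g1z * g2x
```

Two structurally identical models (one an affine function of the other) give t = 0 everywhere, while models whose gradients are perpendicular give the largest |t|, which is exactly the behavior the joint-inversion penalty relies on.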
3D source localization of interictal spikes in epilepsy patients with MRI lesions
NASA Astrophysics Data System (ADS)
Ding, Lei; Worrell, Gregory A.; Lagerlund, Terrence D.; He, Bin
2006-08-01
The present study aims to accurately localize epileptogenic regions which are responsible for epileptic activities in epilepsy patients by means of a new subspace source localization approach, i.e. first principle vectors (FINE), using scalp EEG recordings. Computer simulations were first performed to assess source localization accuracy of FINE in the clinical electrode set-up. The source localization results from FINE were compared with the results from a classic subspace source localization approach, i.e. MUSIC, and their differences were tested statistically using the paired t-test. Other factors influencing the source localization accuracy were assessed statistically by ANOVA. The interictal epileptiform spike data from three adult epilepsy patients with medically intractable partial epilepsy and well-defined symptomatic MRI lesions were then studied using both FINE and MUSIC. The comparison between the electrical sources estimated by the subspace source localization approaches and MRI lesions was made through the coregistration between the EEG recordings and MRI scans. The accuracy of estimations made by FINE and MUSIC was also evaluated and compared by R2 statistic, which was used to indicate the goodness-of-fit of the estimated sources to the scalp EEG recordings. The three-concentric-spheres head volume conductor model was built for each patient with three spheres of different radii which takes the individual head size and skull thickness into consideration. The results from computer simulations indicate that the improvement of source spatial resolvability and localization accuracy of FINE as compared with MUSIC is significant when simulated sources are closely spaced, deep, or signal-to-noise ratio is low in a clinical electrode set-up. The interictal electrical generators estimated by FINE and MUSIC are in concordance with the patients' structural abnormality, i.e. MRI lesions, in all three patients. 
The higher R2 values achieved by FINE than MUSIC indicate that FINE provides a more satisfactory fitting of the scalp potential measurements than MUSIC in all patients. The present results suggest that FINE provides a useful brain source imaging technique, from clinical EEG recordings, for identifying and localizing epileptogenic regions in epilepsy patients with focal partial seizures. The present study may lead to the establishment of a high-resolution source localization technique from scalp-recorded EEGs for aiding presurgical planning in epilepsy patients.
NASA Astrophysics Data System (ADS)
Jankovic, I.; Barnes, R. J.; Soule, R.
2001-12-01
The analytic element method is used to model local three-dimensional flow in the vicinity of partially penetrating wells. The flow domain is bounded by an impermeable horizontal base, a phreatic surface with recharge and a cylindrical lateral boundary. The analytic element solution for this problem contains (1) a fictitious source technique to satisfy the head and the discharge conditions along the phreatic surface, (2) a fictitious source technique to satisfy specified head conditions along the cylindrical boundary, (3) a method of imaging to satisfy the no-flow condition across the impermeable base, (4) the classical analytic solution for a well and (5) spheroidal harmonics to account for the influence of the inhomogeneities in hydraulic conductivity. Temporal variations of the flow system due to time-dependent recharge and pumping are represented by combining the analytic element method with a finite difference method: the analytic element method is used to represent spatial changes in head and discharge, while the finite difference method represents temporal variations. The solution provides a very detailed description of local groundwater flow with an arbitrary number of wells of any orientation and an arbitrary number of ellipsoidal inhomogeneities of any size and conductivity. These inhomogeneities may be used to model local hydrogeologic features (such as gravel packs and clay lenses) that significantly influence the flow in the vicinity of partially penetrating wells. Several options for specifying head values along the lateral domain boundary are available. These options allow for inclusion of the model into steady and transient regional groundwater models. The head values along the lateral domain boundary may be specified directly (as time series). The head values along the lateral boundary may also be assigned by specifying the water-table gradient and a head value at a single point (as time series). 
A case study is included to demonstrate the application of the model in local modeling of the groundwater flow. Transient three-dimensional capture zones are delineated for a site on Prairie Island, MN. Prairie Island is located on the Mississippi River 40 miles south of the Twin Cities metropolitan area. The case study focuses on a well that has been known to contain viral DNA. The objective of the study was to assess the potential for pathogen migration toward the well.
Local Explosion Monitoring using Rg
NASA Astrophysics Data System (ADS)
O'Rourke, C. T.; Baker, G. E.
2016-12-01
Rg is the high-frequency fundamental-mode Rayleigh wave, which is only excited by near-surface events. As such, an Rg detection indicates that a seismic source is shallow, generally less than a few km deep depending on the velocity structure, and so likely man-made. Conversely, the absence of Rg can indicate that the source is deeper and so likely naturally occurring. We have developed a new automated method of detecting Rg arrivals from various explosion sources at local distances, and a process for estimating the likelihood that a source is not shallow when no Rg is detected. Our Rg detection method scans the spectrogram of a seismic signal for a characteristic frequency peak. We test this on the Bighorn Arch Seismic Experiment data, which includes earthquakes, active-source explosions in boreholes, and mining explosions recorded on a dense network that spans the Bighorn Mountains and Powder River Basin. The Rg passbands used were 0.4-0.8 Hz for mining blasts and 0.8-1.2 Hz for borehole shots. We successfully detect Rg across the full network for most mining blasts. The lower-yield shots are detectable out to 50 km. We achieve a <1% false-positive rate for the small-magnitude earthquakes in the region. Rg detections on known non-shallow earthquake seismograms indicate that they are largely due to windowing leakage at very close distances or, occasionally, to cultural noise. We compare our results to existing methods that use cross-correlation to detect retrograde motion of the surface waves. Our method shows more complete detection across the network, especially in the Powder River Basin, where Rg exhibits prograde motion that does not trigger the existing detector. We also estimate the likelihood that Rg would have been detected from a surface source, based on the measured P amplitude. 
For example, an event with a large P wave and no detectable Rg would have a high probability of being a deeper event, whereas we cannot confidently determine whether an event with a small P wave and no Rg detection is shallow or not. These results allow us to detect Rg arrivals, which indicate a shallow source, and to use the absence of Rg to estimate the likelihood that a source in a calibrated region is not shallow enough to be man-made.
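A toy version of the spectrogram-based Rg screen might look like the following. The windowing, band limits, and detection threshold here are hypothetical; the abstract does not specify the authors' actual criterion.

```python
import numpy as np

def detect_rg(trace, fs, band=(0.4, 0.8), win_s=40.0, ratio_thresh=2.0):
    """Flag a probable Rg arrival when spectral power in the Rg passband
    stands out against the rest of the low-frequency spectrum.

    trace : seismogram samples
    fs    : sampling rate (Hz)
    band  : Rg passband in Hz (0.4-0.8 for the mining blasts in the text)

    Hypothetical detector illustrating the spectrogram-peak idea: slide
    overlapping windows, compare mean power inside the passband to mean
    power elsewhere in 0.05-5 Hz, and keep the best ratio.
    """
    n = int(win_s * fs)
    freqs = np.fft.rfftfreq(n, 1.0 / fs)
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    out_band = (freqs > 0.05) & (freqs <= 5.0) & ~in_band
    best = 0.0
    for start in range(0, len(trace) - n + 1, n // 2):      # 50% overlap
        seg = trace[start:start + n] * np.hanning(n)
        spec = np.abs(np.fft.rfft(seg)) ** 2
        best = max(best, spec[in_band].mean() / (spec[out_band].mean() + 1e-30))
    return best > ratio_thresh, best
```

On a synthetic record, a 0.6 Hz wave train trips the detector while energy at 3 Hz does not, mimicking the shallow/deep discrimination described above.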
Complex earthquake rupture and local tsunamis
Geist, E.L.
2002-01-01
In contrast to far-field tsunami amplitudes that are fairly well predicted by the seismic moment of subduction zone earthquakes, there exists significant variation in the scaling of local tsunami amplitude with respect to seismic moment. From a global catalog of tsunami runup observations this variability is greatest for the most frequently occurring tsunamigenic subduction zone earthquakes in the magnitude range of 7 < Mw < 8.5. Variability in local tsunami runup scaling can be ascribed to tsunami source parameters that are independent of seismic moment: variations in the water depth in the source region, the combination of higher slip and lower shear modulus at shallow depth, and rupture complexity in the form of heterogeneous slip distribution patterns. The focus of this study is on the effect that rupture complexity has on the local tsunami wave field. A wide range of slip distribution patterns are generated using a stochastic, self-affine source model that is consistent with the falloff of far-field seismic displacement spectra at high frequencies. The synthetic slip distributions generated by the stochastic source model are discretized and the vertical displacement fields from point source elastic dislocation expressions are superimposed to compute the coseismic vertical displacement field. For shallow subduction zone earthquakes it is demonstrated that self-affine irregularities of the slip distribution result in significant variations in local tsunami amplitude. The effects of rupture complexity are less pronounced for earthquakes at greater depth or along faults with steep dip angles. For a test region along the Pacific coast of central Mexico, peak nearshore tsunami amplitude is calculated for a large number (N = 100) of synthetic slip distribution patterns, all with identical seismic moment (Mw = 8.1). 
Analysis of the results indicates that for earthquakes of a fixed location, geometry, and seismic moment, peak nearshore tsunami amplitude can vary by a factor of 3 or more. These results indicate that there is substantially more variation in the local tsunami wave field derived from the inherent complexity of subduction zone earthquakes than predicted by a simple elastic dislocation model. Probabilistic methods that take into account variability in earthquake rupture processes are likely to yield more accurate assessments of tsunami hazards.
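The stochastic, self-affine slip model can be illustrated in one dimension by spectral synthesis: draw random phases under a power-law amplitude spectrum, then rescale so every realization carries the same moment. The corner wavenumber, decay exponent, and rescaling below are our assumptions for illustration, a 1-D analogue of the paper's 2-D source model.

```python
import numpy as np

def stochastic_slip(n, mean_slip, corner=3.0, decay=2.0, seed=0):
    """1-D self-affine slip distribution by spectral synthesis (sketch).

    The amplitude spectrum falls off as a power law beyond a corner
    wavenumber and the phases are random; each realization is rescaled
    to the same mean slip, i.e. the same seismic moment for a fixed
    fault area and shear modulus.
    """
    rng = np.random.default_rng(seed)
    k = np.arange(n // 2 + 1, dtype=float)
    amp = (1.0 + (k / corner) ** 2) ** (-decay / 2.0)   # power-law falloff
    amp[0] = 0.0                                        # zero-mean fluctuation
    phase = rng.uniform(0.0, 2.0 * np.pi, k.size)
    slip = np.fft.irfft(amp * np.exp(1j * phase), n)
    slip -= slip.min()                                  # keep slip non-negative
    return slip * mean_slip / slip.mean()               # fix the moment
```

Different seeds give different heterogeneous slip patterns with identical mean slip, which is precisely the ensemble (N = 100, fixed Mw) used to probe tsunami amplitude variability above.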
Stenroos, Matti; Hauk, Olaf
2013-01-01
The conductivity profile of the head has a major effect on EEG signals, but unfortunately the conductivity of the most important compartment, the skull, is only poorly known. In dipole modeling studies, errors in modeled skull conductivity have been considered to have a detrimental effect on EEG source estimation. However, as dipole models are very restrictive, those results cannot be generalized to other source estimation methods. In this work, we studied the sensitivity of EEG and combined MEG + EEG source estimation to errors in skull conductivity using a distributed source model and minimum-norm (MN) estimation. We used a MEG/EEG modeling set-up that reflected state-of-the-art practices of experimental research. Cortical surfaces were segmented and realistically shaped three-layer anatomical head models were constructed, and forward models were built with the Galerkin boundary element method while varying the skull conductivity. Lead-field topographies and MN spatial filter vectors were compared across conductivities, and the localization and spatial spread of the MN estimators were assessed using intuitive resolution metrics. The results showed that the MN estimator is robust against errors in skull conductivity: the conductivity had a moderate effect on the amplitudes of lead fields and spatial filter vectors, but the effect on the corresponding morphologies was small. The localization performance of the EEG or combined MEG + EEG MN estimator was only minimally affected by the conductivity error, while the spread of the estimate varied slightly. Thus, the uncertainty with respect to skull conductivity should not prevent researchers from applying minimum-norm estimation to EEG or combined MEG + EEG data. Comparing our results to those obtained earlier with dipole models shows that general judgment on the performance of an imaging modality should not be based on analysis with one source estimation method only. PMID:23639259
NASA Astrophysics Data System (ADS)
Qin, Y.; Oduyemi, K.
Anthropogenic aerosol (PM10) emissions sampled at an air quality monitoring station in Dundee have been analysed; information on local natural aerosol emission sources, however, was unavailable. A method that combines a receptor model and an atmospheric dispersion model was used to identify aerosol sources and estimate source contributions to air pollution. The receptor model identified five sources: an aged marine aerosol source with some chlorine replaced by sulphate, a secondary aerosol source of ammonium sulphate, a secondary aerosol source of ammonium nitrate, a soil and construction dust source, and an incinerator and fuel-oil burning emission source. For the vehicle emission source, which is comprehensively described in the atmospheric emission inventory but cannot be identified by the receptor model, an atmospheric dispersion model was used to estimate its contributions. In Dundee, a significant percentage, 67.5%, of the aerosol mass sampled at the study station could be attributed to the six sources named above.
Gayton, Ivan; Theocharopoulos, Georgios; Edwards, Robin; Danis, Kostas; Kremer, Ronald; Kleijer, Karline; Tejan, Sumaila M.; Sankoh, Mohamed; Jimissa, Augustin; Greig, Jane; Caleo, Grazia
2018-01-01
Background During the 2014–16 Ebola virus disease (EVD) outbreak, the Magburaka Ebola Management Centre (EMC) operated by Médecins Sans Frontières (MSF) in Tonkolili District, Sierra Leone, identified that available district maps lacked up-to-date village information to facilitate timely implementation of EVD control strategies. In January 2015, we undertook a survey in chiefdoms within the MSF EMC catchment area to collect mapping and village data. We explore the feasibility and cost to mobilise a local community for this survey, describe validation against existing mapping sources and use of the data to prioritise areas for interventions, and lessons learned. Methods We recruited local people with self-owned Android smartphones installed with open-source survey software (OpenDataKit (ODK)) and open-source navigation software (OpenStreetMap Automated Navigation Directions (OsmAnd)). Surveyors were paired with local motorbike drivers to travel to eligible villages. The collected mapping data were validated by checking for duplication and comparing the village names against a pre-existing village name and location list using a geographic distance and text string-matching algorithm. Results The survey teams gained sufficient familiarity with the ODK and OsmAnd software within 1–2 hours. Nine chiefdoms in Tonkolili District and three in Bombali District were surveyed within two weeks. Following de-duplication, the surveyors collected data from 891 villages with an estimated 127,021 households. The overall survey cost was €3,395; €3.80 per village surveyed. The MSF GIS team (MSF-OCG) created improved maps for the MSF Magburaka EMC team which were used to support surveillance, investigation of suspect EVD cases, hygiene-kit distribution and EVD survivor support. We shared the mapping data with OpenStreetMap, the local Ministry of Health and Sanitation and Sierra Leone District and National Ebola Response Centres. 
Conclusions Involving local community and using accessible technology allowed rapid implementation, at moderate cost, of a survey to collect geographic and essential village information, and creation of updated maps. These methods could be used for future emergencies to facilitate response. PMID:29298314
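The validation step described above pairs a geographic distance check with a text string-matching score to flag duplicate village records. A minimal sketch of that idea, assuming a Levenshtein edit distance and a haversine great-circle distance; the thresholds, field names, and example coordinates are illustrative assumptions, not the survey team's actual algorithm:

```python
from math import radians, sin, cos, asin, sqrt

def levenshtein(a, b):
    """Edit distance between two strings (dynamic programming)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1, curr[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = curr
    return prev[-1]

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    h = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(h))

def match_village(record, reference, max_km=2.0, max_edits=2):
    """Return the reference entry matching a surveyed village, or None.
    A match must be both geographically close and textually similar;
    among candidates, the geographically nearest wins."""
    best = None
    for ref in reference:
        d_km = haversine_km(record["lat"], record["lon"], ref["lat"], ref["lon"])
        d_txt = levenshtein(record["name"].lower(), ref["name"].lower())
        if d_km <= max_km and d_txt <= max_edits:
            if best is None or d_km < best[0]:
                best = (d_km, ref)
    return best[1] if best else None
```

A record that is both near a listed village and spelled almost identically is treated as the same settlement; anything else is kept as a new entry.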
Forecasting the Revenues of Local Public Health Departments in the Shadows of the "Great Recession".
Reschovsky, Andrew; Zahner, Susan J
2016-01-01
The ability of local health departments (LHDs) to provide core public health services depends on a reliable stream of revenue from federal, state, and local governments. This study investigates the impact of the "Great Recession" on major sources of LHD revenues and develops a fiscal forecasting model to predict revenues to LHDs in one state over the period 2012 to 2014. Economic forecasting offers a new financial planning tool for LHD administrators and local government policy makers. This study represents a novel research application for these econometric methods. Detailed data on revenues by source for each LHD in Wisconsin were taken from annual surveys conducted by the Wisconsin Department of Health Services over an 8-year period (2002-2009). A forecasting strategy appropriate for each revenue source was developed, resulting in "base case" estimates. An analysis of the sensitivity of these revenue forecasts to a set of alternative fiscal policies by the federal, state, and local governments was carried out. The model forecasts total LHD revenues in 2012 of $170.5 million (in 2010 dollars). By 2014, inflation-adjusted revenues will decline by $8 million, a reduction of 4.7%. Because of population growth, per capita real revenues of LHDs are forecast to decline by 6.6% between 2012 and 2014. There is a great deal of uncertainty about the future of federal funding in support of local public health. A doubling of the reductions in federal grants scheduled under current law would result in an additional $4.4 million decline in LHD revenues in 2014. The impact of the Great Recession continues to haunt LHDs. Multiyear revenue forecasting offers a new financial tool to help LHDs better plan for an environment of declining resources. New revenue sources are needed if sharp drops in public health service delivery are to be avoided.
Low-Cost Deposition Methods for Transparent Thin-Film Transistors
2003-09-26
theoretical limit is estimated to be ∼10 cm²/V·s. [9] The largest organic TFT mobility reported is 2.7 cm²/V·s for pentacene, which is approaching the...refractory materials require the use of an electron beam. A directed electron beam is capable of locally heating source material to extremely high...Haboeck, M. Stassburg, M. Strassburg, G. Kaczmarczyk, A. Hoffman, and C. Thomsen, "Nitrogen-related local vibrational modes in ZnO:N," Appl. Phys
Ambiguity Resolution for Phase-Based 3-D Source Localization under Fixed Uniform Circular Array.
Chen, Xin; Liu, Zhen; Wei, Xizhang
2017-05-11
Under a fixed uniform circular array (UCA), 3-D parameter estimation of a source whose half-wavelength is smaller than the array aperture suffers from a serious phase ambiguity problem, which also appears in a recently proposed phase-based algorithm. In this paper, by using the centro-symmetry of a UCA with an even number of sensors, the source's angles and range can be decoupled, and a novel algorithm named subarray grouping and ambiguity searching (SGAS) is proposed to resolve angle ambiguity. In the SGAS algorithm, each subarray formed by two pairs of centro-symmetric sensors obtains a batch of results under different ambiguities, and by searching for the nearest value among subarrays, which always corresponds to the correct ambiguity, rough angle estimation with no ambiguity is realized. Then, the unambiguous angles are employed to resolve phase ambiguity in a phase-based 3-D parameter estimation algorithm, and the source's range, as well as more precise angles, can be obtained. Moreover, to improve the practical performance of SGAS, the optimal structure of subarrays and subarray selection criteria are further investigated. Simulation results demonstrate the satisfying performance of the proposed method in 3-D source localization.
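The nearest-value search at the heart of SGAS can be illustrated in one dimension with two interferometer baselines: each wrapped phase reading admits several candidate angles, and only the correct one appears in both candidate sets. This is a simplified linear-baseline analogue under assumed geometry, not the paper's UCA formulation:

```python
import numpy as np

def candidate_sines(phase, baseline_wl):
    """All sin(theta) values consistent with a wrapped phase measurement
    for a baseline of `baseline_wl` wavelengths (ambiguous when > 0.5)."""
    k_max = int(np.ceil(baseline_wl))
    cands = []
    for k in range(-k_max - 1, k_max + 2):
        s = (phase + 2 * np.pi * k) / (2 * np.pi * baseline_wl)
        if -1.0 <= s <= 1.0:
            cands.append(s)
    return cands

def resolve_ambiguity(phase1, b1, phase2, b2):
    """Pick the candidate pair from two baselines that agree best
    (the 'nearest value' search) and return their mean sin(theta)."""
    c1, c2 = candidate_sines(phase1, b1), candidate_sines(phase2, b2)
    best = min((abs(s1 - s2), s1, s2) for s1 in c1 for s2 in c2)
    return 0.5 * (best[1] + best[2])
```

With baselines of, say, 1.5 and 2.0 wavelengths, each set contains several spurious candidates, but the two sets coincide only at the true angle.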
Towards 3D Noise Source Localization using Matched Field Processing
NASA Astrophysics Data System (ADS)
Umlauft, J.; Walter, F.; Lindner, F.; Flores Estrella, H.; Korn, M.
2017-12-01
Matched Field Processing (MFP) is an array-processing and beamforming method, initially developed in ocean acoustics, that locates noise sources in range, depth and azimuth. In this study, we discuss the applicability of MFP to geophysical problems on the exploration scale and its suitability as a monitoring tool for near-surface processes. First, we used synthetic seismograms to analyze the resolution and sensitivity of MFP in a 3D environment. The inversion shows how the localization accuracy is affected by the array design, pre-processing techniques, the velocity model and the considered wave field characteristics. Hence, we can formulate guidelines for improved MFP handling. Additionally, we present field datasets acquired in two different environmental settings and in the presence of different source types. Small-scale, dense-aperture arrays (Ø <1 km) were installed on a natural CO2 degassing field (Czech Republic) and on a glacier site (Switzerland). The located noise sources form distinct three-dimensional zones and channel-like structures (several 100 m depth range), which could be linked to the environmental processes expected at each test site. Furthermore, fast spatio-temporal variations (hours to days) of the source distribution could be successfully monitored.
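The core of MFP is a grid search that correlates the measured array data vector with modelled replica fields at each candidate source position. A minimal Bartlett-processor sketch, assuming a free-space acoustic Green's function (real applications substitute a medium-appropriate propagation model; the geometry and frequency are invented for illustration):

```python
import numpy as np

def bartlett_mfp(sensor_xyz, data_vec, grid, freq, c=343.0):
    """Bartlett matched-field power over a grid of candidate 3-D source
    positions. sensor_xyz: (n_sensors, 3); data_vec: complex, normalized;
    grid: (n_points, 3). Replicas use a free-space Green's function."""
    powers = np.empty(len(grid))
    for i, r in enumerate(grid):
        d = np.linalg.norm(sensor_xyz - r, axis=1)
        replica = np.exp(-2j * np.pi * freq * d / c) / d   # phase delay + spreading
        replica /= np.linalg.norm(replica)
        powers[i] = np.abs(np.vdot(replica, data_vec)) ** 2
    return powers
```

The power is maximal (equal to 1 for noise-free, normalized data) where the replica matches the field actually received, which is what makes the scan a localization tool.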
Reconstructing the metric of the local Universe from number counts observations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vallejo, Sergio Andres; Romano, Antonio Enea, E-mail: antonio.enea.romano@cern.ch
Number counts observations available with new surveys such as the Euclid mission will be an important source of information about the metric of the Universe. We compute the low red-shift expansion for the energy density and the density contrast using an exact spherically symmetric solution in the presence of a cosmological constant. At low red-shift the expansion is more precise than the linear perturbation theory prediction. We then use the local expansion to reconstruct the metric from the monopole of the density contrast. We test the inversion method using numerical calculations and find good agreement within the regime of validity of the red-shift expansion. The method could be applied to observational data to reconstruct the metric of the local Universe with a level of precision higher than the one achievable using perturbation theory.
Local systematic differences in 2MASS positions
NASA Astrophysics Data System (ADS)
Bustos Fierro, I. H.; Calderón, J. H.
2018-01-01
We have found that positions in the 2MASS All-Sky Catalog of Point Sources show local systematic differences with characteristic length-scales of ˜ 5 to ˜ 8 arcminutes when compared with several catalogs. We have observed that when 2MASS positions are used in the computation of proper motions, the mentioned systematic differences cause systematic errors in the resulting proper motions. We have developed a method to locally rectify 2MASS with respect to UCAC4 in order to diminish the systematic differences between these catalogs. The 2MASS catalog rectified with the proposed method can be regarded as an extension of UCAC4 for astrometry, with accuracy of ˜ 90 mas in its positions and negligible systematic errors. We also show that the use of these rectified positions removes the observed systematic pattern in proper motions derived from original 2MASS positions.
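Local rectification of this kind can be pictured as subtracting, at each position, the median catalog-A-minus-catalog-B offset computed from nearby matched stars. A toy flat-plane version of that idea; the smoothing radius and planar geometry are simplifying assumptions, whereas the authors work on the sphere against UCAC4:

```python
import numpy as np

def rectify_positions(pos_a, pos_b_matched, pos_to_fix, radius):
    """Remove local systematic differences from catalog-A positions.
    pos_a, pos_b_matched: (N, 2) arrays of matched stars in catalogs A and B.
    For each position to fix, subtract the median A-minus-B offset of
    matched stars lying within `radius` of it."""
    offsets = pos_a - pos_b_matched
    fixed = np.array(pos_to_fix, dtype=float, copy=True)
    for i, p in enumerate(fixed):
        near = np.linalg.norm(pos_a - p, axis=1) < radius
        if near.any():
            fixed[i] -= np.median(offsets[near], axis=0)
    return fixed
```

Because the correction is local and median-based, it tracks slowly varying systematic patterns while staying robust to individual mismatched stars.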
Ovesný, Martin; Křížek, Pavel; Borkovec, Josef; Švindrych, Zdeněk; Hagen, Guy M.
2014-01-01
Summary: ThunderSTORM is an open-source, interactive and modular plug-in for ImageJ designed for automated processing, analysis and visualization of data acquired by single-molecule localization microscopy methods such as photo-activated localization microscopy and stochastic optical reconstruction microscopy. ThunderSTORM offers an extensive collection of processing and post-processing methods so that users can easily adapt the process of analysis to their data. ThunderSTORM also offers a set of tools for creation of simulated data and quantitative performance evaluation of localization algorithms using Monte Carlo simulations. Availability and implementation: ThunderSTORM and the online documentation are both freely accessible at https://code.google.com/p/thunder-storm/ Contact: guy.hagen@lf1.cuni.cz Supplementary information: Supplementary data are available at Bioinformatics online. PMID:24771516
NASA Astrophysics Data System (ADS)
Spada, M.; Bianchi, I.; Kissling, E.; Agostinetti, N. Piana; Wiemer, S.
2013-08-01
The accurate definition of 3-D crustal structures and, in primis, the Moho depth are the most important requirements for seismological, geophysical and geodynamic modelling in complex tectonic regions. In such areas, like the Mediterranean region, various active and passive seismic experiments have been performed, locally revealing information on Moho depth, average and gradient crustal Vp velocity and average Vp/Vs velocity ratios. Until now, the most reliable information on crustal structures has stemmed from controlled-source seismology experiments. In most parts of the Alpine region, a relatively large amount of controlled-source seismology information is available, though the overall coverage in the central Mediterranean area is still sparse due to the high costs of such experiments. Thus, results from other seismic methodologies, such as local earthquake tomography, receiver functions and ambient noise tomography, can be used to complement the controlled-source seismology information to increase coverage and thus the quality of 3-D crustal models. In this paper, we introduce a methodology to directly combine controlled-source seismology and receiver functions information, relying on the strengths of each method and on quantitative uncertainty estimates for all data, to derive a well-resolved Moho map for Italy. To obtain a homogeneous elaboration of controlled-source seismology and receiver functions results, we introduce a new classification/weighting scheme based on uncertainty assessment for receiver functions data. In order to tune the receiver functions information quality, we compare local receiver functions Moho depths and uncertainties with a recently derived, well-resolved local earthquake tomography Moho map and with controlled-source seismology information. We find an excellent correlation among the Moho information obtained by these three methodologies in Italy.
In the final step, we interpolate the controlled-source seismology and receiver functions information to derive the map of Moho topography in Italy and surrounding regions. Our results show high-frequency undulation in the Moho topography of three different Moho interfaces, the European, the Adriatic-Ionian, and the Liguria-Corsica-Sardinia-Tyrrhenia, reflecting the complexity of geodynamical evolution.
Adaptive behaviors in multi-agent source localization using passive sensing.
Shaukat, Mansoor; Chitre, Mandar
2016-12-01
In this paper, the role of adaptive group cohesion in a cooperative multi-agent source localization problem is investigated. A distributed source localization algorithm is presented for a homogeneous team of simple agents. An agent uses a single sensor to sense the gradient and two sensors to sense its neighbors. The algorithm is a set of individualistic and social behaviors, where the individualistic behavior is as simple as an agent keeping its previous heading and is not self-sufficient for localizing the source. Source localization is achieved as an emergent property through the agents' adaptive interactions with their neighbors and the environment. Given that a single agent is incapable of localizing the source, maintaining team connectivity at all times is crucial. Two simple temporal sampling behaviors, intensity-based adaptation and connectivity-based adaptation, ensure an efficient localization strategy with minimal agent breakaways. The agent behaviors are simultaneously optimized using a two-phase evolutionary optimization process. The optimized behaviors are estimated with analytical models, and the resulting collective behavior is validated against the agent's sensor and actuator noise, strong multi-path interference due to environment variability, initialization distance sensitivity and loss of source signal.
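The individualistic behavior (keep the previous heading while the sensed signal improves) is reminiscent of run-and-tumble chemotaxis. A loose single-sensor illustration of how that rule alone climbs a gradient toward a source; the field, step sizes, and tumbling rule are invented, and the paper's neighbor sensing, connectivity adaptation, and evolutionary optimization are omitted:

```python
import numpy as np

def run_and_tumble(n_agents=12, steps=1200, step_len=0.05, seed=3):
    """Minimal run-and-tumble gradient climb: an agent keeps its previous
    heading while its sensed intensity improves, otherwise it tumbles to a
    random new heading. The source sits at the origin and intensity decays
    with distance. Returns (initial, final) mean distance to the source."""
    rng = np.random.default_rng(seed)
    pos = rng.uniform(6.0, 9.0, (n_agents, 2))          # start away from source
    heading = rng.uniform(0.0, 2.0 * np.pi, n_agents)
    d0 = np.linalg.norm(pos, axis=1).mean()
    last = -np.linalg.norm(pos, axis=1)                 # intensity ~ -distance
    for _ in range(steps):
        pos += step_len * np.column_stack([np.cos(heading), np.sin(heading)])
        now = -np.linalg.norm(pos, axis=1)
        worse = now < last                               # signal got weaker: tumble
        heading[worse] = rng.uniform(0.0, 2.0 * np.pi, int(worse.sum()))
        last = now
    return d0, np.linalg.norm(pos, axis=1).mean()
```

Favorable headings persist and unfavorable ones are quickly abandoned, so the swarm drifts up the intensity gradient even though no agent ever measures a direction to the source.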
Localization of transient gravitational wave sources: beyond triangulation
NASA Astrophysics Data System (ADS)
Fairhurst, Stephen
2018-05-01
Rapid, accurate localization of gravitational wave transient events has proved critical to successful electromagnetic follow-up. In previous papers we have shown that localization estimates can be obtained through triangulation based on timing information at the detector sites. In practice, detailed parameter estimation routines use additional information and provide better localization than is possible based on timing information alone. In this paper, we extend the timing-based localization approximation to incorporate consistency of the observed signals with two gravitational wave polarizations and an astrophysically motivated distribution of sources. Both of these provide significant improvements to source localization, allowing many sources to be restricted to a single sky region with an area 40% smaller than predicted by timing information alone. Furthermore, we show that the vast majority of sources will be reconstructed as circularly polarized or, equivalently, indistinguishable from face-on.
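The baseline timing triangulation can be sketched as a chi-square scan over candidate sky directions, comparing observed arrival times with the delays predicted by the detector geometry. The detector coordinates and timing uncertainties below are arbitrary placeholders, and the paper's refinements (polarization consistency, source distribution prior) are not modelled:

```python
import numpy as np

C = 299792458.0  # speed of light, m/s

def timing_skymap(det_xyz, arrival_times, sigmas, n_grid):
    """Chi-square of observed arrival times against each candidate sky
    direction (rows of n_grid are unit vectors). A plane wave from
    direction n reaches detector i early by (r_i . n)/c relative to the
    origin; the unknown common arrival time is profiled out."""
    w = 1.0 / np.asarray(sigmas) ** 2
    chi2 = np.empty(len(n_grid))
    for i, n in enumerate(n_grid):
        pred = -(det_xyz @ n) / C                   # predicted relative delays
        resid = np.asarray(arrival_times) - pred
        resid = resid - np.sum(w * resid) / np.sum(w)   # best-fit common offset
        chi2[i] = np.sum(w * resid ** 2)
    return chi2
```

Directions whose predicted delay pattern matches the data get low chi-square; the localization region is the set of directions below a chosen threshold.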
NASA Astrophysics Data System (ADS)
Lee, Jae-Chul; Kim, Wansun; Park, Hun-Kuk; Choi, Samjin
2017-03-01
This study investigates why a silver nanoparticle (SNP)-induced surface-enhanced Raman scattering (SERS) paper chip fabricated at low successive ionic layer adsorption and reaction (SILAR) cycles leads to a high SERS enhancement factor (7 × 10⁸) despite an inferior nanostructure and without generating a hot spot effect. The multi-layered structure of SNPs on cellulose fibers, verified by magnified scanning electron microscopy (SEM) and analyzed by a computational simulation method, was hypothesized as the reason. The pattern of the simulated local electric field distribution with respect to the number of SILAR cycles showed good agreement with the experimental Raman intensity, regardless of the wavelength of the excitation laser sources. The simulated enhancement factor at the 785-nm excitation laser source (2.8 × 10⁹) was 2.5 times greater than the experimental enhancement factor (1.1 × 10⁹). A 532-nm excitation laser source exhibited the highest maximum local electric field intensity (1.9 × 10¹¹), particularly at the interparticle gap called a hot spot. The short wavelength led to a strong electric field intensity caused by strong electromagnetic coupling arising from the SNP-induced local surface plasmon resonance (LSPR) effects through high excitation energy. These findings suggest that our paper-based SILAR-fabricated SNP-induced LSPR model is valid for understanding SNP-induced LSPR effects.
Acoustic Emission Source Location Using a Distributed Feedback Fiber Laser Rosette
Huang, Wenzhu; Zhang, Wentao; Li, Fang
2013-01-01
This paper proposes an approach for acoustic emission (AE) source localization in a large marble stone using distributed feedback (DFB) fiber lasers. The aim of this study is to detect damage in structures such as those found in civil applications. The directional sensitivity of the DFB fiber laser is investigated by calculating a location coefficient using digital signal analysis: autocorrelation is used to extract the location coefficient from periodic AE signals, and wavelet packet energy is calculated to obtain the location coefficient of a burst AE source. Normalization is applied to eliminate the influence of the distance and intensity of the AE source. A new location algorithm based on the location coefficient is then presented and tested to determine the location of an AE source using a Delta (Δ) DFB fiber laser rosette configuration. The advantages of the proposed algorithm over traditional methods based on fiber Bragg gratings (FBGs) include higher strain resolution for AE detection and the ability to take two different types of AE source into account for localization. PMID:24141266
s-SMOOTH: Sparsity and Smoothness Enhanced EEG Brain Tomography
Li, Ying; Qin, Jing; Hsin, Yue-Loong; Osher, Stanley; Liu, Wentai
2016-01-01
EEG source imaging enables us to reconstruct current density in the brain from electrical measurements with excellent temporal resolution (~ ms). The corresponding EEG inverse problem is an ill-posed one that has infinitely many solutions. This is due to the fact that the number of EEG sensors is usually much smaller than the number of potential dipole locations, as well as noise contamination in the recorded signals. To obtain a unique solution, regularizations can be incorporated to impose additional constraints on the solution. An appropriate choice of regularization is critically important for the reconstruction accuracy of a brain image. In this paper, we propose a novel Sparsity and SMOOthness enhanced brain TomograpHy (s-SMOOTH) method to improve the reconstruction accuracy by integrating two recently proposed regularization techniques: Total Generalized Variation (TGV) regularization and ℓ1−2 regularization. TGV is able to preserve the source edge and recover the spatial distribution of the source intensity with high accuracy. Compared to the related total variation (TV) regularization, TGV enhances the smoothness of the image and reduces staircasing artifacts. The traditional TGV defined on a 2D image has been widely used in the image processing field. In order to handle 3D EEG source images, we propose a voxel-based Total Generalized Variation (vTGV) regularization that extends the definition of second-order TGV from 2D planar images to 3D irregular surfaces such as the cortex surface. In addition, the ℓ1−2 regularization is utilized to promote sparsity on the current density itself. We demonstrate that ℓ1−2 regularization enhances sparsity and accelerates computation compared to ℓ1 regularization. The proposed model is solved by an efficient and robust algorithm based on the difference of convex functions algorithm (DCA) and the alternating direction method of multipliers (ADMM).
Numerical experiments using synthetic data demonstrate the advantages of the proposed method over other state-of-the-art methods in terms of total reconstruction accuracy, localization accuracy and focalization degree. The application to the source localization of event-related potential data further demonstrates the performance of the proposed method in real-world scenarios. PMID:27965529
Tenke, Craig E.; Kayser, Jürgen
2012-01-01
The topographic ambiguity and reference-dependency that have plagued EEG/ERP research throughout its history are largely attributable to volume conduction, which may be concisely described by a vector form of Ohm's Law. This biophysical relationship is common to popular algorithms that infer neuronal generators via inverse solutions. It may be further simplified as Poisson's source equation, which identifies underlying current generators from estimates of the second spatial derivative of the field potential (Laplacian transformation). Intracranial current source density (CSD) studies have dissected the “cortical dipole” into intracortical sources and sinks, corresponding to physiologically meaningful patterns of neuronal activity at a sublaminar resolution, much of which is locally cancelled (i.e., closed field). By virtue of the macroscopic scale of the scalp-recorded EEG, a surface Laplacian reflects the radial projections of these underlying currents, representing a unique, unambiguous measure of neuronal activity at the scalp. Although the surface Laplacian requires minimal assumptions compared to complex, model-sensitive inverses, the resulting waveform topographies faithfully summarize and simplify essential constraints that must be placed on putative generators of a scalp potential topography, even if they arise from deep or partially closed fields. CSD methods thereby provide a global empirical and biophysical context for generator localization, spanning scales from intracortical to scalp recordings. PMID:22796039
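Poisson's source equation identifies generators from the second spatial derivative of the field potential. A finite-difference sketch of that Laplacian transformation on a regular grid of potential samples; practical scalp CSD uses spherical spline Laplacians, and the conductivity scale factor is omitted here:

```python
import numpy as np

def surface_laplacian(phi, h=1.0):
    """Five-point finite-difference Laplacian of a 2-D potential map phi
    (grid spacing h). The negative Laplacian serves as the current source
    density estimate, up to a conductivity factor; only interior points
    are computed, the boundary is left at zero."""
    lap = np.zeros_like(phi)
    lap[1:-1, 1:-1] = (phi[2:, 1:-1] + phi[:-2, 1:-1] +
                       phi[1:-1, 2:] + phi[1:-1, :-2] -
                       4.0 * phi[1:-1, 1:-1]) / h ** 2
    return -lap
```

Applied to a smooth potential bump, the CSD estimate peaks at the bump's center and shows the surrounding sink ring: the familiar sharpening of a blurred topography.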
Recov'Heat: An estimation tool of urban waste heat recovery potential in sustainable cities
NASA Astrophysics Data System (ADS)
Goumba, Alain; Chiche, Samuel; Guo, Xiaofeng; Colombert, Morgane; Bonneau, Patricia
2017-02-01
Waste heat recovery is considered an efficient way to increase carbon-free green energy utilization and to reduce greenhouse gas emissions. In urban areas especially, several sources such as sewage water, industrial processes, waste incinerator plants, etc., are still rarely explored. Their integration into a district heating system providing heating and/or domestic hot water could be beneficial for both energy companies and local governments. EFFICACITY, a French research institute focused on urban energy transition, has developed an estimation tool for the different waste heat sources potentially exploitable in a sustainable city. This article presents the development method of such a decision-making tool which, by providing both energetic and economic analyses, helps local communities and energy service companies to carry out preliminary studies of heat recovery projects.
Method for surface plasmon amplification by stimulated emission of radiation (SPASER)
Stockman, Mark I [Atlanta, GA; Bergman, David J [Ramat Hasharon, IL
2011-09-13
A nanostructure is used to generate a highly localized nanoscale optical field. The field is excited using surface plasmon amplification by stimulated emission of radiation (SPASER). The SPASER radiation consists of surface plasmons that undergo stimulated emission, but in contrast to photons can be localized within a nanoscale region. A SPASER can incorporate an active medium formed by two-level emitters, excited by an energy source, such as an optical, electrical, or chemical energy source. The active medium may be quantum dots, which transfer excitation energy by radiationless transitions to a resonant nanosystem that can play the same role as a laser cavity in a conventional laser. The transitions are stimulated by the surface plasmons in the nanostructure, causing the buildup of a macroscopic number of surface plasmons in a single mode.
Measurement of the local food environment: a comparison of existing data sources.
Bader, Michael D M; Ailshire, Jennifer A; Morenoff, Jeffrey D; House, James S
2010-03-01
Studying the relation between the residential environment and health requires valid, reliable, and cost-effective methods to collect data on residential environments. This 2002 study compared the level of agreement between measures of the presence of neighborhood businesses drawn from 2 common sources of data used for research on the built environment and health: listings of businesses from commercial databases and direct observations of city blocks by raters. Kappa statistics were calculated for 6 types of businesses (drugstores, liquor stores, bars, convenience stores, restaurants, and grocers) located on 1,663 city blocks in Chicago, Illinois. Logistic regressions estimated whether disagreement between measurement methods was systematically correlated with the socioeconomic and demographic characteristics of neighborhoods. Levels of agreement between the 2 sources were relatively high, with significant (P < 0.001) kappa statistics for each business type ranging from 0.32 to 0.70. Most business types were more likely to be reported by direct observations than in the commercial database listings. Disagreement between the 2 sources was not significantly correlated with the socioeconomic and demographic characteristics of neighborhoods. Results suggest that researchers should have reasonable confidence using whichever method (or combination of methods) is most cost-effective and theoretically appropriate for their research design.
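The agreement statistic used in this comparison is Cohen's kappa, which discounts the agreement expected by chance: kappa = (p_observed - p_chance) / (1 - p_chance). A minimal implementation for two raters over the same set of items:

```python
def cohens_kappa(ratings_a, ratings_b):
    """Cohen's kappa for two raters on categorical (e.g. present/absent)
    judgments: chance-corrected agreement in [-1, 1], with 1 meaning
    perfect agreement and 0 meaning agreement at chance level."""
    assert len(ratings_a) == len(ratings_b) and ratings_a
    n = len(ratings_a)
    cats = set(ratings_a) | set(ratings_b)
    p_obs = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    # chance agreement from each rater's marginal category frequencies
    p_chance = sum((ratings_a.count(c) / n) * (ratings_b.count(c) / n) for c in cats)
    if p_chance == 1.0:
        return 1.0
    return (p_obs - p_chance) / (1.0 - p_chance)
```

For the study's binary case (business present on the block or not), each list would hold one 0/1 judgment per block from the commercial database and from the raters, respectively.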
Tomaszewski, Michał; Ruszczak, Bogdan; Michalski, Paweł
2018-06-01
Electrical insulators are elements of power lines that require periodical diagnostics. Due to their location on the components of high-voltage power lines, their imaging can be cumbersome and time-consuming, especially under varying lighting conditions. Insulator diagnostics with the use of visual methods may require localizing insulators in the scene. Studies focused on insulator localization in the scene apply a number of methods, including: texture analysis, MRF (Markov Random Field), Gabor filters or GLCM (Gray Level Co-Occurrence Matrix) [1], [2]. Some methods, e.g. those which localize insulators based on colour analysis [3], rely on object and scene illumination, which is why the images from the dataset are taken under varying lighting conditions. The dataset may also be used to compare the effectiveness of different methods of localizing insulators in images. This article presents high-resolution images depicting a long rod electrical insulator under varying lighting conditions and against different backgrounds: crops, forest and grass. The dataset contains images with visible laser spots (generated by a device emitting light at the wavelength of 532 nm) and images without such spots, as well as complementary data concerning the illumination level and insulator position in the scene, the number of registered laser spots, and their coordinates in the image. The laser spots may be used to support object-localizing algorithms, while the images without spots may serve as a source of information for those algorithms which do not need spots to localize an insulator.
Local and Widely Distributed EEG Activity in Schizophrenia With Prevalence of Negative Symptoms.
Grin-Yatsenko, Vera A; Ponomarev, Valery A; Pronina, Marina V; Poliakov, Yury I; Plotnikova, Irina V; Kropotov, Juri D
2017-09-01
We evaluated EEG frequency abnormalities in resting-state (eyes closed and eyes open) EEG in a group of chronic schizophrenia patients as compared with healthy subjects. The study included 3 methods of analysis of deviations of EEG characteristics: genuine EEG, current source density (CSD), and group independent components (gIC). All 3 methods showed that the EEG in schizophrenia patients is characterized by enhanced low-frequency (delta and theta) and high-frequency (beta) activity in comparison with the control group. However, the spatial pattern of differences depended on the method used. Comparative analysis showed that the increased EEG power in schizophrenia patients apparently concerns both widely spatially distributed components and local components of the signal. Furthermore, the observed differences in the delta and theta range can be described mainly by the local components, and those in the beta range mostly by the spatially widely distributed ones. The possible nature of the widely distributed activity is discussed.
The effect of brain lesions on sound localization in complex acoustic environments.
Zündorf, Ida C; Karnath, Hans-Otto; Lewald, Jörg
2014-05-01
Localizing sound sources of interest in cluttered acoustic environments--as in the 'cocktail-party' situation--is one of the most demanding challenges to the human auditory system in everyday life. In this study, stroke patients' ability to localize acoustic targets in a single-source and in a multi-source setup in the free sound field was directly compared. Subsequent voxel-based lesion-behaviour mapping analyses were computed to uncover the brain areas associated with a deficit in localization in the presence of multiple distracter sound sources rather than localization of individually presented sound sources. Analyses revealed a fundamental role of the right planum temporale in this task. The results from the left hemisphere were less straightforward, but suggested an involvement of inferior frontal and pre- and postcentral areas. These areas appear to be particularly involved in the spectrotemporal analyses crucial for effective segregation of multiple sound streams from various locations, beyond the currently known network for localization of isolated sound sources in otherwise silent surroundings.
The performance of the spatiotemporal Kalman filter and LORETA in seizure onset localization.
Hamid, Laith; Sarabi, Masoud; Japaridze, Natia; Wiegand, Gert; Heute, Ulrich; Stephani, Ulrich; Galka, Andreas; Siniatchkin, Michael
2015-08-01
The assumption of spatial-smoothness is often used to solve the bioelectric inverse problem during electroencephalographic (EEG) source imaging, e.g., in low resolution electromagnetic tomography (LORETA). Since the EEG data show a temporal structure, the combination of the temporal-smoothness and the spatial-smoothness constraints may improve the solution of the EEG inverse problem. This study investigates the performance of the spatiotemporal Kalman filter (STKF) method, which is based on spatial and temporal smoothness, in the localization of a focal seizure's onset and compares its results to those of LORETA. The main finding of the study was that the STKF with an autoregressive model of order two significantly outperformed LORETA in the accuracy and consistency of the localization, provided that the source space consists of a whole-brain volumetric grid. In the future, these promising results will be confirmed using data from more patients and performing statistical analyses on the results. Furthermore, the effects of the temporal smoothness constraint will be studied using different types of focal seizures.
Spectral method for the static electric potential of a charge density in a composite medium
NASA Astrophysics Data System (ADS)
Bergman, David J.; Farhi, Asaf
2018-04-01
A spectral representation for the static electric potential field in a two-constituent composite medium is presented. A theory is developed for calculating the quasistatic eigenstates of Maxwell's equations for such a composite. The local physical potential field produced in the system by a given source charge density is expanded in this set of orthogonal eigenstates for any position r. The source charges can be located anywhere, i.e., inside any of the constituents. This is shown to work even if the eigenfunctions are normalized in an infinite volume. If the microstructure consists of a cluster of separate inclusions in a uniform host medium, then the quasistatic eigenstates of all the separate isolated inclusions can be used to calculate the eigenstates of the total structure as well as the local potential field. Once the eigenstates are known for a given host and a given microstructure, then calculation of the local field only involves calculating three-dimensional integrals of known functions and solving sets of linear algebraic equations.
MUSIC for localization of thunderstorm cells
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mosher, J.C.; Lewis, P.S.; Rynne, T.M.
1993-12-31
Lightning represents an event detectable optically, electrically, and acoustically, and several systems are already in place to monitor such activity. Unfortunately, such detection of lightning can occur too late, since operations need to be protected in advance of the first lightning strike. Additionally, the bolt itself can traverse several kilometers before striking the ground, leaving a large region of uncertainty as to the center of the storm and its possible strike regions. NASA Kennedy Space Center has in place an array of electric field mills that monitor the (effectively) DC electric field. Prior to the first lightning strike, the surface electric fields rise as the storm generator within a thundercloud begins charging. Extending methods we developed for an analogous source localization problem in magnetoencephalography, we present Cramer-Rao lower bounds and MUSIC scans for fitting a point-charge source model to the electric field mill data. Such techniques can allow for the identification and localization of charge centers in cloud structures.
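MUSIC splits the array covariance into signal and noise subspaces and scans a pseudospectrum whose peaks mark the source parameters. A generic narrowband direction-finding sketch of that scan for a uniform linear array; the abstract instead fits point-charge models to field-mill data, so the array geometry and signal model here are only illustrative:

```python
import numpy as np

def music_spectrum(snapshots, n_sources, angles_deg, spacing_wl=0.5):
    """MUSIC pseudospectrum for a uniform linear array.
    snapshots: (n_sensors, n_snapshots) complex data matrix.
    Returns P(theta) = 1 / ||E_n^H a(theta)||^2, which peaks where the
    steering vector a(theta) is orthogonal to the noise subspace E_n."""
    m = snapshots.shape[0]
    R = snapshots @ snapshots.conj().T / snapshots.shape[1]  # sample covariance
    eigval, eigvec = np.linalg.eigh(R)          # eigenvalues in ascending order
    noise_sub = eigvec[:, :m - n_sources]       # smallest-eigenvalue subspace
    k = np.arange(m)
    spec = np.empty(len(angles_deg))
    for i, th in enumerate(np.radians(angles_deg)):
        a = np.exp(2j * np.pi * spacing_wl * k * np.sin(th))
        proj = noise_sub.conj().T @ a
        spec[i] = 1.0 / np.real(np.vdot(proj, proj))
    return spec
```

The same subspace machinery carries over to the charge-center problem by replacing the steering vector with the field-mill response of a candidate point charge.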
Mysterud, Atle; Tryjanowski, Piotr; Panek, Marek
2006-01-01
Harvesting represents a major source of mortality in many deer populations. The extent to which harvesting is selective for specific traits is important in order to understand contemporary evolutionary processes. In addition, since such data are frequently used in life-history studies, it is important to know the pattern of selectivity as a source of bias. Recently, it was demonstrated that different hunting methods selected for different weights in red deer (Cervus elaphus), but little insight was offered into why this occurs. In this study, we show that foreign trophy stalkers select for larger antlers when hunting roe deer (Capreolus capreolus) than local hunters do, but that close to half of the difference in selectivity was due to foreigners hunting earlier in the season and in locations with larger males. The relationship between antler size and age was nevertheless fairly similar regardless of whether the deer was shot by foreign or local hunters. PMID:17148307
Size Matters: What Are the Characteristic Source Areas for Urban Planning Strategies?
Fan, Chao; Myint, Soe W.; Wang, Chenghao
2016-01-01
Urban environmental measurements and observational statistics should reflect the properties generated over an adjacent area of adequate length where homogeneity is usually assumed. The determination of this characteristic source area that gives sufficient representation of the horizontal coverage of a sensing instrument or the fetch of transported quantities is of critical importance to guide the design and implementation of urban landscape planning strategies. In this study, we aim to unify two different methods for estimating source areas, viz. the statistical correlation method commonly used by geographers for landscape fragmentation and the mechanistic footprint model used by meteorologists for atmospheric measurements. Good agreement was found in the intercomparison of the source areas estimated by the two methods, based on 2-m air temperature measurements collected using a network of weather stations. The results can be extended to shed new light on urban planning strategies, such as the use of urban vegetation for heat mitigation. In general, a sizable landscape patch, with extent proportional to the height at which stakeholders' interests are concentrated, is required to play an effective role in regulating the local environment. PMID:27832111
California School Accounting Manual, 1988 Edition.
ERIC Educational Resources Information Center
California State Dept. of Education, Sacramento.
This report presents the procedure for the accounting methods employed by California school districts for income and expenditures in instructional and support programs. The report has seven parts: (1) an introduction to accounting in local educational agencies; (2) general and subsidiary ledger accounting; (3) revenues and other financing sources;…
Impact of Discontinued Obstetrical Services in Rural Missouri: 1990-2002
ERIC Educational Resources Information Center
Sontheimer, Dan; Halverson, Larry W.; Bell, Laird; Ellis, Mark; Bunting, Pamela Wilbanks
2008-01-01
Purpose: This study examines the potential relationship between loss of local obstetrical services and pregnancy outcomes. Methods: Missouri Hospital Association and Missouri Department of Health birth certificate records were used as sources of information. All member hospitals of the Missouri Hospital Association that were located in cities of…
I. B. I. S.: Industry and Business Information Source.
ERIC Educational Resources Information Center
Ashley, Edwin M.
1986-01-01
Explains some of the potential resources community college libraries and media centers can provide to the local business community (i.e., media production and information services). Describes the methods used by Middlesex County College to assess and tap the potential market for its information services. Includes marketing materials. (DMM)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wagner, John C; Peplow, Douglas E.; Mosher, Scott W
2010-01-01
This paper provides a review of the hybrid (Monte Carlo/deterministic) radiation transport methods and codes used at the Oak Ridge National Laboratory and examples of their application for increasing the efficiency of real-world, fixed-source Monte Carlo analyses. The two principal hybrid methods are (1) Consistent Adjoint Driven Importance Sampling (CADIS) for optimization of a localized detector (tally) region (e.g., flux, dose, or reaction rate at a particular location) and (2) Forward Weighted CADIS (FW-CADIS) for optimizing distributions (e.g., mesh tallies over all or part of the problem space) or multiple localized detector regions (e.g., simultaneous optimization of two or more localized tally regions). The two methods have been implemented and automated in both the MAVRIC sequence of SCALE 6 and ADVANTG, a code that works with the MCNP code. As implemented, the methods utilize the results of approximate, fast-running 3-D discrete ordinates transport calculations (with the Denovo code) to generate consistent space- and energy-dependent source and transport (weight windows) biasing parameters. These methods and codes have been applied to many relevant and challenging problems, including calculations of PWR ex-core thermal detector response, dose rates throughout an entire PWR facility, site boundary dose from arrays of commercial spent fuel storage casks, radiation fields for criticality accident alarm system placement, and detector response for special nuclear material detection scenarios and nuclear well-logging tools. Substantial computational speed-ups, generally 10^2 to 10^4, have been realized for all applications to date. This paper provides a brief review of the methods, their implementation, results of their application, and current development activities, as well as a considerable list of references for readers seeking more information about the methods and/or their applications.
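The CADIS idea, biasing the source by an adjoint importance function so that, ideally, every history contributes equally to the tally, can be illustrated with a one-dimensional toy problem: a uniform source in a purely absorbing slab with a detector at the far face. The slab geometry, the closed-form adjoint, and the expected-value scoring are illustrative simplifications, not the SCALE/ADVANTG implementation:

```python
import math
import random

SIGMA, L = 2.0, 5.0                        # absorption cross-section, slab length
Z = (1.0 - math.exp(-SIGMA * L)) / SIGMA   # normalization of the adjoint

def importance(x):
    """Adjoint importance: probability a particle born at x reaches x = L."""
    return math.exp(-SIGMA * (L - x))

def analog(n, rng):
    """Analog Monte Carlo: uniform source, score 1 if the particle survives."""
    hits = 0
    for _ in range(n):
        x = rng.uniform(0.0, L)
        if rng.random() < importance(x):
            hits += 1
    return hits / n

def cadis_like(n, rng):
    """Source biased by the adjoint (sampled by inverse CDF), carrying
    statistical weight p/q; scoring the expected survival probability then
    makes every history contribute the exact mean Z/L."""
    total = 0.0
    for _ in range(n):
        u = rng.random()
        x = L + math.log(u * (1 - math.exp(-SIGMA * L)) + math.exp(-SIGMA * L)) / SIGMA
        w = Z / (L * importance(x))        # (1/L) / (importance(x)/Z)
        total += w * importance(x)         # constant score: zero variance
    return total / n
```

The analytic answer is Z/L; the biased estimator reproduces it with zero variance in this toy, while the analog estimator fluctuates, which is the effect the weight-window parameters aim for in the real codes.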
On simulation of local fluxes in molecular junctions
NASA Astrophysics Data System (ADS)
Cabra, Gabriel; Jensen, Anders; Galperin, Michael
2018-05-01
We present a pedagogical review of the current density simulation in molecular junction models indicating its advantages and deficiencies in analysis of local junction transport characteristics. In particular, we argue that current density is a universal tool which provides more information than traditionally simulated bond currents, especially when discussing inelastic processes. However, current density simulations are sensitive to the choice of basis and electronic structure method. We note that while discussing the local current conservation in junctions, one has to account for the source term caused by the open character of the system and intra-molecular interactions. Our considerations are illustrated with numerical simulations of a benzenedithiol molecular junction.
Phillips, Jeffrey D.
2002-01-01
In 1997, the U.S. Geological Survey (USGS) contracted with Sial Geosciences Inc. for a detailed aeromagnetic survey of the Santa Cruz basin and Patagonia Mountains area of south-central Arizona. The contractor's Operational Report is included as an Appendix in this report. This section describes the data processing performed by the USGS on the digital aeromagnetic data received from the contractor. This processing was required in order to remove flight line noise, estimate the depths to the magnetic sources, and estimate the locations of the magnetic contacts. Three methods were used for estimating source depths and contact locations: the horizontal gradient method, the analytic signal method, and the local wavenumber method. The depth estimates resulting from each method are compared, and the contact locations are combined into an interpretative map showing the dip direction for some contacts.
NASA Astrophysics Data System (ADS)
Yokoyama, Ryouta; Yagi, Shin-ichi; Tamura, Kiyoshi; Sato, Masakazu
2009-07-01
Ultrahigh-speed dynamic elastography has promising potential for clinical diagnosis and therapy of living soft tissues. To realize ultrahigh-speed motion tracking at over a thousand frames per second, synthetic aperture (SA) array signal processing technology must be introduced. Furthermore, the overall system must support fine quantitative evaluation of the accuracy and variance of echo phase changes distributed across a tissue medium. To evaluate spatially the local phase changes caused by pulsed excitation of a tissue phantom, we investigated the proposed SA signal system, which utilizes different virtual point sources generated by an array transducer to probe each component of the local tissue displacement vectors. The final results derived from the cross-correlation method (CCM) showed almost the same performance as the constrained least square method (LSM) extended to successive echo frames. These frames were reconstructed by SA processing after real-time acquisition triggered by the pulsed irradiation from a point source. The continuous behavior of the spatial motion vectors demonstrated the dynamic generation and traveling of the pulsed shear wave at one thousand frames per second.
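In its simplest form, the cross-correlation method (CCM) used for motion tracking finds the lag that maximizes the correlation between successive echo frames. A minimal one-dimensional sketch with synthetic frames and an integer-sample shift (not the authors' SA pipeline):

```python
import numpy as np

rng = np.random.default_rng(0)
frame0 = rng.standard_normal(256)          # reference echo frame (synthetic)
true_shift = 7
frame1 = np.roll(frame0, true_shift)       # next frame: tissue moved 7 samples

# scan candidate lags; circular cross-correlation via dot products
lags = np.arange(-31, 32)
xc = [float(np.dot(np.roll(frame0, int(k)), frame1)) for k in lags]
est = int(lags[int(np.argmax(xc))])        # estimated displacement in samples
```

In practice sub-sample displacements are recovered by interpolating the correlation peak (or, as in the abstract, by a constrained least-squares phase fit), but the peak search above is the core of the method.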
Shivkumar, Kalyanam; Ellenbogen, Kenneth A.; Hummel, John D.; Miller, John M.; Steinberg, Jonathan S.
2012-01-01
Catheter ablation of atrial fibrillation (AF) currently relies on eliminating triggers, and no reliable method exists to map the arrhythmia itself to identify ablation targets. The aim of this multicenter study was to define the use of Focal Impulse and Rotor Modulation (FIRM) for identifying ablation targets. METHODS We prospectively enrolled the first 14 consecutive patients (11 males) undergoing FIRM-guided ablation for persistent (n=11) and paroxysmal AF at 5 centers. A 64-pole basket catheter was used for panoramic right and left atrial mapping during AF. AF electrograms were analyzed using a novel system to identify sustained rotors (spiral waves) or focal beats (centrifugal activation to the surrounding atrium). Ablation was performed first at identified sources. The primary endpoints were acute AF termination or organization (>10% cycle length prolongation). Conventional ablation was performed only after FIRM-guided ablation. RESULTS 12/14 cases were mapped. AF sources were demonstrated in all patients (average of 1.9±0.8 per patient). Sources were left atrial in 18 cases and right atrial in 5 cases, and 21/23 were rotors. FIRM-guided ablation achieved the acute endpoint in all patients, consisting of AF termination in n=8 (4.9±3.9 min at the primary source) and organization in n=4. Total FIRM time for all patients was 12.3±8.6 min. CONCLUSIONS FIRM-guided ablation revealed localized AF rotors/focal sources in patients with paroxysmal, persistent, and longstanding persistent AF. Brief targeted FIRM-guided ablation at a priori identified sites terminated or substantially organized AF in all cases prior to any other ablation. PMID:23130890
How Connecticut Health Directors Deal With Public Health Budget Cuts at the Local Level
Prust, Margaret L.; Clark, Kathleen; Davis, Brigette; Pallas, Sarah W.; Kertanis, Jennifer; O’Keefe, Elaine; Araas, Michael; Iyer, Neel S.; Dandorf, Stewart; Platis, Stephanie
2015-01-01
Objectives. We investigated the perspectives of local health jurisdiction (LHJ) directors on coping mechanisms used to respond to budget reductions and constraints on their decision-making. Methods. We conducted in-depth interviews with 17 LHJ directors. Interviews were audio recorded, transcribed, and analyzed using the constant comparative method. Results. LHJ directors use a range of coping mechanisms, including identifying alternative revenue sources, adjusting services, amending staffing arrangements, appealing to local political leaders, and forming strategic partnerships. LHJs also face constraints on their decision-making because of state and local statutory requirements, political priorities, pressures from other LHJs, and LHJ structure. Conclusions. LHJs respond creatively to budget cuts to maintain important public health services. Some LHJ adjustments to administrative resources may obscure the long-term costs of public health budget cuts in such areas as staff morale and turnover. Not all coping strategies are available to each LHJ because of the contextual constraints of its locality, pointing to important policy questions on identifying optimum jurisdiction size and improving efficiency. PMID:25689206
Global Qualitative Flow-Path Modeling for Local State Determination in Simulation and Analysis
NASA Technical Reports Server (NTRS)
Malin, Jane T. (Inventor); Fleming, Land D. (Inventor)
1998-01-01
For qualitative modeling and analysis, a general qualitative abstraction of power transmission variables (flow and effort) for elements of flow paths is discussed; it includes information on resistance, net flow, permissible directions of flow, and qualitative potential. Each type of component model has flow-related variables and an associated internal flow map, connected into an overall flow network of the system. For storage devices, the implicit power transfer to the environment is represented by "virtual" circuits that include an environmental junction. A heterogeneous aggregation method simplifies the path structure. A method determines global flow-path changes during dynamic simulation and analysis, and identifies corresponding local flow state changes that are effects of global configuration changes. Flow-path determination is triggered by any change in a flow-related device variable in a simulation or analysis. Components (path elements) that may be affected are identified, and flow-related attributes favoring flow in the two possible directions are collected for each of them. Next, flow-related attributes are determined for each affected path element, based on possibly conflicting indications of flow direction. Spurious qualitative ambiguities are minimized by using relative magnitudes and permissible directions of flow, and by favoring flow sources over effort sources when comparing flow tendencies. The results are output to local flow states of affected components.
NASA Astrophysics Data System (ADS)
Chernov, N. N.; Zagray, N. P.; Laguta, M. V.; Varenikova, A. Yu
2018-05-01
The article describes research on a method for localizing and determining the size of heterogeneities in biological tissues. The equation for an acoustic harmonic wave propagating in the positive direction is taken as the governing equation. A three-dimensional expression that describes the field of secondary sources at the observation point is obtained. The change of the amplitude of the vibrational velocity of the second harmonic of the acoustic wave is simulated at different coordinates of the inhomogeneity in three-dimensional space. For convenience of the mathematical calculations, the region of heterogeneity is reduced to a point.
Las Vegas Basin Seismic Response Project: Measured Shallow Soil Velocities
NASA Astrophysics Data System (ADS)
Luke, B. A.; Louie, J.; Beeston, H. E.; Skidmore, V.; Concha, A.
2002-12-01
The Las Vegas valley in Nevada is a deep (up to 5 km) alluvial basin filled with interlayered gravels, sands, and clays. The climate is arid. The water table ranges from a few meters to many tens of meters deep. Laterally extensive thin carbonate-cemented lenses are commonly found across parts of the valley. Lenses range beyond 2 m in thickness, and occur at depths exceeding 200 m. Shallow seismic datasets have been collected at approximately ten sites around the Las Vegas valley, to characterize shear and compression wave velocities in the near surface. Purposes for the surveys include modeling of ground response to dynamic loads, both natural and manmade, quantification of soil stiffness to aid structural foundation design, and non-intrusive materials identification. Borehole-based measurement techniques used include downhole and crosshole, to depths exceeding 100 m. Surface-based techniques used include refraction and three different methods involving inversion of surface-wave dispersion datasets. This latter group includes two active-source techniques, the Spectral Analysis of Surface Waves (SASW) method and the Multi-Channel Analysis of Surface Waves (MASW) method; and a new passive-source technique, the Refraction Microtremor (ReMi) method. Depths to halfspace for the active-source measurements ranged beyond 50 m. The passive-source method constrains shear wave velocities to 100 m depths. As expected, the stiff cemented layers profoundly affect local velocity gradients. Scale effects are evident in comparisons of (1) very local measurements typified by borehole methods, to (2) the broader coverage of the SASW and MASW measurements, to (3) the still broader and deeper resolution made possible by the ReMi measurements. The cemented layers appear as sharp spikes in the downhole datasets and are problematic in crosshole measurements due to refraction. The refraction method is useful only to locate the depth to the uppermost cemented layer.
The surface-wave methods, on the other hand, can process velocity inversions. With the broader coverage of the active-source surface wave measurements, through careful inversion that takes advantage of prior information to the greatest extent possible, multiple, shallow, stiff layers can be resolved. Data from such broader-coverage methods also provide confidence regarding continuity of the cemented layers. For the ReMi measurements, which provide the broadest coverage of all methods used, the more generalized shallow profile is sometimes characterized by a strong stiffness inversion at a depth of approximately 10 m. We anticipate that this impedance contrast represents the vertical extent of the multiple layered deposits of cemented media.
Localized surface plasmon resonance mercury detection system and methods
James, Jay; Lucas, Donald; Crosby, Jeffrey Scott; Koshland, Catherine P.
2016-03-22
A mercury detection system that includes a flow cell having a mercury sensor, a light source and a light detector is provided. The mercury sensor includes a transparent substrate and a submonolayer of mercury absorbing nanoparticles, e.g., gold nanoparticles, on a surface of the substrate. Methods of determining whether mercury is present in a sample using the mercury sensors are also provided. The subject mercury detection systems and methods find use in a variety of different applications, including mercury detecting applications.
Present Kinematic Regime and Recent Seismicity of Gulf Suez, Egypt
NASA Astrophysics Data System (ADS)
Mohamed, G.-E. A.; Abd El-Aal, A. K.
2018-01-01
In this study we investigated the recent seismicity and present kinematic regime of the northern and middle zones of the Gulf of Suez as inferred from moment tensor solutions and focal mechanisms of local earthquakes that occurred in this region. On 18 and 22 July 2014, two moderate-size earthquakes of local magnitudes 4.2 and 4.1 struck the northern zone of the Gulf of Suez near Suez City. These events were instrumentally recorded by the Egyptian National Seismic Network (ENSN). The earthquakes were felt at Suez City and the greater Cairo metropolitan zone, but no losses were reported. The source mechanisms and source parameters of the two events were determined from near-source waveform data recorded at very-broadband ENSN stations, supported by P-wave polarity data from short-period stations. The inversion method and software used take into account the source time function, which has been ignored in most moment tensor inversion programs working with near-source seismograms. The inversion results indicate that the estimated seismic moments of the two earthquakes are 0.6621E+15 and 0.4447E+15 Nm, corresponding to moment magnitudes Mw 3.8 and 3.7, respectively. The fault plane solutions obtained from both the waveform inversion and first-arrival polarities indicate the dominance of normal faulting. We also evaluated the stress field in the northern and middle zones of the Gulf of Suez using a multiple inverse method. The principal strain axes show that the deformation is taken up mainly as stretching in the E-W and NE-SW directions.
Combining Radiography and Passive Measurements for Radiological Threat Localization in Cargo
NASA Astrophysics Data System (ADS)
Miller, Erin A.; White, Timothy A.; Jarman, Kenneth D.; Kouzes, Richard T.; Kulisek, Jonathan A.; Robinson, Sean M.; Wittman, Richard A.
2015-10-01
Detecting shielded special nuclear material (SNM) in a cargo container is a difficult problem, since shielding reduces the amount of radiation escaping the container. Radiography provides information that is complementary to that provided by passive gamma-ray detection systems: while not directly sensitive to radiological materials, radiography can reveal highly shielded regions that may mask a passive radiological signal. Combining these measurements has the potential to improve SNM detection, either through improved sensitivity or by providing a solution to the inverse problem to estimate source properties (strength and location). We present a data-fusion method that uses a radiograph to provide an estimate of the radiation-transport environment for gamma rays from potential sources. This approach makes quantitative use of radiographic images without relying on image interpretation, and results in a probabilistic description of likely source locations and strengths. We present results for this method for a modeled test case of a cargo container passing through a plastic-scintillator-based radiation portal monitor and a transmission-radiography system. We find that a radiograph-based inversion scheme allows for localization of a low-noise source placed randomly within the test container to within 40 cm, compared to 70 cm for triangulation alone, while strength estimation accuracy is improved by a factor of six. Improvements are seen in regions of both high and low shielding, but are most pronounced in highly shielded regions. The approach proposed here combines transmission and emission data in a manner that has not been explored in the cargo-screening literature, advancing the ability to accurately describe a hidden source based on currently-available instrumentation.
Colindres, Romulo E; Jain, Seema; Bowen, Anna; Mintz, Eric; Domond, Polyana
2007-09-01
Tropical Storm Jeanne struck Haiti in September 2004, causing widespread flooding which contaminated water sources, displaced thousands of families and killed approximately 2,800 people. Local leaders distributed PūR, a flocculent-disinfectant product for household water treatment, to affected populations. We evaluated knowledge, attitudes, practices, and drinking water quality among a sample of PūR recipients. We interviewed representatives of 100 households in three rural communities who received PūR and PūR-related education. Water sources were tested for fecal contamination and turbidity; stored household water was tested for residual chlorine. All households relied on untreated water sources (springs [66%], wells [15%], community taps [13%], and rivers [6%]). After distribution, PūR was the most common in-home treatment method (58%) followed by chlorination (30%), plant-based flocculation (6%), boiling (5%), and filtration (1%). Seventy-eight percent of respondents correctly answered five questions about how to use PūR; 81% reported PūR easy to use; and 97% reported that PūR-treated water appears, tastes, and smells better than untreated water. Although water sources tested appeared clear, fecal coliform bacteria were detected in all sources (range 1 to >200 CFU/100 mL). Chlorine was present in 10 (45%) of 22 stored drinking water samples in households using PūR. PūR was well-accepted and properly used in remote communities where local leaders helped with distribution and education. This highly effective water purification method can help protect disaster-affected communities from waterborne disease.
A Parallel Fast Sweeping Method for the Eikonal Equation
NASA Astrophysics Data System (ADS)
Baker, B.
2017-12-01
Recently, there has been an exciting emergence of probabilistic methods for travel time tomography. Unlike gradient-based optimization strategies, probabilistic tomographic methods are resistant to becoming trapped in a local minimum and provide a much better quantification of parameter resolution than, say, appealing to ray density or performing checkerboard reconstruction tests. The benefits associated with random sampling methods, however, are only realized by successive computation of predicted travel times in, potentially, strongly heterogeneous media. To this end this abstract is concerned with expediting the solution of the Eikonal equation. While many Eikonal solvers use a fast marching method, the proposed solver will use the iterative fast sweeping method because the eight fixed sweep orderings in each iteration are natural targets for parallelization. To reduce the number of iterations and grid points required, the high-accuracy finite difference stencil of Nobel et al., 2014 is implemented. A directed acyclic graph (DAG) is created with a priori knowledge of the sweep ordering and finite difference stencil. By performing a topological sort of the DAG, sets of independent nodes are identified as candidates for concurrent updating. Additionally, the proposed solver will also address scalability during earthquake relocation, a necessary step in local and regional earthquake tomography and a barrier to extending probabilistic methods from active source to passive source applications, by introducing an asynchronous parallel forward solve phase for all receivers in the network. Synthetic examples using the SEG over-thrust model will be presented.
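The sweep structure of the fast sweeping method can be sketched in two dimensions with a first-order Godunov upwind update and the four fixed 2-D orderings (the solver described above uses the eight 3-D orderings and a higher-accuracy stencil; grid spacing and source handling here are assumptions):

```python
import math
import numpy as np

def fast_sweep(slowness, h, src, n_iter=8):
    """2-D fast sweeping solver for |grad T| = s with a first-order Godunov
    upwind stencil. slowness: 2-D array; h: grid spacing; src: (i, j) source."""
    ny, nx = slowness.shape
    T = np.full((ny, nx), math.inf)
    T[src] = 0.0
    for _ in range(n_iter):
        # the four fixed sweep orderings of the 2-D method
        for sy in (1, -1):
            for sx in (1, -1):
                for i in range(ny)[::sy]:
                    for j in range(nx)[::sx]:
                        if (i, j) == src:
                            continue
                        # smallest upwind neighbor in each axis
                        a = min(T[i - 1, j] if i > 0 else math.inf,
                                T[i + 1, j] if i < ny - 1 else math.inf)
                        b = min(T[i, j - 1] if j > 0 else math.inf,
                                T[i, j + 1] if j < nx - 1 else math.inf)
                        if a > b:
                            a, b = b, a
                        if math.isinf(a):
                            continue
                        f = slowness[i, j] * h
                        if b - a >= f:    # causality: one-sided update
                            t = a + f
                        else:             # two-sided quadratic update
                            t = 0.5 * (a + b + math.sqrt(2 * f * f - (a - b) ** 2))
                        T[i, j] = min(T[i, j], t)
    return T
```

For a homogeneous medium the solution is exact along the grid axes; elsewhere the first-order scheme carries O(h) error, which is what the higher-accuracy stencil cited in the abstract is meant to reduce.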
Joint Inversion of Source Location and Source Mechanism of Induced Microseismics
NASA Astrophysics Data System (ADS)
Liang, C.
2014-12-01
Seismic source mechanism is a useful property for indicating the source physics and the stress and strain distribution at regional, local, and micro scales. In this study we jointly invert source mechanisms and locations for microseismic events induced by fluid fracturing treatments in the oil and gas industry. For events that are large enough to show clear waveforms, a number of techniques can be applied to invert the source mechanism, including waveform inversion, first-polarity inversion, and many variants of these methods. However, for events that are too small to identify in seismic traces, such as the microseismic events induced by fluid fracturing, a source scanning algorithm (SSA) with waveform stacking is usually applied. A joint inversion of location and source mechanism is also possible, but at the cost of a high computational budget. The algorithm is thereby called the Source Location and Mechanism Scanning Algorithm (SLMSA). In this case, for a given velocity structure, all possible combinations of source location (X, Y, and Z) and source mechanism (strike, dip, and rake) are used to compute travel times and waveform polarities. Correcting normal-moveout times and polarities and stacking all waveforms, the (X, Y, Z, strike, dip, rake) combination that gives the strongest stacked waveform is identified as the solution. To address the high computational cost, CPU-GPU programming is applied. Numerical datasets are used to test the algorithm. The SLMSA has also been applied to a fluid fracturing dataset and reveals several advantages over the location-only method: (1) for shear sources, the location-only program can hardly locate events because positive and negative polarized traces cancel out, but SLMSA can successfully pick up those events; (2) microseismic locations alone may not be enough to indicate the directionality of micro-fractures; the statistics of source mechanisms can provide more knowledge of the orientation of fractures; (3) in our practice, the joint inversion method almost always yields more events than the location-only method, and for events that are also picked by the SSA, the stacking power of SLMSA is always higher than that obtained with SSA.
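The joint (location, mechanism) scan can be illustrated with a toy delay-and-stack: spike traces from a line of receivers, a 1-D location grid, and two hypothetical polarity patterns standing in for the (strike, dip, rake) search. Velocity, geometry, and the two-pattern mechanism space are illustrative assumptions, not the production CPU-GPU code:

```python
import numpy as np

v, dt, nt = 2.0, 0.01, 400                 # velocity (km/s), sample step (s), samples
recv = np.array([0.0, 1.0, 2.0, 3.0])      # receiver positions along a line (km)
true_x, true_mech = 1.3, 1                 # hidden source position and mechanism

# two hypothetical polarity patterns standing in for (strike, dip, rake)
patterns = {0: np.array([1, 1, 1, 1]), 1: np.array([1, -1, 1, -1])}

# synthesize traces: a unit spike at each arrival time, signed by the mechanism
traces = np.zeros((len(recv), nt))
for i, r in enumerate(recv):
    k = int(round(abs(true_x - r) / v / dt))
    traces[i, k] = patterns[true_mech][i]

# scan all (location, mechanism) hypotheses: moveout- and polarity-correct,
# stack, and keep the hypothesis with the largest stack power
best = None
for x in np.arange(0.0, 3.01, 0.1):
    for m, pat in patterns.items():
        stack = np.zeros(nt)
        for i, r in enumerate(recv):
            k = int(round(abs(x - r) / v / dt))
            stack += pat[i] * np.roll(traces[i], -k)
        power = float(np.max(stack ** 2))
        if best is None or power > best[0]:
            best = (power, round(float(x), 1), m)
```

Note the cancellation effect the abstract describes: at the true location, the polarity-blind pattern stacks the alternating-sign arrivals to zero, while the matched pattern stacks them coherently.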
Subspace-based analysis of the ERT inverse problem
NASA Astrophysics Data System (ADS)
Ben Hadj Miled, Mohamed Khames; Miller, Eric L.
2004-05-01
In a previous work, we proposed a source-type formulation to the electrical resistance tomography (ERT) problem. Specifically, we showed that inhomogeneities in the medium can be viewed as secondary sources embedded in the homogeneous background medium and located at positions associated with variation in electrical conductivity. Assuming a piecewise constant conductivity distribution, the support of equivalent sources is equal to the boundary of the inhomogeneity. The estimation of the anomaly shape takes the form of an inverse source-type problem. In this paper, we explore the use of subspace methods to localize the secondary equivalent sources associated with discontinuities in the conductivity distribution. Our first alternative is the multiple signal classification (MUSIC) algorithm which is commonly used in the localization of multiple sources. The idea is to project a finite collection of plausible pole (or dipole) sources onto an estimated signal subspace and select those with largest correlations. In ERT, secondary sources are excited simultaneously but in different ways, i.e. with distinct amplitude patterns, depending on the locations and amplitudes of primary sources. If the number of receivers is "large enough", different source configurations can lead to a set of observation vectors that span the data subspace. However, since sources that are spatially close to each other have highly correlated signatures, separation of such signals becomes very difficult in the presence of noise. To overcome this problem we consider iterative MUSIC algorithms like R-MUSIC and RAP-MUSIC. These recursive algorithms pose a computational burden as they require multiple large combinatorial searches. Results obtained with these algorithms using simulated data of different conductivity patterns are presented.
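The core MUSIC step, projecting candidate source signatures onto the estimated subspaces and selecting the largest correlations (equivalently, the smallest noise-subspace projections), can be sketched for a narrowband uniform linear array; the ERT formulation would replace the steering vectors below with pole or dipole source signatures:

```python
import numpy as np

rng = np.random.default_rng(1)
M, d = 8, 0.5                               # sensors, spacing in wavelengths
true_deg = 20.0                             # hidden source direction

def steering(theta_deg):
    """Array response (signature) of a plane wave from angle theta_deg."""
    return np.exp(1j * 2 * np.pi * d * np.arange(M)
                  * np.sin(np.radians(theta_deg)))

# simulate snapshots: one source plus white noise
N = 200
s = rng.standard_normal(N) + 1j * rng.standard_normal(N)
X = np.outer(steering(true_deg), s) \
    + 0.1 * (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N)))

R = X @ X.conj().T / N                      # sample covariance
w, V = np.linalg.eigh(R)                    # eigenvalues in ascending order
En = V[:, :-1]                              # noise subspace (all but largest)

# MUSIC pseudospectrum: peaks where signatures are orthogonal to En
grid = np.arange(-90.0, 90.5, 0.5)
P = [1.0 / np.linalg.norm(En.conj().T @ steering(t)) ** 2 for t in grid]
est = float(grid[int(np.argmax(P))])
```

The recursive variants mentioned in the abstract (R-MUSIC, RAP-MUSIC) repeat this scan after projecting out each found source, which is what makes closely spaced, correlated signatures separable at the cost of repeated searches.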
Path Searching Based Fault Automated Recovery Scheme for Distribution Grid with DG
NASA Astrophysics Data System (ADS)
Xia, Lin; Qun, Wang; Hui, Xue; Simeng, Zhu
2016-12-01
The path-searching method based on distribution network topology has proved effective in protection-setting software, and a path-searching method that includes DG power sources is also applicable to the automatic generation and division of planned islands after a fault. This paper applies the path-searching algorithm to the automatic division of planned islands after faults: the search starts from the fault-isolation switch and ends at each power source, and, according to the line load traversed by the search path and the important load integrated along the optimized path, it forms an optimized division scheme of planned islands in which each DG serves as a power source balanced against the local important load. Finally, the COBASE software and the distribution network automation software in use are applied to illustrate the effectiveness of the automatic restoration program.
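The search from the fault-isolation switch to each DG source is, at its core, a traversal of the feeder graph. A breadth-first sketch over a hypothetical topology (node names and connectivity are invented for illustration):

```python
from collections import deque

# hypothetical feeder topology: 'F' is the fault-isolation switch,
# 'DG1'/'DG2' are distributed generators, 'B*' are load buses
edges = {
    'F': ['B1'],
    'B1': ['F', 'B2', 'B3'],
    'B2': ['B1', 'DG1'],
    'B3': ['B1', 'B4'],
    'B4': ['B3', 'DG2'],
    'DG1': ['B2'],
    'DG2': ['B4'],
}

def paths_to_sources(start, sources):
    """Breadth-first search from the isolation switch; returns the shortest
    path from `start` to each DG source (the skeleton of a planned island)."""
    parent, seen, queue = {}, {start}, deque([start])
    while queue:
        u = queue.popleft()
        for vtx in edges[u]:
            if vtx not in seen:
                seen.add(vtx)
                parent[vtx] = u
                queue.append(vtx)
    paths = {}
    for s in sources:
        path, node = [s], s
        while node != start:
            node = parent[node]
            path.append(node)
        paths[s] = path[::-1]
    return paths
```

A planning tool would then accumulate the line loads along each returned path and balance them against the important local load when drawing the island boundaries.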
NASA Astrophysics Data System (ADS)
Wilson, R. I.; Barberopoulou, A.; Miller, K. M.; Goltz, J. D.; Synolakis, C. E.
2008-12-01
A consortium of tsunami hydrodynamic modelers, geologic hazard mapping specialists, and emergency planning managers is producing maximum tsunami inundation maps for California, covering most residential and transient populated areas along the state's coastline. The new tsunami inundation maps will be an upgrade from the existing maps for the state, improving on the resolution, accuracy, and coverage of the maximum anticipated tsunami inundation line. Thirty-five separate map areas covering nearly one-half of California's coastline were selected for tsunami modeling using the MOST (Method of Splitting Tsunami) model. From preliminary evaluations of nearly fifty local and distant tsunami source scenarios, those with the maximum expected hazard for a particular area were input to MOST. The MOST model was run with a near-shore bathymetric grid resolution varying from three arc-seconds (90m) to one arc-second (30m), depending on availability. Maximum tsunami "flow depth" and inundation layers were created by combining all modeled scenarios for each area. A method was developed to better define the location of the maximum inland penetration line using higher resolution digital onshore topographic data from interferometric radar sources. The final inundation line for each map area was validated using a combination of digital stereo photography and fieldwork. Further verification of the final inundation line will include ongoing evaluation of tsunami sources (seismic and submarine landslide) as well as comparison to the location of recorded paleotsunami deposits. Local governmental agencies can use these new maximum tsunami inundation lines to assist in the development of their evacuation routes and emergency response plans.
NASA Astrophysics Data System (ADS)
Eftekhari, T.; Berger, E.; Williams, P. K. G.; Blanchard, P. K.
2018-06-01
The discovery of a repeating fast radio burst (FRB) has led to the first precise localization, an association with a dwarf galaxy, and the identification of a coincident persistent radio source. However, further localizations are required to determine the nature of FRBs, the sources powering them, and the possibility of multiple populations. Here we investigate the use of associated persistent radio sources to establish FRB counterparts, taking into account the localization area and the source flux density. Due to the lower areal number density of radio sources compared to faint optical sources, robust associations can be achieved for less precise localizations as compared to direct optical host galaxy associations. For generally larger localizations that preclude robust associations, the number of candidate hosts can be reduced based on the ratio of radio-to-optical brightness. We find that confident associations with sources having a flux density of ∼0.01–1 mJy, comparable to the luminosity of the persistent source associated with FRB 121102 over the redshift range z ≈ 0.1–1, require FRB localizations of ≲20″. We demonstrate that even in the absence of a robust association, constraints can be placed on the luminosity of an associated radio source as a function of localization and dispersion measure (DM). For DM ≈ 1000 pc cm^-3, an upper limit comparable to the luminosity of the FRB 121102 persistent source can be placed if the localization is ≲10″. We apply our analysis to the case of the ASKAP FRB 170107, using optical and radio observations of the localization region. We identify two candidate hosts based on a radio-to-optical brightness ratio of ≳100. We find that if one of these is indeed associated with FRB 170107, the resulting radio luminosity (10^29–4 × 10^30 erg s^-1 Hz^-1, as constrained from the DM value) is comparable to the luminosity of the FRB 121102 persistent source.
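The association arguments above rest on Poisson chance-coincidence statistics: given an areal number density n of unrelated sources, the probability that at least one falls inside a localization circle of radius R is 1 - exp(-n·pi·R^2). A sketch with illustrative densities (not values from the paper):

```python
import math

def chance_coincidence_prob(density_per_sq_arcsec, radius_arcsec):
    """Probability that at least one unrelated source of the given areal
    number density falls inside a circular localization region (Poisson)."""
    expected = density_per_sq_arcsec * math.pi * radius_arcsec ** 2
    return 1.0 - math.exp(-expected)

# Illustrative numbers only: radio sources above ~0.1 mJy are far sparser
# on the sky than faint optical galaxies, so a ~10-20 arcsec localization
# can still yield a low chance-coincidence probability in the radio.
p_radio = chance_coincidence_prob(1e-5, 10.0)    # sparse radio sources
p_optical = chance_coincidence_prob(1e-2, 10.0)  # dense faint optical sources
```

The contrast between the two probabilities is the quantitative core of the paper's argument that radio associations tolerate coarser localizations than optical ones.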
NASA Astrophysics Data System (ADS)
Gu, C.; Toksoz, M. N.; Marzouk, Y.; Al-Enezi, A.; Al-Jeri, F.; Buyukozturk, O.
2016-12-01
The increasing seismic activity in regions of oil/gas fields due to fluid injection/extraction and hydraulic fracturing has drawn new attention in both academia and industry. The source mechanisms and triggering stresses of these induced earthquakes are of great importance for understanding the physics of the seismic processes in reservoirs and for predicting ground motion in the vicinity of oil/gas fields. The induced seismicity data in our study are from the Kuwait National Seismic Network (KNSN). Historically, Kuwait has low local seismicity; however, in recent years the KNSN has monitored more and more local earthquakes. Since 1997, the KNSN has recorded more than 1000 earthquakes (Mw < 5). In 2015, two local earthquakes - Mw 4.5 on 03/21/2015 and Mw 4.1 on 08/18/2015 - were recorded by both the Incorporated Research Institutions for Seismology (IRIS) and the KNSN, and were widely felt by people in Kuwait. These earthquakes occur repeatedly in the same locations close to the oil/gas fields in Kuwait. The earthquakes are generally small (Mw < 5) and shallow, with focal depths of about 2 to 4 km. Such events are very common in oil/gas reservoirs all over the world, including North America, Europe, and the Middle East. We determined the locations and source mechanisms of these local earthquakes, with their uncertainties, using a Bayesian inversion method. The triggering stress of these earthquakes was calculated from the source-mechanism results. In addition, we modeled the ground motion in Kuwait due to these local earthquakes. Our results show that these local earthquakes most likely occurred on pre-existing faults and were triggered by oil-field activities. These events are generally smaller than Mw 5; however, occurring in the reservoirs, they are very shallow, with focal depths of less than about 4 km. As a result, in Kuwait, where oil fields are close to populated areas, these induced earthquakes could produce ground accelerations high enough to damage local structures built without seismic design criteria.
NASA Astrophysics Data System (ADS)
Garden, Christopher J.; Craw, Dave; Waters, Jonathan M.; Smith, Abigail
2011-12-01
Tracking and quantifying biological dispersal presents a major challenge in marine systems. Most existing methods for measuring dispersal are limited by poor resolution and/or high cost. Here we use geological data to quantify the frequency of long-distance dispersal in detached bull-kelp (Phaeophyceae: Durvillaea) in southern New Zealand. Geological resolution in this region is enhanced by the presence of a number of distinct and readily-identifiable geological terranes. We sampled 13,815 beach-cast bull-kelp plants across 130 km of coastline. Rocks were found attached to 2639 of the rafted plants, and were assigned to specific geological terranes (source regions) to quantify dispersal frequencies and distances. Although the majority of kelp-associated rock specimens were found to be locally-derived, a substantial number (4%) showed clear geological evidence of long-distance dispersal, several having travelled over 200 km from their original source regions. The proportion of local versus foreign clasts varied considerably between regions. While short-range dispersal clearly predominates, long-distance travel of detached bull-kelp plants is shown to be a common and ongoing process that has potential to connect isolated coastal populations. Geological analyses represent a cost-effective and powerful method for assigning large numbers of drifted macroalgae to their original source regions.
Siauve, N; Nicolas, L; Vollaire, C; Marchal, C
2004-12-01
This article describes an optimization process specially designed for local and regional hyperthermia, intended to achieve the desired specific absorption rate in the patient. It is based on a genetic algorithm coupled to a finite element formulation. The optimization method is applied to meshes of real human organs assembled from computed tomography scans. A 3D finite element formulation is used to calculate the electromagnetic field produced in the patient by radiofrequency or microwave sources. Space discretization is performed using incomplete first-order edge elements. The sparse complex symmetric matrix equation is solved using a conjugate gradient solver with potential-projection preconditioning. The formulation is validated by comparing calculated specific absorption rate distributions in a phantom to temperature measurements. A genetic algorithm is used to optimize the specific absorption rate distribution by predicting the phases and amplitudes of the sources that lead to the best focalization. The objective function is defined as the ratio of the specific absorption rate in the tumour to that in healthy tissues. Several constraints, regarding the specific absorption rate in the tumour and the total power in the patient, may be prescribed. Results obtained with two types of applicators (waveguides and an annular phased array) are presented and demonstrate the capabilities of the developed optimization process.
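The genetic-algorithm step can be sketched as follows; the forward model below is a toy complex "steering" matrix, not the finite-element field solution, and the GA operators (truncation selection, blend crossover, Gaussian mutation, elitism) are generic choices rather than the authors' exact configuration:

```python
import numpy as np

rng = np.random.default_rng(0)

def fitness(ind, steering, tumor, healthy):
    """SAR ratio tumour/healthy for one individual = [amplitudes, phases]."""
    n = steering.shape[0]
    drive = ind[:n] * np.exp(1j * ind[n:])
    sar = np.abs(drive @ steering) ** 2          # SAR ~ |E|^2 at each voxel
    return sar[tumor].mean() / (sar[healthy].mean() + 1e-12)

def genetic_optimize(steering, tumor, healthy, pop=40, gens=60):
    n = steering.shape[0]
    P = np.hstack([rng.uniform(0, 1, (pop, n)),            # amplitudes
                   rng.uniform(0, 2 * np.pi, (pop, n))])   # phases
    for _ in range(gens):
        fit = np.array([fitness(p, steering, tumor, healthy) for p in P])
        order = np.argsort(fit)[::-1]
        elite = P[order[:pop // 2]]                        # truncation selection
        pa = elite[rng.integers(0, len(elite), pop)]
        pb = elite[rng.integers(0, len(elite), pop)]
        w = rng.uniform(0, 1, (pop, 1))
        P = w * pa + (1 - w) * pb                          # blend crossover
        P += rng.normal(0, 0.05, P.shape)                  # Gaussian mutation
        P[0] = elite[0]                                    # elitism
    fit = np.array([fitness(p, steering, tumor, healthy) for p in P])
    return P[np.argmax(fit)], fit.max()

# Toy applicator: 4 sources, 30 voxels, random complex propagation factors
steering = rng.normal(size=(4, 30)) + 1j * rng.normal(size=(4, 30))
tumor, healthy = np.arange(5), np.arange(5, 30)
best, ratio = genetic_optimize(steering, tumor, healthy)
```

Thanks to elitism, the best SAR ratio found is non-decreasing across generations, mirroring the paper's use of the tumour-to-healthy SAR ratio as the objective function.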
NASA Astrophysics Data System (ADS)
Madlazim; Prastowo, T.; Supardiyono; Hardy, T.
2018-03-01
Monitoring of volcanoes has been an important issue for many purposes, particularly hazard mitigation. With regard to this, the aims of the present work are to estimate and analyse the source parameters of a volcanic earthquake driven by recent magmatic events at Mount Agung on the island of Bali that occurred on September 28, 2017. The broadband seismogram data, consisting of three-component waveforms from local stations, were recorded by the IA network of 5 seismic stations: SRBI, DNP, BYJI, JAGI, and TWSI (managed by BMKG). These land-based observatories covered a full 4-quadrant region surrounding the epicenter. The method used in the present study was seismic moment-tensor inversion, from which the data were analyzed to extract the source parameters: the moment magnitude, the type of volcanic earthquake as indicated by the percentages of the seismic components - compensated linear vector dipole (CLVD), isotropic (ISO), and double-couple (DC) - and the source depth. The results give a variance reduction of 65%, a moment magnitude of Mw 3.6, a CLVD of 40%, an ISO of 33%, a DC of 27%, and a centroid depth of 9.7 km. These suggest that the unusual earthquake was dominated by a vertical CLVD component, implying the dominance of uplift motion of magmatic fluid flow inside the volcano.
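The CLVD/ISO/DC percentages quoted above come from a moment-tensor decomposition. A minimal sketch of one common eigenvalue-based convention (in the spirit of Jost and Herrmann; the abstract does not state which convention the authors used) is:

```python
import numpy as np

def decompose(M):
    """Split a 3x3 moment tensor into ISO / CLVD / DC percentages.
    Convention: isotropic part from the trace; deviatoric eigenvalues
    sorted by absolute value give epsilon = -d_min/|d_max|, with
    CLVD = 2|eps| and DC = 1 - 2|eps| of the deviatoric share."""
    iso = np.trace(M) / 3.0
    dev = M - iso * np.eye(3)
    d = np.linalg.eigvalsh(dev)
    d = d[np.argsort(np.abs(d))]          # smallest |eigenvalue| first
    eps = -d[0] / abs(d[2]) if d[2] != 0 else 0.0
    p_iso = abs(iso) / (abs(iso) + abs(d[2])) * 100.0
    p_clvd = 2.0 * abs(eps) * (100.0 - p_iso)
    p_dc = 100.0 - p_iso - p_clvd
    return p_iso, p_clvd, p_dc

dc_only = decompose(np.array([[0., 1., 0.], [1., 0., 0.], [0., 0., 0.]]))
iso_only = decompose(np.eye(3))
clvd_only = decompose(np.diag([2., -1., -1.]))
```

Other decomposition conventions distribute the percentages slightly differently, which is why published ISO/CLVD/DC splits should always be read together with the convention used.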
NASA Astrophysics Data System (ADS)
Li, X.; Zhang, Y.; Zheng, B.; Zhang, Q.; He, K.
2013-12-01
Anthropogenic emissions have been controlled in recent years in China to mitigate fine particulate matter (PM2.5) pollution. Recent studies show that sulfur dioxide (SO2)-only control cannot reduce total PM2.5 levels efficiently. Other species such as nitrogen oxides, ammonia, black carbon, and organic carbon may be equally important during particular seasons. Furthermore, each species is emitted from several anthropogenic sectors (e.g., industry, power plants, transportation, residential, and agriculture). Conversely, the contribution of one emission sector to PM2.5 comprises the contributions of all species emitted by that sector. In this work, two model-based methods are used to identify the emission sectors and areas most influential for PM2.5. The first method is source apportionment (SA) based on the Particulate Source Apportionment Technology (PSAT) available in the Comprehensive Air Quality Model with extensions (CAMx) driven by meteorological predictions of the Weather Research and Forecast (WRF) model. The second method is source sensitivity (SS) based on an adjoint integration technique (AIT) available in the GEOS-Chem model. The SA method attributes simulated PM2.5 concentrations to each emission group, while the SS method calculates their sensitivity to each emission group, accounting for the non-linear relationship between PM2.5 and its precursors. Despite their differences, the complementary nature of the two methods enables a complete analysis of source-receptor relationships to support emission control policies. Our objectives are to quantify the contributions of each emission group/area to PM2.5 in the receptor areas and to intercompare results from the two methods to gain a comprehensive understanding of the role of emission sources in PM2.5 formation. The results will be compared in terms of the magnitudes and rankings of SS or SA of emitted species and emission groups/areas.
GEOS-Chem with AIT is applied over East Asia at a horizontal grid resolution of 0.5° (Lat) × 0.67° (Lon). WRF/CAMx with PSAT is applied to nested grids: 36-km × 36-km over China and 12-km × 12-km over northern China. These simulations are performed for 2006 and 2011. Beijing and northern Hebei are selected as representative receptor areas. Simulated surface concentrations by both models are evaluated with available observations in China. Focusing on inorganic aerosols (sulfate, nitrate and ammonium), preliminary SS results from GEOS-Chem/AIT at Beijing identify the top three major emission sectors to be agriculture, residential, and transportation in winter and agriculture, industry and power plant in summer. The top four source areas are northern Hebei, local, Neimenggu, and Liaoning in winter and northern Hebei, local, Shandong, and southern Hebei in summer. The synthesis of SS and SA for influential emission groups or areas from this work will provide a quantitative basis for emission control strategy development and policy making for PM2.5 control in China.
Localization from near-source quasi-static electromagnetic fields
NASA Astrophysics Data System (ADS)
Mosher, J. C.
1993-09-01
A wide range of research has been published on the problem of estimating the parameters of electromagnetic and acoustical sources from measurements made at an array of sensors. In the quasi-static electromagnetic cases examined here, the signal variation from a point source is relatively slow with respect to the signal propagation and the spacing of the array of sensors. As such, the location of the point sources can only be determined from the spatial diversity of the received signal across the array. The inverse source localization problem is complicated by unknown model order and strong local minima. The nonlinear optimization problem is posed for solving for the parameters of the quasi-static source model. The transient nature of the sources can be exploited to allow subspace approaches to separate out the signal portion of the spatial correlation matrix. Decomposition techniques are examined for improved processing, and an adaptation of MUltiple SIgnal Classification (MUSIC) is presented for solving the source localization problem. Recent results on calculating the Cramer-Rao error lower bounds are extended to the multidimensional problem treated here. This thesis focuses on the problem of source localization in magnetoencephalography (MEG), with a secondary application to thunderstorm source localization. Comparisons are also made between MEG and its electrical equivalent, electroencephalography (EEG). The error lower bounds are examined in detail for several MEG and EEG configurations, as well as for localizing thunderstorm cells over Cape Canaveral and Kennedy Space Center. Time-eigenspectrum is introduced as a parsing technique for improving the performance of the optimization problem.
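The MUSIC adaptation scans candidate source locations and picks those whose forward (gain) vector is most nearly orthogonal to the noise subspace of the spatial correlation matrix. A toy sketch with a hypothetical 1/r near-field gain model standing in for the true MEG/EEG lead field:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy near-field setup: sensors on a line, a quasi-static point source whose
# field falls off as 1/r (a stand-in for the true lead-field computation).
sensors = np.linspace(-1.0, 1.0, 8)

def gain(src):                       # forward ("gain") vector for a source
    return 1.0 / np.abs(sensors - src)

true_src = 0.3
snaps = np.outer(gain(true_src), rng.normal(size=200))    # rank-1 signal
data = snaps + 0.01 * rng.normal(size=snaps.shape)        # sensor noise

# Spatial correlation matrix and its eigendecomposition
R = data @ data.T / data.shape[1]
w, V = np.linalg.eigh(R)
noise_space = V[:, :-1]              # all but the dominant eigenvector

def music(src):
    g = gain(src)
    g = g / np.linalg.norm(g)
    # MUSIC metric: inverse of the projection onto the noise subspace
    return 1.0 / np.sum((noise_space.T @ g) ** 2)

grid = np.linspace(-0.9, 0.9, 181)
est = grid[np.argmax([music(s) for s in grid])]
```

With one source, one signal eigenvector suffices; with unknown model order (as the thesis emphasizes), choosing the signal-subspace dimension from the eigenvalue spectrum becomes the hard part.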
Do Local Contributions Affect the Efficacy of Public Primary Schools?
ERIC Educational Resources Information Center
Jimenez, Emmanuel; Paqueo, Vicente
1996-01-01
Uses cost, financial sources, and student achievement data from Philippine primary schools (financed primarily from central sources) to discover if financial decentralization leads to more efficient schools. Schools that rely more heavily on local sources (contributions from local school boards, municipal government, parent-teacher associations,…
SoundCompass: A Distributed MEMS Microphone Array-Based Sensor for Sound Source Localization
Tiete, Jelmer; Domínguez, Federico; da Silva, Bruno; Segers, Laurent; Steenhaut, Kris; Touhafi, Abdellah
2014-01-01
Sound source localization is a well-researched subject with applications ranging from localizing sniper fire in urban battlefields to cataloging wildlife in rural areas. One critical application is the localization of noise pollution sources in urban environments, due to an increasing body of evidence linking noise pollution to adverse effects on human health. Current noise mapping techniques often fail to accurately identify noise pollution sources, because they rely on the interpolation of a limited number of scattered sound sensors. Aiming to produce accurate noise pollution maps, we developed the SoundCompass, a low-cost sound sensor capable of measuring local noise levels and sound field directionality. Our first prototype is composed of a sensor array of 52 Microelectromechanical systems (MEMS) microphones, an inertial measuring unit and a low-power field-programmable gate array (FPGA). This article presents the SoundCompass’s hardware and firmware design together with a data fusion technique that exploits the sensing capabilities of the SoundCompass in a wireless sensor network to localize noise pollution sources. Live tests produced a sound source localization accuracy of a few centimeters in a 25-m2 anechoic chamber, while simulation results accurately located up to five broadband sound sources in a 10,000-m2 open field. PMID:24463431
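Directionality of the kind the SoundCompass computes can be illustrated with a frequency-domain delay-and-sum beamformer; the 8-microphone circular array, sample rate, and source below are toy assumptions (the actual device uses 52 MEMS microphones and FPGA processing):

```python
import numpy as np

rng = np.random.default_rng(2)

fs = 16000.0           # sample rate (Hz)
c = 343.0              # speed of sound (m/s)

# Toy circular array, 10 cm radius, 8 microphones
n_mics = 8
ang = 2 * np.pi * np.arange(n_mics) / n_mics
mics = 0.1 * np.column_stack([np.cos(ang), np.sin(ang)])

def steering_delays(theta):
    """Per-microphone delays (s) for a far-field plane wave from angle theta."""
    d = np.array([np.cos(theta), np.sin(theta)])
    return mics @ d / c

# Simulate a narrowband source arriving from 60 degrees, plus sensor noise
f0, true_theta = 1000.0, np.deg2rad(60)
t = np.arange(1024) / fs
x = np.array([np.sin(2 * np.pi * f0 * (t + d))
              for d in steering_delays(true_theta)])
x += 0.1 * rng.normal(size=x.shape)

# Frequency-domain delay-and-sum: steer, sum, measure output power
X = np.fft.rfft(x, axis=1)
freqs = np.fft.rfftfreq(len(t), 1 / fs)

def power(theta):
    phase = np.exp(-2j * np.pi * np.outer(steering_delays(theta), freqs))
    return np.sum(np.abs(np.sum(X * phase, axis=0)) ** 2)

scan = np.deg2rad(np.arange(0, 360, 2))
est = scan[np.argmax([power(th) for th in scan])]
```

Scanning the steered power over all azimuths yields the sound-field directionality map that, fused across a sensor network, allows a noise source to be localized.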
How to Manual: How to Update and Enhance Your Local Source Water Protection Assessments
Describes opportunities for improving source water assessments performed under Section 1453 of the Safe Drinking Water Act, including local delineations, potential contaminant source inventories, and susceptibility determinations.
NASA Astrophysics Data System (ADS)
Huang, Lei; Ban, Jie; Han, Yu Ting; Yang, Jie; Bi, Jun
2013-04-01
This study aims to identify the key environmental risk sources contributing to water eutrophication and to suggest risk management strategies for rural areas. The multi-angle indicators included in the risk-source assessment system were non-point source pollution, deficient waste treatment, and public awareness of environmental risk; the assessment combined psychometric-paradigm methods, the contingent valuation method, and personal interviews to describe the environmental sensitivity of local residents. Total risk values of different villages near Taihu Lake were calculated in the case study, resulting in a geographic risk map showing which village was the critical risk source for Taihu eutrophication. The increased application of phosphorus (P) and nitrogen (N), the loss vulnerability of pollutants, and a lack of environmental risk awareness led to more serious non-point pollution, especially in rural China. Interesting results revealed by the quotient between the scores of objective and subjective risk sources showed what should be improved in each study village. More environmental investment, control of agricultural activities, and promotion of environmental education are critical considerations for rural environmental management. These findings are helpful for developing targeted and effective risk management strategies in rural areas.
Focazio, M.J.; Speiran, G.K.
1993-01-01
The groundwater-flow system of the Virginia Coastal Plain consists of areally extensive and interconnected aquifers. Large, regionally coalescing cones of depression that are caused by large withdrawals of water are found in these aquifers. Local groundwater systems are affected by regional pumping, because of the interactions within the system of aquifers. Accordingly, these local systems are affected by regional groundwater flow and by spatial and temporal differences in withdrawals by various users. A geographic-information system was used to refine a regional groundwater-flow model around selected withdrawal centers. A method was developed in which drawdown maps that were simulated by the regional groundwater-flow model and the principle of superposition could be used to estimate drawdown at local sites. The method was applied to create drawdown maps in the Brightseat/Upper Potomac Aquifer for periods of 3, 6, 9, and 12 months for Chesapeake, Newport News, Norfolk, Portsmouth, Suffolk, and Virginia Beach, Virginia. Withdrawal rates were supplied by the individual localities and remained constant for each simulation period. This provides an efficient method by which the individual local groundwater users can determine the amount of drawdown produced by their wells in a groundwater system that is a water source for multiple users and that is affected by regional-flow systems.
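The study superposes drawdowns simulated by the regional model; as an illustration of the superposition principle itself, here is a sketch using the analytic Theis solution as a stand-in forward model (well positions, rates, and aquifer properties are invented, not taken from the report):

```python
import numpy as np

def well_function(u, n_terms=40):
    """Theis well function W(u) = E1(u) via its convergent series,
    adequate for the small u typical of months-long pumping periods."""
    gamma = 0.5772156649015329           # Euler-Mascheroni constant
    total = -gamma - np.log(u)
    fact = 1.0
    for k in range(1, n_terms + 1):
        fact *= k
        total += (-1.0) ** (k + 1) * u ** k / (k * fact)
    return total

def theis_drawdown(r, t, Q, T, S):
    """Drawdown s = Q / (4 pi T) * W(u), with u = r^2 S / (4 T t)."""
    u = r ** 2 * S / (4.0 * T * t)
    return Q / (4.0 * np.pi * T) * well_function(u)

# Superposition: total drawdown at an observation point is the sum of the
# drawdowns produced by each pumping center (illustrative values).
wells = [((0.0, 0.0), 2000.0), ((1500.0, 0.0), 1500.0)]  # (x, y) m, Q m^3/d
T_aq, S_aq, t_days = 500.0, 2e-4, 180.0                  # ~6-month period
obs = (500.0, 200.0)
s_total = sum(
    theis_drawdown(np.hypot(obs[0] - x, obs[1] - y), t_days, Q, T_aq, S_aq)
    for (x, y), Q in wells
)
```

Replacing the analytic forward model with gridded drawdown maps from a regional model, as the report does, changes only where the per-well drawdowns come from; the linear superposition step is identical.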
Local public health agency funding: money begets money.
Bernet, Patrick Michael
2007-01-01
Local public health agencies are funded by federal, state, and local revenue sources. There is a common belief that increases from one source will be offset by decreases in others, as when a local agency decides it must increase taxes in response to lowered federal or state funding. This study tests this belief through a cross-sectional study using data from Missouri local public health agencies and finds, instead, that money begets money: local agencies that receive more from federal and state sources also raise more at the local level. Given the particular effectiveness of local funding in improving agency performance, these findings that nonlocal revenues are amplified at the local level help make the case for higher public health funding from federal and state levels.
NASA Astrophysics Data System (ADS)
Sambath, P.; Pullepu, Bapuji; Kannan, R. M.
2018-04-01
The impact of thermal radiation on unsteady laminar free-convective MHD flow of an incompressible viscous fluid past a vertically inclined plate under the influence of a heat source and sink is presented here. The plate surface is considered to have variable wall temperature. The fluid is regarded as a gray, absorbing/emitting but non-scattering medium. The dimensionless boundary-layer equations that govern the flow are evaluated by an implicit finite-difference scheme, the Crank-Nicolson method. Numerical solutions for the velocity, temperature, local shear stress, and heat transfer rate are presented for various values of the parameters (Pr, λ, Δ, M, Rd).
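The paper's coupled boundary-layer system is more involved, but the Crank-Nicolson implicit scheme it uses can be illustrated on a 1-D transient diffusion problem with a uniform heat source (all values below are illustrative):

```python
import numpy as np

# Crank-Nicolson for u_t = alpha * u_xx + q on x in [0, 1]:
# (I - r D2) u^{n+1} = (I + r D2) u^n + dt * q, with r = alpha dt / (2 dx^2)
nx, alpha, q = 51, 1.0, 0.5
dx, dt = 1.0 / (nx - 1), 0.001
r = alpha * dt / (2.0 * dx ** 2)

A = (np.diag(np.full(nx, 1 + 2 * r)) + np.diag(np.full(nx - 1, -r), 1)
     + np.diag(np.full(nx - 1, -r), -1))
B = (np.diag(np.full(nx, 1 - 2 * r)) + np.diag(np.full(nx - 1, r), 1)
     + np.diag(np.full(nx - 1, r), -1))
# Dirichlet boundaries: u(0) = 1 (heated wall), u(1) = 0 (ambient)
for M in (A, B):
    M[0, :], M[-1, :] = 0.0, 0.0
    M[0, 0] = M[-1, -1] = 1.0

u = np.zeros(nx)
u[0] = 1.0
src = np.full(nx, q * dt)
src[0] = src[-1] = 0.0
for _ in range(2000):                     # march to t = 2 (near steady state)
    u = np.linalg.solve(A, B @ u + src)
```

At steady state the profile should approach the exact solution u = 1 - x + (q / 2 alpha) x (1 - x); the scheme's unconditional stability is why the implicit Crank-Nicolson method is attractive for boundary-layer marching as well.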
Use of lidar for the evaluation of traffic-related urban pollution
NASA Astrophysics Data System (ADS)
Eichinger, William E.; Cooper, D. I.; Buttler, William T.; Cottingame, William; Tellier, Larry
1994-03-01
Lidar (Light Detection and Ranging) is demonstrated as a tool for the detection and tracking of sources of aerosol pollution. Existing elastic lidars have been used to demonstrate the potential of this technology in urban areas. Data from several experiments are shown, along with the analysis methods used to interpret the data. The goal of the project is to develop a lightweight, low-cost lidar system and data analysis methods that can be used by urban planners and local air quality managers. The ability to determine the sources, i.e., causes, of non-attainment may lead to more effective use of tax dollars. Future directions for the project are also discussed.
NASA Astrophysics Data System (ADS)
Thorpe, Andrew K.; Frankenberg, Christian; Thompson, David R.; Duren, Riley M.; Aubrey, Andrew D.; Bue, Brian D.; Green, Robert O.; Gerilowski, Konstantin; Krings, Thomas; Borchardt, Jakob; Kort, Eric A.; Sweeney, Colm; Conley, Stephen; Roberts, Dar A.; Dennison, Philip E.
2017-10-01
At local scales, emissions of methane and carbon dioxide are highly uncertain. Localized sources of both trace gases can create strong local gradients in their columnar abundances, which can be discerned using absorption spectroscopy at high spatial resolution. In a previous study, more than 250 methane plumes were observed in the San Juan Basin near Four Corners during April 2015 using the next-generation Airborne Visible/Infrared Imaging Spectrometer (AVIRIS-NG) and a linearized matched filter. For the first time, we apply the iterative maximum a posteriori differential optical absorption spectroscopy (IMAP-DOAS) method to AVIRIS-NG data and generate gas concentration maps for methane, carbon dioxide, and water vapor plumes. This demonstrates a comprehensive greenhouse gas monitoring capability that targets methane and carbon dioxide, the two dominant anthropogenic climate-forcing agents. Water vapor results indicate the ability of these retrievals to distinguish between methane and water vapor despite spectral interference in the shortwave infrared. We focus on selected cases from anthropogenic and natural sources, including emissions from mine ventilation shafts, a gas processing plant, a tank, a pipeline leak, and a natural seep. In addition, carbon dioxide emissions were mapped from the flue-gas stacks of two coal-fired power plants, and a water vapor plume was observed from the combined sources of cooling towers and cooling ponds. Observed plumes were consistent with known and suspected emission sources verified by the true-color AVIRIS-NG scenes and higher-resolution Google Earth imagery. Real-time detection and geolocation of methane plumes by AVIRIS-NG provided unambiguous identification of individual emission source locations and enabled communication to a ground team for rapid follow-up. This permitted verification of a number of methane emission sources using a thermal camera, including a tank and a buried natural gas pipeline.
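The linearized matched filter mentioned above scores each spectrum against a known target signature after whitening by the background covariance. A sketch on synthetic data (the spectral shape, band count, and plume strength are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy scene: each "pixel" is a radiance spectrum; a plume perturbs the
# background along a known target signature t (absorption-like shape).
n_bands, n_pix = 50, 2000
t = np.exp(-0.5 * ((np.arange(n_bands) - 25) / 3.0) ** 2)
bg = rng.normal(size=(n_pix, n_bands)) @ np.diag(np.linspace(0.5, 1.5, n_bands))
plume_mask = np.zeros(n_pix, dtype=bool)
plume_mask[:100] = True
X = bg + 4.0 * np.outer(plume_mask, t)

# Linearized matched filter: alpha = (x - mu)^T S^-1 t / (t^T S^-1 t),
# with mu and S the scene mean and (regularized) covariance.
mu = X.mean(axis=0)
S = np.cov(X, rowvar=False) + 1e-6 * np.eye(n_bands)
Sinv_t = np.linalg.solve(S, t)
alpha = (X - mu) @ Sinv_t / (t @ Sinv_t)
```

Pixels inside the plume score systematically higher than background pixels, which is what allows per-pixel plume maps to be thresholded in real time; IMAP-DOAS then refines such detections into physically calibrated gas concentrations.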
Cao, Bibo; Li, Chuan; Liu, Yan; Zhao, Yue; Sha, Jian; Wang, Yuqiu
2015-05-01
Because water quality monitoring sections or sites reflect the water quality status of rivers, surface water quality management based on such sections or sites can be effective. For the purpose of improving the water quality of rivers, quantifying the contribution ratios of pollutant sources to a specific section is necessary. Because the physical and chemical processes of nutrient pollutants in water bodies are complex, it is difficult to compute the contribution ratios quantitatively. However, water quality models have proved to be effective tools for estimating surface water quality. In this project, an enhanced QUAL2Kw model with an added module was applied to the Xin'anjiang Watershed to obtain water quality information along the river and to assess the contribution ratios of each pollutant source to a certain section (the Jiekou state-controlled section). Model validation indicated that the results were reliable. Then, contribution ratios were analyzed through the added module. Results show that among the pollutant sources, the Lianjiang tributary contributes the largest part of total nitrogen (50.43%), total phosphorus (45.60%), ammonia nitrogen (32.90%), nitrate (nitrite + nitrate) nitrogen (47.73%), and organic nitrogen (37.87%). Furthermore, contribution ratios in different reaches varied along the river. Compared with the pollutant load ratios of different sources in the watershed, an analysis of the contribution ratios of pollutant sources for each specific section, which takes the localized chemical and physical processes into consideration, is more suitable for local-regional water quality management. In summary, this method of analyzing the contribution ratios of pollutant sources to a specific section based on the QUAL2Kw model was found to support the improvement of the local environment.
NASA Astrophysics Data System (ADS)
Yuan, Zibing
Despite the continuous efforts on pollution control by the Hong Kong (HK) environmental authorities in the past decade, air pollution in HK has been deteriorating in recent years. In this thesis work a variety of observation-based approaches were applied to analyze the air pollutant monitoring data in HK and the Pearl River Delta (PRD) area. The two major pollutants of interest are ozone and respirable suspended particulate (RSP, or PM10), which exceed the Air Quality Objective most frequently. Receptor models serve as powerful tools for source identification, estimation of source contributions, and source localization when incorporated with wind profiles. This thesis work presents the first-ever application of two advanced receptor models, positive matrix factorization (PMF) and Unmix, to the PM10 and VOC speciation data in HK. Speciated PM10 data were collected from a monitoring network in HK between July 1998 and December 2005. Seven and nine sources were identified by Unmix and PMF, respectively. Overall, secondary sulfate and vehicle emissions gave the largest contributions to PM10 (27% each), followed by biomass burning/waste incineration (13%) and secondary nitrate (11%). Sources were classified as local or regional based on their seasonal and spatial variations as well as source directional analysis. Regional sources accounted for about 56% of the ambient PM10 mass on an annual basis, and even more (67%) during winter. Regional contributions also showed an increasing trend, with their annual averaged fraction rising from 53% in 1999 to 64% in 2005. The particulate pollution in HK is therefore sensitive to regional influence, and regional air quality management strategies are crucial in reducing PM levels in HK. On the other hand, many species with significant adverse health impacts were produced locally. Local control measures should be strengthened for better protection of public health.
Secondary organic carbon (SOC) can be a significant portion of the OC in particles. SOC was examined using the PMF-derived source apportionment results and estimated as the sum of the OC present in the secondary sources. The annual average SOC in HK was estimated to be 4.1 µgC/m3, while the summertime average was 1.8 µgC/m3 and the wintertime average was 6.9 µgC/m3. In comparison with the SOC estimates by the PMF method, the method that uses elemental carbon (EC) as the tracer for primary OC to derive SOC overestimates by 78-210% for the summer samples and by 9-49% for the winter samples. The overestimation by the EC tracer method was a result of the incapability of obtaining a single OC/EC ratio that represented a mixture of primary sources varying in time and space. It was found that the seasonal variations of SOC and secondary sulfate were in sync, suggesting common factors that control their formation. The close tracking of SOC and sulfate appears to suggest that the in-cloud pathway is also important for SOC formation. Speciated VOCs were obtained at four air quality monitoring stations (AQMSs) in HK from August 2002 to August 2003. Both Unmix and PMF identified five stable sources. Mixed solvents gave the largest contributions, ranging from 34% at rural Tap Mun to 52% at urban Central/Western. The wind directional analysis indicates the main source location is the central PRD area. Regional transport accounts for about 19% of the total VOC, while the two local and vehicle-related sources are responsible for 27%. By weighing the abundance and reactivity of each VOC species, mixed solvent use is estimated to be the largest contributor to local ozone, with contributions ranging from 42% at Tung Chung to 57% at Tap Mun. The next largest is vehicle exhaust, accounting for about 28% in Yuen Long. Biogenic emission is responsible for nearly 20% of the ozone generation at Tap Mun, but this figure is likely underestimated.
Distinct secondary inorganic aerosol (SIA) responses are expected from the reduction of different precursors as a result of the non-linear chemical reactions involved in its formation. The last part of this thesis work concerns developing a chemical box model to determine the sensitivity of SIA to changes in the emissions of its precursors. The model is composed of three parts. The first part is a time-dependent module that estimates the temporal variation of all species, before and after the emission has been perturbed. The second part is a gas-particle conversion module that partitions the semi-volatile species between the two phases. The last module then calculates the aerosol-forming potential for the entire simulation period. It is estimated that SIA shows the largest response to the reduction of SO2 emission in Yuen Long, followed by NH3 and NOx. Significant regional transport of SIA is found in Yuen Long, limiting what the results indicate about the relative effectiveness of controlling the different precursors. Finally, future research directions are proposed to better refine and validate the model performance for SIA simulation.
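PMF is, at heart, a non-negativity-constrained factorization of the samples-by-species matrix into source contributions and source profiles. A sketch using plain multiplicative-update NMF on synthetic data (unlike true PMF, this ignores the per-measurement uncertainty weighting that PMF applies):

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic "speciated PM10" data: samples x species = contributions @ profiles
n_samples, n_species, n_sources = 300, 12, 3
profiles = rng.uniform(0.0, 1.0, (n_sources, n_species))   # source signatures
contrib = rng.gamma(2.0, 1.0, (n_samples, n_sources))      # daily strengths
X = contrib @ profiles + 0.01 * rng.uniform(size=(n_samples, n_species))

# Multiplicative-update NMF: W (contributions) and H (profiles) stay
# non-negative by construction, as PMF's factors do.
W = rng.uniform(0.1, 1.0, (n_samples, n_sources))
H = rng.uniform(0.1, 1.0, (n_sources, n_species))
for _ in range(500):
    H *= (W.T @ X) / (W.T @ W @ H + 1e-12)
    W *= (X @ H.T) / (W @ H @ H.T + 1e-12)

recon_err = np.linalg.norm(X - W @ H) / np.linalg.norm(X)
```

In a real receptor-modeling study the rows of H would then be matched to physically interpretable profiles (secondary sulfate, vehicle exhaust, biomass burning, and so on), and the columns of W would give the daily source contributions that feed trend and directional analyses.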
Precise and fast spatial-frequency analysis using the iterative local Fourier transform.
Lee, Sukmock; Choi, Heejoo; Kim, Dae Wook
2016-09-19
The use of the discrete Fourier transform has decreased since the introduction of the fast Fourier transform (fFT), which is a numerically efficient computing process. This paper presents the iterative local Fourier transform (ilFT), a set of new processing algorithms that iteratively apply the discrete Fourier transform within a local and optimal frequency domain. The new technique achieves 210 times higher frequency resolution than the fFT within a comparable computation time. The method's superb computing efficiency, high resolution, spectrum zoom-in capability, and overall performance are evaluated and compared to other advanced high-resolution Fourier transform techniques, such as the fFT combined with several fitting methods. The effectiveness of the ilFT is demonstrated through the data analysis of a set of Talbot self-images (1280 × 1024 pixels) obtained with an experimental setup using a grating in a diverging beam produced by a coherent point source.
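The core idea of iterating a discrete Fourier transform on a shrinking local frequency band can be sketched as follows; the 10x zoom per iteration, the grid size, and the function names are illustrative assumptions, not the paper's ilFT implementation:

```python
import numpy as np

# Sketch of an iterative local ("zoom") DFT: a coarse FFT locates the dominant
# peak, then the DFT is re-evaluated on ever-finer frequency grids around it.

def local_dft(x, freqs, fs):
    n = np.arange(len(x))
    # Explicit DFT evaluated only at the requested frequencies
    return np.array([np.sum(x * np.exp(-2j * np.pi * f * n / fs)) for f in freqs])

def iterative_peak_frequency(x, fs, iters=4, points=64):
    spec = np.abs(np.fft.rfft(x))
    f = np.fft.rfftfreq(len(x), 1.0 / fs)
    center = f[np.argmax(spec)]
    width = fs / len(x)                      # start from one FFT bin
    for _ in range(iters):
        grid = np.linspace(center - width, center + width, points)
        center = grid[np.argmax(np.abs(local_dft(x, grid, fs)))]
        width /= 10.0                        # zoom in
    return center

fs, f0 = 1000.0, 123.456
t = np.arange(1024) / fs
x = np.sin(2 * np.pi * f0 * t)
print(round(iterative_peak_frequency(x, fs), 2))  # ~123.46, far finer than the ~1 Hz FFT bin
```

Each iteration costs only `points` explicit DFT sums, so the zoom-in remains cheap compared with padding the FFT to an equivalent global resolution.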
Slope tomography based on eikonal solvers and the adjoint-state method
NASA Astrophysics Data System (ADS)
Tavakoli F., B.; Operto, S.; Ribodetti, A.; Virieux, J.
2017-06-01
Velocity macromodel building is a crucial step in the seismic imaging workflow, as it provides the necessary background model for migration or full waveform inversion. In this study, we present a new formulation of stereotomography that can handle more efficiently long-offset acquisition, complex geological structures and large-scale data sets. Stereotomography is a slope tomographic method based on semi-automatic picking of locally coherent events. Each locally coherent event, characterized by its two-way traveltime and two slopes in common-shot and common-receiver gathers, is tied to a scatterer or a reflector segment in the subsurface. Ray tracing provides a natural forward engine to compute traveltimes and slopes but can suffer from non-uniform ray sampling in the presence of complex media and long-offset acquisitions. Moreover, most implementations of stereotomography explicitly build a sensitivity matrix, leading to the resolution of large systems of linear equations, which can be cumbersome for large-scale data sets. We overcome these issues with a new matrix-free formulation of stereotomography: a factored eikonal solver based on the fast sweeping method to compute first-arrival traveltimes, and an adjoint-state formulation to compute the gradient of the misfit function. By solving the eikonal equation from sources and receivers, we make the computational cost proportional to the number of sources and receivers, while it is independent of the density of picked events in each shot and receiver gather. The model space involves the subsurface velocities and the scatterer coordinates, while the dips of the reflector segments are implicitly represented by the spatial support of the adjoint sources and are updated through the joint localization of nearby scatterers. We present an application on the complex Marmousi model for a towed-streamer acquisition and a realistic distribution of local events.
We show that the estimated model, built without any prior knowledge of the velocities, provides a reliable initial model for frequency-domain FWI of long-offset data for a starting frequency of 4 Hz, although some artefacts at the reservoir level result from a deficit of illumination. This formulation of slope tomography provides a computationally efficient alternative to waveform inversion methods such as reflection waveform inversion or differential-semblance optimization for building an initial model for pre-stack depth migration and conventional FWI.
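The fast sweeping eikonal solver at the heart of the forward engine can be sketched in 2D. This is the textbook first-order Godunov scheme on a uniform grid, not the authors' factored solver, and the grid size and slowness field are illustrative:

```python
import numpy as np

# Minimal 2D fast sweeping solver for the eikonal equation |grad T| = s
# (slowness s), computing first-arrival traveltimes T from a point source.

def fast_sweep(slowness, src, h=1.0, n_passes=4):
    ny, nx = slowness.shape
    T = np.full((ny, nx), np.inf)
    T[src] = 0.0
    for _ in range(n_passes):
        # Four alternating sweep orderings cover all characteristic directions
        for sy, sx in [(1, 1), (1, -1), (-1, 1), (-1, -1)]:
            for i in range(ny)[::sy]:
                for j in range(nx)[::sx]:
                    if (i, j) == src:
                        continue
                    a = min(T[i - 1, j] if i > 0 else np.inf,
                            T[i + 1, j] if i < ny - 1 else np.inf)
                    b = min(T[i, j - 1] if j > 0 else np.inf,
                            T[i, j + 1] if j < nx - 1 else np.inf)
                    if np.isinf(a) and np.isinf(b):
                        continue
                    f = slowness[i, j] * h
                    if abs(a - b) >= f:      # causality: one-sided update
                        t_new = min(a, b) + f
                    else:                    # two-sided quadratic update
                        t_new = 0.5 * (a + b + np.sqrt(2 * f * f - (a - b) ** 2))
                    T[i, j] = min(T[i, j], t_new)
    return T

# Homogeneous unit-slowness medium: traveltime approximates Euclidean distance
T = fast_sweep(np.ones((21, 21)), src=(10, 10))
print(round(T[10, 20], 2))  # 10.0 along a grid axis
```

Because each sweep visits every node once, the cost scales with grid size and the number of sources, independent of how many events were picked, which is the efficiency argument made above.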
Waveform inversion of acoustic waves for explosion yield estimation
Kim, K.; Rodgers, A. J.
2016-07-08
We present a new waveform inversion technique to estimate the energy of near-surface explosions using atmospheric acoustic waves. Conventional methods often employ air blast models based on a homogeneous atmosphere, in which acoustic wave propagation effects (e.g., refraction and diffraction) are not taken into account, and their accuracy therefore decreases with increasing source-receiver distance. In this study, three-dimensional acoustic simulations are performed with a finite difference method in realistic atmospheres and topography, and the modeled acoustic Green's functions are incorporated into the waveform inversion for the acoustic source time functions. The strength of the acoustic source is related to explosion yield based on a standard air blast model. The technique was applied to local explosions (<10 km) and provided reasonable yield estimates (<~30% error) in the presence of realistic topography and atmospheric structure. In conclusion, the presented method can be extended to explosions recorded at greater distances, provided proper meteorological specifications are available.
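For a single receiver, the inversion step of recovering a source time function from a recording and a modeled Green's function reduces to regularized deconvolution. A sketch with water-level damping and synthetic signals, not the authors' code:

```python
import numpy as np

# Recover a source time function s(t) from a recording d(t) = g(t) * s(t),
# where g is a modeled Green's function, by regularized spectral division.

def invert_source(d, g, water_level=1e-3):
    D, G = np.fft.rfft(d), np.fft.rfft(g)
    denom = np.abs(G) ** 2
    denom = np.maximum(denom, water_level * denom.max())   # water-level damping
    return np.fft.irfft(D * np.conj(G) / denom, n=len(d))

# Synthetic check: known source pulse; Green's function with a direct arrival
# and one echo
n = 256
s_true = np.zeros(n); s_true[10:14] = [1.0, 2.0, 1.5, 0.5]
g = np.zeros(n); g[0] = 1.0; g[40] = 0.6
d = np.fft.irfft(np.fft.rfft(g) * np.fft.rfft(s_true), n=n)   # d = g * s_true
s_est = invert_source(d, g)
print(round(float(np.max(np.abs(s_est - s_true))), 4))  # ~0 for noise-free data
```

The water level keeps spectral notches of the Green's function from amplifying noise; with noisy data it would be raised well above the 1e-3 used here.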
Takashima, Ryoichi; Takiguchi, Tetsuya; Ariki, Yasuo
2013-02-01
This paper presents a method for discriminating the location of the sound source (talker) using only a single microphone. In a previous work, the single-channel approach for discriminating the location of the sound source was discussed, where the acoustic transfer function from a user's position is estimated by using a hidden Markov model of clean speech in the cepstral domain. In this paper, each cepstral dimension of the acoustic transfer function is newly weighted, in order to obtain the cepstral dimensions having information that is useful for classifying the user's position. Then, this paper proposes a feature-weighting method for the cepstral parameter using multiple kernel learning, defining the base kernels for each cepstral dimension of the acoustic transfer function. The user's position is trained and classified by support vector machine. The effectiveness of this method has been confirmed by sound source (talker) localization experiments performed in different room environments.
Approach to identifying pollutant source and matching flow field
NASA Astrophysics Data System (ADS)
Liping, Pang; Yu, Zhang; Hongquan, Qu; Tao, Hu; Wei, Wang
2013-07-01
Accidental pollution events often threaten people's health and lives, and it is necessary to identify a pollutant source rapidly so that prompt actions can be taken to prevent the spread of pollution. This identification process, however, is one of the difficult problems in the inverse-problem area. This paper carries out some studies on this issue. An approach using single-sensor information with noise was developed to identify a sudden, continuously emitting trace pollutant source in a steady velocity field. This approach first compares the characteristic distance of the measured concentration sequence to multiple hypothetical concentration sequences at the sensor position, which are obtained from multiple hypotheses over the three source parameters. Source identification is then achieved by globally searching for the optimal values, with the maximum location probability as the objective function. Considering the large computational load of this global search, a local fine-mesh source search based on a priori coarse-mesh location probabilities is further used to improve the efficiency of identification. Studies have shown that the flow field has a very important influence on source identification. Therefore, we also discuss the impact of non-matching flow fields with estimation deviation on identification. Based on this analysis, a method for matching the accurate flow field is presented to improve the accuracy of identification. To verify the practical application of the above method, an experimental system simulating a sudden pollution process in a steady flow field was set up, and experiments were conducted with a known diffusion coefficient. The studies showed that the three parameters (position, emission strength and initial emission time) of the pollutant source in the experiment can be estimated by using the flow-field matching and source identification method.
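The multiple-hypothesis search can be illustrated with a toy version: simulate the sensor's concentration sequence for candidate (position, strength, release-time) triples and keep the best match. The 1D instantaneous-release advection-diffusion model and every numeric value below are illustrative assumptions, not the paper's continuous-emission setup:

```python
import numpy as np

# Grid search over three source parameters against one noisy sensor record.

def concentration(t, x_sensor, x_src, q, t0, u=0.5, D=0.1):
    """Concentration at the sensor for a release of strength q at (x_src, t0)."""
    c = np.zeros_like(t)
    m = t > t0
    dt = t[m] - t0
    dx = x_sensor - x_src
    c[m] = q / np.sqrt(4 * np.pi * D * dt) * np.exp(-(dx - u * dt) ** 2 / (4 * D * dt))
    return c

t = np.linspace(0, 20, 200)
measured = concentration(t, x_sensor=5.0, x_src=1.0, q=2.0, t0=2.0)
measured += np.random.default_rng(0).normal(0, 0.01, t.size)   # sensor noise

candidates = [(xs, q, t0)
              for xs in np.linspace(0, 4, 21)
              for q in np.linspace(0.5, 4, 15)
              for t0 in np.linspace(0, 5, 26)]
best = min(candidates,
           key=lambda p: np.sum((measured - concentration(t, 5.0, *p)) ** 2))
print(tuple(round(float(v), 2) for v in best))  # expected near (1.0, 2.0, 2.0)
```

The coarse-to-fine refinement described above would replace this single dense grid with a cheap coarse pass followed by a fine mesh only around high-probability cells.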
Source apportionment of volatile organic compounds measured near a cold heavy oil production area
NASA Astrophysics Data System (ADS)
Aklilu, Yayne-abeba; Cho, Sunny; Zhang, Qianyu; Taylor, Emily
2018-07-01
This study investigated sources of volatile organic compounds (VOCs) observed during periods of elevated hydrocarbon concentrations adjacent to a cold heavy oil extraction area in Alberta, Canada. Elevated total hydrocarbon (THC) concentrations were observed during the early morning hours and were associated with meteorological conditions indicative of gravitational drainage flows. THC concentrations were higher during the colder months, an occurrence likely promoted by a lower mixing height. On the other hand, other VOCs had higher concentrations in the summer; this is likely due to increased evaporation and atmospheric chemistry during the summer months. Of all investigated VOC compounds, alkanes contributed the greatest in all seasons. A source apportionment method, positive matrix factorization (PMF), was used to identify the potential contribution of various sources to the observed VOC concentrations. A total of five factors were apportioned including Benzene/Hexane, Oil Evaporative, Toluene/Xylene, Acetone and Assorted Local/Regional Air Masses. Three of the five factors (i.e., Benzene/Hexane, Oil Evaporative, and Toluene/Xylene), formed 27% of the reconstructed and unassigned concentration and are likely associated with emissions from heavy oil extraction. The three factors associated with emissions were comparable to the available emission inventory for the area. Potential sources include solution gas venting, combustion exhaust and fugitive emissions from extraction process additives. The remaining two factors (i.e., Acetone and Assorted Local/Regional Air Mass), comprised 49% of the reconstructed and unassigned concentration and contain key VOCs associated with atmospheric chemistry or the local/regional air mass such as acetone, carbonyl sulphide, Freon-11 and butane.
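The factor-analysis step behind PMF can be sketched as plain non-negative matrix factorization; real PMF additionally down-weights each data point by its measurement uncertainty, which this illustrative sketch on synthetic data omits:

```python
import numpy as np

# Non-negative factorization X ~ G F with multiplicative updates: G holds
# factor contributions per sample, F holds factor profiles per species.

def nmf(X, k, iters=500, eps=1e-9, seed=0):
    rng = np.random.default_rng(seed)
    G = rng.random((X.shape[0], k))    # contributions (samples x factors)
    F = rng.random((k, X.shape[1]))    # profiles (factors x species)
    for _ in range(iters):
        G *= (X @ F.T) / (G @ F @ F.T + eps)
        F *= (G.T @ X) / (G.T @ G @ F + eps)
    return G, F

# Two synthetic "sources" mixed into 100 samples of 6 species
rng = np.random.default_rng(1)
profiles = np.array([[5.0, 1.0, 0.0, 2.0, 0.0, 1.0],
                     [0.0, 2.0, 4.0, 0.0, 3.0, 1.0]])
contributions = rng.random((100, 2))
X = contributions @ profiles
G, F = nmf(X, k=2)
print(round(float(np.linalg.norm(X - G @ F) / np.linalg.norm(X)), 3))  # near 0
```

In practice the rows of F recovered from ambient data are interpreted against known source signatures, which is how named factors such as "Oil Evaporative" arise.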
3D localization of electrophysiology catheters from a single x-ray cone-beam projection
DOE Office of Scientific and Technical Information (OSTI.GOV)
Robert, Normand, E-mail: normand.robert@sri.utoronto.ca; Polack, George G.; Sethi, Benu
2015-10-15
Purpose: X-ray images allow the visualization of percutaneous devices such as catheters in real time but inherently lack depth information. The provision of 3D localization of these devices from cone beam x-ray projections would be advantageous for interventions such as electrophysiology (EP), whereby the operator needs to return a device to the same anatomical locations during the procedure. A method to achieve real-time 3D single view localization (SVL) of an object of known geometry from a single x-ray image is presented. SVL exploits the change in the magnification of an object as its distance from the x-ray source is varied. The x-ray projection of an object of interest is compared to a synthetic x-ray projection of a model of said object as its pose is varied. Methods: SVL was tested with a 3 mm spherical marker and an electrophysiology catheter. The effect of x-ray acquisition parameters on SVL was investigated. An independent reference localization method was developed to compare results when imaging a catheter translated via a computer-controlled three-axis stage. SVL was also performed on clinical fluoroscopy image sequences. A commercial navigation system was used in some clinical image sequences for comparison. Results: SVL estimates exhibited little change as x-ray acquisition parameters were varied. The reproducibility of catheter position estimates in phantoms is characterized by the standard deviations (σ_x, σ_y, σ_z) = (0.099 mm, 0.093 mm, 2.2 mm), where x and y are parallel to the detector plane and z is the distance from the x-ray source. Position estimates (x, y, z) exhibited a 4% systematic error (underestimation) when compared to the reference method. The authors demonstrated that EP catheters can be tracked in clinical fluoroscopic images. Conclusions: It has been shown that EP catheters can be localized in real time in phantoms and clinical images at fluoroscopic exposure rates.
Further work is required to characterize performance in clinical images as well as the sensitivity to clinical image quality.
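The magnification principle that SVL exploits is compact enough to state in code: a feature of known size at distance z from the x-ray source projects onto the detector with magnification SDD/z, where SDD is the source-detector distance. The numbers below are illustrative, not values from the study:

```python
# Depth from magnification, the principle behind single-view localization.

def depth_from_magnification(true_size_mm, projected_size_mm, sdd_mm):
    """Distance from the x-ray source implied by the measured magnification."""
    return sdd_mm * true_size_mm / projected_size_mm

# A 3 mm marker imaged at 4 mm with a 1200 mm source-detector distance:
print(depth_from_magnification(3.0, 4.0, 1200.0))  # 900.0 (mm from the source)
```

The much larger σ_z reported above reflects how weakly the projected size changes with depth compared with in-plane position.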
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yi, Jianbing, E-mail: yijianbing8@163.com; Yang, Xuan, E-mail: xyang0520@263.net; Li, Yan-Ran, E-mail: lyran@szu.edu.cn
2015-10-15
Purpose: Image-guided radiotherapy is an advanced 4D radiotherapy technique that has been developed in recent years. However, respiratory motion causes significant uncertainties in image-guided radiotherapy procedures. To address these issues, an innovative lung motion estimation model based on robust point matching is proposed in this paper. Methods: An innovative robust point matching algorithm using dynamic point shifting is proposed to estimate patient-specific lung motion during free breathing from 4D computed tomography data. The correspondence of the landmark points is determined from the Euclidean distance between the landmark points and the similarity between the local images that are centered at the points at the same time. To ensure that the points in the source image correspond to the points in the target image during other phases, the virtual target points are first created and shifted based on the similarity between the local image centered at the source point and the local image centered at the virtual target point. Second, the target points are shifted by the constrained inverse function mapping the target points to the virtual target points. The source point set and shifted target point set are used to estimate the transformation function between the source image and target image. Results: The performance of the authors' method is evaluated on two publicly available lung datasets, DIR-lab and POPI-model. For target registration errors computed on 750 landmark points in six phases of the DIR-lab dataset and 37 landmark points in ten phases of the POPI-model dataset, the mean and standard deviation with the authors' method are 1.11 and 1.11 mm; they are 2.33 and 2.32 mm without considering image intensity, and 1.17 and 1.19 mm with sliding conditions.
For the two phases of maximum inhalation and maximum exhalation in the DIR-lab dataset, with 300 landmark points per case, the mean and standard deviation of target registration errors on the 3000 landmark points of ten cases with the authors' method are 1.21 and 1.04 mm. In the EMPIRE10 lung registration challenge, the authors' method ranks 24th of 39. According to the index of the maximum shear stretch, the authors' method is also able to describe the discontinuous motion at the lung boundaries. Conclusions: By establishing the correspondence of the landmark points in the source phase and the other target phases, combining shape matching and image intensity matching together, the mismatching issue in the robust point matching algorithm is adequately addressed. The target registration errors are statistically reduced by shifting the virtual target points and target points. The authors' method, with consideration of sliding conditions, can effectively estimate the discontinuous motion, and the estimated motion is natural. The primary limitation of the proposed method is that temporal constraints on the trajectories of voxels are not introduced into the motion model. However, the proposed method provides satisfactory motion information, which results in precise tumor coverage by the radiation dose during radiotherapy.
47 CFR 11.18 - EAS Designations.
Code of Federal Regulations, 2011 CFR
2011-10-01
... Designations. (a) National Primary (NP) is a source of EAS Presidential messages. (b) Local Primary (LP) is a... as specified in its EAS Local Area Plan. If it is unable to carry out this function, other LP sources... broadcast stations in the Local Area. (c) State Primary (SP) is a source of EAS State messages. These...
47 CFR 11.18 - EAS Designations.
Code of Federal Regulations, 2010 CFR
2010-10-01
... Designations. (a) National Primary (NP) is a source of EAS Presidential messages. (b) Local Primary (LP) is a... as specified in its EAS Local Area Plan. If it is unable to carry out this function, other LP sources... broadcast stations in the Local Area. (c) State Primary (SP) is a source of EAS State messages. These...
The effect of using genealogy-based haplotypes for genomic prediction
2013-01-01
Background Genomic prediction uses two sources of information: linkage disequilibrium between markers and quantitative trait loci, and additive genetic relationships between individuals. One way to increase the accuracy of genomic prediction is to capture more linkage disequilibrium by regression on haplotypes instead of regression on individual markers. The aim of this study was to investigate the accuracy of genomic prediction using haplotypes based on local genealogy information. Methods A total of 4429 Danish Holstein bulls were genotyped with the 50K SNP chip. Haplotypes were constructed using local genealogical trees. Effects of haplotype covariates were estimated with two types of prediction models: (1) assuming that effects had the same distribution for all haplotype covariates, i.e. the GBLUP method, and (2) assuming that a large proportion (π) of the haplotype covariates had zero effect, i.e. a Bayesian mixture method. Results About 7.5 times more covariate effects were estimated when fitting haplotypes based on local genealogical trees compared to fitting individual markers. Genealogy-based haplotype clustering slightly increased the accuracy of genomic prediction and, in some cases, decreased the bias of prediction. With the Bayesian method, accuracy of prediction was less sensitive to the parameter π when fitting haplotypes compared to fitting markers. Conclusions Use of haplotypes based on genealogy can slightly increase the accuracy of genomic prediction. Improved methods to cluster the haplotypes constructed from local genealogy could lead to additional gains in accuracy. PMID:23496971
Castrucci, Brian C; Rhoades, Elizabeth K; Leider, Jonathon P; Hearne, Shelley
2015-01-01
The epidemiologic shift in the leading causes of mortality from infectious disease to chronic disease has created significant challenges for public health surveillance at the local level. We describe how the largest US city health departments identify and use data to inform their work, and we identify the data and information that local public health leaders have specified as being necessary to help better address specific problems in their communities. We used a mixed-methods design that included key informant interviews, as well as a smaller embedded survey to quantify organizational characteristics related to data capacity. Interview data were independently coded and analyzed for major themes around data needs, barriers, and achievements. Participants were forty-five public health leaders from each of three specific positions (local health official, chief of policy, and chief science or medical officer) in 16 large urban health departments. Public health leaders in large urban local health departments reported that timely data and data on chronic disease that are available at smaller geographical units are difficult to obtain without additional resources. Despite departments' successes in creating ad hoc sources of local data to effect policy change, all participants described the need for more timely data that could be geocoded at a neighborhood or census tract level to more effectively target their resources. Electronic health records, claims data, and hospital discharge data were identified as sources of data that could be used to augment the data currently available to local public health leaders. Monitoring the status of community health indicators and using the information to identify priority issues are core functions of all public health departments. Public health professionals must have access to timely "hyperlocal" data to detect trends, allocate resources to areas of greatest priority, and measure the effectiveness of interventions.
Although innovations in the largest local health departments in large urban areas have established some methods to obtain local data on chronic disease, leaders recognize that there is an urgent need for more timely and more geographically specific data at the neighborhood or census tract level to efficiently and effectively address the most pressing problems in public health.
Using Network Theory to Understand Seismic Noise in Dense Arrays
NASA Astrophysics Data System (ADS)
Riahi, N.; Gerstoft, P.
2015-12-01
Dense seismic arrays offer an opportunity to study anthropogenic seismic noise sources with unprecedented detail. Man-made sources typically have high frequency and low intensity, and their energy propagates as surface waves. As a result, attenuation restricts their measurable footprint to a small subset of sensors. Medium heterogeneities can further introduce wave front perturbations that limit processing based on travel times. We demonstrate a non-parametric technique that can reliably identify very local events within the array as a function of frequency and time without using travel times. The approach estimates the non-zero support of the array covariance matrix and then uses network analysis tools to identify clusters of sensors sensing a common source. We verify the method on simulated data and then apply it to the Long Beach (CA) geophone array. The method exposes a helicopter traversing the array, oil production facilities with different characteristics, and the fact that noise sources near roads tend to be around 10-20 Hz.
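The covariance-support idea can be sketched as follows: threshold the magnitude of the array covariance (here, correlation) matrix to obtain an adjacency matrix, then extract connected clusters of sensors with a graph traversal. The threshold and the synthetic data are illustrative assumptions:

```python
import numpy as np

# Identify groups of sensors sharing a common source via the non-zero
# support of the sensor correlation matrix plus breadth-first search.

def sensor_clusters(data, threshold=0.8):
    corr = np.corrcoef(data)                    # sensors x sensors
    adj = np.abs(corr) > threshold
    np.fill_diagonal(adj, False)
    seen, clusters = set(), []
    for start in range(data.shape[0]):
        if start in seen:
            continue
        stack, comp = [start], []
        while stack:
            i = stack.pop()
            if i in seen:
                continue
            seen.add(i)
            comp.append(i)
            stack.extend(int(j) for j in np.flatnonzero(adj[i]))
        if len(comp) > 1:                       # keep multi-sensor clusters only
            clusters.append(sorted(comp))
    return clusters

# Synthetic array: sensors 0-2 share a local source, sensors 3-5 are noise-only
rng = np.random.default_rng(0)
source = rng.standard_normal(1000)
data = rng.standard_normal((6, 1000)) * 0.1
data[:3] += source
print(sensor_clusters(data))  # [[0, 1, 2]]
```

Because no travel times enter the computation, wave front perturbations from medium heterogeneity do not break the clustering, which is the point made above.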
Automated detection of extended sources in radio maps: progress from the SCORPIO survey
NASA Astrophysics Data System (ADS)
Riggi, S.; Ingallinera, A.; Leto, P.; Cavallaro, F.; Bufano, F.; Schillirò, F.; Trigilio, C.; Umana, G.; Buemi, C. S.; Norris, R. P.
2016-08-01
Automated source extraction and parametrization represents a crucial challenge for the next-generation radio interferometer surveys, such as those performed with the Square Kilometre Array (SKA) and its precursors. In this paper, we present a new algorithm, called CAESAR (Compact And Extended Source Automated Recognition), to detect and parametrize extended sources in radio interferometric maps. It is based on a pre-filtering stage, allowing image denoising, compact source suppression and enhancement of diffuse emission, followed by an adaptive superpixel clustering stage for final source segmentation. A parametrization stage provides source flux information and a wide range of morphology estimators for post-processing analysis. We developed CAESAR in a modular software library, also including different methods for local background estimation and image filtering, along with alternative algorithms for both compact and diffuse source extraction. The method was applied to real radio continuum data collected at the Australian Telescope Compact Array (ATCA) within the SCORPIO project, a pathfinder of the Evolutionary Map of the Universe (EMU) survey at the Australian Square Kilometre Array Pathfinder (ASKAP). The source reconstruction capabilities were studied over different test fields in the presence of compact sources, imaging artefacts and diffuse emission from the Galactic plane and compared with existing algorithms. When compared to a human-driven analysis, the designed algorithm was found capable of detecting known target sources and regions of diffuse emission, outperforming alternative approaches over the considered fields.
SOURCE PULSE ENHANCEMENT BY DECONVOLUTION OF AN EMPIRICAL GREEN'S FUNCTION.
Mueller, Charles S.
1985-01-01
Observations of the earthquake source-time function are enhanced if path, recording-site, and instrument complexities can be removed from seismograms. Assuming that a small earthquake has a simple source, its seismogram can be treated as an empirical Green's function and deconvolved from the seismogram of a larger and/or more complex earthquake by spectral division. When the deconvolution is well posed, the quotient spectrum represents the apparent source-time function of the larger event. This study shows that with high-quality locally recorded earthquake data it is feasible to Fourier transform the quotient and obtain a useful result in the time domain. In practice, the deconvolution can be stabilized by one of several simple techniques. An application of the method is given.
NASA Astrophysics Data System (ADS)
Wapenaar, C. P. A.; Van der Neut, J.; Thorbecke, J.; Broggini, F.; Slob, E. C.; Snieder, R.
2015-12-01
Imagine one could place seismic sources and receivers at any desired position inside the earth. Since the receivers would record the full wave field (direct waves, up- and downward reflections, multiples, etc.), this would give a wealth of information about the local structures, material properties and processes in the earth's interior. Although in reality one cannot place sources and receivers anywhere inside the earth, it appears to be possible to create virtual sources and receivers at any desired position, accurately mimicking the desired situation. The underlying method involves some major steps beyond standard seismic interferometry. With seismic interferometry, virtual sources can be created at the positions of physical receivers, assuming these receivers are illuminated isotropically. Our proposed method does not need physical receivers at the positions of the virtual sources; moreover, it does not require isotropic illumination. To create virtual sources and receivers anywhere inside the earth, it suffices to record the reflection response with physical sources and receivers at the earth's surface. We do not need detailed information about the medium parameters; it suffices to have an estimate of the direct waves between the virtual-source positions and the acquisition surface. With these prerequisites, our method can create virtual sources and receivers, anywhere inside the earth, which record the full wave field. The up- and downward reflections, multiples, etc. in the virtual responses are extracted directly from the reflection response at the surface. The retrieved virtual responses form an ideal starting point for accurate seismic imaging, characterization and monitoring.
Shepherd, Jason [Albuquerque, NM; Mitchell, Scott A [Albuquerque, NM; Jankovich, Steven R [Anaheim, CA; Benzley, Steven E [Provo, UT
2007-05-15
The present invention provides a meshing method, called grafting, that lifts the prior art constraint on abutting surfaces, including surfaces that are linking, source/target, or other types of surfaces of the trunk volume. The grafting method locally modifies the structured mesh of the linking surfaces allowing the mesh to conform to additional surface features. Thus, the grafting method can provide a transition between multiple sweep directions, extending sweeping algorithms to 2¾-D solids. The method is also suitable for use with non-sweepable volumes; the method provides a transition between meshes generated by methods other than sweeping as well.
USDA-ARS?s Scientific Manuscript database
The approach of anaerobic soil disinfestation (ASD) in Florida, a method for pre-plant soil treatment, consists of combining the application of the molasses (C source) with the application of composted poultry litter (CPL) as an organic amendment. However, CPL is not always available locally and is...
K-12 Non-Instructional Service Consolidation: Spending Changes and Scale Economies
ERIC Educational Resources Information Center
DeLuca, Thomas A.
2013-01-01
Educational policy makers (e.g., legislators, state and local school boards) continue to promote inter-district service consolidation as one method to reduce operating expenditures, citing economies of scale as the source of any savings. This study uses survey data to identify the extent of non-instructional service consolidation in Michigan, with…
Development and evaluation of modified envelope correlation method for deep tectonic tremor
NASA Astrophysics Data System (ADS)
Mizuno, N.; Ide, S.
2017-12-01
We develop a new location method for deep tectonic tremors, as an improvement of the widely used envelope correlation method, and apply it to construct a tremor catalog in western Japan. Using the cross-correlation functions as objective functions and weighting the components of the data by the inverse of the error variances, the envelope cross-correlation method is redefined as a maximum likelihood method. This method is also capable of multiple-source detection, because when several events occur almost simultaneously, they appear as local maxima of the likelihood. The average of the weighted cross-correlation functions, defined as ACC, is a nonlinear function whose variable is the position of a deep tectonic tremor. The optimization method has two steps. First, we fix the source depth to 30 km and use a grid search with 0.2 degree intervals to find the maxima of ACC, which are candidate event locations. Then, using each of the candidate locations as an initial value, we apply a gradient method to determine the horizontal and vertical components of a hypocenter. Sometimes, several source locations are determined in a time window of 5 minutes. We estimate that the resolution, defined as the minimum separation at which two sources can be detected individually by the location method, is about 100 km. The validity of this estimate is confirmed by a numerical test using synthetic waveforms. Applied to continuous seismograms in western Japan covering more than 10 years, the new method detected 27% more tremors than a previous method, owing to the multiple-source detection and the improvement in accuracy from the appropriate weighting scheme.
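The grid-search stage over ACC can be sketched as follows: each candidate source position predicts a differential traveltime per station pair, and each pair's envelopes are correlated after undoing that predicted delay. The geometry, velocity, and signals are synthetic, and the inverse-variance weighting of the real method is omitted:

```python
import numpy as np

# Locate a tremor by maximizing the average station-pair envelope
# cross-correlation (ACC) over a grid of candidate source positions.

def acc(envelopes, stations, candidate, fs, v=3.0):
    tt = np.linalg.norm(stations - candidate, axis=1) / v   # traveltimes (s)
    total, npairs = 0.0, 0
    for i in range(len(envelopes)):
        for j in range(i + 1, len(envelopes)):
            lag = int(round((tt[j] - tt[i]) * fs))
            shifted = np.roll(envelopes[j], -lag)           # undo predicted delay
            a = envelopes[i] - envelopes[i].mean()
            b = shifted - shifted.mean()
            total += np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
            npairs += 1
    return total / npairs

fs, v = 100.0, 3.0                                          # Hz, km/s
stations = np.array([[0.0, 0.0], [30.0, 0.0], [0.0, 30.0], [30.0, 30.0]])  # km
true_src = np.array([12.0, 18.0])
t = np.arange(0, 60, 1 / fs)
pulse = np.exp(-0.5 * (t - 20.0) ** 2 / 4.0)                # tremor-burst envelope
envs = [np.roll(pulse, int(round(np.linalg.norm(s - true_src) / v * fs)))
        for s in stations]

grid = [(x, y) for x in range(0, 31, 2) for y in range(0, 31, 2)]
best = max(grid, key=lambda g: acc(envs, stations, np.array(g, dtype=float), fs))
print(best)  # (12, 18)
```

A second simultaneous event would add another local maximum of the ACC surface, which is how the multiple-source detection described above falls out of the same objective.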
Optical and Acoustic Sensor-Based 3D Ball Motion Estimation for Ball Sport Simulators †.
Seo, Sang-Woo; Kim, Myunggyu; Kim, Yejin
2018-04-25
Estimation of the motion of ball-shaped objects is essential for the operation of ball sport simulators. In this paper, we propose an estimation system for 3D ball motion, including speed and angle of projection, using acoustic vector and infrared (IR) scanning sensors. Our system comprises three steps to estimate a ball motion: sound-based ball firing detection, sound source localization, and IR scanning for motion analysis. First, an impulsive sound classification based on the mel-frequency cepstrum and a feed-forward neural network is introduced to detect the ball launch sound. An impulsive sound source localization using a 2D array of microelectromechanical system (MEMS) microphones and delay-and-sum beamforming is presented to estimate the firing position. The time and position of a ball in 3D space are determined by a high-speed infrared scanning method. Our experimental results demonstrate that the estimation of ball motion based on sound allows a wider activity area than similar camera-based methods. Thus, it can be practically applied to various simulations in sports such as soccer and baseball.
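The delay-and-sum step can be sketched for direction finding: steer the array over candidate azimuths, delay each channel by the arrival time predicted for that azimuth, sum, and keep the azimuth with the highest output power. The array geometry (0.5 m, exaggerated for a clean far-field demonstration), sampling rate, and signals are illustrative assumptions:

```python
import numpy as np

# Delay-and-sum beamformer scanning azimuth for an impulsive source.

def delay_and_sum_azimuth(signals, mic_xy, fs, c=343.0, n_az=360):
    azimuths = np.linspace(0.0, 2 * np.pi, n_az, endpoint=False)
    powers = []
    for az in azimuths:
        u = np.array([np.cos(az), np.sin(az)])   # unit vector toward the source
        delays = mic_xy @ u / c                  # far-field plane-wave delays
        lags = np.round((delays - delays.min()) * fs).astype(int)
        summed = sum(np.roll(s, int(k)) for s, k in zip(signals, lags))
        powers.append(np.sum(summed ** 2))
    return np.degrees(azimuths[int(np.argmax(powers))])

fs = 48000.0
mic_xy = np.array([[0.0, 0.0], [0.5, 0.0], [0.0, 0.5], [0.5, 0.5]])
t = np.arange(0, 0.02, 1 / fs)
pulse = np.exp(-0.5 * ((t - 0.01) / 1e-4) ** 2)  # short impulsive "launch" pop
true_az = np.radians(60.0)
u = np.array([np.cos(true_az), np.sin(true_az)])
arrivals = -(mic_xy @ u) / 343.0                 # mics nearer the source hear it first
signals = [np.roll(pulse, int(round(a * fs))) for a in arrivals]
print(round(delay_and_sum_azimuth(signals, mic_xy, fs), 1))  # near 60.0 degrees
```

When the channels are steered to the true azimuth they add coherently, so the output power peaks there; all other steering directions leave the impulses misaligned.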
NASA Astrophysics Data System (ADS)
Trugman, Daniel T.; Shearer, Peter M.
2017-04-01
Earthquake source spectra contain fundamental information about the dynamics of earthquake rupture. However, the inherent tradeoffs in separating source and path effects, when combined with limitations in recorded signal bandwidth, make it challenging to obtain reliable source spectral estimates for large earthquake data sets. We present here a stable and statistically robust spectral decomposition method that iteratively partitions the observed waveform spectra into source, receiver, and path terms. Unlike previous methods of its kind, our new approach provides formal uncertainty estimates and does not assume self-similar scaling in earthquake source properties. Its computational efficiency allows us to examine large data sets (tens of thousands of earthquakes) that would be impractical to analyze using standard empirical Green's function-based approaches. We apply the spectral decomposition technique to P wave spectra from five areas of active contemporary seismicity in Southern California: the Yuha Desert, the San Jacinto Fault, and the Big Bear, Landers, and Hector Mine regions of the Mojave Desert. We show that the source spectra are generally consistent with an increase in median Brune-type stress drop with seismic moment but that this observed deviation from self-similar scaling is both model dependent and varies in strength from region to region. We also present evidence for significant variations in median stress drop and stress drop variability on regional and local length scales. These results both contribute to our current understanding of earthquake source physics and have practical implications for the next generation of ground motion prediction assessments.
Applying local binary patterns in image clustering problems
NASA Astrophysics Data System (ADS)
Skorokhod, Nikolai N.; Elizarov, Alexey I.
2017-11-01
Because cloudiness plays a critical role in the Earth's radiative balance, studying the distribution of different cloud types and their movements is relevant. The main sources of such information are artificial satellites, which provide data in the form of images. The most common approach to processing and classifying cloud images is based on descriptions of texture features. We propose using a set of local binary patterns to describe image texture.
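The basic 3x3 local binary pattern (LBP) descriptor referred to above can be sketched as follows: each pixel is coded by thresholding its 8 neighbors against the center value, and the histogram of codes describes the texture. The tiny test image is invented for illustration.

```python
import numpy as np

def lbp_codes(img):
    """8-bit LBP code for each interior pixel of a 2D grayscale array."""
    c = img[1:-1, 1:-1]
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]       # 8 neighbors, one per bit
    code = np.zeros(c.shape, dtype=int)
    for bit, (dy, dx) in enumerate(shifts):
        nb = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        code |= (nb >= c).astype(int) << bit          # neighbor >= center sets the bit
    return code

img = np.array([[0, 0, 0, 0],
                [0, 9, 1, 0],
                [0, 1, 1, 0],
                [0, 0, 0, 0]], dtype=float)
codes = lbp_codes(img)
hist = np.bincount(codes.ravel(), minlength=256)      # the texture descriptor
```

For clustering, each cloud image (or image patch) would be represented by such a 256-bin histogram and compared with an ordinary histogram distance.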
Methane Leak Detection and Emissions Quantification with UAVs
NASA Astrophysics Data System (ADS)
Barchyn, T.; Fox, T. A.; Hugenholtz, C.
2016-12-01
Robust leak detection and emissions quantification algorithms are required to accurately monitor greenhouse gas emissions. Unmanned aerial vehicles (UAVs, `drones') could both reduce the cost and increase the accuracy of monitoring programs. However, aspects of the platform create unique challenges: UAVs typically collect large volumes of data that are close to the source (due to limited range) and often of lower quality (due to weight restrictions on sensors). Here we discuss algorithm development for (i) finding sources of unknown position (`leak detection') and (ii) quantifying emissions from a source of known position. We use data from a simulated leak and field study in Alberta, Canada. First, we detail a method for localizing a leak of unknown spatial location using iterative fits against a forward Gaussian plume model. We explore sources of uncertainty, both inherent to the method and operational. Results suggest this method is primarily constrained by the accuracy of wind direction data, the distance downwind from the source, and the non-Gaussian shape of close-range plumes. Second, we examine sources of uncertainty in quantifying emissions with the mass balance method. Results suggest precision is constrained by flux-plane interpolation errors and time offsets between spatially adjacent measurements. Drones can provide data closer to the ground than piloted aircraft, but large portions of the plume are still unquantified. Together, we find that despite larger volumes of data, working with the close-range plumes measured by UAVs is inherently difficult. We describe future efforts to mitigate these challenges and work towards more robust benchmarking for application in industrial and regulatory settings.
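A toy version of the iterative forward-model fit for leak detection (step i above) might look like this. Everything numerical is an assumption for illustration: the wind speed, the dispersion-coefficient growth rates, the known unit source strength, the measurement layout, and the noise level are invented, and a coarse-to-fine grid search stands in for whatever optimizer the authors used.

```python
import numpy as np

U = 3.0   # wind speed (m/s) along +x; assumed known

def plume(xs, ys, src, q=1.0):
    """Ground-level Gaussian plume concentration at measurement points."""
    dx, dy = xs - src[0], ys - src[1]
    sig_y = 0.08 * np.maximum(dx, 1e-6)   # crude near-field dispersion growth
    sig_z = 0.06 * np.maximum(dx, 1e-6)
    c = q / (np.pi * U * sig_y * sig_z) * np.exp(-dy ** 2 / (2 * sig_y ** 2))
    return np.where(dx > 0, c, 0.0)       # nothing upwind of the source

# Synthetic UAV samples downwind of a leak at (12, 5); 5% multiplicative noise.
rng = np.random.default_rng(0)
xs = rng.uniform(20, 120, 200)
ys = rng.uniform(-30, 40, 200)
obs = plume(xs, ys, (12.0, 5.0)) * (1 + 0.05 * rng.standard_normal(200))

# Iterative fit: coarse-to-fine grid search minimizing the squared misfit.
span, ctr = 50.0, np.array([50.0, 0.0])
for _ in range(6):
    gx, gy = np.meshgrid(np.linspace(ctr[0] - span, ctr[0] + span, 21),
                         np.linspace(ctr[1] - span, ctr[1] + span, 21))
    errs = [np.sum((plume(xs, ys, (a, b)) - obs) ** 2)
            for a, b in zip(gx.ravel(), gy.ravel())]
    k = int(np.argmin(errs))
    ctr = np.array([gx.ravel()[k], gy.ravel()[k]])
    span /= 4.0
est = ctr
```

As the abstract notes, in practice the fit degrades quickly with wind-direction error and with the non-Gaussian shape of very close-range plumes, neither of which this clean synthetic case exhibits.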
Calculating depths to shallow magnetic sources using aeromagnetic data from the Tucson Basin
Casto, Daniel W.
2001-01-01
Using gridded high-resolution aeromagnetic data, the performance of several automated 3-D depth-to-source methods was evaluated over shallow control sources based on how close their depth estimates came to the actual depths to the tops of the sources. For all three control sources, only the simple analytic signal method, the local wavenumber method applied to the vertical integral of the magnetic field, and the horizontal gradient method applied to the pseudo-gravity field provided median depth estimates that were close (-11% to +14% error) to the actual depths. Careful attention to data processing was required in order to calculate a sufficient number of depth estimates and to reduce the occurrence of false depth estimates. For example, to eliminate sampling bias, high-frequency noise and interference from deeper sources, it was necessary to filter the data before calculating derivative grids and subsequent depth estimates. To obtain smooth spatial derivative grids using finite differences, the data had to be gridded at intervals less than one percent of the anomaly wavelength. Before finding peak values in the derived signal grids, it was necessary to remove calculation noise by applying a low-pass filter in the grid-line directions and to re-grid at an interval that enabled the search window to encompass only the peaks of interest. Using the methods that worked best over the control sources, depth estimates over geologic sites of interest suggested the possible occurrence of volcanics nearly 170 meters beneath a city landfill. Also, a throw of around 2 kilometers was determined for a detachment fault that has a displacement of roughly 6 kilometers.
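The simple analytic-signal method named above admits a compact numerical illustration. For an idealized 2D magnetic contact the analytic-signal amplitude has the shape |A(x)| = K / sqrt(h^2 + x^2), which peaks over the contact and falls to half its maximum at |x| = sqrt(3)·h, so the depth follows from the half-width of the peak. The profile, sampling, and depth below are invented (the depth is merely suggested by the landfill example in the abstract), not the study's data.

```python
import numpy as np

h_true = 170.0                          # source depth (m); cf. the landfill example
x = np.arange(-2000.0, 2000.0, 5.0)     # profile coordinate (m)
amp = 1.0 / np.sqrt(h_true ** 2 + x ** 2)   # analytic-signal amplitude shape

# |A| reaches half its peak at |x| = sqrt(3) * depth, so read the depth from
# the half-width at half maximum of the sampled amplitude profile.
above = x[amp >= amp.max() / 2]
half_width = (above.max() - above.min()) / 2.0
h_est = half_width / np.sqrt(3.0)
```

The residual error here comes purely from the 5 m sample spacing, which echoes the abstract's point that gridding finer than the anomaly wavelength is needed for reliable depth estimates.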
Source-Modeling Auditory Processes of EEG Data Using EEGLAB and Brainstorm.
Stropahl, Maren; Bauer, Anna-Katharina R; Debener, Stefan; Bleichner, Martin G
2018-01-01
Electroencephalography (EEG) source localization approaches are often used to disentangle the spatial patterns mixed up in scalp EEG recordings. However, approaches differ substantially between experiments, may be strongly parameter-dependent, and results are not necessarily meaningful. In this paper we provide a pipeline for EEG source estimation, from raw EEG data pre-processing using EEGLAB functions up to source-level analysis as implemented in Brainstorm. The pipeline is tested using a data set of 10 individuals performing an auditory attention task. The analysis approach estimates sources of 64-channel EEG data without the prerequisite of individual anatomies or individually digitized sensor positions. First, we show advanced EEG pre-processing using EEGLAB, which includes artifact attenuation using independent component analysis (ICA). ICA is a linear decomposition technique that aims to reveal the underlying statistical sources of mixed signals and is further a powerful tool to attenuate stereotypical artifacts (e.g., eye movements or heartbeat). Data submitted to ICA are pre-processed to facilitate good-quality decompositions. Aiming toward an objective approach on component identification, the semi-automatic CORRMAP algorithm is applied for the identification of components representing prominent and stereotypic artifacts. Second, we present a step-wise approach to estimate active sources of auditory cortex event-related processing, on a single subject level. The presented approach assumes that no individual anatomy is available and therefore the default anatomy ICBM152, as implemented in Brainstorm, is used for all individuals. Individual noise modeling in this dataset is based on the pre-stimulus baseline period. For EEG source modeling we use the OpenMEEG algorithm as the underlying forward model based on the symmetric Boundary Element Method (BEM). 
We then apply the method of dynamical statistical parametric mapping (dSPM) to obtain physiologically plausible EEG source estimates. Finally, we show how to perform group level analysis in the time domain on anatomically defined regions of interest (auditory scout). The proposed pipeline needs to be tailored to the specific datasets and paradigms. However, the straightforward combination of EEGLAB and Brainstorm analysis tools may be of interest to others performing EEG source localization.
NASA Technical Reports Server (NTRS)
Curry, Mark A (Inventor); Senibi, Simon D (Inventor); Banks, David L (Inventor)
2010-01-01
A system and method for detecting damage to a structure is provided. The system includes a voltage source and at least one capacitor formed as a layer within the structure and responsive to the voltage source. The system also includes at least one sensor responsive to the capacitor to sense a voltage of the capacitor. A controller responsive to the sensor determines if damage to the structure has occurred based on the variance of the voltage of the capacitor from a known reference value. A method for sensing damage to a structure involves providing a plurality of capacitors and a controller, and coupling the capacitors to at least one surface of the structure. A voltage of the capacitors is sensed using the controller, and the controller calculates a change in the voltage of the capacitors. The method can include signaling a display system if a change in the voltage occurs.
Puszka, Agathe; Hervé, Lionel; Planat-Chrétien, Anne; Koenig, Anne; Derouard, Jacques; Dinten, Jean-Marc
2013-01-01
We show how to apply the Mellin-Laplace transform to process time-resolved reflectance measurements for diffuse optical tomography. We illustrate this method on simulated signals incorporating the main sources of experimental noise and suggest how to fine-tune the method in order to detect the deepest absorbing inclusions and optimize their localization in depth, depending on the dynamic range of the measurement. Finally, we apply this method to measurements acquired with a setup including a femtosecond laser, photomultipliers and a time-correlated single photon counting board. Simulations and experiments are illustrated for a probe with an interfiber distance of 1.5 cm and show the potential of time-resolved techniques for imaging absorption contrast in depth with this geometry. PMID:23577292
Liu, Quanying; Ganzetti, Marco; Wenderoth, Nicole; Mantini, Dante
2018-01-01
Resting state networks (RSNs) in the human brain were recently detected using high-density electroencephalography (hdEEG). This was done by using an advanced analysis workflow to estimate neural signals in the cortex and to assess functional connectivity (FC) between distant cortical regions. FC analyses were conducted either using temporal (tICA) or spatial independent component analysis (sICA). Notably, EEG-RSNs obtained with sICA were very similar to RSNs retrieved with sICA from functional magnetic resonance imaging data. It still remains to be clarified, however, what technological aspects of hdEEG acquisition and analysis primarily influence this correspondence. Here we examined to what extent the detection of EEG-RSN maps by sICA depends on the electrode density, the accuracy of the head model, and the source localization algorithm employed. Our analyses revealed that the collection of EEG data using a high-density montage is crucial for RSN detection by sICA, but also the use of appropriate methods for head modeling and source localization have a substantial effect on RSN reconstruction. Overall, our results confirm the potential of hdEEG for mapping the functional architecture of the human brain, and highlight at the same time the interplay between acquisition technology and innovative solutions in data analysis. PMID:29551969
DOT National Transportation Integrated Search
2014-10-01
Several Virginia localities have used local funding and financing sources to build new roads or complete major street : improvement projects when state and/or federal funding was not available. Many others have combined local funding sources : with s...
Hindriks, Rikkert; Schmiedt, Joscha; Arsiwalla, Xerxes D; Peter, Alina; Verschure, Paul F M J; Fries, Pascal; Schmid, Michael C; Deco, Gustavo
2017-01-01
Planar intra-cortical electrode (Utah) arrays provide a unique window into the spatial organization of cortical activity. Reconstruction of the current source density (CSD) underlying such recordings, however, requires "inverting" Poisson's equation. For inter-laminar recordings, this is commonly done by the CSD method, which consists in taking the second-order spatial derivative of the recorded local field potentials (LFPs). Although the CSD method has been tremendously successful in mapping the current generators underlying inter-laminar LFPs, its application to planar recordings is more challenging. While for inter-laminar recordings the CSD method seems reasonably robust against violations of its assumptions, it is unclear to what extent this holds for planar recordings. One of the objectives of this study is to characterize the conditions under which the CSD method can be successfully applied to Utah array data. Using forward modeling, we find that for spatially coherent CSDs, the CSD method yields inaccurate reconstructions due to volume-conducted contamination from currents in deeper cortical layers. An alternative approach is to "invert" a constructed forward model. The advantage of this approach is that any a priori knowledge about the geometrical and electrical properties of the tissue can be taken into account. Although several inverse methods have been proposed for LFP data, the applicability of existing electroencephalographic (EEG) and magnetoencephalographic (MEG) inverse methods to LFP data is largely unexplored. Another objective of our study, therefore, is to assess the applicability of the most commonly used EEG/MEG inverse methods to Utah array data. Our main conclusion is that these inverse methods provide more accurate CSD reconstructions than the CSD method. We illustrate the inverse methods using event-related potentials recorded from primary visual cortex of a macaque monkey during a motion discrimination task.
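The classical inter-laminar CSD method discussed above reduces to one line of numerics: the CSD is estimated as the negative second spatial derivative of the LFP along the electrode axis, scaled by the tissue conductivity. The contact spacing, conductivity, and synthetic LFP profile below are illustrative assumptions, not data from the study.

```python
import numpy as np

z = np.linspace(0.0, 1.5e-3, 16)       # 16 contacts at 0.1 mm spacing (m)
h = z[1] - z[0]
sigma = 0.3                            # tissue conductivity (S/m), an assumption
lfp = np.exp(-((z - 0.7e-3) / 2e-4) ** 2)   # synthetic laminar LFP profile (V)

# CSD estimate: negative second spatial derivative of the LFP, via a central
# finite difference (defined only at the interior contacts).
csd = -sigma * (lfp[2:] - 2 * lfp[1:-1] + lfp[:-2]) / h ** 2
```

The finite difference acts as a spatial high-pass filter, which is exactly why, as the abstract argues, volume-conducted contributions from deeper layers can corrupt the planar (Utah-array) variant of this estimate.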
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wagner, John C; Peplow, Douglas E.; Mosher, Scott W
2014-01-01
This paper presents a new hybrid (Monte Carlo/deterministic) method for increasing the efficiency of Monte Carlo calculations of distributions, such as flux or dose rate distributions (e.g., mesh tallies), as well as responses at multiple localized detectors and spectra. This method, referred to as Forward-Weighted CADIS (FW-CADIS), is an extension of the Consistent Adjoint Driven Importance Sampling (CADIS) method, which has been used for more than a decade to very effectively improve the efficiency of Monte Carlo calculations of localized quantities, e.g., flux, dose, or reaction rate at a specific location. The basis of this method is the development of an importance function that represents the importance of particles to the objective of uniform Monte Carlo particle density in the desired tally regions. Implementation of this method utilizes the results from a forward deterministic calculation to develop a forward-weighted source for a deterministic adjoint calculation. The resulting adjoint function is then used to generate consistent space- and energy-dependent source biasing parameters and weight windows that are used in a forward Monte Carlo calculation to obtain more uniform statistical uncertainties in the desired tally regions. The FW-CADIS method has been implemented and demonstrated within the MAVRIC sequence of SCALE and the ADVANTG/MCNP framework. Application of the method to representative, real-world problems, including calculation of dose rate and energy dependent flux throughout the problem space, dose rates in specific areas, and energy spectra at multiple detectors, is presented and discussed. Results of the FW-CADIS method and other recently developed global variance reduction approaches are also compared, and the FW-CADIS method outperformed the other methods in all cases considered.
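The importance-based source biasing at the heart of CADIS can be demonstrated on a deliberately tiny 1D problem. This is a hedged toy, not the SCALE/MAVRIC or ADVANTG implementation: "transport" is pure exponential attenuation, the cross section and geometry are invented, and the importance function exp(+sigma_t*x) is the exact adjoint for this problem, so the biased estimator is the zero-variance ideal that real CADIS approximates with a deterministic adjoint solution.

```python
import numpy as np

rng = np.random.default_rng(2)
sigma_t = 1.0     # total cross section (1/cm), illustrative
det = 5.0         # detector location (cm)
n = 20_000

def response(x):
    """Detector contribution of a particle born at x (pure attenuation)."""
    return np.exp(-sigma_t * (det - x))

# Analog sampling: uniform source on [0, det].
analog = response(rng.uniform(0, det, n)).mean()

# CADIS-style biasing: sample from q(x) proportional to source * importance,
# with importance ~ exp(+sigma_t * x); the CDF of q inverts analytically.
u = rng.uniform(0, 1, n)
xs = np.log(1 + u * (np.exp(sigma_t * det) - 1)) / sigma_t
q = sigma_t * np.exp(sigma_t * xs) / (np.exp(sigma_t * det) - 1)
w = (1.0 / det) / q                   # statistical weight = analog pdf / biased pdf
biased = (w * response(xs)).mean()
```

Both estimators are unbiased, but here every biased sample carries exactly the same weighted contribution, which is the zero-variance limit; FW-CADIS generalizes this idea so that particle density, and hence statistical uncertainty, is flattened over an entire mesh tally rather than one detector.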
Towards a street-level pollen concentration and exposure forecast
NASA Astrophysics Data System (ADS)
van der Molen, Michiel; Krol, Maarten; van Vliet, Arnold; Heuvelink, Gerard
2015-04-01
Atmospheric pollen is an increasing source of nuisance for people in industrialised countries and is associated with significant costs for medication and sick leave. Citizen pollen warnings are often based either on emission mapping using local temperature-sum approaches or on long-range atmospheric models. In practice, locally observed pollen may originate both from local sources (plants in streets and gardens) and from long-range transport. We argue that making this distinction is relevant because, due to boundary layer processes, the diurnal and spatial variation in pollen concentrations is much larger for pollen from local sources than for pollen from long-range transport. This may have an important impact on citizens' exposure to pollen and on mitigation strategies. However, little is known about the partitioning of pollen into local and long-range origin categories. Our objective is to study how the concentrations of pollen from different sources vary temporally and spatially, and how the source region influences exposure and mitigation strategies. We built a Hay Fever Forecast system (HFF) based on WRF-chem, Allergieradar.nl, and geo-statistical downscaling techniques. HFF distinguishes between local sources (individual trees) and regional sources (based on tree distribution maps). We show first results on how the diurnal variation of pollen concentrations depends on source proximity. Ultimately, we will compare the model with local pollen counts, patient nuisance scores and medicine use.
Detailed Aggregate Resources Study, Dry Lake Valley, Nevada.
1981-05-29
Ledge-rock sources supplied coarse aggregates; local sand sources (generally collected within a few miles of corresponding ledge-rock sources) supplied fine aggregates. Compressive and tensile strength tests were performed on cylinders, along with drying shrinkage measurements.
Mills, Travis; Lalancette, Marc; Moses, Sandra N; Taylor, Margot J; Quraan, Maher A
2012-07-01
Magnetoencephalography provides precise information about the temporal dynamics of brain activation and is an ideal tool for investigating rapid cognitive processing. However, in many cognitive paradigms visual stimuli are used, which evoke strong brain responses (typically 40-100 nAm in V1) that may impede the detection of weaker activations of interest. This is particularly a concern when beamformer algorithms are used for source analysis, due to artefacts such as "leakage" of activation from the primary visual sources into other regions. We have previously shown (Quraan et al. 2011) that we can effectively reduce leakage patterns and detect weak hippocampal sources by subtracting the functional images derived from the experimental task and a control task with similar stimulus parameters. In this study we assess the performance of three different subtraction techniques. In the first technique we follow the same post-localization subtraction procedures as in our previous work. In the second and third techniques, we subtract the sensor data obtained from the experimental and control paradigms prior to source localization. Using simulated signals embedded in real data, we show that when beamformers are used, subtraction prior to source localization allows for the detection of weaker sources and higher localization accuracy. The improvement in localization accuracy exceeded 10 mm at low signal-to-noise ratios, and sources down to below 5 nAm were detected. We applied our techniques to empirical data acquired with two different paradigms designed to evoke hippocampal and frontal activations, and demonstrated our ability to detect robust activations in both regions with substantial improvements over image subtraction. 
We conclude that removal of the common-mode dominant sources through data subtraction prior to localization further improves the beamformer's ability to project the n-channel sensor-space data to reveal weak sources of interest and allows more accurate localization.
Head movement compensation in real-time magnetoencephalographic recordings.
Little, Graham; Boe, Shaun; Bardouille, Timothy
2014-01-01
Neurofeedback- and brain-computer interface (BCI)-based interventions can be implemented using real-time analysis of magnetoencephalographic (MEG) recordings. Head movement during MEG recordings, however, can lead to inaccurate estimates of brain activity, reducing the efficacy of the intervention. Most real-time applications in MEG have utilized analyses that do not correct for head movement. Effective means of correcting for head movement are needed to optimize the use of MEG in such applications. Here we provide preliminary validation of a novel analysis technique, real-time source estimation (rtSE), that measures head movement and generates corrected current source time course estimates in real time. rtSE was applied while recording a calibrated phantom to determine phantom position localization accuracy and source amplitude estimation accuracy under stationary and moving conditions. Results were compared to off-line analysis methods to assess the validity of the rtSE technique. The rtSE method allowed accurate estimation of current source activity at the source level in real time, and accounted for movement of the source due to changes in phantom position. The rtSE technique requires modifications and specialized analysis of the following MEG workflow steps:
• Data acquisition
• Head position estimation
• Source localization
• Real-time source estimation
This work explains the technical details and validates each of these steps.
Unidentified Gamma-Ray Sources: Hunting Gamma-Ray Blazars
DOE Office of Scientific and Technical Information (OSTI.GOV)
Massaro, F.; D'Abrusco, R.; Tosti, G.
2012-04-02
One of the main scientific objectives of the ongoing Fermi mission is unveiling the nature of the unidentified {gamma}-ray sources (UGSs). Despite the large improvements of Fermi in the localization of {gamma}-ray sources with respect to past {gamma}-ray missions, about one third of the Fermi-detected objects are still not associated with low-energy counterparts. Recently, using the Wide-field Infrared Survey Explorer (WISE) survey, we discovered that blazars, the rarest class of Active Galactic Nuclei and the largest population of {gamma}-ray sources, can be recognized and separated from other extragalactic sources on the basis of their infrared (IR) colors. Based on this result, we designed an association method for the {gamma}-ray sources to recognize if there is a blazar candidate within the positional uncertainty region of a generic {gamma}-ray source. With this new IR diagnostic tool, we searched for {gamma}-ray blazar candidates associated with the UGS sample of the second Fermi {gamma}-ray catalog (2FGL). We found that our method associates at least one {gamma}-ray blazar candidate as a counterpart to each of 156 out of 313 UGSs analyzed. These new low-energy candidates have the same IR properties as the blazars associated with {gamma}-ray sources in the 2FGL catalog.
UNIDENTIFIED {gamma}-RAY SOURCES: HUNTING {gamma}-RAY BLAZARS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Massaro, F.; Ajello, M.; D'Abrusco, R.
2012-06-10
One of the main scientific objectives of the ongoing Fermi mission is unveiling the nature of unidentified {gamma}-ray sources (UGSs). Despite the major improvements of Fermi in the localization of {gamma}-ray sources with respect to the past {gamma}-ray missions, about one-third of the Fermi-detected objects are still not associated with low-energy counterparts. Recently, using the Wide-field Infrared Survey Explorer survey, we discovered that blazars, the rarest class of active galactic nuclei and the largest population of {gamma}-ray sources, can be recognized and separated from other extragalactic sources on the basis of their infrared (IR) colors. Based on this result, we designed an association method for the {gamma}-ray sources to recognize if there is a blazar candidate within the positional uncertainty region of a generic {gamma}-ray source. With this new IR diagnostic tool, we searched for {gamma}-ray blazar candidates associated with the UGS sample of the second Fermi {gamma}-ray LAT catalog (2FGL). We found that our method associates at least one {gamma}-ray blazar candidate as a counterpart to each of 156 out of 313 UGSs analyzed. These new low-energy candidates have the same IR properties as the blazars associated with {gamma}-ray sources in the 2FGL catalog.
Luck, Margaux; Bertho, Gildas; Bateson, Mathilde; Karras, Alexandre; Yartseva, Anastasia; Thervet, Eric
2016-01-01
1H Nuclear Magnetic Resonance (NMR)-based metabolic profiling is very promising for the diagnosis of the stages of chronic kidney disease (CKD). Because of the high dimension of NMR spectra datasets and the complex mixture of metabolites in biological samples, the identification of discriminant biomarkers of a disease is challenging. None of the widely used chemometric methods in NMR metabolomics performs a local exhaustive exploration of the data. We developed a descriptive and easily understandable approach that searches for discriminant local phenomena using an original exhaustive rule-mining algorithm in order to predict two groups of patients: 1) patients having low to mild CKD stages with no renal failure and 2) patients having moderate to established CKD stages with renal failure. Our predictive algorithm explores the m-dimensional variable space to capture the local overdensities of the two groups of patients in the form of easily interpretable rules. Afterwards, an L2-penalized logistic regression on the discriminant rules was used to build predictive models of the CKD stages. We explored a complex multi-source dataset that included the clinical, demographic, clinical chemistry, renal pathology and urine metabolomic data of a cohort of 110 patients. Given this multi-source dataset and the complex nature of metabolomic data, we analyzed 1- and 2-dimensional rules in order to integrate the information carried by the interactions between the variables. The results indicated that our local algorithm is a valuable analytical method for the precise characterization of multivariate CKD stage profiles and is as efficient as the classical global model using chi-squared variable selection, with approximately 70% correct classification.
The resulting predictive models predominantly identify urinary metabolites (such as 3-hydroxyisovalerate, carnitine, citrate, dimethylsulfone, creatinine and N-methylnicotinamide) as relevant variables indicating that CKD significantly affects the urinary metabolome. In addition, the simple knowledge of the concentration of urinary metabolites classifies the CKD stage of the patients correctly. PMID:27861591
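The 1-dimensional rule-mining step described above can be illustrated with a small sketch: exhaustively scan interval rules of the form "variable v in [a, b]" over a binned axis and keep the rule whose member patients are most dominated by one class. This is a hedged stand-in with synthetic data (not the CKD cohort), a crude purity criterion, and an arbitrary minimum-support threshold; the authors' algorithm additionally scores 2-dimensional rules and feeds them to the penalized regression.

```python
import numpy as np

rng = np.random.default_rng(3)
# Synthetic stand-in: class-1 "patients" concentrated in a narrow band of one
# metabolite axis, mimicking a local overdensity.
v = np.r_[rng.normal(2.0, 1.0, 150), rng.normal(5.0, 0.3, 50)]
y = np.r_[np.zeros(150), np.ones(50)]

edges = np.linspace(v.min(), v.max(), 25)
best_purity, rule = 0.0, None
for i in range(len(edges) - 1):
    for j in range(i + 1, len(edges)):
        inside = (v >= edges[i]) & (v <= edges[j])
        if inside.sum() < 10:          # minimum support, an arbitrary choice
            continue
        purity = y[inside].mean()      # fraction of class 1 captured by the rule
        if purity > best_purity:
            best_purity, rule = purity, (edges[i], edges[j])
```

Each retained rule becomes a binary feature (patient satisfies the rule or not), and a penalized logistic regression on those features then yields the interpretable predictive model the abstract describes.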
Worthmann, Brian M; Song, H C; Dowling, David R
2017-01-01
Remote source localization in the shallow ocean at frequencies significantly above 1 kHz is virtually impossible for conventional array signal processing techniques due to environmental mismatch. A recently proposed technique called frequency-difference matched field processing (Δf-MFP) [Worthmann, Song, and Dowling (2015). J. Acoust. Soc. Am. 138(6), 3549-3562] overcomes imperfect environmental knowledge by shifting the signal processing to frequencies below the signal's band through the use of a quadratic product of frequency-domain signal amplitudes called the autoproduct. This paper extends these prior Δf-MFP results to various adaptive MFP processors found in the literature, with particular emphasis on minimum variance distortionless response, multiple constraint method, multiple signal classification, and matched mode processing at signal-to-noise ratios (SNRs) from -20 to +20 dB. Using measurements from the 2011 Kauai Acoustic Communications Multiple University Research Initiative experiment, the localization performance of these techniques is analyzed and compared to Bartlett Δf-MFP. The results show that a source broadcasting a frequency sweep from 11.2 to 26.2 kHz through a 106-m-deep sound channel over a distance of 3 km and recorded on a 16-element sparse vertical array can be localized using Δf-MFP techniques within average range and depth errors of 200 and 10 m, respectively, at SNRs down to 0 dB.
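The autoproduct at the heart of Δf-MFP can be illustrated on a one-channel toy: for a received field that is a pure delay of the transmitted sweep, the quadratic product P(f1)P*(f2) carries the phase of a fictitious signal at the low difference frequency Δf = f1 - f2, where environmental mismatch is milder. The delay, sample rate, and single-path channel below are invented; a real Δf-MFP processor forms this product per hydrophone and matches it against low-frequency replica fields.

```python
import numpy as np

fs = 96_000
tau = 2.1e-3                                  # propagation delay (s), invented
t = np.arange(0, 0.25, 1 / fs)

def chirp(tt):
    """Linear sweep from 11.2 to 26.2 kHz over 0.25 s, as in the experiment."""
    return np.sin(2 * np.pi * (11_200 * tt + 0.5 * 60_000 * tt ** 2))

ref = np.fft.rfft(chirp(t))                   # transmitted sweep spectrum
rec = np.fft.rfft(chirp(t - tau))             # "received" signal: a pure delay
freqs = np.fft.rfftfreq(len(t), 1 / fs)

band = (freqs > 12_000) & (freqs < 24_000)    # stay inside the sweep band
H = rec[band] / ref[band]                     # transfer function, phase -2*pi*f*tau

# Autoproduct across a 100 Hz difference frequency: its phase is that of a
# fictitious 100 Hz signal experiencing the same delay.
df_bins = 25
delta_f = df_bins * fs / len(t)               # = 100 Hz here
ap = H[df_bins:] * np.conj(H[:-df_bins])
tau_est = -np.angle(ap).mean() / (2 * np.pi * delta_f)
```

The recovered delay comes entirely from phase differences across Δf = 100 Hz, far below the 11.2-26.2 kHz signal band, which is why the technique tolerates environmental mismatch that destroys conventional in-band MFP.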
Measuring cosmic shear and birefringence using resolved radio sources
NASA Astrophysics Data System (ADS)
Whittaker, Lee; Battye, Richard A.; Brown, Michael L.
2018-02-01
We develop a new method of extracting simultaneous measurements of weak lensing shear and a local rotation of the plane of polarization using observations of resolved radio sources. The basis of the method is an assumption that the direction of the polarization is statistically linked with that of the gradient of the total intensity field. Using a number of sources spread over the sky, this method will allow constraints to be placed on cosmic shear and birefringence, and it can be applied to any resolved radio sources for which such a correlation exists. Assuming that the rotation and shear are constant across the source, we use this relationship to construct a quadratic estimator and investigate its properties using simulated observations. We develop a calibration scheme using simulations based on the observed images to mitigate a bias which occurs in the presence of measurement errors and an astrophysical scatter on the polarization. The method is applied directly to archival data of radio galaxies where we measure a mean rotation signal of ω = -2.02° ± 0.75° and an average shear compatible with zero using 30 reliable sources. This level of constraint on an overall rotation is comparable with current leading constraints from CMB experiments and is expected to increase by at least an order of magnitude with future high precision radio surveys, such as those performed by the SKA. We also measure the shear and rotation two-point correlation functions and estimate the number of sources required to detect shear and rotation correlations in future surveys.
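The core statistical assumption above, that the polarization position angle tracks the local intensity-gradient direction, suggests a minimal sketch of a rotation estimate: average the angular offset between the two angle fields over many sources. This is a toy with invented gradient angles and an assumed 5-degree astrophysical scatter, not the paper's full quadratic estimator or its image-based calibration; the injected rotation merely echoes the quoted measurement.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 30                                         # 30 reliable sources, as above
grad_angle = rng.uniform(0, np.pi, n)          # intensity-gradient position angles
omega_true = np.radians(-2.02)                 # injected rotation (illustrative)
scatter = np.radians(5.0) * rng.standard_normal(n)   # astrophysical scatter, assumed
pol_angle = grad_angle + omega_true + scatter

# Position angles are defined modulo pi, so average the offset as a spin-2 phase.
offset = np.angle(np.mean(np.exp(2j * (pol_angle - grad_angle)))) / 2.0
omega_est = np.degrees(offset)
```

The doubled-angle (spin-2) averaging handles the pi-periodicity of position angles; with a 5-degree per-source scatter and 30 sources the statistical error is of order 1 degree, comparable to the quoted uncertainty.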
Non Locality Proofs in Quantum Mechanics Analyzed by Ordinary Mathematical Logic
NASA Astrophysics Data System (ADS)
Nisticò, Giuseppe
2014-10-01
The so-called non-locality theorems aim to show that Quantum Mechanics is not consistent with the Locality Principle. Their proofs require, besides the standard postulates of Quantum Theory, further conditions, such as the Criterion of Reality, which cannot be formulated in the language of Standard Quantum Theory; this difficulty makes the proofs unverifiable by the usual logico-mathematical methods and is therefore a source of the controversial debate about the real implications of these theorems. The present work addresses this difficulty for Bell-type and Stapp's arguments of non-locality. We supplement the formalism of Quantum Mechanics with formal statements inferred from the further conditions in the two different cases. An analysis of the two arguments is then performed according to ordinary mathematical logic.
Miller, Brian S; Calderan, Susannah; Gillespie, Douglas; Weatherup, Graham; Leaper, Russell; Collins, Kym; Double, Michael C
2016-03-01
Directional frequency analysis and recording (DIFAR) sonobuoys can allow real-time acoustic localization of baleen whales for underwater tracking and remote sensing, but limited availability of hardware and software has prevented wider usage. These software limitations were addressed by developing a module in the open-source software PAMGuard. A case study is presented demonstrating that this software provides greater efficiency and accessibility than previous methods for detecting, localizing, and tracking Antarctic blue whales in real time. Additionally, this software can easily be extended to track other low and mid frequency sounds including those from other cetaceans, pinnipeds, icebergs, shipping, and seismic airguns.
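A DIFAR sonobuoy reports a magnetic bearing toward the sound source; localization then comes from crossing bearings from two or more buoys. The following is a generic least-squares cross-fix sketch, not the PAMGuard module's actual algorithm: each reported bearing defines a line, and the source estimate is the point minimizing squared perpendicular distance to all lines.

```python
import numpy as np

def cross_fix(positions, bearings_deg):
    """Least-squares intersection of bearing lines from several buoys.

    positions    : (N, 2) array of buoy (x, y) coordinates in metres.
    bearings_deg : N compass bearings (degrees clockwise from north)
                   toward the source, as a DIFAR buoy would report.
    Each bearing defines a line; the normal-equation solution is the
    point minimizing squared perpendicular distance to all lines.
    """
    positions = np.asarray(positions, float)
    th = np.radians(bearings_deg)
    # Unit vector along each bearing (north = +y, east = +x).
    d = np.column_stack([np.sin(th), np.cos(th)])
    # Normal to each bearing line.
    n = np.column_stack([-d[:, 1], d[:, 0]])
    b = np.einsum('ij,ij->i', n, positions)
    fix, *_ = np.linalg.lstsq(n, b, rcond=None)
    return fix

# Two buoys, source at (1000, 1000): bearing 45 deg (northeast) from a
# buoy at the origin, and 0 deg (due north) from a buoy directly south
# of the source.
fix = cross_fix([(0.0, 0.0), (1000.0, 0.0)], [45.0, 0.0])
assert np.allclose(fix, [1000.0, 1000.0])
```

With more than two buoys the same least-squares system averages out individual bearing errors, which is what makes real-time tracking from a sonobuoy array practical.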
Non-contact local temperature measurement inside an object using an infrared point detector
NASA Astrophysics Data System (ADS)
Hisaka, Masaki
2017-04-01
Local temperature measurement in deep regions of an object is an important technique in biomedical measurement. We have investigated a non-contact method for measuring temperature inside an object using a point detector for infrared (IR) light. An IR point detector with a pinhole was constructed, and the radiant IR light emitted from the local interior of the object is detected only at the position of the pinhole, which is placed in an imaging relation with the object. We measured the thermal structure of the filament inside a miniature bulb using the IR point detector, and investigated the temperature dependence near human body temperature using a glass plate positioned in front of the heat source.
Early Growth of Black Walnut Trees From Twenty Seed Sources
Calvin F. Bey; John R. Toliver; Paul L. Roth
1971-01-01
Early results of a black walnut seed-source study conducted in southern Illinois suggest that seed should be collected from local or south-of-local areas. Trees from southern sources grew faster and longer than trees from northern sources. Trees from southern sources flushed slightly earlier and held their leaves longer than trees from northern sources. For the...
Feasibility of Equivalent Dipole Models for Electroencephalogram-Based Brain Computer Interfaces.
Schimpf, Paul H
2017-09-15
This article examines the localization errors of equivalent dipolar sources inverted from the surface electroencephalogram in order to determine the feasibility of using their location as classification parameters for non-invasive brain computer interfaces. Inverse localization errors are examined for two head models: a model represented by four concentric spheres and a realistic model based on medical imagery. It is shown that the spherical model results in localization ambiguity such that a number of dipolar sources, with different azimuths and varying orientations, provide a near match to the electroencephalogram of the best equivalent source. No such ambiguity exists for the elevation of inverted sources, indicating that for spherical head models, only the elevation of inverted sources (and not the azimuth) can be expected to provide meaningful classification parameters for brain-computer interfaces. In a realistic head model, all three parameters of the inverted source location are found to be reliable, providing a more robust set of parameters. In both cases, the residual error hypersurfaces demonstrate local minima, indicating that a search for the best-matching sources should be global. Source localization error vs. signal-to-noise ratio is also demonstrated for both head models.
NASA Astrophysics Data System (ADS)
Luu, Thomas; Brooks, Eugene D.; Szőke, Abraham
2010-03-01
In the difference formulation for the transport of thermally emitted photons the photon intensity is defined relative to a reference field, the black body at the local material temperature. This choice of reference field combines the separate emission and absorption terms that nearly cancel, thereby removing the dominant cause of noise in the Monte Carlo solution of thick systems, but introduces time and space derivative source terms that cannot be determined until the end of the time step. The space derivative source term can also lead to noise induced crashes under certain conditions where the real physical photon intensity differs strongly from a black body at the local material temperature. In this paper, we consider a difference formulation relative to the material temperature at the beginning of the time step or, in cases where an alternative temperature better describes the radiation field, relative to that temperature. The result is a method where iterative solution of the material energy equation is efficient and noise induced crashes are avoided. We couple our generalized reference field scheme with an ad hoc interpolation of the space derivative source, resulting in an algorithm that produces the correct flux between zones as the physical system approaches the thick limit.
Huang, Yeqi; Deng, Tao; Li, Zhenning; Wang, Nan; Yin, Chanqin; Wang, Shiqiang; Fan, Shaojia
2018-09-01
This article uses the WRF-CMAQ model to systematically study the source apportionment of PM2.5 under typical meteorological conditions in the dry season (November 2010) in the Pearl River Delta (PRD). According to the geographical location and the relative magnitude of pollutant emission, Guangdong Province is divided into eight subdomains for the source apportionment study. The Brute-Force Method (BFM) was implemented to simulate the contribution from different regions to the PM2.5 pollution in the PRD. Results show that the industrial sources accounted for the largest proportion. For emission species, the total amounts of NOx and VOC in Guangdong Province, and of NH3 and VOC in Hunan Province, are relatively large. In Guangdong Province, the emissions of SO2, NOx and VOC in the PRD are relatively large, and the NH3 emissions are higher outside the PRD. In northerly-controlled episodes, model simulations demonstrate that local emissions are important for PM2.5 pollution in Guangzhou and Foshan. Meanwhile, emissions from Dongguan and Huizhou (DH), and from outside Guangdong Province (SW), are important contributors to PM2.5 pollution in Guangzhou. For PM2.5 pollution in Foshan, emissions in Guangzhou and DH are the major contributors. In addition, a high contribution ratio from DH occurs only in severe pollution periods. In southerly-controlled episodes, the contribution from the southern PRD increases; local emissions and emissions from Shenzhen, DH, and Zhuhai-Jiangmen-Zhongshan (ZJZ) are the major contributors. Regional contributions to the chemical composition of PM2.5 indicate that the sources of the chemical components are similar to those of PM2.5 as a whole. In particular, SO4^2- is mainly sourced from emissions outside Guangdong Province, while NO3^- and NH4^+ are more linked to agricultural emissions. Copyright © 2018 Elsevier B.V. All rights reserved.
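The Brute-Force Method quantifies a region's contribution by re-running the model with that region's emissions zeroed out and differencing the receptor concentration against the baseline. A minimal sketch of that bookkeeping, with a toy linear stand-in for the full WRF-CMAQ run (the weights and region names are illustrative, not the study's values):

```python
import numpy as np

def bfm_contributions(model, emissions):
    """Brute-Force Method (zero-out) source apportionment sketch.

    model     : callable mapping an emissions dict to a receptor
                concentration (a stand-in for a full WRF-CMAQ run).
    emissions : dict of region -> emission rate.
    Each region's contribution is the concentration drop when that
    region's emissions are removed and the model is re-run.
    """
    base = model(emissions)
    contrib = {}
    for region in emissions:
        perturbed = dict(emissions)
        perturbed[region] = 0.0
        contrib[region] = base - model(perturbed)
    return base, contrib

# Toy linear "model": PM2.5 at the receptor is a weighted sum of
# regional emissions (weights play the role of transport/chemistry).
weights = {'local': 0.50, 'DH': 0.30, 'SW': 0.20}
model = lambda e: sum(weights[r] * e[r] for r in e)

base, contrib = bfm_contributions(model, {'local': 40.0, 'DH': 30.0, 'SW': 20.0})
# For a linear model the zero-out contributions sum exactly to the base.
assert np.isclose(base, sum(contrib.values()))
```

For a real chemistry-transport model the response is nonlinear, so BFM contributions generally do not sum exactly to the baseline concentration; that residual is a known caveat of the zero-out approach.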
Earthquake location in island arcs
Engdahl, E.R.; Dewey, J.W.; Fujita, K.
1982-01-01
A comprehensive data set of selected teleseismic P-wave arrivals and local-network P- and S-wave arrivals from large earthquakes occurring at all depths within a small section of the central Aleutians is used to examine the general problem of earthquake location in island arcs. Reference hypocenters for this special data set are determined for shallow earthquakes from local-network data and for deep earthquakes from combined local and teleseismic data by joint inversion for structure and location. The high-velocity lithospheric slab beneath the central Aleutians may displace hypocenters that are located using spherically symmetric Earth models; the amount of displacement depends on the position of the earthquakes with respect to the slab and on whether local or teleseismic data are used to locate the earthquakes. Hypocenters for trench and intermediate-depth events appear to be minimally biased by the effects of slab structure on rays to teleseismic stations. However, locations of intermediate-depth events based on only local data are systematically displaced southwards, the magnitude of the displacement being proportional to depth. Shallow-focus events along the main thrust zone, although well located using only local-network data, are severely shifted northwards and deeper, with displacements as large as 50 km, by slab effects on teleseismic travel times. Hypocenters determined by a method that utilizes seismic ray tracing through a three-dimensional velocity model of the subduction zone, derived by thermal modeling, are compared to results obtained by the method of joint hypocenter determination (JHD) that formally assumes a laterally homogeneous velocity model over the source region and treats all raypath anomalies as constant station corrections to the travel-time curve. 
The ray-tracing method has the theoretical advantage that it accounts for variations in travel-time anomalies within a group of events distributed over a sizable region of a dipping, high-velocity lithospheric slab. In application, JHD has the practical advantage that it does not require the specification of a theoretical velocity model for the slab. Considering earthquakes within a 260 km long by 60 km wide section of the Aleutian main thrust zone, our results suggest that the theoretical velocity structure of the slab is presently not sufficiently well known that accurate locations can be obtained independently of locally recorded data. Using a locally recorded earthquake as a calibration event, JHD gave excellent results over the entire section of the main thrust zone here studied, without showing a strong effect that might be attributed to spatially varying source-station anomalies. We also calibrated the ray-tracing method using locally recorded data and obtained results generally similar to those obtained by JHD. © 1982.
Paces, James B.; Wurster, Frederic C.
2014-01-01
Near-surface physical and chemical process can strongly affect dissolved-ion concentrations and stable isotope compositions of water in wetland settings, especially under arid climate conditions. In contrast, heavy radiogenic isotopes of strontium (87Sr/86Sr) and uranium (234U/238U) remain largely unaffected and can be used to help identify unique signatures from different sources and quantify end-member mixing that would otherwise be difficult to determine. The utility of combined Sr and U isotopes are demonstrated in this study of wetland habitats on the Pahranagat National Wildlife Refuge, which depend on supply from large-volume springs north of the Refuge, and from small-volume springs and seeps within the Refuge. Water budgets from these sources have not been quantified previously. Evaporation, transpiration, seasonally variable surface flow, and water management practices complicate the use of conventional methods for determining source contributions and mixing relations. In contrast, 87Sr/86Sr and 234U/238U remain unfractionated under these conditions, and compositions at a given site remain constant. Differences in Sr- and U-isotopic signatures between individual sites can be related by simple two- or three-component mixing models. Results indicate that surface flow constituting the Refuge’s irrigation source consists of a 65:25:10 mixture of water from two distinct regionally sourced carbonate aquifer springs, and groundwater from locally sourced volcanic aquifers. Within the Refuge, contributions from the irrigation source and local groundwater are readily determined and depend on proximity to those sources as well as water management practices.
NASA Astrophysics Data System (ADS)
Paces, James B.; Wurster, Frederic C.
2014-09-01
Near-surface physical and chemical process can strongly affect dissolved-ion concentrations and stable-isotope compositions of water in wetland settings, especially under arid climate conditions. In contrast, heavy radiogenic isotopes of strontium (87Sr/86Sr) and uranium (234U/238U) remain largely unaffected and can be used to help identify unique signatures from different sources and quantify end-member mixing that would otherwise be difficult to determine. The utility of combined Sr and U isotopes are demonstrated in this study of wetland habitats on the Pahranagat National Wildlife Refuge, which depend on supply from large-volume springs north of the Refuge, and from small-volume springs and seeps within the Refuge. Water budgets from these sources have not been quantified previously. Evaporation, transpiration, seasonally variable surface flow, and water management practices complicate the use of conventional methods for determining source contributions and mixing relations. In contrast, 87Sr/86Sr and 234U/238U remain unfractionated under these conditions, and compositions at a given site remain constant. Differences in Sr- and U-isotopic signatures between individual sites can be related by simple two- or three-component mixing models. Results indicate that surface flow constituting the Refuge's irrigation source consists of a 65:25:10 mixture of water from two distinct regionally sourced carbonate-aquifer springs, and groundwater from locally sourced volcanic aquifers. Within the Refuge, contributions from the irrigation source and local groundwater are readily determined and depend on proximity to those sources as well as water management practices.
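The 65:25:10 apportionment above is the solution of a small mixing problem: three end-members, two conservative tracers (87Sr/86Sr and 234U/238U), plus mass balance give a 3x3 linear system. The sketch below illustrates that algebra under a simplifying assumption the lead-in must flag: it treats the isotope ratios as mixing linearly, which holds only when the end-members have comparable Sr and U concentrations (the study itself accounts for concentration weighting). All end-member numbers are hypothetical, for illustration only.

```python
import numpy as np

def mixing_fractions(endmembers, mixture):
    """Solve three-component mixing from two conservative tracers.

    endmembers : (3, 2) array -- rows are end-members, columns are the
                 two tracer values (e.g. 87Sr/86Sr and 234U/238U).
    mixture    : length-2 tracer values of the mixed water.
    Solves   sum_i f_i = 1,   sum_i f_i * r_ik = mixture_k
    as a 3x3 linear system for the fractions f_i.
    """
    E = np.asarray(endmembers, float)
    A = np.vstack([np.ones(3), E.T])   # rows: mass balance, tracer 1, tracer 2
    b = np.concatenate([[1.0], mixture])
    return np.linalg.solve(A, b)

# Hypothetical end-member signatures (illustrative numbers only).
ends = [(0.7105, 1.80),   # carbonate-aquifer spring A
        (0.7122, 2.40),   # carbonate-aquifer spring B
        (0.7090, 4.10)]   # local volcanic-aquifer groundwater
mix = 0.65 * np.array(ends[0]) + 0.25 * np.array(ends[1]) + 0.10 * np.array(ends[2])

frac = mixing_fractions(ends, mix)
assert np.allclose(frac, [0.65, 0.25, 0.10])
```

In practice the recovered fractions carry uncertainty from the end-member signatures themselves, so well-separated 87Sr/86Sr and 234U/238U end-members (as on the Refuge) are what make the inversion well conditioned.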
Zeitler, Daniel M; Dorman, Michael F; Natale, Sarah J; Loiselle, Louise; Yost, William A; Gifford, Rene H
2015-09-01
To assess improvements in sound source localization and speech understanding in complex listening environments after unilateral cochlear implantation for single-sided deafness (SSD). Nonrandomized, open, prospective case series. Tertiary referral center. Nine subjects with a unilateral cochlear implant (CI) for SSD (SSD-CI) were tested. Reference groups for the task of sound source localization included young (n = 45) and older (n = 12) normal-hearing (NH) subjects and 27 bilateral CI (BCI) subjects. Unilateral cochlear implantation. Sound source localization was tested with 13 loudspeakers in a 180-degree arc in front of the subject. Speech understanding was tested with the subject seated in an 8-loudspeaker sound system arrayed in a 360-degree pattern. Directionally appropriate noise, originally recorded in a restaurant, was played from each loudspeaker. Speech understanding in noise was tested using the AzBio sentence test, and sound source localization was quantified using root-mean-square error. All CI subjects showed poorer-than-normal sound source localization. SSD-CI subjects showed a bimodal distribution of scores: six subjects had scores near the mean of those obtained by BCI subjects, whereas three had scores just outside the 95th percentile of NH listeners. Speech understanding improved significantly in the restaurant environment when the signal was presented to the side of the CI. Cochlear implantation for SSD can offer improved speech understanding in complex listening environments and improved sound source localization in both children and adults. On tasks of sound source localization, SSD-CI patients typically perform as well as BCI patients and, in some cases, achieve scores at the upper boundary of normal performance.
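The localization metric used in this record, root-mean-square error over loudspeaker azimuths, is simple to compute. A minimal sketch with hypothetical trial data (the angles below are illustrative, not the study's):

```python
import numpy as np

def rms_error_deg(true_deg, response_deg):
    """Root-mean-square sound-source localization error in degrees.

    true_deg     : loudspeaker azimuths actually presented.
    response_deg : azimuths reported by the listener on each trial.
    This is the summary statistic used to quantify localization with
    a multi-loudspeaker arc.
    """
    true_deg = np.asarray(true_deg, float)
    response_deg = np.asarray(response_deg, float)
    return np.sqrt(np.mean((response_deg - true_deg) ** 2))

# Hypothetical trial data: three targets, responses off by -15, 0, +15 deg.
err = rms_error_deg([-60.0, 0.0, 60.0], [-75.0, 0.0, 75.0])
assert np.isclose(err, np.sqrt(150.0))
```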
Estoppey, Nicolas; Schopfer, Adrien; Fong, Camille; Delémont, Olivier; De Alencastro, Luiz F; Esseiva, Pierre
2016-12-01
This study firstly aims to assess the field performance of low density polyethylene (LDPE) and silicone rubber (SR) samplers for the monitoring of polychlorinated biphenyls (PCBs) in water, regarding the uptake, the sampling rate (RS) estimated using performance reference compounds (PRCs), and the time-weighted average (TWA) concentrations. The second aim is to evaluate the efficiency of these samplers for investigating PCB sources (localization and imputation steps) using methods with and without PRCs to correct for the impact of water velocity on the uptake. Samplers spiked with PRCs were deployed in the outfalls of two PCB sources and at 8 river sites situated upstream and downstream of the outfalls. After 6 weeks, the uptake of PCBs in the linear phase was equivalent in LDPE and SR but 5 times lower in LDPE for PCBs approaching equilibrium. PRC-based RS and water velocity (0.08 to 1.21 m s^-1) were well correlated in the river (LDPE: R^2 = 0.91; SR: R^2 = 0.96) but not in the outfalls (higher turbulence and potential release of PRCs to air). TWA concentrations obtained with SR were slightly higher than those obtained with LDPE (factor 1.4 to 2.6 in the river), likely because of uncertainty in sampler-water partition coefficient values. Concentrations obtained through filtration and extraction of water samples (203 L) were 1.6 and 5.1 times higher than TWA concentrations obtained with SR and LDPE samplers, respectively. PCB sources could efficiently be localized when PRCs were used (increases of PCB loads in the river), but the impact of large differences in water velocity was overcorrected (sometimes leading to false positives and negatives). Increases of PCB loads in the river could not be entirely imputed to the investigated sources (underestimation of the PCBs contributing to the load increases). A method without PRCs (relationship between uptake and water velocity) appeared to be a good complementary method for LDPE. Copyright © 2016. Published by Elsevier B.V.
Acoustic source localization in mixed field using spherical microphone arrays
NASA Astrophysics Data System (ADS)
Huang, Qinghua; Wang, Tong
2014-12-01
Spherical microphone arrays have recently been used for source localization in three-dimensional space. In this paper, a two-stage algorithm is developed to localize mixed far-field and near-field acoustic sources in a free-field environment. In the first stage, an array signal model is constructed in the spherical harmonics domain. The recurrence relation of spherical harmonics is independent of the far-field and near-field mode strengths. It is therefore used to develop a spherical ESPRIT-like approach (estimation of signal parameters via rotational invariance techniques) to estimate the directions of arrival (DOAs) of both far-field and near-field sources. In the second stage, based on the estimated DOAs, a simple one-dimensional MUSIC spectrum is exploited to distinguish far-field from near-field sources and to estimate the ranges of the near-field sources. The proposed algorithm avoids multidimensional search and parameter pairing. Simulation results demonstrate good performance in localizing far-field, near-field, or mixed-field sources.
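The second stage relies on the MUSIC principle: candidate steering vectors are scanned against the noise subspace of the array covariance, and true source parameters appear as peaks of the pseudo-spectrum. The paper works in the spherical-harmonics domain; the sketch below demonstrates the same noise-subspace idea in its simplest setting, a narrowband uniform linear array, purely as an illustration.

```python
import numpy as np

def music_spectrum(R, n_src, angles_deg, d=0.5):
    """1-D MUSIC pseudo-spectrum for a uniform linear array.

    R          : (M, M) sample covariance of the array snapshots.
    n_src      : assumed number of sources.
    angles_deg : candidate DOAs to scan.
    d          : element spacing in wavelengths.
    Steering vectors nearly orthogonal to the noise subspace produce
    peaks in the pseudo-spectrum.
    """
    M = R.shape[0]
    w, V = np.linalg.eigh(R)           # eigenvalues in ascending order
    En = V[:, :M - n_src]              # noise subspace
    m = np.arange(M)
    P = []
    for a in np.radians(angles_deg):
        s = np.exp(2j * np.pi * d * m * np.sin(a))   # steering vector
        P.append(1.0 / (np.linalg.norm(En.conj().T @ s) ** 2 + 1e-12))
    return np.array(P)

# One noiseless source at +20 deg on an 8-element half-wavelength array.
M, true_deg = 8, 20.0
m = np.arange(M)
s = np.exp(2j * np.pi * 0.5 * m * np.sin(np.radians(true_deg)))
R = np.outer(s, s.conj()) + 1e-6 * np.eye(M)   # rank-1 signal + tiny noise
grid = np.arange(-90.0, 90.5, 0.5)
est = grid[np.argmax(music_spectrum(R, 1, grid))]
assert abs(est - true_deg) <= 0.5
```

In the paper's second stage the scan is over source range rather than angle, with the DOA already fixed by the ESPRIT-like first stage, which is what keeps the search one-dimensional.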
On the Vertical Distribution of Local and Remote Sources of Water for Precipitation
NASA Technical Reports Server (NTRS)
Bosilovich, Michael G.
2001-01-01
The vertical distribution of local and remote sources of water for precipitation and total column water over the United States are evaluated in a general circulation model simulation. The Goddard Earth Observing System (GEOS) general circulation model (GCM) includes passive constituent tracers to determine the geographical sources of the water in the column. Results show that the local percentage of precipitable water and local percentage of precipitation can be very different. The transport of water vapor from remote oceanic sources at mid and upper levels is important to the total water in the column over the central United States, while the access of locally evaporated water in convective precipitation processes is important to the local precipitation ratio. This result resembles the conceptual formulation of the convective parameterization. However, the formulations of simple models of precipitation recycling include the assumption that the ratio of the local water in the column is equal to the ratio of the local precipitation. The present results demonstrate the uncertainty in that assumption, as locally evaporated water is more concentrated near the surface.
NASA Astrophysics Data System (ADS)
Chen, X.; Abercrombie, R. E.; Pennington, C.
2017-12-01
Recorded seismic waveforms include contributions from earthquake source properties and propagation effects, leading to long-standing trade-off problems between site/path effects and source effects. With near-field recordings, the path effect is relatively small, so the trade-off problem can be simplified to one between source and site effects (commonly referred to as the "kappa value"). This problem is especially significant for small earthquakes, where the corner frequencies fall within ranges similar to the kappa values, so direct spectrum fitting often leads to systematic biases that depend on corner frequency and magnitude. In response to the significantly increased seismicity rate in Oklahoma, several local networks have been deployed following major earthquakes: the Prague, Pawnee and Fairview earthquakes. Each network provides dense observations within 20 km of the fault zone, recording tens of thousands of aftershocks between M1 and M3. Using near-field recordings in the Prague area, we apply a stacking approach to separate path/site and source effects. The resulting source parameters are consistent with parameters derived from ground motion and spectral ratio methods in other studies; they exhibit spatial coherence within the fault zone for different fault patches. We apply these source parameter constraints in an analysis of kappa values for stations within 20 km of the fault zone. The resulting kappa values show significantly reduced variability compared to those from direct spectral fitting without constraints on the source spectrum, and they are not biased by earthquake magnitude. With these improvements, we plan to apply the stacking analysis to other local arrays to analyze source properties and site characteristics. For selected individual earthquakes, we will also use individual-pair empirical Green's function (EGF) analysis to validate the source parameter estimates.
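The abstract does not spell out the fitting form, but the kappa value it refers to is conventionally the Anderson-Hough site attenuation parameter: above the corner frequency the acceleration spectrum decays as A(f) = A0 exp(-pi * kappa * f), so kappa is recoverable as -slope/pi of a line fit to ln A versus f. A minimal sketch under that standard assumption:

```python
import numpy as np

def fit_kappa(f, amp):
    """Estimate the site attenuation parameter kappa (seconds).

    Above the corner frequency the acceleration spectrum decays as
    A(f) = A0 * exp(-pi * kappa * f), so kappa is -slope/pi of a
    straight line fit to ln A(f) versus f (Anderson & Hough, 1984
    style estimate).
    """
    slope, _ = np.polyfit(f, np.log(amp), 1)
    return -slope / np.pi

# Synthetic high-frequency spectrum with kappa = 0.04 s.
f = np.linspace(5.0, 25.0, 100)
amp = 3.0 * np.exp(-np.pi * 0.04 * f)
assert np.isclose(fit_kappa(f, amp), 0.04)
```

The bias the abstract describes arises when the fitting band overlaps the source corner frequency, so the measured slope mixes source fall-off with site attenuation; constraining the source spectrum first (as the stacking approach does) removes that mixing.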
Sequence independent amplification of DNA
Bohlander, S.K.
1998-03-24
The present invention is a rapid sequence-independent amplification procedure (SIA). Even minute amounts of DNA from various sources can be amplified independent of any sequence requirements of the DNA or any a priori knowledge of any sequence characteristics of the DNA to be amplified. This method allows, for example, the sequence independent amplification of microdissected chromosomal material and the reliable construction of high quality fluorescent in situ hybridization (FISH) probes from YACs or from other sources. These probes can be used to localize YACs on metaphase chromosomes but also--with high efficiency--in interphase nuclei. 25 figs.
Sequence independent amplification of DNA
Bohlander, Stefan K.
1998-01-01
The present invention is a rapid sequence-independent amplification procedure (SIA). Even minute amounts of DNA from various sources can be amplified independent of any sequence requirements of the DNA or any a priori knowledge of any sequence characteristics of the DNA to be amplified. This method allows, for example, the sequence independent amplification of microdissected chromosomal material and the reliable construction of high quality fluorescent in situ hybridization (FISH) probes from YACs or from other sources. These probes can be used to localize YACs on metaphase chromosomes but also--with high efficiency--in interphase nuclei.
Localization of extended brain sources from EEG/MEG: the ExSo-MUSIC approach.
Birot, Gwénaël; Albera, Laurent; Wendling, Fabrice; Merlet, Isabelle
2011-05-01
We propose a new MUSIC-like method, called 2q-ExSo-MUSIC (q ≥ 1). This method is an extension of the 2q-MUSIC (q ≥ 1) approach for solving the EEG/MEG inverse problem when spatially-extended neocortical sources ("ExSo") are considered. It introduces a novel ExSo-MUSIC principle. The novelty is two-fold: i) a parameterization of the spatial source distribution that leads to an appropriate metric in the context of distributed brain sources, and ii) an original, efficient and low-cost way of optimizing this metric. In 2q-ExSo-MUSIC, the possible use of higher-order statistics (q ≥ 2) offers better robustness with respect to Gaussian noise of unknown spatial coherence and to modeling errors. As a result, we reduce the penalizing effects both of the background cerebral activity, which can be seen as Gaussian, spatially correlated noise, and of the modeling errors induced by the non-exact resolution of the forward problem. Computer results on simulated EEG signals obtained with physiologically-relevant models of both the sources and the volume conductor show a highly increased performance of our 2q-ExSo-MUSIC method as compared to the classical 2q-MUSIC algorithms. Copyright © 2011 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Shi, Wenzhong; Deng, Susu; Xu, Wenbing
2018-02-01
For automatic landslide detection, landslide morphological features should be quantitatively expressed and extracted. High-resolution Digital Elevation Models (DEMs) derived from airborne Light Detection and Ranging (LiDAR) data allow fine-scale morphological features to be extracted, but noise in DEMs influences morphological feature extraction, and the multi-scale nature of landslide features should be considered. This paper proposes a method to extract landslide morphological features characterized by homogeneous spatial patterns. Both profile and tangential curvature are utilized to quantify land surface morphology, and a local Gi* statistic is calculated for each cell to identify significant patterns of clustering of similar morphometric values. The method was tested on both synthetic surfaces simulating natural terrain and airborne LiDAR data acquired over an area dominated by shallow debris slides and flows. The test results of the synthetic data indicate that the concave and convex morphologies of the simulated terrain features at different scales and distinctness could be recognized using the proposed method, even when random noise was added to the synthetic data. In the test area, cells with large local Gi* values were extracted at a specified significance level from the profile and the tangential curvature image generated from the LiDAR-derived 1-m DEM. The morphologies of landslide main scarps, source areas and trails were clearly indicated, and the morphological features were represented by clusters of extracted cells. A comparison with the morphological feature extraction method based on curvature thresholds proved the proposed method's robustness to DEM noise. When verified against a landslide inventory, the morphological features of almost all recent (< 5 years) landslides and approximately 35% of historical (> 10 years) landslides were extracted. 
This finding indicates that the proposed method can facilitate landslide detection, although the cell clusters extracted from curvature images should be filtered using a filtering strategy based on supplementary information provided by expert knowledge or other data sources.
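The clustering step in this record is the Getis-Ord local Gi* statistic computed per cell on a curvature grid. A simplified sketch of that statistic, using binary weights over a square neighborhood that includes the cell itself (the paper's exact weighting scheme and significance testing are not reproduced here):

```python
import numpy as np

def getis_ord_gi_star(x, i, j, radius=1):
    """Local Gi* statistic for cell (i, j) of a 2-D grid.

    x : 2-D array of morphometric values (e.g. profile curvature).
    Uses binary weights over a square (2*radius+1) neighborhood that
    includes the cell itself, as in the standard Getis-Ord Gi*.
    Large positive values flag clusters of high values (e.g. convex
    morphology); large negative values flag clusters of low values.
    """
    x = np.asarray(x, float)
    n = x.size
    xbar = x.mean()
    s = np.sqrt((x ** 2).mean() - xbar ** 2)
    i0, i1 = max(i - radius, 0), min(i + radius + 1, x.shape[0])
    j0, j1 = max(j - radius, 0), min(j + radius + 1, x.shape[1])
    neigh = x[i0:i1, j0:j1]
    wsum = neigh.size                     # sum of binary weights
    num = neigh.sum() - xbar * wsum
    den = s * np.sqrt((n * wsum - wsum ** 2) / (n - 1))
    return num / den

# A cluster of high curvature in an otherwise flat grid scores high at
# its center and low in the flat corner.
grid = np.zeros((9, 9))
grid[3:6, 3:6] = 1.0
assert getis_ord_gi_star(grid, 4, 4) > 0 > getis_ord_gi_star(grid, 0, 0)
```

Thresholding Gi* at a chosen significance level is what turns the noisy per-cell curvature values into the coherent cell clusters that delineate scarps, source areas and trails.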
Yi, Jianbing; Yang, Xuan; Chen, Guoliang; Li, Yan-Ran
2015-10-01
Image-guided radiotherapy is an advanced 4D radiotherapy technique that has been developed in recent years. However, respiratory motion causes significant uncertainties in image-guided radiotherapy procedures. To address these issues, an innovative lung motion estimation model based on robust point matching is proposed in this paper. A robust point matching algorithm using dynamic point shifting is proposed to estimate patient-specific lung motion during free breathing from 4D computed tomography data. The correspondence of the landmark points is determined from the Euclidean distance between the landmark points and the similarity between the local images that are centered at points at the same time. To ensure that the points in the source image correspond to the points in the target image during other phases, the virtual target points are first created and shifted based on the similarity between the local image centered at the source point and the local image centered at the virtual target point. Second, the target points are shifted by the constrained inverse function mapping the target points to the virtual target points. The source point set and shifted target point set are used to estimate the transformation function between the source image and target image. The performance of the authors' method is evaluated on two publicly available lung datasets, DIR-lab and POPI-model. For target registration errors computed on 750 landmark points in six phases of the DIR-lab dataset and 37 landmark points in ten phases of the POPI-model dataset, the mean and standard deviation by the authors' method are 1.11 and 1.11 mm, but they are 2.33 and 2.32 mm without considering image intensity, and 1.17 and 1.19 mm with sliding conditions.
For the two phases of maximum inhalation and maximum exhalation in the DIR-lab dataset with 300 landmark points of each case, the mean and standard deviation of target registration errors on the 3000 landmark points of ten cases by the authors' method are 1.21 and 1.04 mm. In the EMPIRE10 lung registration challenge, the authors' method ranks 24 of 39. According to the index of the maximum shear stretch, the authors' method is also efficient to describe the discontinuous motion at the lung boundaries. By establishing the correspondence of the landmark points in the source phase and the other target phases combining shape matching and image intensity matching together, the mismatching issue in the robust point matching algorithm is adequately addressed. The target registration errors are statistically reduced by shifting the virtual target points and target points. The authors' method with consideration of sliding conditions can effectively estimate the discontinuous motion, and the estimated motion is natural. The primary limitation of the proposed method is that the temporal constraints of the trajectories of voxels are not introduced into the motion model. However, the proposed method provides satisfactory motion information, which results in precise tumor coverage by the radiation dose during radiotherapy.
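The evaluation metric throughout this record is target registration error (TRE): the per-landmark Euclidean distance between estimated and reference positions, summarized by its mean and standard deviation (e.g. 1.11 +/- 1.11 mm above). A minimal sketch with hypothetical landmark coordinates:

```python
import numpy as np

def target_registration_error(estimated, reference):
    """Mean and standard deviation of target registration error (TRE).

    estimated, reference : (N, 3) landmark coordinates in mm.
    TRE is the per-landmark Euclidean distance; registration studies
    report its mean and standard deviation over all landmarks.
    """
    d = np.linalg.norm(np.asarray(estimated, float)
                       - np.asarray(reference, float), axis=1)
    return d.mean(), d.std()

# Hypothetical landmarks: offsets of 1 mm and 2 mm along one axis each.
mean, std = target_registration_error([[1.0, 0, 0], [0, 2.0, 0]],
                                      [[0, 0, 0], [0, 0, 0]])
assert np.isclose(mean, 1.5) and np.isclose(std, 0.5)
```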
Review on solving the forward problem in EEG source analysis
Hallez, Hans; Vanrumste, Bart; Grech, Roberta; Muscat, Joseph; De Clercq, Wim; Vergult, Anneleen; D'Asseler, Yves; Camilleri, Kenneth P; Fabri, Simon G; Van Huffel, Sabine; Lemahieu, Ignace
2007-01-01
Background The aim of electroencephalogram (EEG) source localization is to find the brain areas responsible for EEG waves of interest. It consists of solving forward and inverse problems. The forward problem is solved by starting from a given electrical source and calculating the potentials at the electrodes. These evaluations are necessary to solve the inverse problem, which is defined as finding the brain sources responsible for the measured potentials at the EEG electrodes. Methods While other reviews give an extensive summary of both the forward and inverse problems, this review article focuses on different aspects of solving the forward problem and is intended for newcomers in this research field. Results It starts by focusing on the generators of the EEG: the post-synaptic potentials in the apical dendrites of pyramidal neurons. These cells generate an extracellular current which can be modeled by Poisson's differential equation with Neumann and Dirichlet boundary conditions. The compartments in which these currents flow can be anisotropic (e.g. skull and white matter). In a three-shell spherical head model an analytical expression exists to solve the forward problem. During the last two decades researchers have tried to solve Poisson's equation in a realistically shaped head model obtained from 3D medical images, which requires numerical methods. The following methods are compared with each other: the boundary element method (BEM), the finite element method (FEM) and the finite difference method (FDM). In the last two methods anisotropic conducting compartments can conveniently be introduced. The focus then turns to the use of reciprocity in EEG source localization. It is introduced to speed up the forward calculations, which are then performed for each electrode position rather than for each dipole position. Solving Poisson's equation utilizing FEM and FDM corresponds to solving a large sparse linear system. 
Iterative methods are required to solve these sparse linear systems. The following iterative methods are discussed: successive over-relaxation, the conjugate gradients method and the algebraic multigrid method. Conclusion Solving the forward problem has been well documented in the past decades. In the past, simplified spherical head models were used, whereas nowadays a combination of imaging modalities is used to accurately describe the geometry of the head model. Efforts have been made to realistically describe the shape of the head model, the heterogeneity of the tissue types, and the conductivity values. However, the determination and validation of in vivo conductivity values is still an important topic in this field. In addition, more studies are needed on the influence of all the parameters of the head model and of the numerical techniques on the solution of the forward problem. PMID:18053144
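The sparse-solver step above lends itself to a short illustration. The following sketch implements the conjugate gradients method on a small symmetric positive-definite system (a 1D discrete Laplacian standing in for a real FEM/FDM head-model matrix; all sizes are hypothetical):

```python
import numpy as np

def conjugate_gradients(A, b, tol=1e-10, max_iter=1000):
    """Solve A x = b for a symmetric positive-definite matrix A."""
    x = np.zeros_like(b)
    r = b - A @ x                      # residual
    p = r.copy()                       # search direction
    rs_old = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs_old / (p @ Ap)      # step length along p
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs_old) * p  # conjugate direction update
        rs_old = rs_new
    return x

# 1D discrete Laplacian (tridiagonal, SPD) as a toy stand-in for a
# FEM/FDM system matrix arising from Poisson's equation
n = 50
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
x = conjugate_gradients(A, b)
print(np.max(np.abs(A @ x - b)))       # residual is ~0
```

In practice the matrix would be stored in a sparse format and preconditioned; the dense toy above only shows the iteration itself.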
Measurement of the Local Food Environment: A Comparison of Existing Data Sources
Bader, Michael D. M.; Ailshire, Jennifer A.; Morenoff, Jeffrey D.; House, James S.
2010-01-01
Studying the relation between the residential environment and health requires valid, reliable, and cost-effective methods to collect data on residential environments. This 2002 study compared the level of agreement between measures of the presence of neighborhood businesses drawn from 2 common sources of data used for research on the built environment and health: listings of businesses from commercial databases and direct observations of city blocks by raters. Kappa statistics were calculated for 6 types of businesses—drugstores, liquor stores, bars, convenience stores, restaurants, and grocers—located on 1,663 city blocks in Chicago, Illinois. Logistic regressions estimated whether disagreement between measurement methods was systematically correlated with the socioeconomic and demographic characteristics of neighborhoods. Levels of agreement between the 2 sources were relatively high, with significant (P < 0.001) kappa statistics for each business type ranging from 0.32 to 0.70. Most business types were more likely to be reported by direct observations than in the commercial database listings. Disagreement between the 2 sources was not significantly correlated with the socioeconomic and demographic characteristics of neighborhoods. Results suggest that researchers should have reasonable confidence using whichever method (or combination of methods) is most cost-effective and theoretically appropriate for their research design. PMID:20123688
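The kappa statistics reported above measure agreement beyond chance between two binary ratings of the same blocks. A minimal sketch with hypothetical presence/absence data for one business type:

```python
def cohens_kappa(a, b):
    """Cohen's kappa for two binary presence/absence ratings of the same units."""
    n = len(a)
    po = sum(x == y for x, y in zip(a, b)) / n    # observed agreement
    pa1, pb1 = sum(a) / n, sum(b) / n             # marginal "present" rates
    pe = pa1 * pb1 + (1 - pa1) * (1 - pb1)        # agreement expected by chance
    return (po - pe) / (1 - pe)

# hypothetical data: 1 = business recorded on a block, 0 = not
observed = [1, 1, 0, 0, 1, 0, 1, 0]   # direct observation by raters
listed   = [1, 0, 0, 0, 1, 0, 1, 1]   # commercial database listing
print(round(cohens_kappa(observed, listed), 2))   # 0.5
```

Here observed agreement is 6/8 and chance agreement is 0.5, giving kappa = 0.5, which would fall in the middle of the 0.32-0.70 range reported above.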
2012-01-01
Background Most studies on the local food environment have used secondary sources to describe the food environment, such as government food registries or commercial listings (e.g., Reference USA). Most of the studies exploring evidence for validity of secondary retail food data have used on-site verification and have not conducted analysis by data source (e.g., sensitivity of Reference USA) or by food outlet type (e.g., sensitivity of Reference USA for convenience stores). Few studies have explored the food environment in American Indian communities. To advance the science on measuring the food environment, we conducted direct, on-site observations of a wide range of food outlets in multiple American Indian communities, without a list guiding the field observations, and then compared our findings to several types of secondary data. Methods Food outlets located within seven State Designated Tribal Statistical Areas in North Carolina (NC) were gathered from online Yellow Pages, Reference USA, Dun & Bradstreet, local health departments, and the NC Department of Agriculture and Consumer Services. All TIGER/Line 2009 roads (>1,500 miles) were driven in six of the more rural tribal areas and, for the largest tribe, all roads in two of its cities were driven. Sensitivity, positive predictive value, concordance, and kappa statistics were calculated to compare secondary data sources to primary data. Results 699 food outlets were identified during primary data collection. Match rate for primary data and secondary data differed by type of food outlet observed, with the highest match rates found for grocery stores (97%), general merchandise stores (96%), and restaurants (91%). Reference USA exhibited almost perfect sensitivity (0.89). Local health department data had substantial sensitivity (0.66) and was almost perfect when focusing only on restaurants (0.91). 
Positive predictive value was substantial for Reference USA (0.67) and moderate for local health department data (0.49). Evidence for validity was comparatively lower for Dun & Bradstreet, online Yellow Pages, and the NC Department of Agriculture. Conclusions Secondary data sources both over- and under-represented the food environment; they were particularly problematic for identifying convenience stores and specialty markets. More attention is needed to improve the validity of existing data sources, especially for rural local food environments. PMID:23173781
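Sensitivity and positive predictive value as used above can be sketched by treating the field observations as ground truth; the outlet IDs below are hypothetical:

```python
def sensitivity_ppv(primary, secondary):
    """primary: set of outlets found in the field (ground truth);
    secondary: set of outlets in a secondary listing."""
    tp = len(primary & secondary)    # listed and actually present
    fn = len(primary - secondary)    # present but missing from the listing
    fp = len(secondary - primary)    # listed but not found in the field
    sens = tp / (tp + fn)            # share of real outlets the listing captures
    ppv = tp / (tp + fp)             # share of listed outlets that really exist
    return sens, ppv

# hypothetical outlet IDs
field = {"g1", "g2", "r1", "r2", "c1", "c2"}
listing = {"g1", "g2", "r1", "c1", "x9", "x8"}
sens, ppv = sensitivity_ppv(field, listing)
print(round(sens, 2), round(ppv, 2))
```

Low PPV with high sensitivity corresponds to the over-representation pattern reported above (the listing contains outlets that no longer exist), while low sensitivity corresponds to under-representation.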
Combining Radiography and Passive Measurements for Radiological Threat Localization in Cargo
DOE Office of Scientific and Technical Information (OSTI.GOV)
Miller, Erin A.; White, Timothy A.; Jarman, Kenneth D.
Detecting shielded special nuclear material (SNM) in a cargo container is a difficult problem, since shielding reduces the amount of radiation escaping the container. Radiography provides information that is complementary to that provided by passive gamma-ray detection systems: while not directly sensitive to radiological materials, radiography can reveal highly shielded regions that may mask a passive radiological signal. Combining these measurements has the potential to improve SNM detection, either through improved sensitivity or by providing a solution to the inverse problem to estimate source properties (strength and location). We present a data-fusion method that uses a radiograph to provide an estimate of the radiation-transport environment for gamma rays from potential sources. This approach makes quantitative use of radiographic images without relying on image interpretation, and results in a probabilistic description of likely source locations and strengths. We present results for this method for a modeled test case of a cargo container passing through a plastic-scintillator-based radiation portal monitor and a transmission-radiography system. We find that a radiograph-based inversion scheme allows for localization of a low-noise source placed randomly within the test container to within 40 cm, compared to 70 cm for triangulation alone, while strength estimation accuracy is improved by a factor of six. Improvements are seen in regions of both high and low shielding, but are most pronounced in highly shielded regions. The approach proposed here combines transmission and emission data in a manner that has not been explored in the cargo-screening literature, advancing the ability to accurately describe a hidden source based on currently-available instrumentation.
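The radiograph-informed inversion described above can be caricatured in one dimension: an attenuation map (of the kind a radiograph might supply) shapes the expected detector counts, and a grid search over source position with the Poisson maximum-likelihood strength recovers the source. All geometry and numbers below are hypothetical, not the paper's model:

```python
import numpy as np

n = 100                                  # 1D cargo container split into 100 cells
mu = np.full(n, 0.02)                    # baseline attenuation per cell
mu[60:75] = 0.30                         # highly shielded region (from the "radiograph")
detectors = (0, n - 1)                   # one detector at each end

def expected_counts(src, q):
    """Counts at each detector from a source of strength q in cell src,
    attenuated along the straight path through the cargo."""
    out = []
    for d in detectors:
        lo, hi = sorted((src, d))
        out.append(q * np.exp(-mu[lo:hi].sum()))
    return np.array(out)

true_src, true_q = 65, 1000.0
rng = np.random.default_rng(4)
meas = rng.poisson(expected_counts(true_src, true_q))

# grid search over source cell; for each cell the Poisson maximum-likelihood
# strength is total measured counts over total modeled unit-strength counts
best = None
for s in range(n):
    g = expected_counts(s, 1.0)
    q_hat = meas.sum() / g.sum()
    loglik = float((meas * np.log(q_hat * g) - q_hat * g).sum())
    if best is None or loglik > best[0]:
        best = (loglik, s, q_hat)
print(best[1], round(best[2]))           # position and strength estimates
```

Because attenuation is strongest in the shielded region, the likelihood there is very sensitive to position, which mirrors the paper's observation that improvements are most pronounced in highly shielded regions.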
Improving the Nulling Beamformer Using Subspace Suppression.
Rana, Kunjan D; Hämäläinen, Matti S; Vaina, Lucia M
2018-01-01
Magnetoencephalography (MEG) captures the magnetic fields generated by neuronal current sources with sensors outside the head. In MEG analysis these current sources are estimated from the measured data to identify the locations and time courses of neural activity. Since there is no unique solution to this so-called inverse problem, multiple source estimation techniques have been developed. The nulling beamformer (NB), a modified form of the linearly constrained minimum variance (LCMV) beamformer, is specifically used in the process of inferring interregional interactions and is designed to eliminate shared signal contributions, or cross-talk, between regions of interest (ROIs) that would otherwise interfere with the connectivity analyses. The nulling beamformer applies the truncated singular value decomposition (TSVD) to remove small signal contributions from a ROI to the sensor signals. However, ROIs with strong crosstalk will have high separating power in the weaker components, which may be removed by the TSVD operation. To address this issue we propose a new method, the nulling beamformer with subspace suppression (NBSS). This method, controlled by a tuning parameter, reweights the singular values of the gain matrix mapping from source to sensor space such that components with high overlap are reduced. By doing so, we are able to measure signals between nearby source locations with limited cross-talk interference, allowing for reliable cortical connectivity analysis between them. In two simulations, we demonstrated that NBSS reduces cross-talk while retaining ROIs' signal power, and has higher separating power than both the minimum norm estimate (MNE) and the nulling beamformer without subspace suppression. We also showed that NBSS successfully localized the auditory M100 event-related field in primary auditory cortex, measured from a subject undergoing an auditory localizer task, and suppressed cross-talk in a nearby region in the superior temporal sulcus.
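The TSVD step and the reweighting idea can be contrasted in a generic sketch. This is not the paper's exact NBSS weighting, only an illustration of hard truncation versus smooth shrinkage of a gain matrix's singular values:

```python
import numpy as np

def tsvd_filter(G, k):
    """Hard truncation: keep only the k largest singular components of G."""
    U, s, Vt = np.linalg.svd(G, full_matrices=False)
    s_k = np.where(np.arange(s.size) < k, s, 0.0)
    return U @ np.diag(s_k) @ Vt

def shrinkage_filter(G, lam):
    """Smooth shrinkage s -> s^3 / (s^2 + lam): weak components are damped
    rather than discarded, so weak-but-separating directions survive."""
    U, s, Vt = np.linalg.svd(G, full_matrices=False)
    return U @ np.diag(s**3 / (s**2 + lam)) @ Vt

rng = np.random.default_rng(0)
G = rng.standard_normal((20, 8))       # toy source-to-sensor gain matrix
G_t = tsvd_filter(G, 4)                # rank 4 after truncation
G_s = shrinkage_filter(G, 0.5)         # full rank, weak components damped
print(np.linalg.matrix_rank(G_t), np.linalg.matrix_rank(G_s))
```

The shrinkage form keeps all eight components (attenuated), which is the qualitative behavior motivating NBSS: components with small singular values carry the separating power between strongly overlapping ROIs and should be suppressed, not eliminated.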
Emberson, Lauren L.; Crosswhite, Stephen L.; Goodwin, James R.; Berger, Andrew J.; Aslin, Richard N.
2016-01-01
Functional near-infrared spectroscopy (fNIRS) records hemodynamic changes in the cortex arising from neurovascular coupling. However, (noninvasive) fNIRS recordings also record surface vascular signals arising from noncortical sources (e.g., in the skull, skin, dura, and other tissues located between the sensors and the brain). A current and important focus in the fNIRS community is determining how to remove these noncortical vascular signals to reduce noise and to prevent researchers from erroneously attributing responses to cortical sources. The current study is the first to test a popular method for removing signals from the surface vasculature (removing short, 1 cm, channel recordings from long, 3 cm, channel recordings) in human infants, a population frequently studied using fNIRS. We find evidence that this method does remove surface vasculature signals and indicates the presence of both local and global surface vasculature signals. However, we do not find that the removal of this information changes the statistical inferences drawn from the data. This latter result not only questions the importance of removing surface vasculature responses for empiricists employing this method, but also calls for future research using other tasks (e.g., ones with a weaker initial result) with this population and possibly additional methods for removing signals arising from the surface vasculature in infants. PMID:27158631
Rapid tsunami models and earthquake source parameters: Far-field and local applications
Geist, E.L.
2005-01-01
Rapid tsunami models have recently been developed to forecast far-field tsunami amplitudes from initial earthquake information (magnitude and hypocenter). Earthquake source parameters that directly affect tsunami generation as used in rapid tsunami models are examined, with particular attention to local versus far-field application of those models. First, the validity of the assumption that the focal mechanism and type of faulting for tsunamigenic earthquakes is similar in a given region can be evaluated by measuring the seismic consistency of past events. Second, the assumption that slip occurs uniformly over the rupture area will most often lead to underestimation of the amplitude and leading-wave steepness of the local tsunami. Third, large-magnitude earthquakes sometimes exhibit a high degree of spatial heterogeneity such that tsunami sources are composed of distinct sub-events that can cause constructive and destructive interference in the wavefield away from the source. Using a stochastic source model, it is demonstrated that local tsunami amplitudes vary by as much as a factor of two or more, depending on the local bathymetry. If other earthquake source parameters such as focal depth or shear modulus are varied in addition to the slip distribution patterns, even greater uncertainty in local tsunami amplitude is expected for earthquakes of similar magnitude. Because of the short amount of time available to issue local warnings and because of the high degree of uncertainty associated with local, model-based forecasts as suggested by this study, direct wave height observations and a strong public education and preparedness program are critical for those regions near suspected tsunami sources.
NASA Astrophysics Data System (ADS)
Ogulei, David; Hopke, Philip K.; Zhou, Liming; Patrick Pancras, J.; Nair, Narayanan; Ondov, John M.
Several multivariate data analysis methods have been applied to a combination of particle size and composition measurements made at the Baltimore Supersite. Partial least squares (PLS) was used to investigate the relationship (linearity) between number concentrations and the measured PM2.5 mass concentrations of chemical species. The data were obtained at the Ponca Street site and consisted of six days' measurements: 6, 7, 8, 18, 19 July, and 21 August 2002. The PLS analysis showed that the covariance between the data could be explained by 10 latent variables (LVs), but only the first four of these were sufficient to establish the linear relationship between the two data sets; adding more LVs did not improve the model. The four LVs were found to better explain the covariance between the large-sized particles and the chemical species. A bilinear receptor model, PMF2, was then used to simultaneously analyze the size distribution and chemical composition data sets. The resolved sources were identified using information from number and mass contributions from each source (source profiles) as well as meteorological data. Twelve sources were identified: oil-fired power plant emissions, secondary nitrate I, local gasoline traffic, coal-fired power plant, secondary nitrate II, secondary sulfate, diesel emissions/bus maintenance, Quebec wildfire episode, nucleation, incinerator, airborne soil/road-way dust, and steel plant emissions. Local sources were mostly characterized by bi-modal number distributions. Regional sources were characterized by transport-mode particles (0.2-0.5 μm).
NASA Astrophysics Data System (ADS)
Ars, Sébastien; Broquet, Grégoire; Yver Kwok, Camille; Roustan, Yelva; Wu, Lin; Arzoumanian, Emmanuel; Bousquet, Philippe
2017-12-01
This study presents a new concept for estimating the pollutant emission rates of a site and its main facilities using a series of atmospheric measurements across the pollutant plumes. This concept combines the tracer release method, local-scale atmospheric transport modelling and a statistical atmospheric inversion approach. The conversion between the controlled emission and the measured atmospheric concentrations of the released tracer across the plume places valuable constraints on the atmospheric transport. This is used to optimise the configuration of the transport model parameters and the model uncertainty statistics in the inversion system. The emission rates of all sources are then inverted to optimise the match between the concentrations simulated with the transport model and the pollutants' measured atmospheric concentrations, accounting for the transport model uncertainty. In principle, by using atmospheric transport modelling, this concept does not strongly rely on the good colocation between the tracer and pollutant sources and can be used to monitor multiple sources within a single site, unlike the classical tracer release technique. The statistical inversion framework and the use of the tracer data for the configuration of the transport and inversion modelling systems should ensure that the transport modelling errors are correctly handled in the source estimation. The potential of this new concept is evaluated with a relatively simple practical implementation based on a Gaussian plume model and a series of inversions of controlled methane point sources using acetylene as a tracer gas. The experimental conditions are chosen so that they are suitable for the use of a Gaussian plume model to simulate the atmospheric transport. 
In these experiments, different configurations of methane and acetylene point source locations are tested to assess the efficiency of the method in comparison to the classic tracer release technique in coping with the distances between the different methane and acetylene sources. The results from these controlled experiments demonstrate that, when the targeted and tracer gases are not well collocated, this new approach provides a better estimate of the emission rates than the tracer release technique. As an example, the relative error between the estimated and actual emission rates is reduced from 32 % with the tracer release technique to 16 % with the combined approach in the case of a tracer located 60 m upwind of a single methane source. Further studies and more complex implementations with more advanced transport models and more advanced optimisations of their configuration will be required to generalise the applicability of the approach and strengthen its robustness.
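Because a Gaussian plume is linear in the emission rate, both the classic tracer-ratio estimate and a model-based least-squares inversion reduce to simple arithmetic in the noise-free case. A sketch with hypothetical dispersion coefficients (not the study's calibrated transport model):

```python
import numpy as np

def plume(q, u, x, y, a=0.08, b=0.06):
    """Ground-level Gaussian plume: emission rate q (g/s), wind speed u (m/s),
    downwind distance x (m), crosswind offset y (m). sigma_y = a*x and
    sigma_z = b*x are crude hypothetical dispersion curves."""
    sy, sz = a * x, b * x
    return q / (np.pi * u * sy * sz) * np.exp(-y**2 / (2 * sy**2))

u = 3.0                                 # wind speed
x = 200.0                               # transect distance downwind
y = np.linspace(-60, 60, 25)            # crosswind sampling points

q_tracer = 0.5                          # known acetylene release rate
c_tracer = plume(q_tracer, u, x, y)     # "measured" tracer transect

q_true = 2.0                            # methane rate to be recovered
c_ch4 = plume(q_true, u, x, y)          # "measured" methane transect

# classic tracer-ratio estimate: scale the known rate by the ratio of
# integrated cross-plume concentrations (exact here: sources collocated)
q_ratio = q_tracer * c_ch4.sum() / c_tracer.sum()

# model-based inversion: the plume is linear in q, so least squares is a ratio
g = plume(1.0, u, x, y)                 # modeled response to a unit emission
q_est = (g @ c_ch4) / (g @ g)
print(q_ratio, q_est)                   # both recover 2.0 in this noise-free case
```

The two estimates coincide here because the sources are collocated and noise-free; the study's point is precisely that when they are not collocated, the model-based inversion (with tracer-calibrated transport) degrades far less than the plain ratio.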
Cultural knowledge and local vulnerability in African American communities
NASA Astrophysics Data System (ADS)
Miller Hesed, Christine D.; Paolisso, Michael
2015-07-01
Policymakers need to know what factors are most important in determining local vulnerability to facilitate effective adaptation to climate change. Quantitative vulnerability indices are helpful in this endeavour but are limited in their ability to capture subtle yet important aspects of vulnerability such as social networks, knowledge and access to resources. Working with three African American communities on Maryland’s Eastern Shore, we systematically elicit local cultural knowledge on climate change and connect it with a scientific vulnerability framework. The results of this study show that: a given social-ecological factor can substantially differ in the way in which it affects local vulnerability, even among communities with similar demographics and climate-related risks; and social and political isolation inhibits access to sources of adaptive capacity, thereby exacerbating local vulnerability. These results show that employing methods for analysing cultural knowledge can yield new insights to complement those generated by quantitative vulnerability indices.
NASA Astrophysics Data System (ADS)
Yuan, Zibing; Yadav, Varun; Turner, Jay R.; Louie, Peter K. K.; Lau, Alexis Kai Hon
2013-09-01
Despite extensive emission control measures targeting motor vehicles and to a lesser extent other sources, annual-average PM10 mass concentrations in Hong Kong have remained relatively constant for the past several years and for some air quality metrics, such as the frequency of poor visibility days, conditions have degraded. The underlying drivers for these long-term trends were examined by performing source apportionment on eleven years (1998-2008) of data for seven monitoring sites in the Hong Kong PM10 chemical speciation network. Nine factors were resolved using Positive Matrix Factorization. These factors were assigned to emission source categories that were classified as local (operationally defined as within the Hong Kong Special Administrative Region) or non-local based on temporal and spatial patterns in the source contribution estimates. This data-driven analysis provides strong evidence that local controls on motor vehicle emissions have been effective in reducing motor vehicle-related ambient PM10 burdens with annual-average contributions at neighborhood- and larger-scale monitoring stations decreasing by ~6 μg m-3 over the eleven-year period. However, this improvement has been offset by an increase in annual-average contributions from non-local contributions, especially secondary sulfate and nitrate, of ~8 μg m-3 over the same time period. As a result, non-local source contributions to urban-scale PM10 have increased from 58% in 1998 to 70% in 2008. Most of the motor vehicle-related decrease and non-local source driven increase occurred over the period 1998-2004 with more modest changes thereafter. Non-local contributions increased most dramatically for secondary sulfate and secondary nitrate factors and thus combustion-related control strategies, including but not limited to power plants, are needed for sources located in the Pearl River Delta and more distant regions to improve air quality conditions in Hong Kong. 
PMF-resolved source contribution estimates were also used to examine differential contributions of emission source categories during high PM episodes compared to study-average behavior. While contributions from all source categories increased to some extent on high PM days, the increases were disproportionately high for the non-local sources. Thus, controls on emission sources located outside the Hong Kong Special Administrative Region will be needed to effectively decrease the frequency and severity of high PM episodes.
Probabilities for gravitational lensing by point masses in a locally inhomogeneous universe
NASA Technical Reports Server (NTRS)
Isaacson, Jeffrey A.; Canizares, Claude R.
1989-01-01
Probability functions for gravitational lensing by point masses that incorporate Poisson statistics and flux conservation are formulated in the Dyer-Roeder construction. Optical depths to lensing for distant sources are calculated using both the method of Press and Gunn (1973) which counts lenses in an otherwise empty cone, and the method of Ehlers and Schneider (1986) which projects lensing cross sections onto the source sphere. These are then used as parameters of the probability density for lensing in the case of a critical (q0 = 1/2) Friedmann universe. A comparison of the probability functions indicates that the effects of angle-averaging can be well approximated by adjusting the average magnification along a random line of sight so as to conserve flux.
Transmission network of the 2014-2015 Ebola epidemic in Sierra Leone.
Yang, Wan; Zhang, Wenyi; Kargbo, David; Yang, Ruifu; Chen, Yong; Chen, Zeliang; Kamara, Abdul; Kargbo, Brima; Kandula, Sasikiran; Karspeck, Alicia; Liu, Chao; Shaman, Jeffrey
2015-11-06
Understanding the growth and spatial expansion of (re)emerging infectious disease outbreaks, such as Ebola and avian influenza, is critical for the effective planning of control measures; however, such efforts are often compromised by data insufficiencies and observational errors. Here, we develop a spatial-temporal inference methodology using a modified network model in conjunction with the ensemble adjustment Kalman filter, a Bayesian inference method equipped to handle observational errors. The combined method is capable of revealing the spatial-temporal progression of infectious disease, while requiring only limited, readily compiled data. We use this method to reconstruct the transmission network of the 2014-2015 Ebola epidemic in Sierra Leone and identify source and sink regions. Our inference suggests that, in Sierra Leone, transmission within the network introduced Ebola to neighbouring districts and initiated self-sustaining local epidemics; two of the more populous and connected districts, Kenema and Port Loko, facilitated two independent transmission pathways. Epidemic intensity differed by district, was highly correlated with population size (r = 0.76, p = 0.0015) and a critical window of opportunity for containing local Ebola epidemics at the source (ca one month) existed. This novel methodology can be used to help identify and contain the spatial expansion of future (re)emerging infectious disease outbreaks. © 2015 The Author(s).
NASA Astrophysics Data System (ADS)
Forte, Paulo M. F.; Felgueiras, P. E. R.; Ferreira, Flávio P.; Sousa, M. A.; Nunes-Pereira, Eduardo J.; Bret, Boris P. J.; Belsley, Michael S.
2017-01-01
An automatic optical inspection system for detecting local defects on specular surfaces is presented. The system uses an image display to produce a sequence of structured diffuse illumination patterns and a digital camera to acquire the corresponding sequence of images. An image enhancement algorithm, which measures the local intensity variations between bright- and dark-field illumination conditions, yields a final image in which the defects are revealed with a high contrast. Subsequently, an image segmentation algorithm, which statistically compares the enhanced image of the inspected surface with the corresponding image for a defect-free template, allows defects to be separated from non-defects with an adjustable decision threshold. The method can be applied to shiny surfaces of any material including metal, plastic and glass. The described method was tested on the plastic surface of a car dashboard system. We were able to detect not only scratches but also dust and fingerprints. In our experiment we observed a detection contrast increase from about 40%, when using an extended light source, to more than 90% when using a structured light source. The presented method is simple, robust and can be carried out with short cycle times, making it appropriate for applications in industrial environments.
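The bright-/dark-field enhancement idea can be sketched with synthetic images; the threshold rule below is a simplification of the paper's statistical comparison against a defect-free template, and all intensities are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(3)
h, w = 64, 64
bright = np.full((h, w), 200.0) + rng.normal(0, 2, (h, w))   # bright-field image
dark = np.full((h, w), 20.0) + rng.normal(0, 2, (h, w))      # dark-field image

# a scattering defect: darker under bright-field, brighter under dark-field
bright[30:34, 30:34] -= 120
dark[30:34, 30:34] += 120

# per-pixel contrast between the two illumination conditions; defects
# invert the normal bright/dark relationship and stand out strongly
contrast = (bright - dark) / (bright + dark)
defect_mask = contrast < 0.5 * contrast.mean()
print(defect_mask.sum())                 # the 16 defect pixels are flagged
```

A defect-free specular surface gives uniformly high contrast, so any pixel far below the mean is a candidate defect; this is the mechanism behind the high detection contrast reported for structured illumination.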
Cataloging tremor at Kilauea Volcano, Hawaii
NASA Astrophysics Data System (ADS)
Thelen, W. A.; Wech, A.
2013-12-01
Tremor is a ubiquitous seismic feature on Kilauea volcano, which emanates from at least three distinct sources. At depth, intermittent tremor and earthquakes thought to be associated with the underlying plumbing system of Kilauea (Aki and Koyanagi, 1981) occur approximately 40 km below and 40 km SW of the summit. At the summit of the volcano, nearly continuous tremor is recorded close to a persistently degassing lava lake, which has been present since 2008. Much of this tremor is correlated with spattering at the lake surface, but tremor also occurs in the absence of spattering, and was observed at the summit of the volcano prior to the appearance of the lava lake, predominantly in association with inflation/deflation events. The third known source of tremor is in the area of Pu`u `O`o, a vent that has been active since 1983. The exact source location and depth is poorly constrained for each of these sources. Consistently tracking the occurrence and location of tremor in these areas through time will improve our understanding of the plumbing geometry beneath Kilauea volcano and help identify precursory patterns in tremor leading to changes in eruptive activity. The continuous and emergent nature of tremor precludes the use of traditional earthquake techniques for automatic detection and location of seismicity. We implement the method of Wech and Creager (2008) to both detect and localize tremor seismicity in the three regions described above. The technique uses an envelope cross-correlation method in 5-minute windows that maximizes tremor signal coherency among seismic stations. The catalog is currently being built in near-realtime, with plans to extend the analysis to the past as time and continuous data availability permits. This automated detection and localization method has relatively poor depth constraints due to the construction of the envelope function. 
Nevertheless, the epicenters distinguish activity among the different source regions and serve as starting points for more sophisticated location techniques using cross-correlation and/or amplitude-based locations. The resulting timelines establish a quantitative baseline of behavior for each source to better understand and forecast Kilauea activity.
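The envelope cross-correlation step can be sketched for two synthetic stations recording the same amplitude-modulated tremor with a propagation delay (window length, noise levels, and the delay are hypothetical):

```python
import numpy as np

def envelope(x, win=50):
    """Crude amplitude envelope: rectify, then smooth with a moving average."""
    return np.convolve(np.abs(x), np.ones(win) / win, mode="same")

def best_lag(e1, e2):
    """Lag (in samples) maximizing the cross-correlation of the two envelopes."""
    e1 = e1 - e1.mean()
    e2 = e2 - e2.mean()
    cc = np.correlate(e1, e2, mode="full")
    return int(np.argmax(cc)) - (e2.size - 1)

rng = np.random.default_rng(1)
fs = 100                                      # samples per second
t = np.arange(0, 30, 1 / fs)
# amplitude-modulated noise as a stand-in for emergent volcanic tremor
tremor = rng.standard_normal(t.size) * (1 + np.sin(2 * np.pi * 0.05 * t))

shift = 120                                   # 1.2 s propagation delay
sta1 = tremor + 0.3 * rng.standard_normal(t.size)
sta2 = np.roll(tremor, shift) + 0.3 * rng.standard_normal(t.size)  # wraparound ignored

lag = best_lag(envelope(sta1), envelope(sta2))
print(lag / fs)                               # delay estimate in seconds (negative: sta2 lags)
```

Relative envelope lags like this one, measured across a station network, are what constrain the epicenter; the smoothing that makes the envelope usable for emergent signals is also why depth is poorly constrained.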
NASA Astrophysics Data System (ADS)
Aaronson, Neil L.
This dissertation deals with questions important to the problem of human sound source localization in rooms, starting with perceptual studies and moving on to physical measurements made in rooms. In Chapter 1, a perceptual study is performed relevant to a specific phenomenon: the effect of speech reflections occurring in the front-back dimension and the ability of humans to segregate that from unreflected speech. Distracters were presented from the same source as the target speech, a loudspeaker directly in front of the listener, and also from a loudspeaker directly behind the listener, delayed relative to the front loudspeaker. Steps were taken to minimize the contributions of binaural difference cues. For all delays within +/-32 ms, a release from informational masking of about 2 dB occurred. This suggested that human listeners are able to segregate speech sources based on spatial cues, even with minimal binaural cues. In moving on to physical measurements in rooms, a method was sought for simultaneous measurement of room characteristics such as impulse response (IR) and reverberation time (RT60), and binaural parameters such as interaural time difference (ITD), interaural level difference (ILD), and the interaural cross-correlation function and coherence. Chapter 2 involves investigations into the usefulness of maximum length sequences (MLS) for these purposes. Comparisons to random telegraph noise (RTN) show that MLS performs better in the measurement of stationary and room transfer functions, IR, and RT60 by an order of magnitude in RMS percent error, even after Wiener filtering and exponential time-domain filtering have improved the accuracy of RTN measurements. Measurements were taken in real rooms in an effort to understand how the reverberant characteristics of rooms affect binaural parameters important to sound source localization. Chapter 3 deals with interaural coherence, a parameter important for localization and perception of auditory source width. 
MLS were used to measure waveform and envelope coherences in two rooms for various source distances and 0° azimuth through a head-and-torso simulator (KEMAR). A relationship is sought that relates these two types of coherence, since envelope coherence, while an important quantity, is generally less accessible than waveform coherence. A power law relationship is shown to exist between the two that works well within and across bands, for any source distance, and is robust to reverberant conditions of the room. Measurements of ITD, ILD, and coherence in rooms give insight into the way rooms affect these parameters, and in turn, the ability of listeners to localize sounds in rooms. Such measurements, along with room properties, are made and analyzed using MLS methods in Chapter 4. It was found that the pinnae cause incoherence for sound sources incident between 30° and 90°. In human listeners, this does not seem to adversely affect performance in lateralization experiments. The cause of poor coherence in rooms was studied as part of Chapter 4 as well. It was found that rooms affect coherence by introducing variance into the ITD spectra within the bands in which it is measured. A mathematical model to predict the interaural coherence within a band given the standard deviation of the ITD spectrum and the center frequency of the band gives an exponential relationship. This is found to work well in predicting measured coherence given ITD spectrum variance. The pinnae seem to affect the ITD spectrum in a similar way at incident sound angles for which coherence is poor in an anechoic environment.
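The usefulness of MLS for impulse-response measurement rests on their two-valued periodic autocorrelation (N at zero lag, -1 at every other lag), which lets circular cross-correlation with the excitation recover the IR directly. A sketch generating an m-sequence with a linear feedback shift register and checking that property:

```python
import numpy as np

def mls(n_bits=8, taps=(8, 6, 5, 4)):
    """m-sequence from a Fibonacci LFSR. Taps (8, 6, 5, 4) correspond to the
    primitive polynomial x^8 + x^6 + x^5 + x^4 + 1, giving period 2^8 - 1."""
    state = [1] * n_bits
    out = []
    for _ in range(2**n_bits - 1):
        out.append(state[-1])            # output the last register bit
        fb = 0
        for tap in taps:                 # feedback: XOR of the tapped bits
            fb ^= state[tap - 1]
        state = [fb] + state[:-1]        # shift, inserting the feedback bit
    return np.array(out)

s = 1 - 2 * mls()                        # map bits {0,1} to {+1,-1}
N = s.size                               # 255
# two-valued periodic autocorrelation: N at zero lag, -1 at all other lags
r = np.array([int(np.dot(s, np.roll(s, k))) for k in range(N)])
print(r[0], r[1], r[100])
```

In a real measurement the sequence would be played on loop and the recorded response circularly cross-correlated with `s`; the near-delta autocorrelation is what gives MLS its noise advantage over RTN.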
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lascola, Robert; O'Rourke, Patrick E.; Kyser, Edward A.
2017-10-05
Here, we have developed a piecewise local (PL) partial least squares (PLS) analysis method for total plutonium measurements by absorption spectroscopy in nitric acid-based nuclear material processing streams. Instead of using a single PLS model that covers all expected solution conditions, the method selects one of several local models based on an assessment of solution absorbance, acidity, and Pu oxidation state distribution. The local models match the global model for accuracy against the calibration set, but were observed in several instances to be more robust to variations associated with measurements in the process. The improvements are attributed to the relative parsimony of the local models. Not all of the sources of spectral variation are uniformly present in each part of the calibration range. Thus, the global model is locally overfitted and susceptible to increased variance when presented with new samples. A second set of models quantifies the relative concentrations of Pu(III), (IV), and (VI). Standards containing a mixture of these species were not at equilibrium due to a disproportionation reaction. Therefore, a separate principal component analysis is used to estimate the concentrations of the individual oxidation states in these standards in the absence of independent confirmatory analysis. The PL analysis approach is generalizable to other systems where the analysis of chemically complicated systems can be aided by rational division of the overall range of solution conditions into simpler sub-regions.
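The routing idea behind the PL approach — fit several simple local models and dispatch each new measurement to the model for the calibration sub-range it falls in — can be sketched as follows. This is an illustration, not the authors' code: ordinary least squares stands in for PLS, and the single `acidity` routing variable and its bin edges are hypothetical (the paper routes on absorbance, acidity, and oxidation-state distribution together).

```python
import numpy as np

def fit_local_models(X, y, acidity, bins):
    """Fit one least-squares model per acidity sub-range.

    X: (n_samples, n_wavelengths) absorbance spectra
    y: (n_samples,) total Pu concentration
    acidity: (n_samples,) routing variable used to pick the sub-range
    bins: increasing bin edges defining the local sub-ranges
    """
    models = []
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (acidity >= lo) & (acidity < hi)
        # Augment with a bias column; OLS stands in for a real PLS fit.
        A = np.hstack([X[mask], np.ones((mask.sum(), 1))])
        coef, *_ = np.linalg.lstsq(A, y[mask], rcond=None)
        models.append(coef)
    return models

def predict_piecewise(models, bins, x, a):
    """Route a new spectrum x to the local model matching its acidity a."""
    idx = int(np.clip(np.searchsorted(bins, a, side="right") - 1,
                      0, len(models) - 1))
    return np.hstack([x, 1.0]) @ models[idx]
```

Because each local model only ever sees its own sub-range, sources of variation absent from that sub-range cannot inflate its variance — the parsimony argument the abstract makes.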
Cohen, Michael X
2017-09-27
The number of simultaneously recorded electrodes in neuroscience is steadily increasing, providing new opportunities for understanding brain function, but also new challenges for appropriately dealing with the increase in dimensionality. Multivariate source separation analysis methods have been particularly effective at improving signal-to-noise ratio while reducing the dimensionality of the data and are widely used for cleaning, classifying and source-localizing multichannel neural time series data. Most source separation methods produce a spatial component (that is, a weighted combination of channels to produce one time series); here, this is extended to apply source separation to a time series, with the idea of obtaining a weighted combination of successive time points, such that the weights are optimized to satisfy some criteria. This is achieved via a two-stage source separation procedure, in which an optimal spatial filter is first constructed and then its optimal temporal basis function is computed. This second stage is achieved with a time-delay-embedding matrix, in which additional rows of a matrix are created from time-delayed versions of existing rows. The optimal spatial and temporal weights can be obtained by solving a generalized eigendecomposition of covariance matrices. The method is demonstrated in simulated data and in an empirical electroencephalogram study on theta-band activity during response conflict. Spatiotemporal source separation has several advantages, including defining empirical filters without the need to apply sinusoidal narrowband filters. © 2017 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
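The two-stage procedure rests on generalized eigendecompositions of covariance matrices. A minimal sketch of the first (spatial) stage plus the time-delay embedding that feeds the second stage might look like the following; the epoch shapes and the signal/reference split are assumptions for illustration, not the paper's exact pipeline:

```python
import numpy as np
from scipy.linalg import eigh

def spatial_filter(signal_epoch, reference_epoch):
    """First-stage GED: channel weights maximizing signal-to-reference power.

    Each epoch is (n_channels, n_times). Solves S w = lambda R w and keeps
    the eigenvector with the largest eigenvalue as the spatial filter.
    Returns (weights, component time series).
    """
    S = np.cov(signal_epoch)      # covariance during the condition of interest
    R = np.cov(reference_epoch)   # covariance of reference/baseline data
    evals, evecs = eigh(S, R)     # generalized eigendecomposition (ascending)
    w = evecs[:, -1]
    return w, w @ signal_epoch

def delay_embed(ts, n_delays):
    """Second stage input: stack time-delayed copies of the component,
    so each row is the same series shifted by one sample."""
    n = ts.size - n_delays + 1
    return np.stack([ts[k:k + n] for k in range(n_delays)])
```

The optimal temporal basis function would then come from a second generalized eigendecomposition computed on covariances of the delay-embedded matrix, which is the step that replaces sinusoidal narrowband filtering.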
NASA Astrophysics Data System (ADS)
Kustas, William P.; Alfieri, Joseph G.; Anderson, Martha C.; Colaizzi, Paul D.; Prueger, John H.; Evett, Steven R.; Neale, Christopher M. U.; French, Andrew N.; Hipps, Lawrence E.; Chávez, José L.; Copeland, Karen S.; Howell, Terry A.
2012-12-01
Application and validation of many thermal remote sensing-based energy balance models involve the use of local meteorological inputs of incoming solar radiation, wind speed and air temperature as well as accurate land surface temperature (LST), vegetation cover and surface flux measurements. For operational applications at large scales, such local information is not routinely available. In addition, the uncertainty in LST estimates can be several degrees due to sensor calibration issues, atmospheric effects and spatial variations in surface emissivity. Time differencing techniques using multi-temporal thermal remote sensing observations have been developed to reduce errors associated with deriving the surface-air temperature gradient, particularly in complex landscapes. The Dual-Temperature-Difference (DTD) method addresses these issues by utilizing the Two-Source Energy Balance (TSEB) model of Norman et al. (1995) [1], and is a relatively simple scheme requiring meteorological input from standard synoptic weather station networks or mesoscale modeling. A comparison of the TSEB and DTD schemes is performed using LST and flux observations from eddy covariance (EC) flux towers and large weighing lysimeters (LYs) in irrigated cotton fields collected during BEAREX08, a large-scale field experiment conducted in the semi-arid climate of the Texas High Plains as described by Evett et al. (2012) [2]. Model output of the energy fluxes (i.e., net radiation, soil heat flux, sensible and latent heat flux) generated with DTD and TSEB using local and remote meteorological observations are compared with EC and LY observations. The DTD method is found to be significantly more robust in flux estimation compared with the TSEB using the remote meteorological observations. However, discrepancies between model and measured fluxes are also found to be significantly affected by the local inputs of LST and vegetation cover and the representativeness of the remote sensing observations with the local flux measurement footprint.
Satellite data based method for general survey of forest insect disturbance in British Columbia
NASA Astrophysics Data System (ADS)
Ranson, J.; Montesano, P.
2008-12-01
Regional forest disturbances caused by insects are important to monitor and quantify because of their influence on local ecosystems and the global carbon cycle. Local damage to forest trees disrupts food supplies and shelter for a variety of organisms. Changes in the global carbon budget, its sources and its sinks affect the way the earth functions as a whole, and have an impact on global climate. Furthermore, the ability to detect nascent outbreaks and monitor the spread of regional infestations helps managers mitigate the damage done by catastrophic insect outbreaks. While detection is needed at a fine scale to support local mitigation efforts, detection at a broad regional scale is important for carbon flux modeling on the landscape scale and is needed to direct the local efforts. This paper presents a method for routinely detecting insect damage to coniferous forests using MODIS vegetation indices, thermal anomalies and land cover. The technique is validated using insect outbreak maps and accounts for fire disturbance effects. The range of damage detected may be used to interpret and quantify possible forest damage by insects.
NASA Astrophysics Data System (ADS)
Sharifian, Mohammad Kazem; Kesserwani, Georges; Hassanzadeh, Yousef
2018-05-01
This work extends a robust second-order Runge-Kutta Discontinuous Galerkin (RKDG2) method to solve fully nonlinear and weakly dispersive flows, with the aim of simultaneously addressing accuracy, conservativeness, cost-efficiency and practical needs. The mathematical model governing such flows is based on a variant form of the Green-Naghdi (GN) equations, decomposed as a hyperbolic shallow water system with an elliptic source term. Practical features of relevance (i.e. conservative modeling over irregular terrain with wetting and drying, and local slope limiting) have been carried over from an RKDG2 solver for the Nonlinear Shallow Water (NSW) equations, alongside new considerations to integrate elliptic source terms (i.e. via a fourth-order local discretization of the topography) and to enable local capturing of breaking waves (i.e. via a detector for switching off the dispersive terms). Numerical results are presented, demonstrating the overall capability of the proposed approach in achieving realistic predictions of nearshore wave processes involving both nonlinearity and dispersion effects within a single model.
ERIC Educational Resources Information Center
Allen, Charlie Joe
Using techniques of the reputational method to study community power structure, this report identifies components of power structure in a Tennessee school district, demonstrates that proven methodologies can facilitate educational leaders' reform efforts, and serves as a pilot study for further investigation. Researchers investigated the district…
NASA Astrophysics Data System (ADS)
Jin, Xin; Jiang, Qian; Yao, Shaowen; Zhou, Dongming; Nie, Rencan; Lee, Shin-Jye; He, Kangjian
2018-01-01
In order to improve the performance of infrared and visible image fusion and provide better visual effects, this paper proposes a hybrid fusion method combining the discrete stationary wavelet transform (DSWT), the discrete cosine transform (DCT) and local spatial frequency (LSF). The proposed method has three key processing steps. Firstly, DSWT is employed to decompose the important features of the source image into a series of sub-images with different levels and spatial frequencies. Secondly, DCT is used to separate the significant details of the sub-images according to the energy of different frequencies. Thirdly, LSF is applied to enhance the regional features of the DCT coefficients, which is useful for image feature extraction. Several frequently used image fusion methods and evaluation metrics are employed to assess the validity of the proposed method. The experiments indicate that the proposed method achieves a good fusion effect and is more efficient than other conventional image fusion methods.
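The coefficient-domain fusion idea can be sketched compactly. The snippet below is a deliberately reduced stand-in for the paper's pipeline: it keeps, per DCT coefficient, the one with larger magnitude over a single global DCT, whereas the actual method works on DSWT sub-images, applies DCT per sub-image, and weights coefficients by local spatial frequency.

```python
import numpy as np
from scipy.fft import dctn, idctn

def fuse_dct(img_a, img_b):
    """Fuse two registered grayscale images in the DCT domain.

    For each DCT coefficient the one with larger magnitude is kept,
    on the assumption that larger coefficients carry the salient detail.
    """
    Ca = dctn(img_a, norm="ortho")
    Cb = dctn(img_b, norm="ortho")
    fused = np.where(np.abs(Ca) >= np.abs(Cb), Ca, Cb)
    return idctn(fused, norm="ortho")
```

With `norm="ortho"` the transform pair is exactly invertible, so fusing an image with itself returns the image unchanged — a useful sanity check for any coefficient-selection rule.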
Goal oriented soil mapping: applying modern methods supported by local knowledge: A review
NASA Astrophysics Data System (ADS)
Pereira, Paulo; Brevik, Eric; Oliva, Marc; Estebaranz, Ferran; Depellegrin, Daniel; Novara, Agata; Cerda, Artemi; Menshov, Oleksandr
2017-04-01
In recent years the amount of available soil data has increased substantially. This has facilitated the production of better and more accurate maps, which are important for sustainable land management (Pereira et al., 2017). Despite these advances, human knowledge remains extremely important for understanding the natural characteristics of the landscape. The knowledge accumulated and transmitted generation after generation is priceless, and should be considered a valuable data source for soil mapping and modelling. Local knowledge and wisdom can complement new advances in soil analysis. In addition, farmers are the most interested in the participation and incorporation of their knowledge in the models, since they are the end-users of the studies that soil scientists produce. Integrating local communities' vision and understanding of nature is assumed to be an important step in the implementation of decision makers' policies. Despite this, many challenges remain regarding the integration of local and scientific knowledge, since in some cases there is no spatial correlation between folk and scientific classifications, which may be attributed to the different cultural variables that influence local soil classification. The objective of this work is to review how modern soil mapping methods have incorporated local knowledge into their models. References Pereira, P., Brevik, E., Oliva, M., Estebaranz, F., Depellegrin, D., Novara, A., Cerda, A., Menshov, O. (2017) Goal Oriented soil mapping: applying modern methods supported by local knowledge. In: Pereira, P., Brevik, E., Munoz-Rojas, M., Miller, B. (Eds.) Soil mapping and process modelling for sustainable land use management (Elsevier Publishing House) ISBN: 9780128052006
Alternative transportation funding sources available to Virginia localities.
DOT National Transportation Integrated Search
2006-01-01
In 2003, the Virginia Department of Transportation developed a list of alternative transportation funding sources available to localities in Virginia. Alternative funding sources are defined as those that are not included in the annual interstate, pr...
Localization of sound sources in a room with one microphone
NASA Astrophysics Data System (ADS)
Peić Tukuljac, Helena; Lissek, Hervé; Vandergheynst, Pierre
2017-08-01
Estimation of the location of sound sources is usually done using microphone arrays. Such settings provide an environment where the differences between the signals received at different microphones, in terms of phase or attenuation, are known, which enables localization of the sound sources. In our solution we exploit the properties of the room transfer function in order to localize a sound source inside a room with only one microphone. The shape of the room and the position of the microphone are assumed to be known. The design guidelines and limitations of the sensing matrix are given. The implementation is based on sparsity in terms of the voxels in the room that are occupied by a source. What is especially interesting about our solution is that it localizes sound sources not only in the horizontal plane, but in full 3D coordinates inside the room.
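A sparse-recovery step of this kind can be sketched with orthogonal matching pursuit: the columns of the sensing matrix are (simulated) room responses for each candidate voxel, and the recorded signal is explained as a sparse combination of them. This is a generic illustration under stated assumptions (normalized columns, a known dictionary), not the authors' specific sensing-matrix design:

```python
import numpy as np

def omp(A, y, n_sources, tol=1e-8):
    """Orthogonal matching pursuit for a sparse voxel-occupancy vector.

    A: (n_samples, n_voxels) dictionary, one normalized column per
       candidate source voxel (e.g. simulated room responses).
    y: (n_samples,) signal recorded at the single microphone.
    Returns x with at most n_sources nonzero entries such that A @ x ~ y.
    """
    residual = y.astype(float)
    support = []
    for _ in range(n_sources):
        # pick the voxel whose response best matches the current residual
        k = int(np.argmax(np.abs(A.T @ residual)))
        support.append(k)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
        if np.linalg.norm(residual) < tol:
            break
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x
```

The nonzero entries of the recovered vector directly give the occupied voxels, i.e. the 3D source positions on the chosen grid.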
Habboush, Nawar; Hamid, Laith; Japaridze, Natia; Wiegand, Gert; Heute, Ulrich; Stephani, Ulrich; Galka, Andreas; Siniatchkin, Michael
2015-08-01
The discretization of the brain and the definition of the Laplacian matrix influence the results of methods based on spatial and spatio-temporal smoothness, since the Laplacian operator is used to define the smoothness based on the neighborhood of each grid point. In this paper, the results of low resolution electromagnetic tomography (LORETA) and the spatiotemporal Kalman filter (STKF) are computed using, first, a grey-matter source space with the standard definition of the Laplacian matrix and, second, a whole-brain source space with a modified definition of the Laplacian matrix. Electroencephalographic (EEG) source imaging results for five inter-ictal spikes from a pre-surgical patient with epilepsy are used to validate the two aforementioned approaches. The results using the whole-brain source space and the modified Laplacian matrix were concentrated in a single, stable source activation concordant with the location of the focal cortical dysplasia (FCD) in the patient's brain, compared with the results using a grey-matter grid and the classical definition of the Laplacian matrix. This proof-of-concept study demonstrates a substantial improvement of source localization with both LORETA and STKF and constitutes a basis for further research in a large population of patients with epilepsy.
Qu, Mingkai; Wang, Yan; Huang, Biao; Zhao, Yongcun
2018-06-01
The traditional source apportionment models, such as absolute principal component scores-multiple linear regression (APCS-MLR), are usually susceptible to outliers, which may be widely present in regional geochemical datasets. Furthermore, these models are built on variable space rather than geographical space and thus cannot effectively capture the local spatial characteristics of each source's contributions. To overcome these limitations, a new receptor model, robust absolute principal component scores-robust geographically weighted regression (RAPCS-RGWR), was proposed based on the traditional APCS-MLR model. The new method was then applied to the source apportionment of soil metal elements in a region of Wuhan City, China as a case study. Evaluations revealed that: (i) the RAPCS-RGWR model performed better than the APCS-MLR model in identifying the major sources of soil metal elements, and (ii) source contributions estimated by the RAPCS-RGWR model were closer to the true soil metal concentrations than those estimated by the APCS-MLR model. It is shown that the proposed RAPCS-RGWR model is a more effective source apportionment method than the APCS-MLR (i.e., non-robust and global) model in dealing with regional geochemical datasets. Copyright © 2018 Elsevier B.V. All rights reserved.
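The two ingredients that distinguish the approach — spatial weighting so each location gets its own regression, and robust down-weighting of outliers — can be sketched for a single target site. This is a generic robust-GWR illustration (Gaussian spatial kernel, Huber-style IRLS), not the authors' exact estimator; the choice of kernel, bandwidth and tuning constant are assumptions:

```python
import numpy as np

def robust_local_coefficients(coords, X, y, site, bandwidth, n_iter=3, c=1.345):
    """Local regression coefficients at one site for a robust GWR sketch.

    coords: (n, 2) sample locations; X: (n, p) predictors (e.g. estimated
    source contributions); y: (n,) metal concentration; site: (2,) target.
    A Gaussian spatial kernel weights nearby samples, and Huber-style
    reweighting (IRLS) down-weights outlying observations.
    """
    d = np.linalg.norm(coords - site, axis=1)
    w_geo = np.exp(-0.5 * (d / bandwidth) ** 2)     # spatial kernel
    A = np.hstack([X, np.ones((len(y), 1))])        # local intercept column
    w = w_geo.copy()
    for _ in range(n_iter):
        sw = np.sqrt(w)
        beta, *_ = np.linalg.lstsq(A * sw[:, None], y * sw, rcond=None)
        r = y - A @ beta
        scale = np.median(np.abs(r)) / 0.6745 + 1e-12  # robust residual scale
        u = np.abs(r) / (c * scale)
        w = w_geo * np.where(u <= 1.0, 1.0, 1.0 / u)   # Huber down-weighting
    return beta  # p slopes followed by the intercept, valid near `site`
```

Repeating this at every grid location yields a spatially varying map of source contributions, which is exactly what a global MLR cannot provide.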
ERIC Educational Resources Information Center
Busch, Douglas M.
2012-01-01
As school district revenues are reduced by state allocating agencies, local school district administrators and school boards frequently evaluate alternative sources of possible revenue. One emerging source of revenue that many school districts explore is a local education foundation. Local education foundations are 501(c)(3) nonprofit…
Source-Free Exchange-Correlation Magnetic Fields in Density Functional Theory.
Sharma, S; Gross, E K U; Sanna, A; Dewhurst, J K
2018-03-13
Spin-dependent exchange-correlation energy functionals in use today depend on the charge density and the magnetization density: E_xc[ρ, m]. However, it is also correct to define the functional in terms of the curl of m for physical external fields: E_xc[ρ, ∇ × m]. The exchange-correlation magnetic field, B_xc, then becomes source-free. We study this variation of the theory by uniquely removing the source term from local and generalized gradient approximations to the functional. By doing so, the total Kohn-Sham moments are improved for a wide range of materials for both functionals. Significantly, the moments for the pnictides are now in good agreement with experiment. This source-free method is simple to implement in all existing density functional theory codes.
Simultaneous EEG and MEG source reconstruction in sparse electromagnetic source imaging.
Ding, Lei; Yuan, Han
2013-04-01
Electroencephalography (EEG) and magnetoencephalography (MEG) have different sensitivities to differently configured brain activations, making them complementary in providing independent information for better detection and inverse reconstruction of brain sources. In the present study, we developed an integrative approach, which integrates a novel sparse electromagnetic source imaging method, i.e., variation-based cortical current density (VB-SCCD), together with the combined use of EEG and MEG data in reconstructing complex brain activity. To perform simultaneous analysis of multimodal data, we proposed to normalize EEG and MEG signals according to their individual noise levels to create unit-free measures. Our Monte Carlo simulations demonstrated that this integrative approach is capable of reconstructing complex cortical brain activations (up to 10 simultaneously activated and randomly located sources). Results from experimental data showed that complex brain activations evoked in a face recognition task were successfully reconstructed using the integrative approach, which were consistent with other research findings and validated by independent data from functional magnetic resonance imaging using the same stimulus protocol. Reconstructed cortical brain activations from both simulations and experimental data provided precise source localizations as well as accurate spatial extents of localized sources. In comparison with studies using EEG or MEG alone, the performance of cortical source reconstructions using combined EEG and MEG was significantly improved. We demonstrated that this new sparse ESI methodology with integrated analysis of EEG and MEG data could accurately probe spatiotemporal processes of complex human brain activations. This is promising for noninvasively studying large-scale brain networks of high clinical and scientific significance. Copyright © 2011 Wiley Periodicals, Inc.
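The noise-normalization step that makes EEG (volts) and MEG (teslas) commensurate is simple to state in code. The sketch below scales each modality by a noise standard deviation estimated from baseline segments before stacking; the per-modality (rather than per-channel) scaling is a simplifying assumption for illustration:

```python
import numpy as np

def normalize_and_stack(eeg, meg, eeg_baseline, meg_baseline):
    """Combine EEG and MEG into one unit-free measurement matrix.

    eeg: (n_eeg_channels, n_times) in volts; meg: (n_meg_channels, n_times)
    in teslas. Each modality is divided by its own noise level, estimated
    from baseline (pre-stimulus) data, so neither dominates the inverse fit.
    """
    eeg_scaled = eeg / eeg_baseline.std()
    meg_scaled = meg / meg_baseline.std()
    return np.vstack([eeg_scaled, meg_scaled])
```

After this step a single source-imaging solver can treat the stacked rows uniformly, since residuals in both modalities are measured in units of their respective noise.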
Sound source localization and segregation with internally coupled ears: the treefrog model
Christensen-Dalsgaard, Jakob
2016-01-01
Acoustic signaling plays key roles in mediating many of the reproductive and social behaviors of anurans (frogs and toads). Moreover, acoustic signaling often occurs at night, in structurally complex habitats, such as densely vegetated ponds, and in dense breeding choruses characterized by high levels of background noise and acoustic clutter. Fundamental to anuran behavior is the ability of the auditory system to determine accurately the location from where sounds originate in space (sound source localization) and to assign specific sounds in the complex acoustic milieu of a chorus to their correct sources (sound source segregation). Here, we review anatomical, biophysical, neurophysiological, and behavioral studies aimed at identifying how the internally coupled ears of frogs contribute to sound source localization and segregation. Our review focuses on treefrogs in the genus Hyla, as they are the most thoroughly studied frogs in terms of sound source localization and segregation. They also represent promising model systems for future work aimed at understanding better how internally coupled ears contribute to sound source localization and segregation. We conclude our review by enumerating directions for future research on these animals that will require the collaborative efforts of biologists, physicists, and roboticists. PMID:27730384
Bayesian focalization: quantifying source localization with environmental uncertainty.
Dosso, Stan E; Wilmut, Michael J
2007-05-01
This paper applies a Bayesian formulation to study ocean acoustic source localization as a function of uncertainty in environmental properties (water column and seabed) and of data information content [signal-to-noise ratio (SNR) and number of frequencies]. The approach follows that of the optimum uncertain field processor [A. M. Richardson and L. W. Nolte, J. Acoust. Soc. Am. 89, 2280-2284 (1991)], in that localization uncertainty is quantified by joint marginal probability distributions for source range and depth integrated over uncertain environmental properties. The integration is carried out here using Metropolis-Gibbs sampling for environmental parameters and heat-bath Gibbs sampling for source location to provide efficient sampling over complicated parameter spaces. The approach is applied to acoustic data from a shallow-water site in the Mediterranean Sea where previous geoacoustic studies have been carried out. It is found that reliable localization requires a sufficient combination of prior (environmental) information and data information. For example, sources can be localized reliably for single-frequency data at low SNR (-3 dB) only with small environmental uncertainties, whereas successful localization with large environmental uncertainties requires higher SNR and/or multifrequency data.
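The key numerical idea — drawing samples from the joint posterior and reading off marginals by keeping the coordinates of interest — can be sketched with a plain random-walk Metropolis sampler. This is a generic stand-in for the paper's Metropolis-Gibbs and heat-bath Gibbs schemes; the toy posterior, step size and burn-in below are illustrative assumptions:

```python
import numpy as np

def metropolis_samples(logpost, x0, n_steps, step, rng):
    """Random-walk Metropolis sampling of a joint posterior.

    Marginal distributions (e.g. over source range and depth, with
    environmental parameters integrated out) are obtained simply by
    keeping the corresponding coordinates of the returned samples.
    """
    x = np.asarray(x0, dtype=float)
    lp = logpost(x)
    samples = np.empty((n_steps, x.size))
    for i in range(n_steps):
        proposal = x + rng.normal(scale=step, size=x.size)
        lp_prop = logpost(proposal)
        # accept with probability min(1, posterior ratio)
        if np.log(rng.uniform()) < lp_prop - lp:
            x, lp = proposal, lp_prop
        samples[i] = x
    return samples
```

In the paper's setting the log-posterior would come from an acoustic propagation model evaluated at the proposed source location and environmental parameters; here any callable log-density works, which is what makes the sampling step separable from the physics.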
Lepper, Paul A; D'Spain, Gerald L
2007-08-01
The performance of traditional techniques of passive localization in ocean acoustics such as time-of-arrival (phase differences) and amplitude ratios measured by multiple receivers may be degraded when the receivers are placed on an underwater vehicle due to effects of scattering. However, knowledge of the interference pattern caused by scattering provides a potential enhancement to traditional source localization techniques. Results based on a study using data from a multi-element receiving array mounted on the inner shroud of an autonomous underwater vehicle show that scattering causes the localization ambiguities (side lobes) to decrease in overall level and to move closer to the true source location, thereby improving localization performance, for signals in the frequency band 2-8 kHz. These measurements are compared with numerical modeling results from a two-dimensional time domain finite difference scheme for scattering from two fluid-loaded cylindrical shells. Measured and numerically modeled results are presented for multiple source aspect angles and frequencies. Matched field processing techniques quantify the source localization capabilities for both measurements and numerical modeling output.