Sample records for source localization technique

  1. Adaptively Reevaluated Bayesian Localization (ARBL): A Novel Technique for Radiological Source Localization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miller, Erin A.; Robinson, Sean M.; Anderson, Kevin K.

    2015-01-19

    Here we present a novel technique for the localization of radiological sources in urban or rural environments from an aerial platform. The technique is based on a Bayesian approach to localization, in which measured count rates in a time series are compared with predicted count rates from a series of pre-calculated test sources to define likelihood. Furthermore, this technique is expanded by using a localized treatment with a limited field of view (FOV), coupled with a likelihood ratio reevaluation, allowing for real-time computation on commodity hardware for arbitrarily complex detector models and terrain. In particular, detectors with inherent asymmetry of response (such as those employing internal collimation or self-shielding for enhanced directional awareness) are leveraged by this approach to provide improved localization. Our results from the localization technique are shown for simulated flight data using monolithic as well as directionally-aware detector models, and the capability of the methodology to locate radioisotopes is estimated for several test cases. This localization technique is shown to facilitate urban search by allowing quick and adaptive estimates of source location, in many cases from a single flyover near a source. In particular, this method represents a significant advancement from earlier methods like full-field Bayesian likelihood, which is not generally fast enough to allow for broad-field search in real time, and highest-net-counts estimation, which has a localization error that depends strongly on flight path and cannot generally operate without exhaustive search.
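
    The ARBL abstract gives no implementation details, but its core move (comparing a measured count-rate time series against pre-calculated test sources to form a likelihood) can be sketched with a grid of candidate sources and a Poisson count model. The flight path, source strength, background rate, and stand-off term below are illustrative assumptions, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical flight path: two perpendicular transects over a 100 m x 100 m area.
det_xy = np.vstack([
    np.column_stack([np.linspace(0.0, 100.0, 21), np.full(21, 30.0)]),
    np.column_stack([np.full(21, 50.0), np.linspace(0.0, 100.0, 21)]),
])

# Candidate source positions: the "pre-calculated test sources" on a search grid.
gx, gy = np.meshgrid(np.linspace(0, 100, 51), np.linspace(0, 100, 51))
grid = np.column_stack([gx.ravel(), gy.ravel()])

def predicted_rate(src, det, strength=5e4, background=20.0):
    """Predicted count rate: inverse-square falloff plus a flat background."""
    r2 = np.sum((det - src) ** 2, axis=-1) + 25.0  # +25 m^2: altitude stand-off
    return strength / r2 + background

# Simulate the measured time series for a true source at (60, 40).
true_src = np.array([60.0, 40.0])
measured = rng.poisson(predicted_rate(true_src, det_xy))

# Poisson log-likelihood of every candidate source given the whole time series.
lam = predicted_rate(grid[:, None, :], det_xy[None, :, :])   # (n_grid, n_samples)
loglik = np.sum(measured * np.log(lam) - lam, axis=1)
best = grid[np.argmax(loglik)]
print("maximum-likelihood source estimate:", best)
```

    ARBL's contribution on top of this brute-force picture is the limited field of view and the likelihood-ratio reevaluation that make the search cheap enough for real time; the sketch only shows the underlying Bayesian comparison.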

  2. Techniques for detection and localization of weak hippocampal and medial frontal sources using beamformers in MEG.

    PubMed

    Mills, Travis; Lalancette, Marc; Moses, Sandra N; Taylor, Margot J; Quraan, Maher A

    2012-07-01

    Magnetoencephalography provides precise information about the temporal dynamics of brain activation and is an ideal tool for investigating rapid cognitive processing. However, in many cognitive paradigms visual stimuli are used, which evoke strong brain responses (typically 40-100 nAm in V1) that may impede the detection of weaker activations of interest. This is particularly a concern when beamformer algorithms are used for source analysis, due to artefacts such as "leakage" of activation from the primary visual sources into other regions. We have previously shown (Quraan et al. 2011) that we can effectively reduce leakage patterns and detect weak hippocampal sources by subtracting the functional images derived from the experimental task and a control task with similar stimulus parameters. In this study we assess the performance of three different subtraction techniques. In the first technique we follow the same post-localization subtraction procedures as in our previous work. In the second and third techniques, we subtract the sensor data obtained from the experimental and control paradigms prior to source localization. Using simulated signals embedded in real data, we show that when beamformers are used, subtraction prior to source localization allows for the detection of weaker sources and higher localization accuracy. The improvement in localization accuracy exceeded 10 mm at low signal-to-noise ratios, and sources down to below 5 nAm were detected. We applied our techniques to empirical data acquired with two different paradigms designed to evoke hippocampal and frontal activations, and demonstrated our ability to detect robust activations in both regions with substantial improvements over image subtraction. 
We conclude that removal of the common-mode dominant sources through data subtraction prior to localization further improves the beamformer's ability to project the n-channel sensor-space data to reveal weak sources of interest and allows more accurate localization.
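
    The masking-and-subtraction effect the authors describe can be illustrated with a toy linear forward model and a minimum-variance (LCMV-style) beamformer: a strong common-mode source hides a weak one, and subtracting the control-task sensor data before localization recovers it. The leadfield, source indices, and noise levels below are invented, and the common time course is deliberately shared between the two simulated tasks so that sensor-space subtraction cancels it exactly; this is not the paper's MEG pipeline:

```python
import numpy as np

rng = np.random.default_rng(1)
n_sensors, n_src, n_samples = 32, 40, 2000

# Hypothetical forward model (leadfield): one column per candidate source.
L = rng.standard_normal((n_sensors, n_src))
strong, weak = 5, 27  # dominant ("visual") and weak ("hippocampal") source indices

common = 50.0 * rng.standard_normal(n_samples)  # strong source, present in both tasks
weak_ts = rng.standard_normal(n_samples)        # weak source, experimental task only
noise = lambda: 0.5 * rng.standard_normal((n_sensors, n_samples))

data_exp = np.outer(L[:, strong], common) + np.outer(L[:, weak], weak_ts) + noise()
data_ctl = np.outer(L[:, strong], common) + noise()

def lcmv_power(data):
    """Minimum-variance beamformer output power at every candidate source."""
    R_inv = np.linalg.inv(np.cov(data))
    return np.array([1.0 / (L[:, j] @ R_inv @ L[:, j]) for j in range(n_src)])

peak_raw = int(np.argmax(lcmv_power(data_exp)))             # dominated by the strong source
peak_sub = int(np.argmax(lcmv_power(data_exp - data_ctl)))  # after sensor-space subtraction
print(peak_raw, peak_sub)
```

    In this toy setup the raw beamformer peaks at the strong source while the subtracted data peak at the weak one, which mirrors the paper's point that the subtraction must happen in sensor space, before localization, rather than between the two source images.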

  3. Measurement and modeling of the acoustic field near an underwater vehicle and implications for acoustic source localization.

    PubMed

    Lepper, Paul A; D'Spain, Gerald L

    2007-08-01

    The performance of traditional techniques of passive localization in ocean acoustics such as time-of-arrival (phase differences) and amplitude ratios measured by multiple receivers may be degraded when the receivers are placed on an underwater vehicle due to effects of scattering. However, knowledge of the interference pattern caused by scattering provides a potential enhancement to traditional source localization techniques. Results based on a study using data from a multi-element receiving array mounted on the inner shroud of an autonomous underwater vehicle show that scattering causes the localization ambiguities (side lobes) to decrease in overall level and to move closer to the true source location, thereby improving localization performance, for signals in the frequency band 2-8 kHz. These measurements are compared with numerical modeling results from a two-dimensional time domain finite difference scheme for scattering from two fluid-loaded cylindrical shells. Measured and numerically modeled results are presented for multiple source aspect angles and frequencies. Matched field processing techniques quantify the source localization capabilities for both measurements and numerical modeling output.

  4. A new wave front shape-based approach for acoustic source localization in an anisotropic plate without knowing its material properties.

    PubMed

    Sen, Novonil; Kundu, Tribikram

    2018-07-01

    Estimating the location of an acoustic source in a structure is an important step towards passive structural health monitoring. Techniques for localizing an acoustic source in isotropic structures are well developed in the literature. Development of similar techniques for anisotropic structures, however, has gained attention only in recent years and leaves scope for further improvement. Most of the existing techniques for anisotropic structures either assume a straight-line wave propagation path between the source and an ultrasonic sensor or require the material properties to be known. This study considers different shapes of the wave front generated during an acoustic event and develops a methodology to localize the acoustic source in an anisotropic plate from those wave front shapes. An elliptical wave front shape-based technique was developed first, followed by the development of a parametric curve-based technique for non-elliptical wave front shapes. The source coordinates are obtained by minimizing an objective function. The proposed methodology does not assume a straight-line wave propagation path and can predict the source location without any knowledge of the elastic properties of the material. A numerical study presented here illustrates how the proposed methodology can accurately estimate the source coordinates. Copyright © 2018 Elsevier B.V. All rights reserved.
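
    For the elliptical wave front case the abstract describes, the travel time from a source to a sensor can be written with two unknown principal group velocities, and the source is found by minimizing a misfit objective over position and velocities, with the unknown trigger time removed by centring the arrival times. The sensor layout, velocities, and search grids below are illustrative assumptions, and a coarse brute-force search stands in for whatever optimizer the authors used:

```python
import numpy as np

# Hypothetical sensor layout on a 1 m x 1 m anisotropic plate (metres).
sensors = np.array([[0.1, 0.1], [0.9, 0.1], [0.9, 0.9], [0.1, 0.9],
                    [0.5, 0.1], [0.9, 0.5], [0.5, 0.9], [0.1, 0.5]])

def travel_time(dx, dy, vx, vy):
    """Elliptical wave front: time to cover (dx, dy) with principal speeds vx, vy."""
    return np.sqrt((dx / vx) ** 2 + (dy / vy) ** 2)

# Synthetic acoustic event; the material properties are *not* given to the solver.
true_src, true_v = np.array([0.62, 0.35]), (5200.0, 3100.0)
t_meas = travel_time(sensors[:, 0] - true_src[0], sensors[:, 1] - true_src[1], *true_v)

# Objective: squared misfit of centred arrival times (centring removes the
# unknown trigger time). Brute-force search over position and both speeds.
xs = ys = np.linspace(0.0, 1.0, 41)
vs = np.linspace(2000.0, 7000.0, 21)
X = xs[:, None, None, None, None]
Y = ys[None, :, None, None, None]
VX = vs[None, None, :, None, None]
VY = vs[None, None, None, :, None]
t = travel_time(sensors[:, 0] - X, sensors[:, 1] - Y, VX, VY)   # (41, 41, 21, 21, 8)
r = (t - t.mean(axis=-1, keepdims=True)) - (t_meas - t_meas.mean())
err = np.sum(r ** 2, axis=-1)
ix, iy, ivx, ivy = np.unravel_index(np.argmin(err), err.shape)
print("estimated source:", (xs[ix], ys[iy]), "speeds:", (vs[ivx], vs[ivy]))
```

    Because the wave speeds are searched alongside the position, no elastic constants are needed, which is the property the title advertises; the paper's parametric-curve variant replaces the ellipse with a more general front shape.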

  5. [EEG source localization using LORETA (low resolution electromagnetic tomography)].

    PubMed

    Puskás, Szilvia

    2011-03-30

    Electroencephalography (EEG) has excellent temporal resolution, but its spatial resolution is poor. Different source localization methods exist to solve the so-called inverse problem, thus increasing the accuracy of spatial localization. This paper provides an overview of the history of source localization, and the main categories of techniques are discussed. LORETA (low resolution electromagnetic tomography) is introduced in detail: technical information is discussed and the localization properties of the LORETA method are compared to other inverse solutions. Validation of the method with different imaging techniques is also discussed. This paper reviews several publications using LORETA both in healthy persons and in persons with different neurological and psychiatric diseases. Finally, possible future applications are discussed.

  6. Probabilistic location estimation of acoustic emission sources in isotropic plates with one sensor

    NASA Astrophysics Data System (ADS)

    Ebrahimkhanlou, Arvin; Salamone, Salvatore

    2017-04-01

    This paper presents a probabilistic acoustic emission (AE) source localization algorithm for isotropic plate structures. The proposed algorithm requires only one sensor and uniformly monitors the entire area of such plates without any blind zones. In addition, it takes a probabilistic approach and quantifies localization uncertainties. The algorithm combines a modal acoustic emission (MAE) and a reflection-based technique to obtain information pertaining to the location of AE sources. To estimate confidence contours for the location of sources, uncertainties are quantified and propagated through the two techniques. The approach was validated using standard pencil lead break (PLB) tests on an aluminum plate. The results demonstrate that the proposed source localization algorithm successfully estimates confidence contours for the location of AE sources.

  7. EEG source localization: Sensor density and head surface coverage.

    PubMed

    Song, Jasmine; Davey, Colin; Poulsen, Catherine; Luu, Phan; Turovets, Sergei; Anderson, Erik; Li, Kai; Tucker, Don

    2015-12-30

    The accuracy of EEG source localization depends on a sufficient sampling of the surface potential field, an accurate conducting volume estimation (head model), and a suitable and well-understood inverse technique. The goal of the present study is to examine the effect of sampling density and coverage on the ability to accurately localize sources, using common linear inverse weight techniques, at different depths. Several inverse methods are examined, using the popular head conductivity values. Simulation studies were employed to examine the effect of spatial sampling of the potential field at the head surface, in terms of sensor density and coverage of the inferior and superior head regions. In addition, the effects of sensor density and coverage are investigated in the source localization of epileptiform EEG. Greater sensor density improves source localization accuracy. Moreover, across all sampling density and inverse methods, adding samples on the inferior surface improves the accuracy of source estimates at all depths. More accurate source localization of EEG data can be achieved with high spatial sampling of the head surface electrodes. The most accurate source localization is obtained when the voltage surface is densely sampled over both the superior and inferior surfaces. Copyright © 2015 The Authors. Published by Elsevier B.V. All rights reserved.

  8. Time domain localization technique with sparsity constraint for imaging acoustic sources

    NASA Astrophysics Data System (ADS)

    Padois, Thomas; Doutres, Olivier; Sgard, Franck; Berry, Alain

    2017-09-01

    This paper addresses a time-domain source localization technique for broadband acoustic sources. The objective is to accurately and quickly detect the position and amplitude of noise sources in workplaces in order to propose adequate noise control options and prevent workers' hearing loss or safety risks. First, the generalized cross-correlation associated with a spherical microphone array is used to generate an initial noise source map. Then a linear inverse problem is defined to improve this initial map. Commonly, the linear inverse problem is solved with an l2-regularization. In this study, two sparsity constraints are used to solve the inverse problem: the orthogonal matching pursuit and the truncated Newton interior-point method. Synthetic data are used to highlight the performance of the technique. High-resolution imaging is achieved for various acoustic source configurations. Moreover, the amplitudes of the acoustic sources are correctly estimated. A comparison of computation times shows that the technique is compatible with quasi real-time generation of noise source maps. Finally, the technique is tested with real data.
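
    Of the two sparsity constraints mentioned, orthogonal matching pursuit is the easier one to sketch for the "improve an initial map via a linear inverse problem" step. The dictionary below is a random stand-in for the true array-propagation model, and all sizes and amplitudes are invented:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical linear model: each column of A is the array response of one
# candidate source position; b is the observed (beamformed) data vector.
n_obs, n_cand = 60, 200
A = rng.standard_normal((n_obs, n_cand))
A /= np.linalg.norm(A, axis=0)       # unit-norm dictionary columns

x_true = np.zeros(n_cand)
x_true[[17, 93]] = [2.0, 1.5]        # two acoustic sources
b = A @ x_true + 0.01 * rng.standard_normal(n_obs)

def omp(A, b, n_sources):
    """Orthogonal matching pursuit: greedily pick columns, refit by least squares."""
    support, residual = [], b.copy()
    for _ in range(n_sources):
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        coef, *_ = np.linalg.lstsq(A[:, support], b, rcond=None)
        residual = b - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

x_hat = omp(A, b, 2)
print(sorted(np.nonzero(x_hat)[0].tolist()))   # recovered source positions
```

    Because the refit is a full least squares on the selected support, the recovered amplitudes are essentially unbiased, which is consistent with the abstract's claim that source amplitudes are correctly estimated.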

  9. Explosion localization via infrasound.

    PubMed

    Szuberla, Curt A L; Olson, John V; Arnoult, Kenneth M

    2009-11-01

    Two acoustic source localization techniques were applied to infrasonic data and their relative performance was assessed. The standard approach for low-frequency localization uses an ensemble of small arrays to separately estimate far-field source bearings, resulting in a solution from the various back azimuths. This method was compared to one developed by the authors that treats the smaller subarrays as a single, meta-array. In numerical simulation and a field experiment, the latter technique was found to provide improved localization precision everywhere in the vicinity of a 3-km-aperture meta-array, often by an order of magnitude.
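
    The "standard approach" the authors compare against (separate far-field bearings from each small array, combined into a cross-fix) can be sketched as a least-squares intersection of bearing lines. Array positions and the source location are invented, and the bearings are noise-free here, so the fix is exact:

```python
import numpy as np

# Hypothetical sub-array positions (metres) and true source location.
arrays = np.array([[0.0, 0.0], [3000.0, 0.0], [1500.0, 2600.0]])
src = np.array([900.0, 700.0])

# Each small array reports only a back azimuth (bearing) toward the source.
bearings = np.arctan2(src[1] - arrays[:, 1], src[0] - arrays[:, 0])

# Least-squares cross-fix: the point minimizing the summed squared
# perpendicular distance to all bearing lines.
normals = np.column_stack([-np.sin(bearings), np.cos(bearings)])
M = sum(np.outer(n, n) for n in normals)
v = sum(np.outer(n, n) @ p for n, p in zip(normals, arrays))
fix = np.linalg.solve(M, v)
print(fix)  # → close to (900, 700)
```

    With real (noisy) bearings this cross-fix degrades quickly near and inside the array, which is consistent with where the authors report their meta-array method gaining up to an order of magnitude in precision.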

  10. Methods of localization of Lamb wave sources on thin plates

    NASA Astrophysics Data System (ADS)

    Turkaya, Semih; Toussaint, Renaud; Kvalheim Eriksen, Fredrik; Daniel, Guillaume; Grude Flekkøy, Eirik; Jørgen Måløy, Knut

    2015-04-01

    Signal localization techniques are ubiquitous in both industry and academic communities. We propose a new localization method on plates which is based on energy amplitude attenuation and inverted source amplitude comparison. This inversion is tested on synthetic data using a Lamb wave propagation direct model and on an experimental dataset (recorded with 4 Brüel & Kjær Type 4374 miniature piezoelectric shock accelerometers, 1-26 kHz frequency range). We compare the performance of the technique to classical source localization algorithms: arrival time localization, time reversal localization, and localization based on energy amplitude. Furthermore, we measure and compare the accuracy of these techniques as a function of sampling rate, dynamic range, geometry and signal-to-noise ratio, and we show that this very versatile technique works better than the classical ones over sampling rates of 100 kHz - 1 MHz. The experiments use a glass plate of 80 cm x 40 cm with a thickness of 1 cm. Signals generated by a wooden hammer hit or a steel ball hit are captured by the accelerometers placed at different locations on the plate. Numerical simulations are done using a dispersive far-field approximation of plate waves. Signals are generated using a Hertzian loading over the plate. Using image sources outside the plate boundaries, the effect of reflections is also included. The proposed method can be adapted to 3D environments to monitor industrial activities (e.g., borehole drilling/production activities) or natural brittle systems (e.g., earthquakes, volcanoes, avalanches).

  11. SoundCompass: A Distributed MEMS Microphone Array-Based Sensor for Sound Source Localization

    PubMed Central

    Tiete, Jelmer; Domínguez, Federico; da Silva, Bruno; Segers, Laurent; Steenhaut, Kris; Touhafi, Abdellah

    2014-01-01

    Sound source localization is a well-researched subject with applications ranging from localizing sniper fire in urban battlefields to cataloging wildlife in rural areas. One critical application is the localization of noise pollution sources in urban environments, due to an increasing body of evidence linking noise pollution to adverse effects on human health. Current noise mapping techniques often fail to accurately identify noise pollution sources, because they rely on the interpolation of a limited number of scattered sound sensors. Aiming to produce accurate noise pollution maps, we developed the SoundCompass, a low-cost sound sensor capable of measuring local noise levels and sound field directionality. Our first prototype is composed of a sensor array of 52 microelectromechanical system (MEMS) microphones, an inertial measurement unit and a low-power field-programmable gate array (FPGA). This article presents the SoundCompass's hardware and firmware design together with a data fusion technique that exploits the sensing capabilities of the SoundCompass in a wireless sensor network to localize noise pollution sources. Live tests produced a sound source localization accuracy of a few centimeters in a 25 m² anechoic chamber, while simulation results accurately located up to five broadband sound sources in a 10,000 m² open field. PMID:24463431

  12. Localization from near-source quasi-static electromagnetic fields

    NASA Astrophysics Data System (ADS)

    Mosher, J. C.

    1993-09-01

    A wide range of research has been published on the problem of estimating the parameters of electromagnetic and acoustical sources from signals measured at an array of sensors. In the quasi-static electromagnetic cases examined here, the signal variation from a point source is relatively slow with respect to the signal propagation and the spacing of the array of sensors. As such, the location of the point sources can only be determined from the spatial diversity of the received signal across the array. The inverse source localization problem is complicated by unknown model order and strong local minima. The nonlinear optimization problem is posed for solving for the parameters of the quasi-static source model. The transient nature of the sources can be exploited to allow subspace approaches to separate out the signal portion of the spatial correlation matrix. Decomposition techniques are examined for improved processing, and an adaptation of MUltiple SIgnal Classification (MUSIC) is presented for solving the source localization problem. Recent results on calculating the Cramér-Rao error lower bounds are extended to the multidimensional problem here. This thesis focuses on the problem of source localization in magnetoencephalography (MEG), with a secondary application to thunderstorm source localization. Comparisons are also made between MEG and its electrical equivalent, electroencephalography (EEG). The error lower bounds are examined in detail for several MEG and EEG configurations, as well as localizing thunderstorm cells over Cape Canaveral and Kennedy Space Center. Time-eigenspectrum is introduced as a parsing technique for improving the performance of the optimization problem.
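
    The thesis adapts MUSIC to the quasi-static MEG/EEG problem; the generic narrowband direction-finding version below conveys the core subspace idea. The uniform linear array, arrival angles, and SNR are illustrative assumptions, not taken from the thesis:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical uniform linear array: 8 sensors at half-wavelength spacing.
n_sensors, n_snap = 8, 500
angles_true = np.deg2rad([-20.0, 35.0])

def steering(theta):
    """Array response vectors for arrival angles theta (radians)."""
    k = np.arange(n_sensors)[:, None]
    return np.exp(1j * np.pi * k * np.sin(np.atleast_1d(theta)))

# Two uncorrelated narrowband sources in white sensor noise.
S = rng.standard_normal((2, n_snap)) + 1j * rng.standard_normal((2, n_snap))
N = 0.1 * (rng.standard_normal((n_sensors, n_snap))
           + 1j * rng.standard_normal((n_sensors, n_snap)))
X = steering(angles_true) @ S + N

# MUSIC: steering vectors of true sources are orthogonal to the noise subspace.
R = X @ X.conj().T / n_snap
_, eigvecs = np.linalg.eigh(R)          # eigenvalues in ascending order
En = eigvecs[:, :-2]                    # noise subspace (6 smallest eigenvalues)
scan = np.deg2rad(np.linspace(-90.0, 90.0, 1801))
spectrum = 1.0 / np.linalg.norm(En.conj().T @ steering(scan), axis=0) ** 2

# Pick the two highest well-separated peaks of the pseudo-spectrum.
peaks = []
for i in np.argsort(spectrum)[::-1]:
    if all(abs(scan[i] - p) > np.deg2rad(5.0) for p in peaks):
        peaks.append(scan[i])
    if len(peaks) == 2:
        break
print(np.rad2deg(np.sort(peaks)))
```

    The same scan-against-the-noise-subspace structure carries over to the MEG variant, with candidate dipole locations and forward-model gain vectors in place of plane-wave angles and steering vectors.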

  13. Localization from near-source quasi-static electromagnetic fields

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mosher, John Compton

    1993-09-01

    A wide range of research has been published on the problem of estimating the parameters of electromagnetic and acoustical sources from signals measured at an array of sensors. In the quasi-static electromagnetic cases examined here, the signal variation from a point source is relatively slow with respect to the signal propagation and the spacing of the array of sensors. As such, the location of the point sources can only be determined from the spatial diversity of the received signal across the array. The inverse source localization problem is complicated by unknown model order and strong local minima. The nonlinear optimization problem is posed for solving for the parameters of the quasi-static source model. The transient nature of the sources can be exploited to allow subspace approaches to separate out the signal portion of the spatial correlation matrix. Decomposition techniques are examined for improved processing, and an adaptation of MUltiple SIgnal Classification (MUSIC) is presented for solving the source localization problem. Recent results on calculating the Cramér-Rao error lower bounds are extended to the multidimensional problem here. This thesis focuses on the problem of source localization in magnetoencephalography (MEG), with a secondary application to thunderstorm source localization. Comparisons are also made between MEG and its electrical equivalent, electroencephalography (EEG). The error lower bounds are examined in detail for several MEG and EEG configurations, as well as localizing thunderstorm cells over Cape Canaveral and Kennedy Space Center. Time-eigenspectrum is introduced as a parsing technique for improving the performance of the optimization problem.

  14. Fire Source Localization Based on Distributed Temperature Sensing by a Dual-Line Optical Fiber System.

    PubMed

    Sun, Miao; Tang, Yuquan; Yang, Shuang; Li, Jun; Sigrist, Markus W; Dong, Fengzhong

    2016-06-06

    We propose a method for localizing a fire source using an optical fiber distributed temperature sensor system. A section of two parallel optical fibers employed as the sensing element is installed near the ceiling of a closed room in which the fire source is located. By measuring the temperature of hot air flows, the problem of three-dimensional fire source localization is transformed to two dimensions. The method of source location is verified with experiments using burning alcohol as the fire source, and it is demonstrated that the method represents a robust and reliable technique for localizing a fire source, even over long sensing ranges.

  15. Experimental localization of an acoustic sound source in a wind-tunnel flow by using a numerical time-reversal technique.

    PubMed

    Padois, Thomas; Prax, Christian; Valeau, Vincent; Marx, David

    2012-10-01

    The possibility of using the time-reversal technique to localize acoustic sources in a wind-tunnel flow is investigated. While the technique is widespread, it has scarcely been used in aeroacoustics up to now. The proposed method consists of two steps: in a first experimental step, the acoustic pressure fluctuations are recorded over a linear array of microphones; in a second numerical step, the experimental data are time-reversed and used as input data for a numerical code solving the linearized Euler equations. The simulation achieves the back-propagation of the waves from the array to the source and takes into account the effect of the mean flow on sound propagation. The ability of the method to localize a sound source in a typical wind-tunnel flow is first demonstrated using simulated data. A generic experiment is then set up in an anechoic wind tunnel to validate the proposed method with a flow at Mach number 0.11. Monopole sources, either monochromatic or with narrow-band or wide-band frequency content, are considered first. The source position is estimated with an error smaller than the wavelength. An application to a dipole sound source shows that this type of source is also very satisfactorily characterized.

  16. Fatigue crack localization with near-field acoustic emission signals

    NASA Astrophysics Data System (ADS)

    Zhou, Changjiang; Zhang, Yunfeng

    2013-04-01

    This paper presents an acoustic emission (AE) source localization technique using near-field AE signals induced by crack growth and propagation. The proposed technique is based on the phase difference in the AE signals measured by two identical AE sensing elements spaced apart at a pre-specified distance. This phase difference results in the cancellation of certain frequency contents of the signals, which can be related to the AE source direction. Experimental data from simulated AE sources such as pencil breaks were used along with analytical results from moment tensor analysis. The theoretical predictions, numerical simulations and experimental test results are found to be in good agreement. Real data from field monitoring of an existing fatigue crack on a bridge were also used to test the system. Results show that the proposed method is fairly effective in determining the AE source direction in the thick plates commonly encountered in civil engineering structures.

  17. Survey on the Performance of Source Localization Algorithms.

    PubMed

    Fresno, José Manuel; Robles, Guillermo; Martínez-Tarifa, Juan Manuel; Stewart, Brian G

    2017-11-18

    The localization of emitters using an array of sensors or antennas is a prevalent issue approached in several applications. There exist different techniques for source localization, which can be classified into multilateration, received signal strength (RSS) and proximity methods. The performance of multilateration techniques relies on measured time variables: the time of flight (ToF) of the emission from the emitter to the sensor, the time differences of arrival (TDoA) of the emission between sensors and the pseudo-time of flight (pToF) of the emission to the sensors. The multilateration algorithms presented and compared in this paper can be classified as iterative and non-iterative methods. Both standard least squares (SLS) and hyperbolic least squares (HLS) are iterative and based on the Newton-Raphson technique to solve the non-linear equation system. The metaheuristic technique particle swarm optimization (PSO) used for source localisation is also studied. This optimization technique estimates the source position as the optimum of an objective function based on HLS and is also iterative in nature. Three non-iterative algorithms, namely the hyperbolic positioning algorithms (HPA), the maximum likelihood estimator (MLE) and Bancroft algorithm, are also presented. A non-iterative combined algorithm, MLE-HLS, based on MLE and HLS, is further proposed in this paper. The performance of all algorithms is analysed and compared in terms of accuracy in the localization of the position of the emitter and in terms of computational time. The analysis is also undertaken with three different sensor layouts since the positions of the sensors affect the localization; several source positions are also evaluated to make the comparison more robust. The analysis is carried out using theoretical time differences, as well as including errors due to the effect of digital sampling of the time variables. 
It is shown that the most balanced algorithm, yielding better results than the other algorithms in terms of accuracy and short computational time, is the combined MLE-HLS algorithm.
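
    The iterative HLS method described can be sketched as a Gauss-Newton (Newton-Raphson on the residuals) solver for the hyperbolic TDoA equations. The sensor layout, emitter position, propagation speed, and initial guess below are illustrative assumptions:

```python
import numpy as np

# Hypothetical sensor layout (metres), emitter position, and propagation speed.
sensors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0], [5.0, 12.0]])
emitter = np.array([3.0, 7.0])
c = 343.0  # speed of sound in air, m/s

dist = np.linalg.norm(sensors - emitter, axis=1)
tdoa = (dist - dist[0]) / c        # time differences of arrival w.r.t. sensor 0

def hls(sensors, tdoa, c, x0, n_iter=50):
    """Hyperbolic least squares: Gauss-Newton iteration on the TDoA residuals."""
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iter):
        d = np.linalg.norm(sensors - x, axis=1)
        r = (d - d[0]) / c - tdoa            # residual of each hyperbolic equation
        u = (x - sensors) / d[:, None]       # gradient of each range w.r.t. x
        J = (u - u[0]) / c                   # Jacobian of the residuals
        step, *_ = np.linalg.lstsq(J[1:], r[1:], rcond=None)  # r[0] is always 0
        x -= step
    return x

est = hls(sensors, tdoa, c, x0=[5.0, 5.0])
print(est)  # → approximately [3., 7.]
```

    Like the paper's SLS and HLS methods, this iteration converges only from a reasonable initial guess, which is the sensitivity the non-iterative algorithms (HPA, MLE, Bancroft) avoid by solving a closed-form model instead.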

  18. Survey on the Performance of Source Localization Algorithms

    PubMed Central

    2017-01-01

    The localization of emitters using an array of sensors or antennas is a prevalent issue approached in several applications. There exist different techniques for source localization, which can be classified into multilateration, received signal strength (RSS) and proximity methods. The performance of multilateration techniques relies on measured time variables: the time of flight (ToF) of the emission from the emitter to the sensor, the time differences of arrival (TDoA) of the emission between sensors and the pseudo-time of flight (pToF) of the emission to the sensors. The multilateration algorithms presented and compared in this paper can be classified as iterative and non-iterative methods. Both standard least squares (SLS) and hyperbolic least squares (HLS) are iterative and based on the Newton–Raphson technique to solve the non-linear equation system. The metaheuristic technique particle swarm optimization (PSO) used for source localisation is also studied. This optimization technique estimates the source position as the optimum of an objective function based on HLS and is also iterative in nature. Three non-iterative algorithms, namely the hyperbolic positioning algorithms (HPA), the maximum likelihood estimator (MLE) and Bancroft algorithm, are also presented. A non-iterative combined algorithm, MLE-HLS, based on MLE and HLS, is further proposed in this paper. The performance of all algorithms is analysed and compared in terms of accuracy in the localization of the position of the emitter and in terms of computational time. The analysis is also undertaken with three different sensor layouts since the positions of the sensors affect the localization; several source positions are also evaluated to make the comparison more robust. The analysis is carried out using theoretical time differences, as well as including errors due to the effect of digital sampling of the time variables. 
It is shown that the most balanced algorithm, yielding better results than the other algorithms in terms of accuracy and short computational time, is the combined MLE-HLS algorithm. PMID:29156565

  19. Exploring three faint source detections methods for aperture synthesis radio images

    NASA Astrophysics Data System (ADS)

    Peracaula, M.; Torrent, A.; Masias, M.; Lladó, X.; Freixenet, J.; Martí, J.; Sánchez-Sutil, J. R.; Muñoz-Arjonilla, A. J.; Paredes, J. M.

    2015-04-01

    Wide-field radio interferometric images often contain a large population of faint compact sources. Due to their low intensity-to-noise ratio, these objects can easily be missed by automated detection methods, which have classically been based on thresholding techniques after local noise estimation. The aim of this paper is to present and analyse the performance of several alternative or complementary techniques to thresholding. We compare three different algorithms to increase the detection rate of faint objects. The first technique consists of combining wavelet decomposition with local thresholding. The second technique is based on the structural behaviour of the neighbourhood of each pixel. Finally, the third algorithm uses local features extracted from a bank of filters and a boosting classifier to perform the detections. The methods' performances are evaluated using simulations and radio mosaics from the Giant Metrewave Radio Telescope and the Australia Telescope Compact Array. We show that the new methods perform better than well-known state-of-the-art methods such as SEXTRACTOR, SAD and DUCHAMP at detecting faint sources in radio interferometric images.
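
    The classical baseline the three new methods are compared against (thresholding after local noise estimation) is easy to sketch. The image, source positions, box size, and 5-sigma threshold below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical mosaic: unit-variance Gaussian background with faint point sources.
img = rng.normal(0.0, 1.0, (128, 128))
sources = [(20, 30), (64, 100), (110, 15)]
for r, c in sources:
    img[r, c] += 8.0          # faint but still detectable peaks

def local_threshold_detect(img, box=32, k=5.0):
    """Classical detection: per-box robust noise estimate, flag pixels above k*sigma."""
    det = np.zeros_like(img, dtype=bool)
    for r0 in range(0, img.shape[0], box):
        for c0 in range(0, img.shape[1], box):
            tile = img[r0:r0 + box, c0:c0 + box]
            med = np.median(tile)
            sigma = 1.4826 * np.median(np.abs(tile - med))   # MAD noise estimate
            det[r0:r0 + box, c0:c0 + box] = tile - med > k * sigma
    return det

det = local_threshold_detect(img)
print(sorted(zip(*np.nonzero(det))))
```

    The paper's point is precisely that sources much fainter than the 8-sigma peaks used here slip below any such threshold, motivating the wavelet, structural, and boosted-classifier alternatives.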

  20. Appropriate Technology Sourcebook - For Tools and Techniques That Use Local Skills, Local Resources, and Renewable Sources of Energy. Volume One.

    ERIC Educational Resources Information Center

    Darrow, Ken; Pam, Rick

    Written in non-technical language, this sourcebook identifies plans and books for village and small community technology. It contains reviews of publications from 375 American and foreign sources on agriculture, alternative sources of energy, water supply, health care, housing, and related subjects. Emphasized are small-scale systems using local…

  1. Adaptive Environmental Source Localization and Tracking with Unknown Permittivity and Path Loss Coefficients †

    PubMed Central

    Fidan, Barış; Umay, Ilknur

    2015-01-01

    Accurate signal-source and signal-reflector target localization tasks via mobile sensory units and wireless sensor networks (WSNs), including those for environmental monitoring via sensory UAVs, require precise knowledge of specific signal propagation properties of the environment, which, for the electromagnetic signal case, are the permittivity and path loss coefficients. Thus, accurate estimation of these coefficients has significant importance for the accuracy of location estimates. In this paper, we propose a geometric cooperative technique to instantaneously estimate such coefficients, with details provided for received signal strength (RSS) and time-of-flight (TOF)-based range sensors. The proposed technique is integrated into a recursive least squares (RLS)-based adaptive localization scheme and an adaptive motion control law, to construct adaptive target localization and adaptive target tracking algorithms, respectively, that are robust to uncertainties in the aforementioned environmental signal propagation coefficients. The efficiency of the proposed adaptive localization and tracking techniques is both mathematically analysed and verified via simulation experiments. PMID:26690441
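
    The RLS-based adaptive estimation can be sketched for the RSS case, where the received power (in dB) is linear in the two unknowns, reference power and path-loss exponent, so each new range/RSS sample refines the estimate recursively. The model constants and noise level below are illustrative assumptions, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical log-distance RSS model: rss = p0 - 10 * eta * log10(d), with the
# reference power p0 (dBm at 1 m) and path-loss exponent eta unknown.
p0_true, eta_true = -40.0, 2.7
d = rng.uniform(1.0, 50.0, 200)      # ranges sampled along the unit's trajectory
rss = p0_true - 10.0 * eta_true * np.log10(d) + rng.normal(0.0, 1.0, 200)

# Recursive least squares: refine (p0, eta) one measurement at a time.
theta = np.zeros(2)                  # running estimate of (p0, eta)
P = 1e6 * np.eye(2)                  # inverse information (covariance) matrix
for di, yi in zip(d, rss):
    h = np.array([1.0, -10.0 * np.log10(di)])   # regressor for this sample
    gain = P @ h / (1.0 + h @ P @ h)
    theta = theta + gain * (yi - h @ theta)
    P = P - np.outer(gain, h @ P)
print(theta)  # converges toward (p0_true, eta_true)
```

    In the paper the estimated coefficients then feed the adaptive localization and motion-control loops; only the coefficient-estimation step is sketched here.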

  2. A microwave imaging-based 3D localization algorithm for an in-body RF source as in wireless capsule endoscopes.

    PubMed

    Chandra, Rohit; Balasingham, Ilangko

    2015-01-01

    A microwave imaging-based technique for 3D localization of an in-body RF source is presented. Such a technique can be useful for localization of an RF source, as in wireless capsule endoscopes, for positioning of any abnormality in the gastrointestinal tract. Microwave imaging is used to determine the dielectric properties (relative permittivity and conductivity) of the tissues that are required for precise localization. A 2D microwave imaging algorithm is used to determine the dielectric properties. A calibration method is developed to remove errors that arise from applying the 2D imaging algorithm to imaging data from a 3D body. The developed method is tested on a simple 3D heterogeneous phantom through finite-difference time-domain simulations. Additive white Gaussian noise at a signal-to-noise ratio of 30 dB is added to the simulated data to make them more realistic. The developed calibration method improves the imaging and localization accuracy. Statistics on the localization accuracy are generated by randomly placing the RF source at various positions inside the small intestine of the phantom, and the cumulative distribution function of the localization error is plotted. In 90% of the cases, the localization error was within 1.67 cm, showing the capability of the developed method for 3D localization.
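
    The cumulative-distribution summary used above (90% of cases within 1.67 cm) reduces to a percentile of the per-trial error set; a minimal sketch, with illustrative error values that are not from the paper:

```python
import numpy as np

def error_percentile(errors, q=90.0):
    """Return the q-th percentile of localization errors, i.e. the
    radius within which q% of trials fall on the empirical CDF."""
    return float(np.percentile(np.asarray(errors), q))

errors_cm = [0.4, 0.9, 1.1, 0.7, 1.5, 2.1, 0.3, 1.2, 0.8, 1.0]
r90 = error_percentile(errors_cm)   # radius containing 90% of these trials
```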

  3. Validation of Regression-Based Myogenic Correction Techniques for Scalp and Source-Localized EEG

    PubMed Central

    McMenamin, Brenton W.; Shackman, Alexander J.; Maxwell, Jeffrey S.; Greischar, Lawrence L.; Davidson, Richard J.

    2008-01-01

    EEG and EEG source-estimation are susceptible to electromyographic artifacts (EMG) generated by the cranial muscles. EMG can mask genuine effects or masquerade as a legitimate effect, even at low frequencies such as alpha (8–13 Hz). Although regression-based correction has been used previously, only cursory attempts at validation exist, and its utility for source-localized data is unknown. To address this, EEG was recorded from 17 participants while neurogenic and myogenic activity were factorially varied. We assessed the sensitivity and specificity of four regression-based techniques: between-subjects, between-subjects using difference scores, within-subjects condition-wise, and within-subjects epoch-wise, on the scalp and in data modeled using the LORETA algorithm. Although the within-subjects epoch-wise technique showed superior performance on the scalp, no technique succeeded in source space. Aside from validating the novel epoch-wise methods on the scalp, we highlight methods requiring further development. PMID:19298626
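
    A per-epoch regression correction of the kind evaluated above can be sketched as follows. The single-channel ordinary-least-squares model is a deliberate simplification (real pipelines typically use multiple EMG reference channels and band-specific power):

```python
import numpy as np

def regress_out_emg(eeg, emg):
    """Remove the EMG-correlated component from one EEG epoch by
    ordinary least squares: eeg_corrected = eeg - b * emg, with
    b = cov(eeg, emg) / var(emg).  A per-epoch ('epoch-wise') variant
    of the regression correction discussed above."""
    emg0 = emg - emg.mean()
    b = (eeg - eeg.mean()) @ emg0 / (emg0 @ emg0)
    return eeg - b * emg

rng = np.random.default_rng(0)
n = 1000
emg = rng.standard_normal(n)                     # stand-in myogenic reference
neural = np.sin(np.linspace(0, 20 * np.pi, n))   # stand-in neurogenic signal
eeg = neural + 0.8 * emg                         # contaminated epoch
clean = regress_out_emg(eeg, emg)
```

    The residual is orthogonal to the EMG reference by construction; the risk the abstract quantifies is that genuine neurogenic variance correlated with EMG is removed along with the artifact.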

  4. 3D source localization of interictal spikes in epilepsy patients with MRI lesions

    NASA Astrophysics Data System (ADS)

    Ding, Lei; Worrell, Gregory A.; Lagerlund, Terrence D.; He, Bin

    2006-08-01

    The present study aims to accurately localize epileptogenic regions which are responsible for epileptic activities in epilepsy patients by means of a new subspace source localization approach, i.e. first principle vectors (FINE), using scalp EEG recordings. Computer simulations were first performed to assess source localization accuracy of FINE in the clinical electrode set-up. The source localization results from FINE were compared with the results from a classic subspace source localization approach, i.e. MUSIC, and their differences were tested statistically using the paired t-test. Other factors influencing the source localization accuracy were assessed statistically by ANOVA. The interictal epileptiform spike data from three adult epilepsy patients with medically intractable partial epilepsy and well-defined symptomatic MRI lesions were then studied using both FINE and MUSIC. The comparison between the electrical sources estimated by the subspace source localization approaches and MRI lesions was made through the coregistration between the EEG recordings and MRI scans. The accuracy of estimations made by FINE and MUSIC was also evaluated and compared by R2 statistic, which was used to indicate the goodness-of-fit of the estimated sources to the scalp EEG recordings. The three-concentric-spheres head volume conductor model was built for each patient with three spheres of different radii which takes the individual head size and skull thickness into consideration. The results from computer simulations indicate that the improvement of source spatial resolvability and localization accuracy of FINE as compared with MUSIC is significant when simulated sources are closely spaced, deep, or signal-to-noise ratio is low in a clinical electrode set-up. The interictal electrical generators estimated by FINE and MUSIC are in concordance with the patients' structural abnormality, i.e. MRI lesions, in all three patients. 
The higher R2 values achieved by FINE than MUSIC indicate that FINE provides a more satisfactory fitting of the scalp potential measurements than MUSIC in all patients. The present results suggest that FINE provides a useful brain source imaging technique, from clinical EEG recordings, for identifying and localizing epileptogenic regions in epilepsy patients with focal partial seizures. The present study may lead to the establishment of a high-resolution source localization technique from scalp-recorded EEGs for aiding presurgical planning in epilepsy patients.
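
    The R2 statistic used above to compare the fits of FINE and MUSIC can be computed directly from measured and modeled scalp potentials; a minimal sketch (the four-sample vector is illustrative only):

```python
import numpy as np

def r_squared(measured, modeled):
    """Goodness of fit of modeled scalp potentials to measurements:
    R^2 = 1 - SS_res / SS_tot."""
    measured = np.asarray(measured, float)
    modeled = np.asarray(modeled, float)
    ss_res = np.sum((measured - modeled) ** 2)
    ss_tot = np.sum((measured - measured.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

v = np.array([1.0, 2.0, 3.0, 4.0])
assert r_squared(v, v) == 1.0   # perfect fit of model to measurement
```

    A higher R2 for one inverse solution than another, as reported above, means its forward-projected potentials reproduce more of the measured scalp variance.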

  5. Localization and spectral isolation of special nuclear material using stochastic image reconstruction

    NASA Astrophysics Data System (ADS)

    Hamel, M. C.; Polack, J. K.; Poitrasson-Rivière, A.; Clarke, S. D.; Pozzi, S. A.

    2017-01-01

    In this work we present a technique for isolating the gamma-ray and neutron energy spectra from multiple radioactive sources localized in an image. Image reconstruction algorithms for radiation scatter cameras typically focus on improving image quality. However, with scatter cameras being developed for non-proliferation applications, there is a need for not only source localization but also source identification. This work outlines a modified stochastic origin ensembles algorithm that provides localized spectra for all pixels in the image. We demonstrated the technique by performing three experiments with a dual-particle imager that measured various gamma-ray and neutron sources simultaneously. We showed that we could isolate the peaks from 22Na and 137Cs and that the energy resolution is maintained in the isolated spectra. To evaluate the spectral isolation of neutrons, a 252Cf source and a PuBe source were measured simultaneously and the reconstruction showed that the isolated PuBe spectrum had a higher average energy and a greater fraction of neutrons at higher energies than the 252Cf. Finally, spectrum isolation was used for an experiment with weapons grade plutonium, 252Cf, and AmBe. The resulting neutron and gamma-ray spectra showed the expected characteristics that could then be used to identify the sources.

  6. Locality-Aware CTA Clustering For Modern GPUs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Ang; Song, Shuaiwen; Liu, Weifeng

    2017-04-08

    In this paper, we proposed a novel clustering technique for tapping into the performance potential of a largely ignored type of locality: inter-CTA locality. We first demonstrated the capability of the existing GPU hardware to exploit such locality, both spatially and temporally, on L1 or L1/Tex unified cache. To verify the potential of this locality, we quantified its existence in a broad spectrum of applications and discussed its sources of origin. Based on these insights, we proposed the concept of CTA-Clustering and its associated software techniques. Finally, we evaluated these techniques on all modern generations of NVIDIA GPU architectures. The experimental results showed that our proposed clustering techniques could significantly improve on-chip cache performance.

  7. Waves on Thin Plates: A New (Energy Based) Method on Localization

    NASA Astrophysics Data System (ADS)

    Turkaya, Semih; Toussaint, Renaud; Kvalheim Eriksen, Fredrik; Lengliné, Olivier; Daniel, Guillaume; Grude Flekkøy, Eirik; Jørgen Måløy, Knut

    2016-04-01

    Noisy acoustic signal localization is a difficult problem with a wide range of applications. We propose a new localization method applicable to thin plates which is based on energy amplitude attenuation and inversed source amplitude comparison. This inversion is tested on synthetic data using a direct model of Lamb wave propagation and on an experimental dataset (recorded with 4 Brüel & Kjær Type 4374 miniature piezoelectric shock accelerometers, 1-26 kHz frequency range). We compare the performance of this technique with classical source localization algorithms: arrival time localization, time reversal localization, and localization based on energy amplitude. The experimental setup consists of a glass/plexiglass plate with dimensions of 80 cm x 40 cm x 1 cm, equipped with four accelerometers and an acquisition card. Signals are generated by a quasi-perpendicular hit of a steel, glass or polyamide ball (of various sizes) dropped from a height of 2-3 cm onto the plate, and are captured by sensors placed at different locations on the plate. We measure and compare the accuracy of these techniques as a function of sampling rate, dynamic range, array geometry, signal-to-noise ratio and computational time. We show that this new technique, which is very versatile, works better than conventional techniques over a range of sampling rates from 8 kHz to 1 MHz. It is possible to obtain decent resolution (3 cm mean error) using very inexpensive equipment. The numerical simulations allow us to track the contributions of different error sources in the different methods. The effect of reflections is also included in our simulation by using image sources outside the plate boundaries. The proposed method can easily be extended to three-dimensional environments, to monitor industrial activities (e.g. borehole drilling/production activities) or natural brittle systems (e.g. earthquakes, volcanoes, avalanches).
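
    The inversed-source-amplitude idea can be sketched as a grid search: each sensor inverts its own source amplitude through an assumed attenuation law, and the candidate point where those inverted amplitudes agree best is taken as the source. The attenuation law and damping coefficient below are illustrative assumptions, not the paper's calibrated plate model:

```python
import numpy as np

def locate_by_amplitude(sensors, amps, grid, gamma=0.5):
    """Grid-search source location on a plate.

    Assumes a simple damping / geometric-spreading law
    A(r) = A0 * exp(-gamma*r) / sqrt(r).  At each candidate point every
    sensor 'inverts' its own source amplitude A0_i; the source estimate
    is the point where those amplitudes agree best (smallest relative
    spread), following the inversed-amplitude comparison idea above.
    """
    best, best_cost = None, np.inf
    for p in grid:
        r = np.linalg.norm(sensors - p, axis=1)
        if np.any(r < 1e-6):
            continue                                  # skip sensor positions
        a0 = amps * np.sqrt(r) * np.exp(gamma * r)    # inverted amplitudes
        cost = a0.std() / a0.mean()
        if cost < best_cost:
            best, best_cost = p, cost
    return best

# Four sensors near the corners of an 80 cm x 40 cm plate (metres)
sensors = np.array([[0.0, 0.0], [0.8, 0.0], [0.0, 0.4], [0.8, 0.4]])
src = np.array([0.55, 0.15])
r_true = np.linalg.norm(sensors - src, axis=1)
amps = 2.0 * np.exp(-0.5 * r_true) / np.sqrt(r_true)  # synthetic amplitudes
xs, ys = np.meshgrid(np.linspace(0.05, 0.75, 71), np.linspace(0.05, 0.35, 31))
grid = np.column_stack([xs.ravel(), ys.ravel()])
est = locate_by_amplitude(sensors, amps, grid)
```

    Because only amplitudes (not arrival times) enter the cost, the method needs no precise trigger picking, which is part of its robustness to noisy signals.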

  8. Time-distance domain transformation for Acoustic Emission source localization in thin metallic plates.

    PubMed

    Grabowski, Krzysztof; Gawronski, Mateusz; Baran, Ireneusz; Spychalski, Wojciech; Staszewski, Wieslaw J; Uhl, Tadeusz; Kundu, Tribikram; Packo, Pawel

    2016-05-01

    Acoustic Emission, as used in Non-Destructive Testing, focuses on the analysis of elastic waves propagating in mechanical structures. The information carried by the generated acoustic waves, recorded by a set of transducers, allows the integrity of these structures to be determined. Material properties and geometry strongly impact the result. In this paper a method for Acoustic Emission source localization in thin plates is presented. The approach is based on the Time-Distance Domain Transform, a wavenumber-frequency mapping technique for precise event localization. The major advantage of the technique is dispersion compensation through phase-shifting of the investigated waveforms to acquire the most accurate output, allowing source-sensor distance estimation using a single transducer. The accuracy and robustness of the above process are also investigated, including the influence of the Young's modulus value and numerical parameters on damage detection. By merging the Time-Distance Domain Transform with an optimal distance selection technique, an identification-localization algorithm is achieved. The method is investigated analytically, numerically and experimentally; the latter involves both laboratory and large-scale industrial tests. Copyright © 2016 Elsevier B.V. All rights reserved.

  9. High frequency source localization in a shallow ocean sound channel using frequency difference matched field processing.

    PubMed

    Worthmann, Brian M; Song, H C; Dowling, David R

    2015-12-01

    Matched field processing (MFP) is an established technique for source localization in known multipath acoustic environments. Unfortunately, in many situations, particularly those involving high frequency signals, imperfect knowledge of the actual propagation environment prevents accurate propagation modeling and source localization via MFP fails. For beamforming applications, this actual-to-model mismatch problem was mitigated through a frequency downshift, made possible by a nonlinear array-signal-processing technique called frequency difference beamforming [Abadi, Song, and Dowling (2012). J. Acoust. Soc. Am. 132, 3018-3029]. Here, this technique is extended to conventional (Bartlett) MFP using simulations and measurements from the 2011 Kauai Acoustic Communications MURI experiment (KAM11) to produce ambiguity surfaces at frequencies well below the signal bandwidth where the detrimental effects of mismatch are reduced. Both the simulation and experimental results suggest that frequency difference MFP can be more robust against environmental mismatch than conventional MFP. In particular, signals of frequency 11.2 kHz-32.8 kHz were broadcast 3 km through a 106-m-deep shallow ocean sound channel to a sparse 16-element vertical receiving array. Frequency difference MFP unambiguously localized the source in several experimental data sets with average peak-to-side-lobe ratio of 0.9 dB, average absolute-value range error of 170 m, and average absolute-value depth error of 10 m.

  10. Localization of incipient tip vortex cavitation using ray based matched field inversion method

    NASA Astrophysics Data System (ADS)

    Kim, Dongho; Seong, Woojae; Choo, Youngmin; Lee, Jeunghoon

    2015-10-01

    Cavitation of a marine propeller is one of the main contributors to broadband radiated ship noise. In this research, an algorithm for the source localization of incipient vortex cavitation is suggested. Incipient cavitation is modeled as a monopole-type source, and a matched-field inversion method is applied to find the source position by comparing the spatial correlation between measured and replicated pressure fields at the receiver array. The accuracy of source localization is improved by a broadband matched-field inversion technique that enhances correlation by incoherently averaging the correlations of individual frequencies. The suggested localization algorithm is verified with a known virtual source and a model test conducted in the Samsung ship model basin cavitation tunnel. It is found that the suggested algorithm enables efficient localization of incipient tip vortex cavitation using a few pressure measurements on the outer hull above the propeller, and is practically applicable to the model-scale experiments typically performed in a cavitation tunnel at the early design stage.

  11. Threshold Voltage Instability in A-Si:H TFTS and the Implications for Flexible Displays and Circuits

    DTIC Science & Technology

    2008-12-01

    and negative gate voltages with and without elevated drain voltages for FDC TFTs. Extending techniques used to localize hot electron degradation...in MOSFETs, experiments in our lab have localized the degradation of a-Si:H to the gate dielectric/a-Si:H channel interface [Shringarpure, et al...saturation, increased drain source current measured with the source and drain reversed indicates localization of ΔVth to the gate dielectric/amorphous

  12. Acoustic source localization in mixed field using spherical microphone arrays

    NASA Astrophysics Data System (ADS)

    Huang, Qinghua; Wang, Tong

    2014-12-01

    Spherical microphone arrays have recently been used for source localization in three-dimensional space. In this paper, a two-stage algorithm is developed to localize mixed far-field and near-field acoustic sources in a free-field environment. In the first stage, an array signal model is constructed in the spherical harmonics domain. The recurrent relation of spherical harmonics is independent of the far-field and near-field mode strengths. Therefore, it is used to develop a spherical estimation of signal parameters via rotational invariance techniques (ESPRIT)-like approach to estimate directions of arrival (DOAs) for both far-field and near-field sources. In the second stage, based on the estimated DOAs, a simple one-dimensional MUSIC spectrum is exploited to distinguish far-field from near-field sources and to estimate the ranges of the near-field sources. The proposed algorithm avoids multidimensional search and parameter pairing. Simulation results demonstrate good performance for localizing far-field sources, near-field sources, or mixed-field sources.
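
    The spherical-harmonics machinery above is involved, but the underlying subspace idea can be illustrated with ordinary narrowband MUSIC on a uniform linear array, a deliberate simplification of the spherical-array case:

```python
import numpy as np

def music_spectrum(R, n_src, angles_deg, d=0.5):
    """Narrowband MUSIC pseudo-spectrum for a uniform linear array with
    element spacing d (in wavelengths).  Eigenvectors of the covariance
    R beyond the n_src largest span the noise subspace; steering vectors
    nearly orthogonal to it produce sharp spectrum peaks."""
    m = R.shape[0]
    _, vecs = np.linalg.eigh(R)       # eigenvalues in ascending order
    En = vecs[:, : m - n_src]         # noise subspace
    spec = []
    for th in np.deg2rad(angles_deg):
        a = np.exp(2j * np.pi * d * np.arange(m) * np.sin(th))
        spec.append(1.0 / np.linalg.norm(En.conj().T @ a) ** 2)
    return np.array(spec)

rng = np.random.default_rng(1)
m, n_snap = 8, 400
doas = [-20.0, 30.0]
A = np.exp(2j * np.pi * 0.5 * np.outer(np.arange(m), np.sin(np.deg2rad(doas))))
S = rng.standard_normal((2, n_snap)) + 1j * rng.standard_normal((2, n_snap))
N = rng.standard_normal((m, n_snap)) + 1j * rng.standard_normal((m, n_snap))
X = A @ S + 0.1 * N
R = X @ X.conj().T / n_snap
angles = np.arange(-90.0, 90.5, 0.5)
spec = music_spectrum(R, 2, angles)
i1 = int(np.argmax(spec))                       # strongest peak
mask = np.abs(angles - angles[i1]) > 5.0        # exclude its neighborhood
i2 = int(np.argmax(np.where(mask, spec, -np.inf)))
peaks = sorted([angles[i1], angles[i2]])
```

    In the two-stage scheme above, a 1D search like this over range (rather than angle) is what separates near-field from far-field sources once the DOAs are fixed.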

  13. Particle swarm optimization and its application in MEG source localization using single time sliced data

    NASA Astrophysics Data System (ADS)

    Lin, Juan; Liu, Chenglian; Guo, Yongning

    2014-10-01

    The estimation of neural active sources from magnetoencephalography (MEG) data is a critical issue for both clinical neurology and brain function research. A widely accepted source-modeling technique for MEG involves calculating a set of equivalent current dipoles (ECDs). Source depth in the brain is one of the difficulties in MEG source localization. Particle swarm optimization (PSO) is widely used to solve various optimization problems. In this paper we discuss its ability and robustness in finding the global optimum at different depths in the brain when using a single equivalent current dipole (sECD) model and single time sliced data. The results show that PSO is an effective global optimizer for MEG source localization when a single dipole at various depths is considered.
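
    A minimal PSO of the kind applied above can be sketched as follows; the quadratic misfit with a minimum at a "deep" location is an illustrative stand-in for a real MEG forward-model cost, not the paper's objective function:

```python
import numpy as np

def pso_minimize(cost, bounds, n_particles=30, iters=200, seed=0):
    """Minimal particle swarm optimization: each particle tracks its
    personal best and the global best; velocities mix inertia with
    attraction toward both, and positions are clipped to the bounds."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds).T
    x = rng.uniform(lo, hi, (n_particles, len(lo)))
    v = np.zeros_like(x)
    pbest, pval = x.copy(), np.array([cost(p) for p in x])
    g = pbest[np.argmin(pval)]
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        val = np.array([cost(p) for p in x])
        better = val < pval
        pbest[better], pval[better] = x[better], val[better]
        g = pbest[np.argmin(pval)]
    return g, cost(g)

# Toy misfit with its minimum at a 'deep' location (0.0, 0.0, -0.07)
target = np.array([0.0, 0.0, -0.07])
misfit = lambda p: float(np.sum((p - target) ** 2))
best, best_val = pso_minimize(misfit, [(-0.1, 0.1)] * 3)
```

    For dipole fitting, `misfit` would be replaced by the residual between measured field maps and the forward solution of a candidate dipole; PSO's appeal, as the abstract notes, is robustness to local minima as the dipole moves deeper.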

  14. Blind source separation and localization using microphone arrays

    NASA Astrophysics Data System (ADS)

    Sun, Longji

    The blind source separation and localization problem for audio signals is studied using microphone arrays. Pure delay mixtures of source signals typically encountered in outdoor environments are considered. Our proposed approach utilizes the subspace methods, including multiple signal classification (MUSIC) and estimation of signal parameters via rotational invariance techniques (ESPRIT) algorithms, to estimate the directions of arrival (DOAs) of the sources from the collected mixtures. Since audio signals are generally considered broadband, the DOA estimates at frequencies with the large sum of squared amplitude values are combined to obtain the final DOA estimates. Using the estimated DOAs, the corresponding mixing and demixing matrices are computed, and the source signals are recovered using the inverse short time Fourier transform. Subspace methods take advantage of the spatial covariance matrix of the collected mixtures to achieve robustness to noise. While the subspace methods have been studied for localizing radio frequency signals, audio signals have their special properties. For instance, they are nonstationary, naturally broadband and analog. All of these make the separation and localization for the audio signals more challenging. Moreover, our algorithm is essentially equivalent to the beamforming technique, which suppresses the signals in unwanted directions and only recovers the signals in the estimated DOAs. Several crucial issues related to our algorithm and their solutions have been discussed, including source number estimation, spatial aliasing, artifact filtering, different ways of mixture generation, and source coordinate estimation using multiple arrays. Additionally, comprehensive simulations and experiments have been conducted to examine various aspects of the algorithm. 
Unlike the existing blind source separation and localization methods, which are generally time consuming, our algorithm needs signal mixtures of only a short duration and therefore supports real-time implementation.
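
    Source number estimation, listed above as one of the crucial issues, is commonly handled by eigen-analysis of the spatial covariance of the mixtures. A minimal sketch with a crude noise-floor threshold (practical systems typically use AIC/MDL criteria instead; the instantaneous mixing below is a stand-in for the pure-delay model):

```python
import numpy as np

def estimate_num_sources(X, factor=5.0):
    """Estimate the number of sources from mixtures X (mics x samples)
    by counting eigenvalues of the spatial covariance that exceed
    factor * (smallest eigenvalue), a simple noise-floor heuristic."""
    R = X @ X.conj().T / X.shape[1]
    ev = np.linalg.eigvalsh(R)          # ascending order
    return int(np.sum(ev > factor * ev[0]))

rng = np.random.default_rng(2)
mics, samples = 6, 5000
S = rng.standard_normal((2, samples))   # two independent sources
A = rng.standard_normal((mics, 2))      # instantaneous mixing matrix
X = A @ S + 0.05 * rng.standard_normal((mics, samples))
n_src = estimate_num_sources(X)
```

    The same covariance matrix then feeds the MUSIC/ESPRIT subspace split used for the DOA estimates, which is why a correct source count is a prerequisite for the separation stage.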

  15. Adaptive frequency-difference matched field processing for high frequency source localization in a noisy shallow ocean.

    PubMed

    Worthmann, Brian M; Song, H C; Dowling, David R

    2017-01-01

    Remote source localization in the shallow ocean at frequencies significantly above 1 kHz is virtually impossible for conventional array signal processing techniques due to environmental mismatch. A recently proposed technique called frequency-difference matched field processing (Δf-MFP) [Worthmann, Song, and Dowling (2015). J. Acoust. Soc. Am. 138(6), 3549-3562] overcomes imperfect environmental knowledge by shifting the signal processing to frequencies below the signal's band through the use of a quadratic product of frequency-domain signal amplitudes called the autoproduct. This paper extends these prior Δf-MFP results to various adaptive MFP processors found in the literature, with particular emphasis on minimum variance distortionless response, multiple constraint method, multiple signal classification, and matched mode processing at signal-to-noise ratios (SNRs) from -20 to +20 dB. Using measurements from the 2011 Kauai Acoustic Communications Multiple University Research Initiative experiment, the localization performance of these techniques is analyzed and compared to Bartlett Δf-MFP. The results show that a source broadcasting a frequency sweep from 11.2 to 26.2 kHz through a 106-m-deep sound channel over a distance of 3 km and recorded on a 16-element sparse vertical array can be localized using Δf-MFP techniques within average range and depth errors of 200 and 10 m, respectively, at SNRs down to 0 dB.
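
    The minimum variance distortionless response (MVDR) processor mentioned above scores each candidate replica d against the data covariance R via P = 1 / (d^H R^-1 d). A minimal sketch with diagonal loading; the loading level and the toy rank-one covariance are assumptions, not values from the paper:

```python
import numpy as np

def mvdr_power(R, d):
    """MVDR output power for unit-norm replica d and data covariance R:
    P = 1 / (d^H R^-1 d).  Light diagonal loading keeps the inverse
    stable when R is near-singular."""
    m = R.shape[0]
    Rl = R + 1e-3 * np.trace(R).real / m * np.eye(m)
    Ri_d = np.linalg.solve(Rl, d)
    return float(1.0 / np.real(np.vdot(d, Ri_d)))

m = 8
steer = lambda s: np.exp(1j * np.pi * np.arange(m) * s) / np.sqrt(m)
d_true = steer(0.3)                                            # true replica
R = 10.0 * np.outer(d_true, d_true.conj()) + 0.1 * np.eye(m)   # signal + noise
p_true = mvdr_power(R, d_true)
p_off = mvdr_power(R, steer(-0.5))                             # mismatched
```

    MVDR's sharp rejection of mismatched replicas is exactly why it is so sensitive to environmental mismatch, and why pairing it with the mismatch-tolerant autoproduct data is attractive.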

  16. Techniques for Tracking, Evaluating, and Reporting the Implementation of Nonpoint Source Control Measures - Forestry

    EPA Pesticide Factsheets

    This guidance is intended to assist state, regional, and local environmental professionals in tracking the implementation of best management practices (BMPs) used to control nonpoint source pollution generated by forestry practices.

  17. Comparison of Phase-Based 3D Near-Field Source Localization Techniques for UHF RFID.

    PubMed

    Parr, Andreas; Miesen, Robert; Vossiek, Martin

    2016-06-25

    In this paper, we present multiple techniques for phase-based narrowband backscatter tag localization in three-dimensional space with planar antenna arrays or synthetic apertures. Beamformer and MUSIC localization algorithms, known from near-field source localization and direction-of-arrival estimation, are applied to the 3D backscatter scenario, and their performance in terms of localization accuracy is evaluated. We discuss the impact of different transceiver modes known from the literature, which evaluate different send and receive antenna path combinations for a single localization, as in multiple input multiple output (MIMO) systems. Furthermore, we propose a new Singledimensional-MIMO (S-MIMO) transceiver mode, which is especially suited for use with mobile robot systems. Monte-Carlo simulations based on a realistic multipath error model ensure spatial correlation of the simulated signals, and serve to critically appraise the accuracies of the different localization approaches. A synthetic uniform rectangular array created by a robotic arm is used to evaluate selected localization techniques. We use an Ultra High Frequency (UHF) Radiofrequency Identification (RFID) setup to compare measurements with theory and simulation. The results show how a mean localization accuracy of less than 30 cm can be reached in an indoor environment. Further simulations demonstrate how the distance between aperture and tag affects the localization accuracy, and how the size and grid spacing of the rectangular array need to be adapted to improve the localization accuracy by orders of magnitude, down to the centimeter range, and to maximize array efficiency in terms of localization accuracy per number of elements.
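
    The phase-based near-field beamformer can be sketched for a planar array as follows. The single-path, noise-free signal model and the monostatic round-trip phase (factor 2) are simplifying assumptions; the frequency and geometry are illustrative:

```python
import numpy as np

def nearfield_beamform(sig, elems, cands, k):
    """Phase-based near-field focusing: for each candidate 3D point,
    back-rotate each element's measured phase by the round-trip
    spherical-wave phase 2*k*r and coherently sum; the true tag
    position maximizes the summed magnitude."""
    out = []
    for p in cands:
        r = np.linalg.norm(elems - p, axis=1)
        out.append(abs(np.sum(sig * np.exp(1j * 2 * k * r))))
    return np.array(out)

c, f = 3e8, 900e6                        # UHF RFID band (illustrative)
k = 2 * np.pi * f / c
xs, ys = np.meshgrid(np.linspace(-0.3, 0.3, 5), np.linspace(-0.3, 0.3, 5))
elems = np.column_stack([xs.ravel(), ys.ravel(), np.zeros(25)])  # planar array
tag = np.array([0.1, -0.05, 1.0])
sig = np.exp(-1j * 2 * k * np.linalg.norm(elems - tag, axis=1))  # ideal phases
cands = [np.array([x, y, 1.0]) for x in np.linspace(-0.2, 0.2, 9)
         for y in np.linspace(-0.2, 0.2, 9)]
power = nearfield_beamform(sig, elems, cands, k)
best = cands[int(np.argmax(power))]
```

    The aperture-size and grid-spacing trade-offs discussed above appear here directly: a larger `elems` aperture sharpens the focusing peak, while too coarse a candidate grid misses it.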

  18. Discovery of localized TeV gamma-ray sources and diffuse TeV gamma-ray emission from the galactic plane with Milagro using a new background rejection technique

    NASA Astrophysics Data System (ADS)

    Abdo, Aws Ahmad

    2007-08-01

    Very high energy gamma-rays can be used to probe some of the most powerful astrophysical objects in the universe, such as active galactic nuclei, supernova remnants and pulsar-powered nebulae. The diffuse gamma radiation arising from the interaction of cosmic-ray particles with matter and radiation in the Galaxy is one of the few probes available to study the origin of cosmic rays. Milagro is a water Cherenkov detector that continuously views the entire overhead sky. The large field-of-view combined with the long observation time makes Milagro the most sensitive instrument available for the study of large, low surface brightness sources such as the diffuse gamma radiation arising from interactions of cosmic radiation with interstellar matter. In this thesis I present a new background rejection technique for the Milagro detector through the development of a new gamma-hadron separation variable. The Abdo variable, A4, coupled with the weighting analysis technique significantly improves the sensitivity of the Milagro detector. This new analysis technique resulted in the first discoveries in Milagro. Four localized sources of TeV gamma-ray emission have been discovered, three of which are in the Cygnus region of the Galaxy and one closer to the Galactic center. In addition to these localized sources, a diffuse emission of TeV gamma-rays has been discovered from the Cygnus region of the Galaxy as well. However, the TeV gamma-ray flux as measured at ~12 TeV from the Cygnus region exceeds that predicted from a conventional model of cosmic-ray production and propagation. This observation indicates the existence of either hard-spectrum cosmic-ray sources and/or other sources of TeV gamma rays in the region. Other TeV gamma-ray source candidates with post-trial statistical significances of > 4σ have also been observed in the Galactic plane.

  19. Fault identification and localization for Ethernet Passive Optical Network using L-band ASE source and various types of fiber Bragg grating

    NASA Astrophysics Data System (ADS)

    Naim, Nani Fadzlina; Bakar, A. Ashrif A.; Ab-Rahman, Mohammad Syuhaimi

    2018-01-01

    This paper presents a centralized fault identification and localization technique for an Ethernet Passive Optical Network. The technique employs an L-band Amplified Spontaneous Emission (ASE) source for monitoring and various fiber Bragg gratings (FBGs) as fiber identifiers. An FBG with a unique combination of Bragg wavelength, reflectivity and bandwidth is inserted at each distribution fiber, and its reflection spectrum is analyzed with an optical spectrum analyzer (OSA) to monitor the condition of that fiber. Distinct FBG reflection spectra are employed to make the most of the monitoring source's limited bandwidth, allowing more fibers to be monitored; essentially, one Bragg wavelength is shared by two distinct FBGs with different reflectivity and bandwidth. The experimental results show that the system is capable of monitoring up to 32 customers with an OSNR value of ∼1.2 dB and a received monitoring power of -24 dBm. This simple, centralized monitoring technique yields a low-power, cost-efficient system with low bandwidth requirements.

  20. Petro-mineralogy and geochemistry as tools of provenance analysis on archaeological pottery: Study of Inka Period ceramics from Paria, Bolivia

    NASA Astrophysics Data System (ADS)

    Szilágyi, V.; Gyarmati, J.; Tóth, M.; Taubald, H.; Balla, M.; Kasztovszky, Zs.; Szakmány, Gy.

    2012-07-01

    This paper summarizes the results of a comprehensive petro-mineralogical and geochemical (archaeometric) investigation of Inka Period ceramics excavated from Inka (A.D. 1438-1535) and Late Intermediate Period (A.D. 1000/1200-1438) sites of the Paria Basin (Dept. Oruro, Bolivia). Applying geological analytical techniques, we investigated a complex and important archaeological question for the region and the era: the cultural-economic influence of the conquering Inkas on the ceramic material of the provincial region of Paria. Our results characterize, by analytical methods, the continuity and changes in raw material utilization and pottery manufacturing techniques from the Late Intermediate to the Inka Period. The geological field survey provided an efficient basis for the identification of the utilized raw material sources. On the one hand, the ceramic supply of both eras proved to be based almost entirely on local and nearby raw material sources; imperial handicraft in Paria thus applied local materials, albeit with sophisticated imperial techniques. On the other hand, Inka Imperial and local-style vessels show clear differences in their material, which suggests that sources and techniques already in use during the Late Intermediate Period subsisted even after the Inka conquest of the Paria Basin. Based on our geological investigations, the pottery supply system of the Paria region proved to be rather complex during the Inka Period.

  1. Towards a street-level pollen concentration and exposure forecast

    NASA Astrophysics Data System (ADS)

    van der Molen, Michiel; Krol, Maarten; van Vliet, Arnold; Heuvelink, Gerard

    2015-04-01

    Atmospheric pollen is an increasing source of nuisance for people in industrialised countries and is associated with significant costs of medication and sick leave. Public pollen warnings are often based either on emission maps derived from local temperature-sum approaches or on long-range atmospheric models. In practice, locally observed pollen may originate both from local sources (plants in streets and gardens) and from long-range transport. We argue that making this distinction is relevant because, due to boundary layer processes, the diurnal and spatial variation in pollen concentrations is much larger for pollen from local sources than for pollen from long-range transport. This may have an important impact on citizens' exposure to pollen and on mitigation strategies. However, little is known about the partitioning of pollen into local and long-range origin categories. Our objective is to study how the concentrations of pollen from different sources vary temporally and spatially, and how the source region influences exposure and mitigation strategies. We built a Hay Fever Forecast system (HFF) based on WRF-chem, Allergieradar.nl, and geo-statistical downscaling techniques. HFF distinguishes between local sources (individual trees) and regional sources (based on tree distribution maps). We show first results on how the diurnal variation of pollen concentrations depends on source proximity. Ultimately, we will compare the model with local pollen counts, patient nuisance scores and medicine use.

  2. Efficient electromagnetic source imaging with adaptive standardized LORETA/FOCUSS.

    PubMed

    Schimpf, Paul H; Liu, Hesheng; Ramon, Ceon; Haueisen, Jens

    2005-05-01

    Functional brain imaging and source localization based on the scalp's potential field require a solution to an ill-posed inverse problem with many solutions. This makes it necessary to incorporate a priori knowledge in order to select a particular solution. A computational challenge for some subject-specific head models is that many inverse algorithms require a comprehensive sampling of the candidate source space at the desired resolution. In this study, we present an algorithm that can accurately reconstruct details of localized source activity from a sparse sampling of the candidate source space. Forward computations are minimized through an adaptive procedure that increases source resolution as the spatial extent is reduced. With this algorithm, we were able to compute inverses using only 6% to 11% of the full resolution lead-field, with a localization accuracy that was not significantly different than an exhaustive search through a fully-sampled source space. The technique is, therefore, applicable for use with anatomically-realistic, subject-specific forward models for applications with spatially concentrated source activity.

  3. An Exact Model-Based Method for Near-Field Sources Localization with Bistatic MIMO System.

    PubMed

    Singh, Parth Raj; Wang, Yide; Chargé, Pascal

    2017-03-30

    In this paper, we propose an exact model-based method for near-field source localization with a bistatic multiple-input, multiple-output (MIMO) radar system, and compare it with an approximated model-based method. The aim of this paper is to propose an efficient way to use the exact model of the received signals of near-field sources in order to eliminate the systematic error introduced by the use of an approximated model in most existing near-field source localization techniques. The proposed method uses parallel factor (PARAFAC) decomposition to deal with the exact model. Thanks to the exact model, the proposed method has better precision and resolution than the compared approximated model-based method. The simulation results show the performance of the proposed method.
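    The systematic error that motivates the exact model can be illustrated numerically. The sketch below is an illustration under assumed geometry and numbers (not the paper's setup): it compares the exact sensor-to-source distance on a uniform linear array with the second-order Fresnel approximation used by approximated-model methods, showing that the model error shrinks as the source recedes from the array.

```python
import numpy as np

def exact_range(r, theta, m, d):
    # Exact distance from a source at (range r, bearing theta) to sensor m*d.
    return np.sqrt(r**2 + (m * d)**2 - 2.0 * r * m * d * np.sin(theta))

def fresnel_range(r, theta, m, d):
    # Second-order (Fresnel) approximation used by approximated-model methods.
    return r - m * d * np.sin(theta) + (m * d)**2 * np.cos(theta)**2 / (2.0 * r)

d = 0.05                         # sensor spacing (m), an assumed value
theta = np.deg2rad(30.0)         # assumed source bearing
m = np.arange(-5, 6)             # 11-element symmetric array
for r in (0.5, 2.0, 10.0):       # source range (m)
    err = np.max(np.abs(exact_range(r, theta, m, d) - fresnel_range(r, theta, m, d)))
    print(f"range {r:5.1f} m: max model error {err:.2e} m")
```

    In the near field (small r) the residual third-order terms are no longer negligible, which is exactly the bias an exact-model method avoids.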

  4. Germanium layers grown by zone thermal crystallization from a discrete liquid source

    NASA Astrophysics Data System (ADS)

    Yatsenko, A. N.; Chebotarev, S. N.; Lozovskii, V. N.; Mohamed, A. A. A.; Erimeev, G. A.; Goncharova, L. M.; Varnavskaya, A. A.

    2017-11-01

    We propose and investigate a method for growing thin, uniform germanium layers on large silicon substrates. The technique uses hexagonally arranged local sources filled with liquid germanium. The germanium evaporates onto a closely spaced substrate, and under these conditions the vapour pressure of residual gases is greatly reduced. It is shown that, to achieve a deposited-layer uniformity better than 97%, the critical thickness of the vacuum zone must be l_cr = 1.2 mm for a hexagonally arranged system of round local sources with a radius of r = 0.75 mm and a distance between the sources of h = 0.5 mm.

  5. Design and optimization of a brachytherapy robot

    NASA Astrophysics Data System (ADS)

    Meltsner, Michael A.

    Trans-rectal ultrasound guided (TRUS) low dose rate (LDR) interstitial brachytherapy has become a popular procedure for the treatment of prostate cancer, the most common type of non-skin cancer among men. The current TRUS technique of LDR implantation may result in less than ideal coverage of the tumor with increased risk of negative response such as rectal toxicity and urinary retention. This technique is limited by the skill of the physician performing the implant, the accuracy of needle localization, and the inherent weaknesses of the procedure itself. The treatment may require 100 or more sources and 25 needles, compounding the inaccuracy of the needle localization procedure. A robot designed for prostate brachytherapy may increase the accuracy of needle placement while minimizing the effect of physician technique in the TRUS procedure. Furthermore, a robot may improve associated toxicities by utilizing angled insertions and freeing implantations from constraints applied by the 0.5 cm-spaced template used in the TRUS method. Within our group, Lin et al. have designed a new type of LDR source. The "directional" source is a seed designed to be partially shielded. Thus, a directional, or anisotropic, source does not emit radiation in all directions. The source can be oriented to irradiate cancerous tissues while sparing normal ones. This type of source necessitates a new, highly accurate method for localization in 6 degrees of freedom. A robot is the best way to accomplish this task accurately. The following presentation of work describes the invention and optimization of a new prostate brachytherapy robot that fulfills these goals. Furthermore, some research has been dedicated to the use of the robot to perform needle insertion tasks (brachytherapy, biopsy, RF ablation, etc.) in nearly any other soft tissue in the body. This can be accomplished with the robot combined with automatic, magnetic tracking.

  6. Single source photoplethysmograph transducer for local pulse wave velocity measurement.

    PubMed

    Nabeel, P M; Joseph, Jayaraj; Awasthi, Vartika; Sivaprakasam, Mohanasankar

    2016-08-01

    Cuffless evaluation of arterial blood pressure (BP) using pulse wave velocity (PWV) has attracted attention over the years. Local-PWV-based techniques for cuffless BP measurement have greater potential for accurate estimation of BP parameters. In this work, we present the design and experimental validation of a novel single-source photoplethysmograph (PPG) transducer for arterial blood pulse detection and cycle-to-cycle local PWV measurement. The ability of the transducer to continuously measure local PWV was verified using an arterial flow phantom as well as an in-vivo study on 17 volunteers. The single-source PPG transducer could reliably acquire dual blood pulse waveforms along small artery sections less than 28 mm in length. The transducer was able to perform repeatable measurements of carotid local PWV on multiple subjects with a maximum beat-to-beat variation of less than 12%. The correlation between measured carotid local PWV and brachial BP parameters was also investigated during the in-vivo study. The study results demonstrate the potential of the newly proposed single-source PPG transducer for continuous cuffless BP measurement systems.
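    A minimal sketch of the underlying measurement principle, with synthetic waveforms: the 28 mm section length is taken from the abstract, while the sampling rate, pulse shape and transit time are assumptions. The pulse transit time between the two sensing points is estimated by cross-correlation, and local PWV is the section length divided by that transit time.

```python
import numpy as np

fs = 2000                        # sampling rate (Hz), assumed
t = np.arange(0, 1.0, 1 / fs)
transit = 0.004                  # true pulse transit time: 4 ms (assumed)
pulse = lambda t0: np.exp(-((t - t0) ** 2) / (2 * 0.01 ** 2))
proximal = pulse(0.40)           # waveform at the upstream sensing point
distal = pulse(0.40 + transit)   # same pulse arriving 4 ms later downstream

xcorr = np.correlate(distal, proximal, mode="full")
lag = np.argmax(xcorr) - (len(t) - 1)    # lag of the correlation peak (samples)
ptt = lag / fs                           # estimated pulse transit time (s)
pwv = 0.028 / ptt                        # 28 mm section length -> local PWV (m/s)
print(f"transit time {ptt * 1e3:.1f} ms, local PWV {pwv:.1f} m/s")
```

    Beat-to-beat repetition of this estimate is what gives the cycle-to-cycle local PWV described in the abstract.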

  7. Standardized shrinking LORETA-FOCUSS (SSLOFO): a new algorithm for spatio-temporal EEG source reconstruction.

    PubMed

    Liu, Hesheng; Schimpf, Paul H; Dong, Guoya; Gao, Xiaorong; Yang, Fusheng; Gao, Shangkai

    2005-10-01

    This paper presents a new algorithm called Standardized Shrinking LORETA-FOCUSS (SSLOFO) for solving the electroencephalogram (EEG) inverse problem. Multiple techniques are combined in a single procedure to robustly reconstruct the underlying source distribution with high spatial resolution. This algorithm uses a recursive process which takes the smooth estimate of sLORETA as initialization and then employs the re-weighted minimum norm introduced by FOCUSS. An important technique called standardization is involved in the recursive process to enhance the localization ability. The algorithm is further improved by automatically adjusting the source space according to the estimate of the previous step, and by the inclusion of temporal information. Simulation studies are carried out on both spherical and realistic head models. The algorithm achieves very good localization ability on noise-free data. It is capable of recovering complex source configurations with arbitrary shapes and can produce high quality images of extended source distributions. We also characterized the performance with noisy data in a realistic head model. An important feature of this algorithm is that the temporal waveforms are clearly reconstructed, even for closely spaced sources. This provides a convenient way to estimate neural dynamics directly from the cortical sources.
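    The re-weighted minimum-norm core that FOCUSS contributes to this procedure can be sketched as follows. This is a generic toy version with a random stand-in lead-field, not the full SSLOFO algorithm with sLORETA initialization, standardization, and source-space shrinking.

```python
import numpy as np

# Toy re-weighted minimum-norm (FOCUSS-style) iteration. The lead field,
# source configuration and iteration count are illustrative assumptions.
rng = np.random.default_rng(0)
m, n = 12, 40                      # 12 sensors, 40 candidate source locations
L = rng.standard_normal((m, n))    # random stand-in for a lead-field matrix
x_true = np.zeros(n)
x_true[[5, 22]] = [1.0, -0.8]      # two focal sources
b = L @ x_true                     # noise-free measurements

x = np.ones(n)                     # start from a flat (smooth) estimate
for _ in range(30):                # FOCUSS re-weighting loop
    W = np.diag(np.abs(x))         # weight each location by its last estimate
    x = W @ np.linalg.pinv(L @ W) @ b   # weighted minimum-norm solution

print(np.flatnonzero(np.abs(x) > 1e-3))  # indices of surviving sources
```

    Each pass shrinks locations with small amplitudes, so the smooth initial estimate contracts onto a focal solution; SSLOFO adds standardization and adaptive source-space adjustment to control this contraction.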

  8. An investigation of techniques for the measurement and interpretation of cosmic ray isotopic abundances. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Wiedenbeck, M. E.

    1977-01-01

    An instrument, the Caltech High Energy Isotope Spectrometer Telescope, was developed to measure isotopic abundances of cosmic ray nuclei by employing an energy loss - residual energy technique. A detailed analysis was made of the mass resolution capabilities of this instrument. A formalism, based on the leaky box model of cosmic ray propagation, was developed for obtaining isotopic abundance ratios at the cosmic ray sources from abundances measured in local interstellar space for elements having three or more stable isotopes, one of which is believed to be absent at the cosmic ray sources. It was shown that the dominant sources of uncertainty in the derived source ratios are uncorrelated errors in the fragmentation cross sections and statistical uncertainties in measuring local interstellar abundances. These results were applied to estimate the extent to which uncertainties must be reduced in order to distinguish between cosmic ray production in a solar-like environment and in various environments with greater neutron enrichments.

  9. Near-Field Source Localization by Using Focusing Technique

    NASA Astrophysics Data System (ADS)

    He, Hongyang; Wang, Yide; Saillard, Joseph

    2008-12-01

    We discuss two fast algorithms to localize multiple sources in the near field. The symmetry-based method proposed by Zhi and Chia (2007) is first improved by implementing a search-free procedure to reduce the computation cost. We then present a focusing-based method which does not require a symmetric array configuration. By using the focusing technique, the near-field signal model is transformed into a model possessing the same structure as in the far-field situation, which allows bearing estimation with the well-studied far-field methods. With the estimated bearing, the range estimate of each source is then obtained by using the 1D MUSIC method without parameter pairing. The performance of the improved symmetry-based method and the proposed focusing-based method is compared by Monte Carlo simulations and against the Cramér-Rao bound. Unlike other near-field algorithms, these two approaches require neither high computational cost nor high-order statistics.
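    After focusing, bearings can be estimated exactly as in the far-field case. A minimal far-field 1D MUSIC sketch follows; the array geometry, source angles, snapshot count and noise level are illustrative assumptions, not the paper's simulation setup.

```python
import numpy as np

M, d = 8, 0.5                                  # sensors, spacing in wavelengths
angles_true = np.deg2rad([-20.0, 35.0])        # assumed source bearings

def steering(theta):
    # M x K far-field steering vectors for bearings theta.
    return np.exp(2j * np.pi * d * np.arange(M)[:, None] * np.sin(np.atleast_1d(theta)))

rng = np.random.default_rng(1)
T = 500                                        # snapshots
S = rng.standard_normal((2, T)) + 1j * rng.standard_normal((2, T))
X = steering(angles_true) @ S                  # noise-free array snapshots
X = X + 0.01 * (rng.standard_normal((M, T)) + 1j * rng.standard_normal((M, T)))

R = X @ X.conj().T / T                         # sample covariance
_, V = np.linalg.eigh(R)                       # eigenvalues in ascending order
En = V[:, :-2]                                 # noise subspace (two sources)

grid = np.deg2rad(np.arange(-90.0, 90.0, 0.1))
P = 1.0 / np.sum(np.abs(En.conj().T @ steering(grid)) ** 2, axis=0)  # MUSIC spectrum
peaks = [i for i in range(1, grid.size - 1) if P[i] > P[i - 1] and P[i] > P[i + 1]]
top2 = sorted(sorted(peaks, key=lambda i: P[i])[-2:])
est = np.rad2deg(grid[top2])
print(f"estimated bearings: {est.round(1)}")
```

    In the focusing-based method, the range of each source is then found by a second 1D MUSIC search at the estimated bearing, which avoids parameter pairing.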

  10. A SOUND SOURCE LOCALIZATION TECHNIQUE TO SUPPORT SEARCH AND RESCUE IN LOUD NOISE ENVIRONMENTS

    NASA Astrophysics Data System (ADS)

    Yoshinaga, Hiroshi; Mizutani, Koichi; Wakatsuki, Naoto

    At some sites of earthquakes and other disasters, rescuers search for people buried under rubble by listening for the sounds they make. Developing a technique to localize sound sources amidst loud noise would thus support such search and rescue operations. In this paper, we discuss an experiment performed to test an array signal processing technique which searches for imperceptible sounds in loud noise environments. Two speakers simultaneously played generator noise and a voice attenuated by 20 dB (1/100 of the power) relative to the generator noise in an outdoor space where cicadas were also making noise. The sound was received by a horizontally placed linear microphone array, 1.05 m in length and consisting of 15 microphones. The direction and distance of the voice were computed, and the voice was extracted and played back as an audible sound by array signal processing.
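    A delay-and-sum bearing search of the kind such array processing builds on can be sketched as follows. The 15-microphone, 1.05 m linear array geometry is taken from the abstract; the signals, sampling rate, source bearing and noise level are assumptions.

```python
import numpy as np

c, fs = 340.0, 48_000                    # speed of sound (m/s), sample rate (Hz)
mics = np.arange(15) * 0.075             # microphone x-positions spanning 1.05 m
theta0 = np.deg2rad(25.0)                # true source bearing (assumed)
rng = np.random.default_rng(2)
t = np.arange(0, 0.1, 1 / fs)
s = rng.standard_normal(t.size)          # broadband surrogate for the voice

# Far-field plane wave: channel m receives s delayed by x_m * sin(theta0) / c.
delays = mics * np.sin(theta0) / c
X = np.stack([np.interp(t - tau, t, s) for tau in delays])
X = X + 0.3 * rng.standard_normal(X.shape)   # ambient noise

def beam_power(theta):
    # Re-align the channels for look angle theta, sum, and measure output power.
    taus = mics * np.sin(theta) / c
    y = sum(np.interp(t + tau, t, x) for tau, x in zip(taus, X))
    return float(np.mean(y ** 2))

scan = np.deg2rad(np.arange(-90.0, 91.0, 1.0))
est = np.rad2deg(scan[int(np.argmax([beam_power(th) for th in scan]))])
print(f"estimated bearing: {est:.0f} deg")
```

    The beam output at the matched angle adds the channels coherently, which is also what makes the extracted voice audible above the noise floor.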

  11. Estimation of source location and ground impedance using a hybrid multiple signal classification and Levenberg-Marquardt approach

    NASA Astrophysics Data System (ADS)

    Tam, Kai-Chung; Lau, Siu-Kit; Tang, Shiu-Keung

    2016-07-01

    A microphone array signal processing method for locating a stationary point source over a locally reactive ground and for estimating the ground impedance is examined in detail in the present study. A non-linear least-squares approach using the Levenberg-Marquardt method is proposed to overcome the problem of unknown ground impedance. The multiple signal classification method (MUSIC) is used to give the initial estimate of the source location, while the technique of forward-backward spatial smoothing is adopted as a pre-processing step for the source localization to minimize the effects of source coherence. The accuracy and robustness of the proposed signal processing method are examined. Results show that source localization in the horizontal direction by MUSIC is satisfactory. However, source coherence drastically reduces the accuracy of estimating the source height. The further application of the Levenberg-Marquardt method, with the results from MUSIC as the initial inputs, significantly improves the accuracy of source height estimation. The proposed method provides effective and robust estimation of the ground surface impedance.
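    The refinement step can be sketched generically: a coarse initial estimate (standing in for the MUSIC initialization) is polished by a hand-rolled Levenberg-Marquardt fit. The residual here is a simple range residual; the paper's actual cost function involves the ground impedance and reflection model, which this toy omits.

```python
import numpy as np

rng = np.random.default_rng(3)
mics = rng.uniform(-2.0, 2.0, size=(8, 3))     # 8 sensor positions (m), assumed
src = np.array([1.2, -0.7, 1.5])               # true source position (assumed)
ranges = np.linalg.norm(mics - src, axis=1) + 0.001 * rng.standard_normal(8)

def residual(q):
    # Range residuals: predicted minus measured sensor-to-source distances.
    return np.linalg.norm(mics - q, axis=1) - ranges

def jacobian(q):
    diff = q - mics                            # d||q - p||/dq = (q - p)/||q - p||
    return diff / np.linalg.norm(diff, axis=1, keepdims=True)

q = np.array([0.0, 0.0, 1.0])                  # coarse initial estimate
lam = 1e-3                                     # Levenberg-Marquardt damping
for _ in range(50):
    r, J = residual(q), jacobian(q)
    step = np.linalg.solve(J.T @ J + lam * np.eye(3), -J.T @ r)
    if np.sum(residual(q + step) ** 2) < np.sum(r ** 2):
        q, lam = q + step, lam * 0.5           # accept step, relax damping
    else:
        lam *= 10.0                            # reject step, increase damping
print(q.round(3))                              # close to the true position
```

    The damping term interpolates between gradient descent (large lam) and Gauss-Newton (small lam), which is what makes the refinement robust to a rough initial guess.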

  12. Propagation-based phase-contrast tomography for high-resolution lung imaging with laboratory sources

    NASA Astrophysics Data System (ADS)

    Krenkel, Martin; Töpperwien, Mareike; Dullin, Christian; Alves, Frauke; Salditt, Tim

    2016-03-01

    We have performed high-resolution phase-contrast tomography on whole mice with a laboratory setup. Enabled by a high-brilliance liquid-metal-jet source, we show the feasibility of propagation-based phase contrast in local tomography, even in the presence of strongly absorbing surrounding tissue, as is the case in small-animal imaging of the lung. We demonstrate the technique with reconstructions of the mouse lung for two different fields of view: one covering the whole organ, and a zoom into the finer local structure of terminal airways and alveoli. With a resolution of a few micrometers and the wide availability of the technique, studies of larger biological samples at the cellular level become possible.

  13. Error assessment of local tie vectors in space geodesy

    NASA Astrophysics Data System (ADS)

    Falkenberg, Jana; Heinkelmann, Robert; Schuh, Harald

    2014-05-01

    For the computation of the ITRF, the data of the geometric space-geodetic techniques on co-location sites are combined. The combination increases the redundancy and offers the possibility to utilize the strengths of each technique while mitigating their weaknesses. To enable the combination of co-located techniques, each technique needs to have a well-defined geometric reference point. Linking the geometric reference points enables the combination of the technique-specific coordinates into a multi-technique site coordinate. The vectors between these reference points are called "local ties". Local ties are usually realized by local surveys of the distances and/or angles between the reference points. Identified temporal variations of the reference points are considered in the local tie determination only indirectly, by assuming a mean position. Finally, the local ties measured in the local surveying network are transformed into the ITRF, the global geocentric equatorial coordinate system of the space-geodetic techniques. The current IERS procedure for the combination of the space-geodetic techniques includes the local tie vectors with an error floor of three millimeters plus a distance-dependent component. This error floor, however, significantly underestimates the real accuracy of local tie determination. To fulfill the GGOS goals of 1 mm position and 0.1 mm/yr velocity accuracy, the local ties must be accurate at the sub-mm level, which is currently not achievable. To assess the effects of local ties on ITRF computations, the error sources must be investigated so that they can be realistically quantified and taken into account. Hence, a reasonable estimate of all the included errors of the various local ties is needed. An appropriate estimate could also improve the separation of local tie errors from technique-specific error contributions to uncertainties and thus help assess the accuracy of the space-geodetic techniques.
Our investigations concern the simulation of the error contribution of each component of the local tie definition and determination. A closer look into the models of reference point definition, of accessibility, of measurement, and of transformation is necessary to properly model the error of the local tie. The effect of temporal variations on the local ties will be studied as well. The transformation of the local survey into the ITRF can be assumed to be the largest error contributor, in particular the orientation of the local surveying network to the ITRF.

  14. Sources and remediation techniques for mercury contaminated soil.

    PubMed

    Xu, Jingying; Bravo, Andrea Garcia; Lagerkvist, Anders; Bertilsson, Stefan; Sjöblom, Rolf; Kumpiene, Jurate

    2015-01-01

    Mercury (Hg) in soils has increased by a factor of 3 to 10 in recent times, mainly due to combustion of fossil fuels combined with long-range atmospheric transport processes. Other sources, such as chlor-alkali plants, gold mining and cement production, can also be significant, at least locally. This paper summarizes the natural and anthropogenic sources that have contributed to the increase of Hg concentration in soil and reviews major remediation techniques and their applications to control soil Hg contamination. The focus is on soil washing, stabilisation/solidification, thermal treatment and biological techniques; the factors that influence Hg mobilisation in soil, which are crucial for evaluating and optimizing remediation techniques, are also discussed. Further research on bioremediation is encouraged, and future studies should focus on the implementation of different remediation techniques under field conditions.

  15. Method and system for determining radiation shielding thickness and gamma-ray energy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Klann, Raymond T.; Vilim, Richard B.; de la Barrera, Sergio

    2015-12-15

    A system and method for determining the shielding thickness of a detected radiation source. The gamma ray spectrum of a radiation detector is utilized to estimate the shielding between the detector and the radiation source. The determination of the shielding may be used to adjust the information from known source-localization techniques to provide improved performance and accuracy of locating the source of radiation.
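    One simple way a shielding thickness can be inferred from a gamma-ray measurement, shown purely as a hedged illustration and not as the patented method: if the expected unshielded peak count rate is known, Beer-Lambert attenuation of the photopeak yields the thickness.

```python
import numpy as np

mu_pb_662 = 1.25             # approx. linear attenuation coeff. of lead at 662 keV (1/cm)
counts_unshielded = 5000.0   # expected net peak counts without shielding (assumed)
counts_measured = 900.0      # measured net peak counts (assumed)

# Beer-Lambert: I = I0 * exp(-mu * x)  =>  x = ln(I0 / I) / mu
thickness_cm = np.log(counts_unshielded / counts_measured) / mu_pb_662
print(f"estimated lead shielding: {thickness_cm:.2f} cm")
```

    An estimate of this kind is what would then feed back into the source-localization step, since shielding biases the apparent source strength.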

  16. Dorsolateral Frontal Lobe Epilepsy

    PubMed Central

    Lee, Ricky W.; Worrell, Greg A.

    2012-01-01

    Dorsolateral frontal lobe seizures often present as a diagnostic challenge. The diverse semiologies may not produce lateralizing or localizing signs, and can appear bizarre and suggest psychogenic events. Unfortunately, scalp EEG and MRI are often unsatisfactory. It is not uncommon that these traditional diagnostic studies are either unhelpful or even misleading. In some cases SPECT and PET imaging can be an effective tool to identify the origin of seizures. However, these techniques and other emerging techniques all have limitations, and new approaches are needed to improve source localization. PMID:23027094

  17. A Nonlinear Regression Model Estimating Single Source Concentrations of Primary and Secondarily Formed PM2.5

    EPA Science Inventory

    Various approaches and tools exist to estimate local and regional PM2.5 impacts from a single emissions source, ranging from simple screening techniques to Gaussian based dispersion models and complex grid-based Eulerian photochemical transport models. These approache...

  18. Synchronization Tomography: Modeling and Exploring Complex Brain Dynamics

    NASA Astrophysics Data System (ADS)

    Fieseler, Thomas

    2002-03-01

    Phase synchronization (PS) plays an important role both under physiological and pathological conditions. With standard averaging techniques of MEG data, it is difficult to reliably detect cortico-cortical and cortico-muscular PS processes that are not time-locked to an external stimulus. For this reason, novel synchronization analysis techniques were developed and directly applied to MEG signals. Of course, due to the lack of an inverse modeling (i.e. source localization), the spatial resolution of this approach was limited. To detect and localize cerebral PS, we here present the synchronization tomography (ST): For this, we first estimate the cerebral current source density by means of the magnetic field tomography (MFT). We then apply the single-run PS analysis to the current source density in each voxel of the reconstruction space. In this way we study simulated PS, voxel by voxel in order to determine the spatio-temporal resolution of the ST. To this end different generators of ongoing rhythmic cerebral activity are simulated by current dipoles at different locations and directions, which are modeled by slightly detuned chaotic oscillators. MEG signals for these generators are simulated for a spherical head model and a whole-head MEG system. MFT current density solutions are calculated from these simulated signals within a hemispherical source space. We compare the spatial resolution of the ST with that of the MFT. Our results show that adjacent sources which are indistinguishable for the MFT, can nevertheless be separated with the ST, provided they are not strongly phase synchronized. This clearly demonstrates the potential of combining spatial information (i.e. source localization) with temporal information for the anatomical localization of phase synchronization in the human brain.
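    A single-run phase-synchronization index of the kind such analyses compute voxel by voxel is the phase-locking value (PLV). A minimal sketch with synthetic signals follows; all signal parameters are assumptions, not taken from the study.

```python
import numpy as np

fs = 1000
t = np.arange(0, 2.0, 1 / fs)
rng = np.random.default_rng(4)
phi = 2 * np.pi * 10.0 * t                    # shared 10 Hz rhythm
x = np.sin(phi) + 0.3 * rng.standard_normal(t.size)
y = np.sin(phi + 0.8) + 0.3 * rng.standard_normal(t.size)  # constant phase lag
z = rng.standard_normal(t.size)               # unrelated signal

def plv(a, b):
    # Phase-locking value from instantaneous phases (analytic signal via FFT).
    def phase(s):
        S = np.fft.fft(s)
        S[1:len(S) // 2] *= 2.0               # double the positive frequencies
        S[len(S) // 2 + 1:] = 0.0             # zero the negative frequencies
        return np.angle(np.fft.ifft(S))
    return float(np.abs(np.mean(np.exp(1j * (phase(a) - phase(b))))))

print(f"PLV(x, y) = {plv(x, y):.2f}, PLV(x, z) = {plv(x, z):.2f}")
```

    A PLV near 1 indicates a stable phase relation regardless of the phase lag itself, which is why phase-lagged but synchronized generators can still be separated spatially by the ST.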

  19. Electric Field Encephalography as a tool for functional brain research: a modeling study.

    PubMed

    Petrov, Yury; Sridhar, Srinivas

    2013-01-01

    We introduce the notion of Electric Field Encephalography (EFEG) based on measuring electric fields of the brain and demonstrate, using computer modeling, that given the appropriate electric field sensors this technique may have significant advantages over the current EEG technique. Unlike EEG, EFEG can be used to measure brain activity in a contactless and reference-free manner at significant distances from the head surface. Principal component analysis using simulated cortical sources demonstrated that electric field sensors positioned 3 cm away from the scalp and characterized by the same signal-to-noise ratio as EEG sensors provided the same number of uncorrelated signals as scalp EEG. When positioned on the scalp, EFEG sensors provided 2-3 times more uncorrelated signals. This significant increase in the number of uncorrelated signals can be used for more accurate assessment of brain states for non-invasive brain-computer interfaces and neurofeedback applications. It also may lead to major improvements in source localization precision. Source localization simulations for the spherical and Boundary Element Method (BEM) head models demonstrated that the localization errors are reduced two-fold when using electric fields instead of electric potentials. We have identified several techniques that could be adapted for the measurement of the electric field vector required for EFEG and anticipate that this study will stimulate new experimental approaches to utilize this new tool for functional brain research.

  20. Psychophysical investigation of an auditory spatial illusion in cats: the precedence effect.

    PubMed

    Tollin, Daniel J; Yin, Tom C T

    2003-10-01

    The precedence effect (PE) describes several spatial perceptual phenomena that occur when similar sounds are presented from two different locations and separated by a delay. The mechanisms that produce the effect are thought to be responsible for the ability to localize sounds in reverberant environments. Although the physiological bases for the PE have been studied, little is known about how these sounds are localized by species other than humans. Here we used the search coil technique to measure the eye positions of cats trained to saccade to the apparent locations of sounds. To study the PE, brief broadband stimuli were presented from two locations, with a delay between their onsets; the delayed sound was meant to simulate a single reflection. Although the cats accurately localized single sources, the apparent locations of the paired sources depended on the delay. First, the cats exhibited summing localization, the perception of a "phantom" sound located between the sources, for delays < ±400 μs for sources positioned in azimuth along the horizontal plane, but not for sources positioned in elevation along the sagittal plane. Second, consistent with localization dominance, for delays from 400 μs to about 10 ms, the cats oriented toward the leading source location only, with little influence of the lagging source, for both horizontally and vertically placed sources. Finally, the echo threshold was reached for delays > 10 ms, where the cats first began to orient to the lagging source on some trials. These data reveal that cats experience the PE phenomena similarly to humans.

  1. Statistical atmospheric inversion of local gas emissions by coupling the tracer release technique and local-scale transport modelling: a test case with controlled methane emissions

    NASA Astrophysics Data System (ADS)

    Ars, Sébastien; Broquet, Grégoire; Yver Kwok, Camille; Roustan, Yelva; Wu, Lin; Arzoumanian, Emmanuel; Bousquet, Philippe

    2017-12-01

    This study presents a new concept for estimating the pollutant emission rates of a site and its main facilities using a series of atmospheric measurements across the pollutant plumes. This concept combines the tracer release method, local-scale atmospheric transport modelling and a statistical atmospheric inversion approach. The conversion between the controlled emission and the measured atmospheric concentrations of the released tracer across the plume places valuable constraints on the atmospheric transport. This is used to optimise the configuration of the transport model parameters and the model uncertainty statistics in the inversion system. The emission rates of all sources are then inverted to optimise the match between the concentrations simulated with the transport model and the pollutants' measured atmospheric concentrations, accounting for the transport model uncertainty. In principle, by using atmospheric transport modelling, this concept does not strongly rely on the good colocation between the tracer and pollutant sources and can be used to monitor multiple sources within a single site, unlike the classical tracer release technique. The statistical inversion framework and the use of the tracer data for the configuration of the transport and inversion modelling systems should ensure that the transport modelling errors are correctly handled in the source estimation. The potential of this new concept is evaluated with a relatively simple practical implementation based on a Gaussian plume model and a series of inversions of controlled methane point sources using acetylene as a tracer gas. The experimental conditions are chosen so that they are suitable for the use of a Gaussian plume model to simulate the atmospheric transport. 
In these experiments, different configurations of methane and acetylene point source locations are tested to assess the efficiency of the method in comparison to the classic tracer release technique in coping with the distances between the different methane and acetylene sources. The results from these controlled experiments demonstrate that, when the targeted and tracer gases are not well collocated, this new approach provides a better estimate of the emission rates than the tracer release technique. As an example, the relative error between the estimated and actual emission rates is reduced from 32 % with the tracer release technique to 16 % with the combined approach in the case of a tracer located 60 m upwind of a single methane source. Further studies and more complex implementations with more advanced transport models and more advanced optimisations of their configuration will be required to generalise the applicability of the approach and strengthen its robustness.
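    The classical tracer release estimate that this method generalises can be sketched as follows: with well co-located source and tracer, the target emission rate follows from the ratio of the integrated cross-plume enhancements, scaled by the molar-mass ratio. All numbers below are illustrative assumptions, not the campaign's data.

```python
import numpy as np

# Cross-plume transect of excess mole fractions (synthetic Gaussian plumes).
x = np.linspace(-100.0, 100.0, 401)            # cross-plume coordinate (m)
plume = np.exp(-x**2 / (2 * 20.0**2))          # common transport factor
ch4_excess = 1.8 * plume                       # measured CH4 enhancement (ppb)
c2h2_excess = 0.9 * plume                      # measured tracer enhancement (ppb)
q_tracer = 0.50                                # known acetylene release rate (g/s)

# Q_CH4 = Q_tracer * (integrated CH4 / integrated tracer) * (M_CH4 / M_C2H2)
ratio = ch4_excess.sum() / c2h2_excess.sum()
q_ch4 = q_tracer * ratio * (16.04 / 26.04)
print(f"estimated CH4 emission rate: {q_ch4:.3f} g/s")
```

    When the two sources are not co-located, the two plumes no longer share the same transport factor and this ratio becomes biased, which is the error the combined inversion approach reduces.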

  2. Locating hydrothermal acoustic sources at Old Faithful Geyser using Matched Field Processing

    NASA Astrophysics Data System (ADS)

    Cros, E.; Roux, P.; Vandemeulebrouck, J.; Kedar, S.

    2011-10-01

    In 1992, a large and dense array of geophones was placed around the geyser vent of Old Faithful, in Yellowstone National Park, to determine the origin of the seismic hydrothermal noise recorded at the surface of the geyser and to understand its dynamics. Old Faithful Geyser (OFG) is a small-scale hydrothermal system in which a two-phase flow mixture erupts every 40 to 100 min in a high, continuous vertical jet. Using Matched Field Processing (MFP) techniques on 10-min-long signals, we localize the source of the seismic pulses recorded at the surface of the geyser. Several MFP approaches are compared in this study: the frequency-incoherent and frequency-coherent approaches, as well as the linear Bartlett processing and the non-linear Minimum Variance Distortionless Response (MVDR) processing. The different MFP techniques give the same source position, with better focusing in the case of MVDR processing. The retrieved source position corresponds to the geyser conduit at a depth of 12 m, and the localization is in good agreement with in situ measurements made at Old Faithful in past studies.
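    A structural sketch of the linear Bartlett processor used in MFP follows. It uses free-space replicas and an assumed 2D geometry (with the 12 m source depth taken from the abstract); the study's replica fields come from a model of the medium, so this is only illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)
sensors = np.column_stack([np.linspace(-10.0, 10.0, 12), np.zeros(12)])  # surface array (x, z)
src = np.array([2.0, -12.0])          # true source: x = 2 m, 12 m deep (assumed x)
k = 2 * np.pi / 3.0                   # wavenumber for an assumed 3 m wavelength

def replica(q):
    # Normalized free-space Green's function from candidate position q.
    r = np.linalg.norm(sensors - q, axis=1)
    v = np.exp(-1j * k * r) / r
    return v / np.linalg.norm(v)

# "Measured" field: the true replica plus sensor noise.
d = replica(src) + 0.05 * (rng.standard_normal(12) + 1j * rng.standard_normal(12))

# Bartlett power B(q) = |w(q)^H d|^2, scanned here along depth at x = 2 m.
depths = np.arange(-20.0, -1.0, 0.25)
bartlett = [abs(np.vdot(replica(np.array([2.0, z])), d)) ** 2 for z in depths]
print(f"Bartlett peak at depth {depths[int(np.argmax(bartlett))]:.2f} m")
```

    The MVDR variant replaces the quadratic form with one based on the inverse data covariance, which sharpens the peak at the cost of robustness, matching the better focusing reported above.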

  3. ASSURED CLOUD COMPUTING UNIVERSITY CENTER OFEXCELLENCE (ACC UCOE)

    DTIC Science & Technology

    2018-01-18

    Research topics include: infrastructure security; design of algorithms and techniques for real-time assuredness in cloud computing; map-reduce task assignment with data locality.

  4. Adaptive near-field beamforming techniques for sound source imaging.

    PubMed

    Cho, Yong Thung; Roan, Michael J

    2009-02-01

    Phased array signal processing techniques such as beamforming have a long history in applications such as sonar for the detection and localization of far-field sound sources. Two sometimes-competing challenges arise in any type of spatial processing: minimizing contributions from directions other than the look direction, and minimizing the width of the main lobe. To tackle this problem, a large body of work has been devoted to the development of adaptive procedures that attempt to minimize side lobe contributions to the spatial processor output. In this paper, two adaptive beamforming procedures, minimum variance distortionless response and weight optimization to minimize maximum side lobes, are modified for use in source visualization applications to estimate beamforming pressure and intensity using near-field pressure measurements. These adaptive techniques are compared to a fixed near-field focusing technique (all techniques use near-field beamforming weightings focused at source locations estimated from spherical-wave array manifold vectors with spatial windows). The sound source resolution accuracies of the near-field imaging procedures with different weighting strategies are compared using numerical simulations in both anechoic and reverberant environments with random measurement noise. Experimental results are also given for near-field sound pressure measurements of an enclosed loudspeaker.
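    The far-field narrowband form of the MVDR weighting referred to above can be sketched as follows. The paper's version focuses in the near field with spherical-wave manifold vectors; the geometry and source powers here are assumptions for illustration.

```python
import numpy as np

M, d = 10, 0.5                    # sensors, spacing in wavelengths (assumed)
a = lambda th: np.exp(2j * np.pi * d * np.arange(M) * np.sin(th)) / np.sqrt(M)
look, interferer = np.deg2rad(0.0), np.deg2rad(30.0)

# Covariance: unit-power look-direction source, a 20 dB stronger interferer,
# and a small diagonal term (sensor noise / diagonal loading).
R = (np.outer(a(look), a(look).conj())
     + 100.0 * np.outer(a(interferer), a(interferer).conj())
     + 0.01 * np.eye(M))

w_mvdr = np.linalg.solve(R, a(look))
w_mvdr = w_mvdr / (a(look).conj() @ w_mvdr)   # distortionless: w^H a(look) = 1
w_das = a(look)                               # conventional delay-and-sum (a^H a = 1)

gains = {}
for name, w in (("DAS", w_das), ("MVDR", w_mvdr)):
    gains[name] = abs(w.conj() @ a(interferer)) ** 2   # power gain toward interferer
    print(f"{name}: interferer gain {10 * np.log10(gains[name]):.1f} dB")
```

    The adaptive weights place a deep null on the strong interferer while keeping unit gain in the look direction, which is the side lobe suppression the fixed weighting cannot achieve.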

  5. Analysis of jet-airfoil interaction noise sources by using a microphone array technique

    NASA Astrophysics Data System (ADS)

    Fleury, Vincent; Davy, Renaud

    2016-03-01

    The paper is concerned with the characterization of jet noise sources and jet-airfoil interaction sources using microphone array data. The measurements were carried out in the anechoic open-test-section wind tunnel of Onera, Cepra19. The microphone array technique relies on the convected Lighthill's and Ffowcs-Williams and Hawkings' acoustic analogy equation. The cross-spectrum of the source term of the analogy equation is sought, defined as the optimal solution to a minimal-error equation using the measured microphone cross-spectra as reference. This inverse problem is, however, ill-posed. A penalty term based on a localization operator is therefore added to improve the recovery of jet noise sources. The analysis of isolated jet noise data in the subsonic regime shows the contribution of the conventional mixing noise source in the low-frequency range, as expected, and of uniformly distributed, uncorrelated noise sources in the jet flow at higher frequencies. In the underexpanded supersonic regime, a shock-associated noise source is clearly identified as well. An additional source is detected in the vicinity of the nozzle exit in both the supersonic and subsonic regimes. In the presence of the airfoil, the distribution of the noise sources is deeply modified. In particular, a strong noise source is localized on the flap. For Strouhal numbers higher than about 2 (based on the jet mixing velocity and diameter), a significant contribution from the shear layer near the flap is also observed. Indications of acoustic reflections on the airfoil are also discerned.

  6. Source-space ICA for MEG source imaging.

    PubMed

    Jonmohamadi, Yaqub; Jones, Richard D

    2016-02-01

    One of the most widely used approaches in electroencephalography (EEG)/magnetoencephalography (MEG) source imaging is the application of an inverse technique (such as dipole modelling or sLORETA) to the components extracted by independent component analysis (ICA) (sensor-space ICA + inverse technique). The advantage of this approach over an inverse technique alone is that it can identify and localize multiple concurrent sources. Among inverse techniques, minimum-variance beamformers offer high spatial resolution. However, sensor-space ICA + beamformer is not an ideal combination for obtaining both the high spatial resolution of the beamformer and the ability to handle multiple concurrent sources. We propose source-space ICA for MEG as a powerful alternative which provides the high spatial resolution of the beamformer and handles multiple concurrent sources. The concept of source-space ICA for MEG is to apply the beamformer first and then singular value decomposition + ICA. In this paper we compare source-space ICA with sensor-space ICA in both simulated and real MEG. The simulations included two challenging scenarios of correlated/concurrent cluster sources. Source-space ICA provided superior performance in the spatial reconstruction of source maps, even though both techniques performed equally well from a temporal perspective. Real MEG from two healthy subjects with visual stimuli was also used to compare the performance of sensor-space ICA and source-space ICA. We also propose a new variant of the minimum-variance beamformer called weight-normalized linearly-constrained minimum-variance with orthonormal lead-field. As sensor-space ICA-based source reconstruction is popular in EEG and MEG imaging, and given that source-space ICA has superior spatial performance, it is expected that source-space ICA will supersede its predecessor in many applications.

  7. Dorsolateral frontal lobe epilepsy.

    PubMed

    Lee, Ricky W; Worrell, Greg A

    2012-10-01

    Dorsolateral frontal lobe seizures often present a diagnostic challenge. Their diverse semiologies may not produce lateralizing or localizing signs and can appear bizarre, suggesting psychogenic events. Unfortunately, scalp electroencephalography (EEG) and magnetic resonance imaging (MRI) are often unsatisfactory; it is not uncommon for these traditional diagnostic studies to be unhelpful or even misleading. In some cases, SPECT and positron emission tomography imaging can be effective tools for identifying the origin of seizures. However, these and other emerging techniques all have limitations, and new approaches are needed to improve source localization.

  8. Advanced dynamic statistical parametric mapping with MEG in localizing epileptogenicity of the bottom of sulcus dysplasia.

    PubMed

    Nakajima, Midori; Wong, Simeon; Widjaja, Elysa; Baba, Shiro; Okanishi, Tohru; Takada, Lynne; Sato, Yosuke; Iwata, Hiroki; Sogabe, Maya; Morooka, Hikaru; Whitney, Robyn; Ueda, Yuki; Ito, Tomoshiro; Yagyu, Kazuyori; Ochi, Ayako; Carter Snead, O; Rutka, James T; Drake, James M; Doesburg, Sam; Takeuchi, Fumiya; Shiraishi, Hideaki; Otsubo, Hiroshi

    2018-06-01

    To investigate whether advanced dynamic statistical parametric mapping (AdSPM) using magnetoencephalography (MEG) can better localize focal cortical dysplasia at the bottom of a sulcus (FCDB). We analyzed 15 children with a diagnosis of FCDB in surgical specimens and 3 T MRI by using MEG. Using AdSPM, we analyzed a ±50 ms epoch relative to each single moving dipole (SMD) and applied a summation technique to estimate the source activity. The most active area in AdSPM was defined as the location of the AdSPM spike source. We compared the spatial congruence between MRI-visible FCDB and (1) the dipole cluster in the SMD method; and (2) the AdSPM spike source. AdSPM localized FCDB in 12 (80%) of 15 children, whereas the dipole cluster localized it in six (40%). The AdSPM spike source was concordant within the seizure onset zone in nine (82%) of 11 children with intracranial video EEG. Eleven children who underwent resective surgery achieved seizure freedom, with a follow-up period of 1.9 ± 1.5 years. Ten (91%) of them had an AdSPM spike source in the resection area. AdSPM can noninvasively and neurophysiologically localize epileptogenic FCDB, whether or not it overlaps with the dipole cluster. This is the first study to localize epileptogenic FCDB using MEG. Copyright © 2018 International Federation of Clinical Neurophysiology. Published by Elsevier B.V. All rights reserved.

  9. Contaminant point source localization error estimates as functions of data quantity and model quality

    NASA Astrophysics Data System (ADS)

    Hansen, Scott K.; Vesselinov, Velimir V.

    2016-10-01

    We develop empirically-grounded error envelopes for localization of a point contamination release event in the saturated zone of a previously uncharacterized heterogeneous aquifer into which a number of plume-intercepting wells have been drilled. We assume that flow direction in the aquifer is known exactly and velocity is known to within a factor of two of our best guess from well observations prior to source identification. Other aquifer and source parameters must be estimated by interpretation of well breakthrough data via the advection-dispersion equation. We employ high performance computing to generate numerous random realizations of aquifer parameters and well locations, simulate well breakthrough data, and then employ unsupervised machine optimization techniques to estimate the most likely spatial (or space-time) location of the source. Tabulating the accuracy of these estimates from the multiple realizations, we relate the size of 90% and 95% confidence envelopes to the data quantity (number of wells) and model quality (fidelity of ADE interpretation model to actual concentrations in a heterogeneous aquifer with channelized flow). We find that for purely spatial localization of the contaminant source, increased data quantities can make up for reduced model quality. For space-time localization, we find similar qualitative behavior, but significantly degraded spatial localization reliability and less improvement from extra data collection. Since the space-time source localization problem is much more challenging, we also tried a multiple-initial-guess optimization strategy. This greatly enhanced performance, but gains from additional data collection remained limited.
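    The grid-search idea behind such source identification can be illustrated with a one-dimensional advection-dispersion model: simulate breakthrough curves at a few wells, then scan candidate release locations for the least-squares best fit. Everything below (well positions, velocity, dispersion coefficient, noise level) is a hypothetical toy setup, not the study's configuration:

```python
import numpy as np

def breakthrough(x_well, t, x_src, v=1.0, D=0.5, m=1.0):
    """1-D advection-dispersion solution for an instantaneous point release."""
    return m / np.sqrt(4 * np.pi * D * t) * np.exp(
        -(x_well - x_src - v * t) ** 2 / (4 * D * t))

# hypothetical: 3 downgradient wells, true release at x = 2.0, noisy observations
rng = np.random.default_rng(3)
wells, t = np.array([10.0, 15.0, 20.0]), np.linspace(0.5, 30, 60)
truth = 2.0
obs = np.array([breakthrough(w, t, truth) for w in wells])
obs += 0.0005 * rng.standard_normal(obs.shape)

# grid search the most likely source location (least-squares misfit)
cand = np.linspace(0, 8, 161)
misfit = [np.sum((obs - np.array([breakthrough(w, t, c0) for w in wells])) ** 2)
          for c0 in cand]
print("estimated release location:", cand[int(np.argmin(misfit))])
```

    The study's space-time version adds the release time as a second unknown, which is where the multiple-initial-guess optimization becomes important.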

  10. Fusion-based multi-target tracking and localization for intelligent surveillance systems

    NASA Astrophysics Data System (ADS)

    Rababaah, Haroun; Shirkhodaie, Amir

    2008-04-01

    In this paper, we present two approaches addressing visual target tracking and localization in complex urban environments: fusion-based multi-target visual tracking, and multi-target localization via camera calibration. For multi-target tracking, the data fusion concepts of hypothesis generation/evaluation/selection, target-to-target registration, and association are employed. An association matrix is implemented using RGB histograms for associated tracking of multiple targets of interest. Motion segmentation of targets of interest (TOIs) from the background was achieved by a Gaussian mixture model. Foreground segmentation, on the other hand, was achieved by the connected components analysis (CCA) technique. The tracking of individual targets was estimated by fusing two sources of information: the centroid with spatial gating, and the RGB histogram association matrix. The localization problem is addressed through an effective camera calibration technique using edge modeling for grid mapping (EMGM). A two-stage image-pixel-to-world-coordinates mapping technique is introduced that performs coarse and fine location estimation of moving TOIs. In coarse estimation, an approximate neighborhood of the target position is estimated based on a nearest 4-neighbor method; in fine estimation, Euclidean interpolation is used to localize the position within the estimated four neighbors. Both techniques were tested and showed reliable results for tracking and localization of targets of interest in complex urban environments.

  11. Village-Level Identification of Nitrate Sources: Collaboration of Experts and Local Population in Benin, Africa

    NASA Astrophysics Data System (ADS)

    Crane, P.; Silliman, S. E.; Boukari, M.; Atoro, I.; Azonsi, F.

    2005-12-01

    Deteriorating groundwater quality, as indicated by high nitrate levels, in the Colline province of Benin, West Africa, was identified by the Benin national water agency, Direction Hydraulique. For unknown reasons, the Colline province had consistently higher nitrate levels than any other region of the country. In an effort to address this water quality issue, a collaborative team was created that incorporated professionals from the Universite d'Abomey-Calavi (Benin), the University of Notre Dame (USA), Direction l'Hydraulique (a government water agency in Benin), Centre Afrika Obota (an educational NGO in Benin), and the local population of the village of Adourekoman. The goals of the project were to: (i) identify the source of the nitrates, (ii) test field techniques for long-term, local monitoring, and (iii) identify possible solutions to the high levels of groundwater nitrates. To accomplish these goals, the following methods were utilized: regional sampling of groundwater quality, field methods that allowed the local population to regularly monitor village groundwater quality, isotopic analysis, and the sociological methods of surveys, focus groups, and observations. The combination of these multi-disciplinary methods allowed all three goals to be successfully addressed, leading to a preliminary identification of the sources of nitrates in the village of Adourekoman, confirmation of the utility of the field techniques, and an initial assessment of possible solutions to the contamination problem.

  12. Anatomically constrained dipole adjustment (ANACONDA) for accurate MEG/EEG focal source localizations

    NASA Astrophysics Data System (ADS)

    Im, Chang-Hwan; Jung, Hyun-Kyo; Fujimaki, Norio

    2005-10-01

    This paper proposes an alternative approach to enhance the localization accuracy of MEG and EEG focal sources. The proposed approach assumes anatomically constrained spatio-temporal dipoles, the initial positions of which are estimated from the local peak positions of distributed sources obtained from a pre-execution of distributed source reconstruction. The positions of the dipoles are then adjusted on the cortical surface using a novel updating scheme named cortical surface scanning. The proposed approach has many advantages over conventional ones: (1) as the cortical surface scanning algorithm uses spatio-temporal dipoles, it is robust with respect to noise; (2) it requires no a priori information on the number and initial locations of the activations; (3) as the locations of the dipoles are restricted to a tessellated cortical surface, it is physiologically more plausible than the conventional ECD model. To verify the proposed approach, it was applied to several realistic MEG/EEG simulations and practical experiments. From these case studies, it is concluded that the anatomically constrained dipole adjustment (ANACONDA) approach is a very promising technique for enhancing the accuracy of focal source localization, which is essential in many clinical and neurological applications of MEG and EEG.

  13. Photothermal lesions in soft tissue induced by optical fiber microheaters.

    PubMed

    Pimentel-Domínguez, Reinher; Moreno-Álvarez, Paola; Hautefeuille, Mathieu; Chavarría, Anahí; Hernández-Cordero, Juan

    2016-04-01

    Photothermal therapy has been shown to be a promising technique for the local treatment of tumors. However, the main challenge for this technique is the availability of localized heat sources that minimize thermal damage to the surrounding healthy tissue. In this work, we demonstrate the use of optical fiber microheaters for inducing thermal lesions in soft tissue. The proposed devices incorporate carbon nanotubes or gold nanolayers on the tips of optical fibers for enhanced photothermal effects and heating of ex vivo biological tissues. We report preliminary results on small photothermal lesions induced in mouse liver tissue. The morphology of the resulting lesions shows that optical fiber microheaters may prove useful for delivering highly localized heat for photothermal therapy.

  14. Partial differential equation-based localization of a monopole source from a circular array.

    PubMed

    Ando, Shigeru; Nara, Takaaki; Levy, Tsukassa

    2013-10-01

    Wave source localization from a sensor array has long been one of the most active research topics in both theory and application. In this paper, an explicit, time-domain inversion method for the direction and distance of a monopole source from a circular array is proposed. The approach is based on a mathematical technique, the weighted integral method, for signal/source parameter estimation. It begins with an exact form of the source-constraint partial differential equation that describes the unilateral propagation of wide-band waves from a single source, and leads to exact algebraic equations that include circular Fourier coefficients (phase mode measurements) as their coefficients. From these, nearly closed-form single-shot and multishot algorithms are obtained that are suitable for use with band-pass/differential filter banks. Numerical evaluations and several experimental results obtained using a 16-element circular microphone array are presented to verify the validity of the proposed method.

  15. Sensors research and technology

    NASA Technical Reports Server (NTRS)

    Cutts, James A.

    1988-01-01

    Information on sensors research and technology is given in viewgraph form. Information is given on sensing techniques for space science, passive remote sensing techniques and applications, submillimeter coherent sensing, submillimeter mixers and local oscillator sources, non-coherent sensors, active remote sensing, solid state laser development, a low vibration cooler, separation of liquid helium and vapor phase in zero gravity, and future plans.

  16. Soft-tissue and phase-contrast imaging at the Swiss Light Source

    NASA Astrophysics Data System (ADS)

    Schneider, Philipp; Mohan, Nishant; Stampanoni, Marco; Muller, Ralph

    2004-05-01

    Recent results show that bone vasculature is a major contributor to local tissue porosity and can therefore be directly linked to the mechanical properties of bone tissue. With the advent of third-generation synchrotron radiation (SR) sources, micro-computed tomography (μCT) with resolutions of the order of 1 μm and better has become feasible. This technique has frequently been employed to analyze trabecular architecture and local bone tissue properties, i.e. the hard or mineralized bone tissue. Nevertheless, less is known about the soft tissues in bone, mainly due to inadequate imaging capabilities. Here, we discuss three different methods and applications for visualizing soft tissues. The first approach is referred to as negative imaging, in which the material around the soft tissue provides the absorption contrast necessary for X-ray based tomography. Bone vasculature from two different mouse strains was investigated and compared qualitatively; differences were observed in terms of local vessel number and vessel orientation. The second technique is corrosion casting, which is principally adapted for imaging vascular systems and has already been applied successfully at the Swiss Light Source. Using this technique, we were able to show that pathological features reminiscent of Alzheimer's disease could be distinguished in the brain vasculature of APP transgenic mice. The third technique discussed here is phase contrast imaging, which exploits the high degree of coherence of third-generation synchrotron light sources, providing the necessary physical conditions for phase contrast. The in-line approach followed here for phase retrieval is a modification of the Gerchberg-Saxton-Fienup type. Several measurements and theoretical considerations concerning phase contrast imaging are presented, including mathematical phase retrieval. Although only phase images have been computed up to now, the approach is ready to retrieve the phase for a large number of angular positions of the specimen, allowing application of holotomography, i.e. three-dimensional reconstruction from phase images.

  17. Frequency domain beamforming of magnetoencephalographic beta band activity in epilepsy patients with focal cortical dysplasia.

    PubMed

    Heers, Marcel; Hirschmann, Jan; Jacobs, Julia; Dümpelmann, Matthias; Butz, Markus; von Lehe, Marec; Elger, Christian E; Schnitzler, Alfons; Wellmer, Jörg

    2014-09-01

    Spike-based magnetoencephalography (MEG) source localization is an established method in the presurgical evaluation of epilepsy patients. Focal cortical dysplasias (FCDs) are associated with focal epileptic discharges of variable morphologies in the beta frequency band in addition to single epileptic spikes. Therefore, we investigated the potential diagnostic value of MEG-based localization of spike-independent beta band (12-30 Hz) activity generated by epileptogenic lesions. Five patients with FCD IIB underwent MEG. In one patient, invasive EEG (iEEG) was recorded simultaneously with MEG. In two patients, iEEG succeeded MEG, and two patients had MEG only. MEG and iEEG were evaluated for epileptic spikes. Two minutes of iEEG data and MEG epochs with no spikes as well as MEG epochs with epileptic spikes were analyzed in the frequency domain. MEG oscillatory beta band activity was localized using Dynamic Imaging of Coherent Sources. Intralesional beta band activity was coherent between simultaneous MEG and iEEG recordings. Continuous 14 Hz beta band power correlated with the rate of interictal epileptic discharges detected in iEEG. In cases where visual MEG evaluation revealed epileptic spikes, the sources of beta band activity localized within <2 cm of the epileptogenic lesion as shown on magnetic resonance imaging. This result held even when visually marked epileptic spikes were deselected. When epileptic spikes were detectable in iEEG but not MEG, MEG beta band activity source localization failed. Source localization of beta band activity has the potential to contribute to the identification of epileptic foci in addition to source localization of visually marked epileptic spikes. Thus, this technique may assist in the localization of epileptic foci in patients with suspected FCD. Copyright © 2014 Elsevier B.V. All rights reserved.

  18. Three dimensional volcano-acoustic source localization at Karymsky Volcano, Kamchatka, Russia

    NASA Astrophysics Data System (ADS)

    Rowell, Colin

    We test two methods of 3-D acoustic source localization on volcanic explosions and small-scale jetting events at Karymsky Volcano, Kamchatka, Russia. Recent infrasound studies have provided evidence that volcanic jets produce low-frequency aerodynamic sound (jet noise) similar to that from man-made jet engines. Man-made jets are known to produce sound through turbulence along the jet axis, but discrimination of sources along the axis of a volcanic jet requires a network of sufficient topographic relief to attain resolution in the vertical dimension. At Karymsky Volcano, the topography of an eroded edifice adjacent to the active cone provided a platform for the atypical deployment of five infrasound sensors with intra-network relief of ~600 m in July 2012. A novel 3-D inverse localization method, srcLoc, is tested and compared against a more common grid-search semblance technique. Simulations using synthetic signals indicate that srcLoc is capable of determining vertical source locations for this network configuration to within ±150 m or better. However, srcLoc locations for explosions and jetting at Karymsky Volcano show a persistent overestimation of source elevation and underestimation of sound speed by an average of ~330 m and 25 m/s, respectively. The semblance method is able to produce more realistic source locations by fixing the sound speed to expected values of 335-340 m/s. The consistency of location errors for both explosions and jetting activity over a wide range of wind and temperature conditions points to the influence of topography. Explosion waveforms exhibit amplitude relationships and waveform distortion strikingly similar to those theorized by modeling studies of wave diffraction around the crater rim. We suggest delay of signals and apparent elevated source locations are due to altered raypaths and/or crater diffraction effects. 
Our results suggest the influence of topography in the vent region must be accounted for when attempting 3-D volcano acoustic source localization. Though the data presented here are insufficient to resolve noise sources for these jets, which are much smaller in scale than those of previous volcanic jet noise studies, similar techniques may be successfully applied to large volcanic jets in the future.

  19. Assessing the Impact of Local Agency Traffic Safety Training Using Ethnographic Techniques

    ERIC Educational Resources Information Center

    Colling, Timothy K.

    2010-01-01

    Traffic crashes are a significant source of loss of life, personal injury and financial expense in the United States. In 2008 there were 37,261 people killed and an estimated 2,346,000 people injured nationwide in motor vehicle traffic crashes. State and federal agencies are beginning to focus traffic safety improvement effort on local agency…

  20. Auditory Evidence Grids

    DTIC Science & Technology

    2006-01-01

    information of the robot (Figure 1) acquired via laser-based localization techniques. The results are maps of the global soundscape. The algorithmic...environments than noise maps. Furthermore, provided the acoustic localization algorithm can detect the sources, the soundscape can be mapped with many...gathering information about the auditory soundscape in which it is working. In addition to robustness in the presence of noise, it has also been

  1. Detection of localized inclusions of gold nanoparticles in Intralipid-1% by point-radiance spectroscopy

    NASA Astrophysics Data System (ADS)

    Grabtchak, Serge; Palmer, Tyler J.; Whelan, William M.

    2011-07-01

    Interstitial fiber-optic-based approaches used in both diagnostic and therapeutic applications rely on localized light-tissue interactions. We present an optical technique to identify spectrally and spatially specific exogenous chromophores in highly scattering turbid media. Point radiance spectroscopy is based on directional light collection at a single point with a side-firing fiber that can be rotated up to 360 deg. A side-firing fiber accepts light within a well-defined solid angle, thus potentially providing improved spatial resolution. Measurements were performed using an 800-μm diameter isotropic spherical diffuser coupled to a halogen light source and a 600 μm, ~43 deg cleaved fiber (i.e., the radiance detector). The background liquid scattering phantom was fabricated using 1% Intralipid. Light was collected in 1 deg increments through a 360 deg segment. Gold nanoparticles, placed into a 3.5-mm diameter capillary tube, were used as localized scatterers and absorbers introduced into the liquid phantom both on- and off-axis between source and detector. The localized optical inhomogeneity was detectable as an angular-resolved variation in the radiance polar plots. This technique is being investigated as a potential noninvasive optical modality for prostate cancer monitoring.

  2. Detection of localized inclusions of gold nanoparticles in Intralipid-1% by point-radiance spectroscopy.

    PubMed

    Grabtchak, Serge; Palmer, Tyler J; Whelan, William M

    2011-07-01

    Interstitial fiber-optic-based approaches used in both diagnostic and therapeutic applications rely on localized light-tissue interactions. We present an optical technique to identify spectrally and spatially specific exogenous chromophores in highly scattering turbid media. Point radiance spectroscopy is based on directional light collection at a single point with a side-firing fiber that can be rotated up to 360 deg. A side-firing fiber accepts light within a well-defined solid angle, thus potentially providing improved spatial resolution. Measurements were performed using an 800-μm diameter isotropic spherical diffuser coupled to a halogen light source and a 600 μm, ~43 deg cleaved fiber (i.e., the radiance detector). The background liquid scattering phantom was fabricated using 1% Intralipid. Light was collected in 1 deg increments through a 360 deg segment. Gold nanoparticles, placed into a 3.5-mm diameter capillary tube, were used as localized scatterers and absorbers introduced into the liquid phantom both on- and off-axis between source and detector. The localized optical inhomogeneity was detectable as an angular-resolved variation in the radiance polar plots. This technique is being investigated as a potential noninvasive optical modality for prostate cancer monitoring.

  3. Hearing in three dimensions: Sound localization

    NASA Technical Reports Server (NTRS)

    Wightman, Frederic L.; Kistler, Doris J.

    1990-01-01

    The ability to localize a source of sound in space is a fundamental component of the three-dimensional character of audio. For over a century, scientists have been trying to understand the physical and psychological processes and physiological mechanisms that subserve sound localization. This research has shown that important information about sound source position is provided by interaural differences in time of arrival, interaural differences in intensity, and the direction-dependent filtering provided by the pinnae. Progress has been slow, primarily because experiments on localization are technically demanding: control of stimulus parameters and quantification of the subjective experience are difficult problems. Recent advances, such as the ability to simulate a three-dimensional sound field over headphones, seem to offer potential for rapid progress. Research using the new techniques has already produced new information; it now seems that interaural time differences are a much more salient and dominant localization cue than previously believed.
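    The interaural-time-difference cue described above can be illustrated with a simple cross-correlation estimator. The ear spacing, the d·sin(θ)/c delay model, and the noise signal below are illustrative assumptions, not the authors' apparatus:

```python
import numpy as np

FS = 44100.0
C, EAR_SPACING = 343.0, 0.18   # sound speed (m/s); assumed ear spacing (m)

def itd_estimate(left, right, fs=FS):
    """Estimate the interaural time difference via the cross-correlation peak."""
    corr = np.correlate(left, right, mode="full")
    lag = np.argmax(corr) - (len(right) - 1)   # positive: left ear lags
    return lag / fs

# synthetic broadband source 30 deg to the right: the right ear leads
rng = np.random.default_rng(2)
azimuth = np.deg2rad(30.0)
itd_true = EAR_SPACING * np.sin(azimuth) / C   # simple d*sin(theta)/c model
delay = int(round(itd_true * FS))              # about 12 samples at 44.1 kHz

sig = rng.standard_normal(4096)
right = sig
left = np.concatenate([np.zeros(delay), sig[:-delay]])

itd = itd_estimate(left, right)
az_est = np.degrees(np.arcsin(itd * C / EAR_SPACING))
print(f"estimated azimuth: {az_est:.1f} deg")
```

    The recovered azimuth is quantized by the sample rate, one reason headphone simulation of full three-dimensional sound fields uses finer-grained interpolation of measured head filters.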

  4. Kalman Filters for Time Delay of Arrival-Based Source Localization

    NASA Astrophysics Data System (ADS)

    Klee, Ulrich; Gehrig, Tobias; McDonough, John

    2006-12-01

    In this work, we propose an algorithm for acoustic source localization based on time delay of arrival (TDOA) estimation. In earlier work by other authors, an initial closed-form approximation was first used to estimate the true position of the speaker followed by a Kalman filtering stage to smooth the time series of estimates. In the proposed algorithm, this closed-form approximation is eliminated by employing a Kalman filter to directly update the speaker's position estimate based on the observed TDOAs. In particular, the TDOAs comprise the observation associated with an extended Kalman filter whose state corresponds to the speaker's position. We tested our algorithm on a data set consisting of seminars held by actual speakers. Our experiments revealed that the proposed algorithm provides source localization accuracy superior to the standard spherical and linear intersection techniques. Moreover, the proposed algorithm, although relying on an iterative optimization scheme, proved efficient enough for real-time operation.
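    A minimal sketch of the proposed idea, feeding the observed TDOAs directly into an extended Kalman filter whose state is the speaker's position, might look as follows (the microphone layout, noise levels, and random-walk process model are hypothetical, not the paper's seminar setup):

```python
import numpy as np

C = 343.0  # speed of sound (m/s)

def tdoa_model(pos, mic_pairs):
    """Predicted TDOAs (seconds) at each microphone pair for a source position."""
    return np.array([(np.linalg.norm(pos - a) - np.linalg.norm(pos - b)) / C
                     for a, b in mic_pairs])

def tdoa_jacobian(pos, mic_pairs):
    """Jacobian of the TDOA observation with respect to the source position."""
    J = np.zeros((len(mic_pairs), 2))
    for i, (a, b) in enumerate(mic_pairs):
        J[i] = ((pos - a) / np.linalg.norm(pos - a)
                - (pos - b) / np.linalg.norm(pos - b)) / C
    return J

def ekf_step(x, P, z, mic_pairs, q=1e-4, r=1e-9):
    """One EKF cycle: random-walk prediction, then correction with observed TDOAs."""
    P = P + q * np.eye(2)                         # predict (speaker may move)
    H = tdoa_jacobian(x, mic_pairs)
    S = H @ P @ H.T + r * np.eye(len(mic_pairs))  # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)                # Kalman gain
    x = x + K @ (z - tdoa_model(x, mic_pairs))    # correct
    P = (np.eye(2) - K @ H) @ P
    return x, P

# hypothetical 4 m x 3 m room: 4 microphones, stationary speaker at (2.0, 1.5)
mics = [np.array(p, float) for p in [(0, 0), (4, 0), (4, 3), (0, 3)]]
pairs = [(mics[0], m) for m in mics[1:]]          # TDOAs relative to mic 0
truth = np.array([2.0, 1.5])
rng = np.random.default_rng(1)

x, P = np.array([1.0, 1.0]), np.eye(2)            # rough initial guess
for _ in range(50):
    z = tdoa_model(truth, pairs) + 1e-6 * rng.standard_normal(len(pairs))
    x, P = ekf_step(x, P, z, pairs)
print("estimated speaker position:", x)
```

    Note how no closed-form intermediate position estimate is needed: the nonlinear TDOA model enters the filter directly through its Jacobian, which is the core of the proposed algorithm.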

  5. Modeling of reverberant room responses for two-dimensional spatial sound field analysis and synthesis.

    PubMed

    Bai, Mingsian R; Li, Yi; Chiang, Yi-Hao

    2017-10-01

    A unified framework is proposed for the analysis and synthesis of two-dimensional spatial sound fields in reverberant environments. In the sound field analysis (SFA) phase, an unbaffled 24-element circular microphone array is utilized to encode the sound field based on plane-wave decomposition. Depending on the sparsity of the sound sources, the SFA stage can be implemented in two ways. For sparse-source scenarios, a one-stage algorithm based on compressive sensing is utilized. Alternatively, a two-stage algorithm can be used, in which a minimum power distortionless response beamformer localizes the sources and Tikhonov regularization extracts the source amplitudes. In the sound field synthesis (SFS) phase, a 32-element rectangular loudspeaker array is employed to decode the target sound field using a pressure-matching technique. To establish the room response model required in the pressure-matching step of the SFS phase, an SFA technique for nonsparse-source scenarios is utilized. The choice of regularization parameters is vital to the reproduced sound field. In the SFS phase, three SFS approaches are compared in terms of localization performance and voice reproduction quality. Experimental results obtained in a reverberant room are presented and reveal that an accurate room response model is vital to immersive rendering of the reproduced sound field.
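    The pressure-matching step amounts to a Tikhonov-regularized least-squares solve for the loudspeaker driving weights. The sketch below substitutes free-field monopole Green's functions for the measured room response model; the circular array geometry, frequency, and regularization value are illustrative assumptions, not the paper's rectangular-array setup:

```python
import numpy as np

def greens(src, pts, k):
    """Free-field monopole Green's functions from each source to each point."""
    r = np.linalg.norm(pts[:, None, :] - src[None, :, :], axis=2)
    return np.exp(-1j * k * r) / (4 * np.pi * r)

def pressure_match(G, p_target, lam):
    """Tikhonov-regularized weights: minimize ||G w - p||^2 + lam ||w||^2."""
    n = G.shape[1]
    return np.linalg.solve(G.conj().T @ G + lam * np.eye(n), G.conj().T @ p_target)

# hypothetical setup: 16 loudspeakers on a 1.5 m circle, virtual source at (2.5, 0)
f, c = 500.0, 343.0
k = 2 * np.pi * f / c
phi = np.linspace(0, 2 * np.pi, 16, endpoint=False)
speakers = 1.5 * np.column_stack([np.cos(phi), np.sin(phi)])

# control points inside the listening area (radii 0.2 m and 0.4 m)
th = np.linspace(0, 2 * np.pi, 20, endpoint=False)
ring = np.column_stack([np.cos(th), np.sin(th)])
ctrl = np.concatenate([0.2 * ring, 0.4 * ring])

G = greens(speakers, ctrl, k)                       # plant (transfer) matrix
p_target = greens(np.array([[2.5, 0.0]]), ctrl, k)[:, 0]

w = pressure_match(G, p_target, lam=1e-6)
err = np.linalg.norm(G @ w - p_target) / np.linalg.norm(p_target)
print(f"relative reproduction error at control points: {err:.3e}")
```

    In the paper's framework the columns of G come from the SFA-derived room response model rather than free-field assumptions, which is why the accuracy of that model dominates reproduction quality.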

  6. Explosion localization and characterization via infrasound using numerical modeling

    NASA Astrophysics Data System (ADS)

    Fee, D.; Kim, K.; Iezzi, A. M.; Matoza, R. S.; Jolly, A. D.; De Angelis, S.; Diaz Moreno, A.; Szuberla, C.

    2017-12-01

    Numerous methods have been applied to locate, detect, and characterize volcanic and anthropogenic explosions using infrasound. Far-field localization techniques typically use back-azimuths from multiple arrays (triangulation) or Reverse Time Migration (RTM, or back-projection). At closer ranges, networks surrounding a source may use Time Difference of Arrival (TDOA), semblance, station-pair double difference, etc. However, at volcanoes and in regions with topography or obstructions that block the direct path of sound, recent studies have shown that numerical modeling is necessary to provide an accurate source location. A heterogeneous and moving atmosphere (winds) may also affect the location. The time reversal mirror (TRM) application of Kim et al. (2015) back-propagates the wavefield using a Finite Difference Time Domain (FDTD) algorithm, with the source corresponding to the location of peak convergence. Although it provides high-resolution source localization and can account for complex wave propagation, TRM is computationally expensive and limited to individual events. Here we present a new technique, termed RTM-FDTD, which integrates TRM and FDTD. Travel time and transmission loss information is computed from each station to the entire potential source grid from 3-D Green's functions derived via FDTD. The wave energy is then back-projected and stacked at each grid point, with the maximum corresponding to the likely source. We apply our method to detect and characterize thousands of explosions from Yasur Volcano, Vanuatu, and Etna Volcano, Italy, which both exhibit complex wave propagation and multiple source locations. We compare our results with those from more traditional methods (e.g. semblance), and suggest that our method is preferable as it is computationally less expensive than TRM but still integrates numerical modeling. RTM-FDTD could be applied to volcanic and other anthropogenic sources at a wide variety of ranges and scenarios. Kim, K., Lees, J.M., 2015. 
Imaging volcanic infrasound sources using time reversal mirror algorithm. Geophysical Journal International 202, 1663-1676.
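    The back-projection-and-stacking step can be illustrated in miniature. The sketch below substitutes straight-ray travel times in a homogeneous atmosphere for the 3-D FDTD Green's functions (so it omits exactly the topographic and transmission-loss effects the paper addresses), with a hypothetical four-station network:

```python
import numpy as np

C, FS = 340.0, 200.0  # homogeneous sound speed (m/s) and sample rate (Hz)

def back_project(waves, stations, grid, fs=FS, c=C):
    """Shift each station's record back by its travel time to a candidate
    node and stack; the stack is largest where the signals align."""
    t = np.arange(waves.shape[1]) / fs
    best, best_node = -np.inf, None
    for node in grid:
        stack = np.zeros_like(t)
        for w, s in zip(waves, stations):
            tau = np.linalg.norm(node - s) / c               # travel time
            stack += np.interp(t + tau, t, w, left=0, right=0)
        peak = np.max(np.abs(stack))
        if peak > best:
            best, best_node = peak, node
    return best_node

# hypothetical network: 4 stations around a 1 km x 1 km area, source at (600, 400)
stations = np.array([[0.0, 0.0], [1000.0, 0.0], [1000.0, 1000.0], [0.0, 1000.0]])
src = np.array([600.0, 400.0])
t = np.arange(int(4 * FS)) / FS
pulse = lambda tt: np.exp(-((tt - 0.5) / 0.05) ** 2)  # explosion-like pulse
waves = np.array([pulse(t - np.linalg.norm(src - s) / C) for s in stations])

grid = [np.array([x, y]) for x in np.arange(0.0, 1001.0, 50.0)
        for y in np.arange(0.0, 1001.0, 50.0)]
print("stacked-energy maximum at:", back_project(waves, stations, grid))
```

    In RTM-FDTD the per-node delays (and amplitude corrections) come from pre-computed FDTD Green's functions instead of the straight-ray times used here, which is what lets it handle diffraction around crater rims and other terrain.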

  7. Quantifying sources of black carbon in western North America using observationally based analysis and an emission tagging technique in the Community Atmosphere Model

    DOE PAGES

    Zhang, Rudong; Wang, Hailong; Hegg, D. A.; ...

    2015-11-18

    The Community Atmosphere Model (CAM5), equipped with a technique to tag black carbon (BC) emissions by source regions and types, has been employed to establish source–receptor relationships for atmospheric BC and its deposition to snow over western North America. The CAM5 simulation was conducted with meteorological fields constrained by reanalysis for year 2013, when measurements of BC in both near-surface air and snow are available for model evaluation. We find that CAM5 has a significant low bias in predicted mixing ratios of BC in snow but only a small low bias in predicted atmospheric concentrations over the northwestern USA and western Canada. Even with a strong low bias in snow mixing ratios, radiative transfer calculations show that the BC-in-snow darkening effect is substantially larger than the surface dimming effect of atmospheric BC. Local sources contribute more to near-surface atmospheric BC and to deposition than distant sources, while the latter are more important in the middle and upper troposphere where wet removal is relatively weak. Fossil fuel (FF) is the dominant source type for the total column BC burden over the two regions. FF is also the dominant local source type for BC column burden, deposition, and near-surface BC, while for all distant source regions combined the contribution of biomass/biofuel (BB) is larger than that of FF. An observationally based positive matrix factorization (PMF) analysis of the snow-impurity chemistry is conducted to quantitatively evaluate the CAM5 BC source-type attribution. While CAM5 is qualitatively consistent with the PMF analysis with respect to partitioning of BC originating from BB and FF emissions, it significantly underestimates the relative contribution of BB. In addition to a possible low bias in the BB emissions used in the simulation, the model is likely missing a significant source of snow darkening from local soil found in the observations.
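
    The PMF step factorizes a samples-by-species concentration matrix into non-negative source contributions and source profiles. A minimal numpy-only sketch of such a factorization, using Lee–Seung multiplicative updates; note that true PMF additionally weights residuals by per-value measurement uncertainties, which this simplified version omits.

```python
import numpy as np

def pmf_like_factorization(X, k, n_iter=1000, seed=0):
    """Factor a non-negative samples-by-species matrix X ≈ W @ H.

    W holds per-sample source contributions, H the source profiles
    (species signatures). Lee-Seung multiplicative updates preserve
    non-negativity; real PMF also weights by measurement uncertainty.
    """
    rng = np.random.default_rng(seed)
    n, m = X.shape
    W = rng.random((n, k)) + 0.1
    H = rng.random((k, m)) + 0.1
    eps = 1e-12  # guards against division by zero
    for _ in range(n_iter):
        H *= (W.T @ X) / (W.T @ W @ H + eps)
        W *= (X @ H.T) / (W @ H @ H.T + eps)
    return W, H
```

    Applied to snow-impurity chemistry, the recovered profiles can then be matched against known BB and FF signatures to apportion BC by source type.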

  8. Contaminant point source localization error estimates as functions of data quantity and model quality

    DOE PAGES

    Hansen, Scott K.; Vesselinov, Velimir Valentinov

    2016-10-01

    We develop empirically-grounded error envelopes for localization of a point contamination release event in the saturated zone of a previously uncharacterized heterogeneous aquifer into which a number of plume-intercepting wells have been drilled. We assume that the flow direction in the aquifer is known exactly and that the velocity is known to within a factor of two of our best guess from well observations prior to source identification. Other aquifer and source parameters must be estimated by interpretation of well breakthrough data via the advection-dispersion equation (ADE). We employ high-performance computing to generate numerous random realizations of aquifer parameters and well locations, simulate well breakthrough data, and then employ unsupervised machine optimization techniques to estimate the most likely spatial (or space-time) location of the source. Tabulating the accuracy of these estimates from the multiple realizations, we relate the size of 90% and 95% confidence envelopes to the data quantity (number of wells) and model quality (fidelity of the ADE interpretation model to actual concentrations in a heterogeneous aquifer with channelized flow). We find that for purely spatial localization of the contaminant source, increased data quantities can make up for reduced model quality. For space-time localization, we find similar qualitative behavior, but significantly degraded spatial localization reliability and less improvement from extra data collection. Since the space-time source localization problem is much more challenging, we also tried a multiple-initial-guess optimization strategy. This greatly enhanced performance, but gains from additional data collection remained limited.
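
    The tabulation step that turns Monte Carlo localization results into confidence envelopes is simple to sketch: collect the localization error of each realization and take empirical percentiles. The synthetic scatter below is purely illustrative.

```python
import numpy as np

def error_envelopes(est_locs, true_locs, levels=(90, 95)):
    """Empirical confidence envelopes from Monte Carlo localizations.

    est_locs, true_locs : (n_realizations, n_dim) arrays of estimated
    and actual source coordinates. Returns the radius within which the
    stated percentage of localization errors fall.
    """
    errors = np.linalg.norm(est_locs - true_locs, axis=1)
    return {p: float(np.percentile(errors, p)) for p in levels}

# Hypothetical usage: 1000 realizations of a 2-D localization problem
rng = np.random.default_rng(1)
true_xy = np.zeros((1000, 2))
est_xy = rng.normal(0.0, 10.0, size=(1000, 2))  # synthetic estimation scatter
env = error_envelopes(est_xy, true_xy)          # e.g. {90: ..., 95: ...}
```

    Repeating this for different well counts and interpretation models yields the envelope-versus-data-quantity curves discussed in the abstract.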

  9. Luminous Infrared Sources in the Local Group: Identifying the Missing Links in Massive Star Evolution

    NASA Astrophysics Data System (ADS)

    Britavskiy, N.; Bonanos, A. Z.; Mehner, A.

    2015-01-01

    We present the first systematic survey of dusty massive stars (RSGs, LBVs, sgB[e]) in nearby galaxies, with the goal of understanding their importance in massive star evolution. Using the fact that these stars are bright in mid-infrared colors due to dust, we provide a technique for selecting and identifying dusty evolved stars based on the results of Bonanos et al. (2009, 2010), Britavskiy et al. (2014), and archival Spitzer/IRAC photometry. We present the results of our spectroscopic follow-up of luminous infrared sources in the Local Group dwarf irregular galaxies: Pegasus, Phoenix, Sextans A and WLM. The survey aims to complete the census of dusty massive stars in the Local Group.

  10. fMRI activation patterns in an analytic reasoning task: consistency with EEG source localization

    NASA Astrophysics Data System (ADS)

    Li, Bian; Vasanta, Kalyana C.; O'Boyle, Michael; Baker, Mary C.; Nutter, Brian; Mitra, Sunanda

    2010-03-01

    Functional magnetic resonance imaging (fMRI) is used to model brain activation patterns associated with various perceptual and cognitive processes as reflected by the hemodynamic (BOLD) response. While many sensory and motor tasks are associated with relatively simple activation patterns in localized regions, higher-order cognitive tasks may produce activity in many different brain areas involving complex neural circuitry. We applied a recently proposed probabilistic independent component analysis technique (PICA) to determine the true dimensionality of the fMRI data and used EEG localization to identify the common activated patterns (mapped as Brodmann areas) associated with a complex cognitive task like analytic reasoning. Our preliminary study suggests that a hybrid GLM/PICA analysis may reveal additional regions of activation (beyond simple GLM) that are consistent with electroencephalography (EEG) source localization patterns.

  11. Acoustic Localization of Breakdown in Radio Frequency Accelerating Cavities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lane, Peter Gwin

    Current designs for muon accelerators require high-gradient radio frequency (RF) cavities to be placed in solenoidal magnetic fields. These fields help contain and efficiently reduce the phase space volume of source muons in order to create a usable muon beam for collider and neutrino experiments. In this context and in general, the use of RF cavities in strong magnetic fields has its challenges. It has been found that placing normal conducting RF cavities in strong magnetic fields reduces the threshold at which RF cavity breakdown occurs. To aid the effort to study RF cavity breakdown in magnetic fields, it would be helpful to have a diagnostic tool which can localize the source of breakdown sparks inside the cavity. These sparks generate thermal shocks to small regions of the inner cavity wall that can be detected and localized using microphones attached to the outer cavity surface. Details on RF cavity sound sources as well as the hardware, software, and algorithms used to localize the source of sound emitted from breakdown thermal shocks are presented. In addition, results from simulations and experiments on three RF cavities, namely the Aluminum Mock Cavity, the High-Pressure Cavity, and the Modular Cavity, are also given. These results demonstrate the validity and effectiveness of the described technique for acoustic localization of breakdown.

  12. Acoustic localization of breakdown in radio frequency accelerating cavities

    NASA Astrophysics Data System (ADS)

    Lane, Peter

    Current designs for muon accelerators require high-gradient radio frequency (RF) cavities to be placed in solenoidal magnetic fields. These fields help contain and efficiently reduce the phase space volume of source muons in order to create a usable muon beam for collider and neutrino experiments. In this context and in general, the use of RF cavities in strong magnetic fields has its challenges. It has been found that placing normal conducting RF cavities in strong magnetic fields reduces the threshold at which RF cavity breakdown occurs. To aid the effort to study RF cavity breakdown in magnetic fields, it would be helpful to have a diagnostic tool which can localize the source of breakdown sparks inside the cavity. These sparks generate thermal shocks to small regions of the inner cavity wall that can be detected and localized using microphones attached to the outer cavity surface. Details on RF cavity sound sources as well as the hardware, software, and algorithms used to localize the source of sound emitted from breakdown thermal shocks are presented. In addition, results from simulations and experiments on three RF cavities, namely the Aluminum Mock Cavity, the High-Pressure Cavity, and the Modular Cavity, are also given. These results demonstrate the validity and effectiveness of the described technique for acoustic localization of breakdown.
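
    A basic building block of such microphone-based localization is estimating the arrival-time difference between a pair of sensors, commonly done by picking the peak of their cross-correlation. A minimal sketch with a synthetic pulse; the function and signals here are illustrative, not taken from the thesis.

```python
import numpy as np

def delay_between(sig_a, sig_b, fs):
    """Estimate the delay of sig_b relative to sig_a (in seconds) from
    the peak of their full cross-correlation."""
    corr = np.correlate(sig_b, sig_a, mode="full")
    lag = np.argmax(corr) - (len(sig_a) - 1)  # samples of relative delay
    return lag / fs

# Hypothetical check: a pulse arriving 5 ms later at the second microphone
fs = 10_000
t = np.arange(0, 0.1, 1 / fs)
pulse = np.exp(-((t - 0.020) / 0.002) ** 2)
delayed = np.exp(-((t - 0.025) / 0.002) ** 2)
dt = delay_between(pulse, delayed, fs)
```

    With delays from several microphone pairs in hand, the spark position follows from a multilateration fit against the cavity geometry.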

  13. Acoustic Network Localization and Interpretation of Infrasonic Pulses from Lightning

    NASA Astrophysics Data System (ADS)

    Arechiga, R. O.; Johnson, J. B.; Badillo, E.; Michnovicz, J. C.; Thomas, R. J.; Edens, H. E.; Rison, W.

    2011-12-01

    We improve on the localization accuracy of thunder sources and identify infrasonic pulses that are correlated across a network of acoustic arrays. We attribute these pulses to electrostatic charge relaxation (collapse of the electric field) and attempt to model their spatial extent and acoustic source strength. Toward this objective we have developed a single audio range (20-15,000 Hz) acoustic array and a 4-station network of broadband (0.01-500 Hz) microphone arrays with aperture of ~45 m. The network has an aperture of 1700 m and was installed during the summers of 2009-2011 in the Magdalena mountains of New Mexico, an area that is subject to frequent lightning activity. We are exploring a new technique based on inverse theory that integrates information from the audio range and the network of broadband acoustic arrays to locate thunder sources more accurately than can be achieved with a single array. We evaluate the performance of the technique by comparing the location of thunder sources with RF sources located by the lightning mapping array (LMA) of Langmuir Laboratory at New Mexico Tech. We will show results of this technique for lightning flashes that occurred in the vicinity of our network of acoustic arrays and over the LMA. We will use acoustic network detection of infrasonic pulses together with LMA data and electric field measurements to estimate the spatial distribution of the charge (within the cloud) that is used to produce a lightning flash, and will try to quantify volumetric charges (charge magnitude) within clouds.
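
    The network localization can be illustrated with a simple grid-search inversion over candidate thunder-source positions, minimizing the misfit between observed and predicted time differences of arrival. This is a simplified stand-in for the inverse-theory technique described above; all geometry and numbers are hypothetical.

```python
import numpy as np

def locate_tdoa(stations, tdoas, ref=0, c=343.0, grid=None):
    """Grid-search source location from time differences of arrival.

    stations : (n, 2) sensor coordinates [m]
    tdoas    : (n,) arrival-time differences relative to station `ref` [s]
    c        : sound speed [m/s]
    Returns the grid point minimizing the squared TDOA misfit.
    """
    if grid is None:
        xs = np.linspace(-1000.0, 1000.0, 201)
        grid = np.array([(x, y) for x in xs for y in xs])
    d = np.linalg.norm(grid[:, None, :] - stations[None, :, :], axis=2)
    pred = (d - d[:, [ref]]) / c            # predicted TDOAs per grid point
    misfit = ((pred - tdoas) ** 2).sum(axis=1)
    return grid[np.argmin(misfit)]
```

    In practice the grid would be 3-D and the misfit weighted by arrival-pick uncertainties, but the inversion structure is the same.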

  14. Water-quality data collected to determine the presence, source, and concentration of lead in the drinking water supply at Pipe Spring National Monument, northern Arizona

    USGS Publications Warehouse

    Macy, Jamie P.; Sharrow, David; Unema, Joel

    2013-01-01

    Pipe Spring National Monument in northern Arizona contains historically significant springs. The groundwater source of these springs is the same aquifer that presently is an important source of drinking water for the Pipe Spring National Monument facilities, the Kaibab Paiute Tribe, and the community of Moccasin. The Kaibab Paiute Tribe monitored lead concentrations from 2004 to 2009; some of the analytical results exceeded the U.S. Environmental Protection Agency action level for treatment technique for lead of 15 parts per billion. The National Park Service and the Kaibab Paiute Tribe were concerned that the local groundwater system that provides the domestic water supply might be contaminated with lead. Lead concentrations in water samples collected by the U.S. Geological Survey from three springs, five wells, two water storage tanks, and one faucet were less than the U.S. Environmental Protection Agency action level for treatment technique. Lead concentrations of rock samples representative of the rock units in which the local groundwater resides were less than 22 parts per million.

  15. Caresoil: A multidisciplinary Project to characterize, remediate, monitor and evaluate the risk of contaminated soils in Madrid (Spain)

    NASA Astrophysics Data System (ADS)

    Muñoz-Martín, Alfonso; Antón, Loreto; Granja, Jose Luis; Villarroya, Fermín; Montero, Esperanza; Rodríguez, Vanesa

    2016-04-01

    Soil contamination can come from diffuse sources (air deposition, agriculture, etc.) or from local sources, the latter being related to potentially soil-contaminating anthropogenic activities. According to data from the EU, in Spain, and particularly in the Autonomous Community of Madrid, heavy metals, toxic organic compounds (including Non-Aqueous Phase Liquids, NAPLs), and combinations of both can be considered the main problem posed by point sources of soil contamination. The five aspects that will be applied in the Caresoil Program (S2013/MAE-2739) in the analysis and remediation of local soil contamination are: 1) locating the source of contamination and characterizing the affected soil and aquifer, 2) evaluating the dispersion of the plume, 3) applying effective remediation techniques, 4) monitoring the evolution of the contaminated soil, and 5) risk analysis throughout this process. These aspects involve advanced technologies (hydrogeology, geophysics, geochemistry, etc.) that require new knowledge, and thus the contribution of several research groups specialized in the fields cited above, which together make up the Caresoil Program. Currently, two cases concerning hydrocarbon spills, as representative examples of local soil contamination in the Madrid area, are being studied. The first is being remediated, and we are monitoring this process to evaluate its effectiveness. At the second location we are defining the extent of contamination in the soil and aquifer in order to select the most effective remediation technique.

  16. Introduction to acoustic emission

    NASA Technical Reports Server (NTRS)

    Possa, G.

    1983-01-01

    Typical acoustic emission signal characteristics are described and techniques which localize the signal source by processing the acoustic delay data from multiple sensors are discussed. The instrumentation, which includes sensors, amplifiers, pulse counters, a minicomputer and output devices is examined. Applications are reviewed.

  17. Time-dependent wave splitting and source separation

    NASA Astrophysics Data System (ADS)

    Grote, Marcus J.; Kray, Marie; Nataf, Frédéric; Assous, Franck

    2017-02-01

    Starting from classical absorbing boundary conditions, we propose a method for the separation of time-dependent scattered wave fields due to multiple sources or obstacles. In contrast to previous techniques, our method is local in space and time, deterministic, and avoids a priori assumptions on the frequency spectrum of the signal. Numerical examples in two space dimensions illustrate the usefulness of wave splitting for time-dependent scattering problems.

  18. A fast and fully automatic registration approach based on point features for multi-source remote-sensing images

    NASA Astrophysics Data System (ADS)

    Yu, Le; Zhang, Dengrong; Holden, Eun-Jung

    2008-07-01

    Automatic registration of multi-source remote-sensing images is a difficult task, as it must deal with the varying illuminations and resolutions of the images, different perspectives, and local deformations within the images. This paper proposes a fully automatic and fast non-rigid image registration technique that addresses these issues. The proposed technique performs a pre-registration process that coarsely aligns the input image to the reference image by automatically detecting matching points using the scale-invariant feature transform (SIFT) method and an affine transformation model. Once the coarse registration is completed, it performs a fine-scale registration process based on a piecewise linear transformation technique using feature points detected by the Harris corner detector. The registration process first finds, in succession, tie point pairs between the input and the reference image by detecting Harris corners and applying a cross-matching strategy based on a wavelet pyramid for fast search speed. Tie point pairs with large errors are pruned by an error-checking step. The input image is then rectified using triangulated irregular networks (TINs) to deal with irregular local deformations caused by fluctuations of the terrain. For each triangular facet of the TIN, an affine transformation is estimated and applied for rectification. Experiments with QuickBird, SPOT5, SPOT4, and TM remote-sensing images of the Hangzhou area in China demonstrate the efficiency and accuracy of the proposed technique for multi-source remote-sensing image registration.
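
    The affine model used in both the coarse alignment and the per-facet rectification can be fit from matched point pairs by least squares. A minimal numpy sketch; real pipelines typically add RANSAC-style outlier rejection to the SIFT matches before this step.

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares 2-D affine transform mapping src -> dst.

    src, dst : (n, 2) matched point coordinates, n >= 3.
    Returns a 2x3 matrix A such that dst ≈ apply_affine(A, src).
    """
    n = src.shape[0]
    X = np.hstack([src, np.ones((n, 1))])        # homogeneous coordinates
    A, *_ = np.linalg.lstsq(X, dst, rcond=None)  # solves X @ A = dst
    return A.T                                   # 2x3 affine matrix

def apply_affine(A, pts):
    """Apply a 2x3 affine matrix to (n, 2) points."""
    return np.hstack([pts, np.ones((len(pts), 1))]) @ A.T
```

    The same `fit_affine` applies per TIN facet in the fine-scale stage, using only the three tie points of each triangle.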

  19. Head movement compensation in real-time magnetoencephalographic recordings.

    PubMed

    Little, Graham; Boe, Shaun; Bardouille, Timothy

    2014-01-01

    Neurofeedback- and brain-computer interface (BCI)-based interventions can be implemented using real-time analysis of magnetoencephalographic (MEG) recordings. Head movement during MEG recordings, however, can lead to inaccurate estimates of brain activity, reducing the efficacy of the intervention. Most real-time applications in MEG have utilized analyses that do not correct for head movement. Effective means of correcting for head movement are needed to optimize the use of MEG in such applications. Here we provide preliminary validation of a novel analysis technique, real-time source estimation (rtSE), that measures head movement and generates corrected current source time course estimates in real time. rtSE was applied while recording a calibrated phantom to determine phantom position localization accuracy and source amplitude estimation accuracy under stationary and moving conditions. Results were compared to off-line analysis methods to assess the validity of the rtSE technique. The rtSE method allowed for accurate estimation of current source activity at the source level in real time, and accounted for movement of the source due to changes in phantom position. The rtSE technique requires modifications and specialized analysis of the following MEG workflow steps: data acquisition, head position estimation, source localization, and real-time source estimation. This work explains the technical details and validates each of these steps.

  20. Study on acoustic emission source localization of 16Mn structural steel of high temperature deformation

    NASA Astrophysics Data System (ADS)

    Zhang, Yubo; Deng, Muhan; Yang, Rui; Jin, Feixiang

    2017-09-01

    The acoustic emission (AE) source location technique for deformation damage of 16Mn steel in a high-temperature environment is studied using the linear time-difference-of-arrival (TDOA) location method. The distribution characteristics of strain-induced AE source signals from tensile specimens at 20°C and 400°C were investigated. The located AE events are found to cluster near the eventual fracture site, which makes it possible to identify the stress concentration that leads to fracture.

  1. Three-Dimensional Photoactivated Localization Microscopy with Genetically Expressed Probes

    PubMed Central

    Temprine, Kelsey; York, Andrew G.; Shroff, Hari

    2017-01-01

    Photoactivated localization microscopy (PALM) and related single-molecule imaging techniques enable biological image acquisition at ~20 nm lateral and ~50–100 nm axial resolution. Although such techniques were originally demonstrated on single imaging planes close to the coverslip surface, recent technical developments now enable the 3D imaging of whole fixed cells. We describe methods for converting a 2D PALM into a system capable of acquiring such 3D images, with a particular emphasis on instrumentation that is compatible with choosing relatively dim, genetically expressed photoactivatable fluorescent proteins (PA-FPs) as PALM probes. After reviewing the basics of 2D PALM, we detail astigmatic and multiphoton imaging approaches well suited to working with PA-FPs. We also discuss the use of open-source localization software appropriate for 3D PALM. PMID:25391803
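
    In the astigmatic approach mentioned above, each molecule's axial position is read from the ellipticity of its PSF via a calibration curve measured with a bead stepped through known z positions. A minimal sketch of that lookup; the calibration curves below are assumed, illustrative shapes, not measured values.

```python
import numpy as np

# Hypothetical astigmatism calibration: PSF widths along x and y (nm)
# recorded while stepping a fluorescent bead through known z positions.
z_cal = np.linspace(-400.0, 400.0, 81)                   # nm
wx_cal = 150 * np.sqrt(1 + ((z_cal - 200) / 300) ** 2)   # assumed curve
wy_cal = 150 * np.sqrt(1 + ((z_cal + 200) / 300) ** 2)   # assumed curve

def z_from_widths(wx, wy):
    """Look up z from the measured width difference wx - wy, which is
    monotonic in z for this calibration."""
    diff_cal = wx_cal - wy_cal
    order = np.argsort(diff_cal)          # np.interp needs ascending x
    return float(np.interp(wx - wy, diff_cal[order], z_cal[order]))
```

    In a full 3D PALM pipeline the widths come from elliptical-Gaussian fits to each single-molecule image, and the lateral position from the fit centroid.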

  2. Passive Sensor Integration for Vehicle Self-Localization in Urban Traffic Environment †

    PubMed Central

    Gu, Yanlei; Hsu, Li-Ta; Kamijo, Shunsuke

    2015-01-01

    This research proposes an accurate vehicular positioning system which can achieve lane-level performance in urban canyons. Multiple passive sensors, which include Global Navigation Satellite System (GNSS) receivers, onboard cameras and inertial sensors, are integrated in the proposed system. As the main source for the localization, the GNSS technique suffers from Non-Line-Of-Sight (NLOS) propagation and multipath effects in urban canyons. This paper proposes to employ a novel GNSS positioning technique in the integration. The employed GNSS technique reduces the multipath and NLOS effects by using the 3D building map. In addition, the inertial sensor can describe the vehicle motion, but has a drift problem as time increases. This paper develops vision-based lane detection, which is firstly used for controlling the drift of the inertial sensor. Moreover, the lane keeping and changing behaviors are extracted from the lane detection function, and further reduce the lateral positioning error in the proposed localization system. We evaluate the integrated localization system in the challenging city urban scenario. The experiments demonstrate the proposed method has sub-meter accuracy with respect to mean positioning error. PMID:26633420

  3. Quantifying sources of methane and light alkanes in the Los Angeles Basin, California

    NASA Astrophysics Data System (ADS)

    Peischl, Jeff; Ryerson, Thomas; Atlas, Elliot; Blake, Donald; Brioude, Jerome; Daube, Bruce; de Gouw, Joost; Frost, Gregory; Gentner, Drew; Gilman, Jessica; Goldstein, Allen; Harley, Robert; Holloway, John; Kuster, William; Santoni, Gregory; Trainer, Michael; Wofsy, Steven; Parrish, David

    2013-04-01

    We use ambient measurements to apportion the relative contributions of different source sectors to the methane (CH4) emissions budget of a U.S. megacity. This approach uses ambient measurements of methane and C2-C5 alkanes (ethane through pentanes) and includes source composition information to distinguish between methane emitted from landfills and feedlots, wastewater treatment plants, tailpipe emissions, leaks of dry natural gas in pipelines and/or local seeps, and leaks of locally produced (unprocessed) natural gas. Source composition information can be taken from existing tabulations or developed by direct sampling of emissions using a mobile platform. By including C2-C5 alkane information, a linear combination of these source signatures can be found to match the observed atmospheric enhancement ratios and determine relative emission strengths. We apply this technique to apportion CH4 emissions in Los Angeles, CA (L.A.) using data from the CalNex field project in 2010. Our analysis of L.A. atmospheric data shows the two largest CH4 sources in the city are emissions of gas from pipelines and/or from geologic seeps (47%), and emissions from landfills (40%). Local oil and gas production is a relatively minor source of CH4, contributing 8% of total CH4 emissions in L.A. Absolute CH4 emission rates are derived by multiplying the observed CH4/CO enhancement ratio by State of California inventory values for carbon monoxide (CO) emissions in Los Angeles. Apportioning this total suggests that emissions from the combined natural and anthropogenic gas sources account for the differences between top-down and bottom-up CH4 estimates previously published for Los Angeles. Further, the total CH4 emission attributed in our analysis to local gas extraction represents 17% of local production. While a derived leak rate of 17% of local production may seem unrealistically high, it is qualitatively consistent with the 12% reported in a recent state inventory survey of the L.A. oil and gas industry.
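
    The linear-combination fit described above amounts to a non-negative least-squares problem: find non-negative source strengths whose mixture of alkane-to-CH4 signatures reproduces the observed enhancement ratios. A minimal sketch using projected gradient descent; the signature numbers are purely illustrative, not the CalNex values.

```python
import numpy as np

def nnls_pg(S, y, n_iter=5000):
    """Non-negative least squares by projected gradient descent:
    minimize ||S @ x - y||^2 subject to x >= 0."""
    L = np.linalg.norm(S, 2) ** 2          # Lipschitz constant of gradient
    x = np.zeros(S.shape[1])
    for _ in range(n_iter):
        grad = S.T @ (S @ x - y)
        x = np.maximum(x - grad / L, 0.0)  # gradient step, then project
    return x

# Columns = hypothetical sources (pipeline gas, landfill, oil & gas
# production); rows = C2-C5 alkane enhancement ratios relative to CH4.
# All numbers are made up for illustration.
S = np.array([[0.80, 0.10, 0.30],
              [0.10, 0.70, 0.20],
              [0.05, 0.10, 0.90],
              [0.02, 0.05, 0.40]])
y = S @ np.array([0.47, 0.40, 0.08])   # synthetic "observed" ratios
x = nnls_pg(S, y)
shares = x / x.sum()                   # relative source contributions
```

    In practice a dedicated constrained solver would be used, but the structure of the fit is the same: signatures in, relative source strengths out.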

  4. Application of meteorology-based methods to determine local and external contributions to particulate matter pollution: A case study in Venice (Italy)

    NASA Astrophysics Data System (ADS)

    Squizzato, Stefania; Masiol, Mauro

    2015-10-01

    Air quality is influenced by the effects of meteorology at meso- and synoptic scales. While local weather and mixing-layer dynamics mainly drive the dispersion of sources at small scales, long-range transport governs the movement of air masses over regional, transboundary, and even continental scales. Long-range transport may advect polluted air masses from hot-spots, increasing pollution levels at nearby or remote locations, or may further raise air pollution levels where external air masses originate from other hot-spots. Therefore, knowledge of ground-wind circulation and potential long-range transport is fundamental not only to evaluate how local or external sources affect the air quality at a receptor site, but also to quantify their contributions. This review focusses on establishing the relationships among PM2.5 sources, meteorological conditions, and air mass origin in the Po Valley, one of the most polluted areas in Europe. We have chosen the results from a recent study carried out in Venice (eastern Po Valley) and analysed them using different statistical approaches to understand the influence of external and local contributions to PM2.5 sources. External contributions were evaluated by applying Trajectory Statistical Methods (TSMs) based on back-trajectory analysis, including (i) back-trajectory cluster analysis, (ii) the potential source contribution function (PSCF), and (iii) concentration-weighted trajectories (CWT). Furthermore, the relationships between source contributions and ground-wind circulation patterns were investigated using (iv) cluster analysis on wind data and (v) the conditional probability function (CPF). Finally, local source contributions were estimated by applying the Lenschow approach. In summary, the integrated application of different techniques successfully identified both local and external sources of particulate matter pollution in a European hot-spot with some of the worst air quality.
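
    Of the trajectory statistical methods listed, PSCF is the most straightforward to sketch: each grid cell receives the ratio of back-trajectory endpoints associated with high-concentration samples to all endpoints falling in that cell. A minimal numpy illustration; grid, threshold, and data are hypothetical.

```python
import numpy as np

def pscf(endpoints, conc, threshold, lon_edges, lat_edges):
    """Potential source contribution function on a lon/lat grid.

    endpoints : list of (n_i, 2) arrays of (lon, lat) back-trajectory
                endpoints, one array per receptor sample
    conc      : per-sample pollutant concentrations at the receptor
    PSCF(cell) = m_ij / n_ij, the fraction of endpoints in the cell
    belonging to samples that exceed `threshold`.
    """
    n_all = np.zeros((len(lon_edges) - 1, len(lat_edges) - 1))
    m_high = np.zeros_like(n_all)
    for pts, c in zip(endpoints, conc):
        h, _, _ = np.histogram2d(pts[:, 0], pts[:, 1],
                                 bins=[lon_edges, lat_edges])
        n_all += h
        if c > threshold:
            m_high += h
    with np.errstate(invalid="ignore", divide="ignore"):
        return np.where(n_all > 0, m_high / n_all, 0.0)
```

    Published applications usually also down-weight cells with few endpoints, since small n_ij makes the ratio unstable.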

  5. Surgical Site Infiltration for Abdominal Surgery: A Novel Neuroanatomical-based Approach

    PubMed Central

    Janis, Jeffrey E.; Haas, Eric M.; Ramshaw, Bruce J.; Nihira, Mikio A.; Dunkin, Brian J.

    2016-01-01

    Background: Provision of optimal postoperative analgesia should facilitate postoperative ambulation and rehabilitation. An optimal multimodal analgesia technique would include the use of nonopioid analgesics, including local/regional analgesic techniques such as surgical site local anesthetic infiltration. This article presents a novel approach to surgical site infiltration techniques for abdominal surgery based upon neuroanatomy. Methods: Literature searches were conducted for studies reporting the neuroanatomical sources of pain after abdominal surgery. Studies identified by the preceding search were also reviewed for relevant publications, which were manually retrieved. Results: Based on neuroanatomy, an optimal surgical site infiltration technique would consist of systematic, extensive, meticulous administration of local anesthetic into the peritoneum (or preperitoneum), subfascial, and subdermal tissue planes. The volume of local anesthetic would depend on the size of the incision such that 1 to 1.5 mL is injected every 1 to 2 cm of surgical incision per layer. It is best to infiltrate with a 22-gauge, 1.5-inch needle. The needle is inserted approximately 0.5 to 1 cm into the tissue plane, and local anesthetic solution is injected while slowly withdrawing the needle, which should reduce the risk of intravascular injection. Conclusions: Meticulous, systematic, and extensive surgical site local anesthetic infiltration in the various tissue planes including the peritoneal, musculofascial, and subdermal tissues, where pain foci originate, provides excellent postoperative pain relief. This approach should be combined with use of other nonopioid analgesics with opioids reserved for rescue. Further well-designed studies are necessary to assess the analgesic efficacy of the proposed infiltration technique. PMID:28293525

  6. Imaging dipole flow sources using an artificial lateral-line system made of biomimetic hair flow sensors

    PubMed Central

    Dagamseh, Ahmad; Wiegerink, Remco; Lammerink, Theo; Krijnen, Gijs

    2013-01-01

    In Nature, fish have the ability to localize prey, school, navigate, etc., using the lateral-line organ. Artificial hair flow sensors arranged in a linear array shape (inspired by the lateral-line system (LSS) in fish) have been applied to measure airflow patterns at the sensor positions. Here, we take advantage of both biomimetic artificial hair-based flow sensors arranged as LSS and beamforming techniques to demonstrate dipole-source localization in air. Modelling and measurement results show the artificial lateral-line ability to image the position of dipole sources accurately with estimation error of less than 0.14 times the array length. This opens up possibilities for flow-based, near-field environment mapping that can be beneficial to, for example, biologists and robot guidance applications. PMID:23594816
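
    The paper's beamforming operates on hair flow-sensor signals, but the principle can be conveyed with a generic near-field delay-and-sum beamformer: for each candidate source position, advance every channel by its propagation delay and sum, and the candidate with maximum beam power estimates the source. All geometry and signals below are synthetic and illustrative.

```python
import numpy as np

def delay_and_sum(signals, sensors, candidates, c, fs):
    """Near-field delay-and-sum beamformer.

    signals    : (n_sens, n_samp) time series
    sensors    : (n_sens, 2) sensor positions [m]
    candidates : (n_cand, 2) candidate source positions [m]
    Returns beam power per candidate; argmax estimates the source.
    """
    power = np.zeros(len(candidates))
    for i, p in enumerate(candidates):
        delays = np.linalg.norm(sensors - p, axis=1) / c
        lags = np.round((delays - delays.min()) * fs).astype(int)
        n = signals.shape[1] - lags.max()
        # advance each channel by its relative delay, then sum coherently
        beam = sum(signals[s, lags[s]:lags[s] + n]
                   for s in range(len(sensors)))
        power[i] = np.mean(beam ** 2)
    return power

# Synthetic example: 5-sensor linear array, impulsive source at (0.2, 0.3) m
fs, c = 50_000, 343.0
sensors = np.column_stack([np.arange(5) * 0.1, np.zeros(5)])
t = np.arange(0, 0.02, 1 / fs)
src = np.array([0.2, 0.3])
dists = np.linalg.norm(sensors - src, axis=1)
signals = np.exp(-((t[None, :] - 0.005 - dists[:, None] / c) / 5e-5) ** 2)
candidates = np.array([[0.2, 0.3], [0.0, 0.5]])
power = delay_and_sum(signals, sensors, candidates, c, fs)
```

    For the dipole flow sources of the paper, the steering model would use the dipole's near-field velocity pattern at the hair sensors instead of pure acoustic time delays.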

  7. An evaluation of kurtosis beamforming in magnetoencephalography to localize the epileptogenic zone in drug resistant epilepsy patients.

    PubMed

    Hall, Michael B H; Nissen, Ida A; van Straaten, Elisabeth C W; Furlong, Paul L; Witton, Caroline; Foley, Elaine; Seri, Stefano; Hillebrand, Arjan

    2018-06-01

    Kurtosis beamforming is a useful technique for analysing magnetoencephalography (MEG) data containing epileptic spikes. However, implementations vary and few studies measure concordance with subsequently resected areas. We evaluated kurtosis beamforming as a means of localizing spikes in drug-resistant epilepsy patients. We retrospectively applied kurtosis beamforming to MEG recordings of 22 epilepsy patients that had previously been analysed using equivalent current dipole (ECD) fitting. Virtual electrodes were placed at the kurtosis volumetric peaks and visually inspected to select a candidate source. The candidate sources were compared to the ECD localizations and resection areas. The kurtosis beamformer produced interpretable localizations in 18/22 patients; the candidate source coincided with the resection lobe in 9/13 seizure-free patients and in 3/5 patients with persistent seizures. The sublobar accuracy of the kurtosis beamformer with respect to the resection zone was higher than that of ECD (56% vs. 50%); however, ECD resulted in a higher lobar accuracy (75% vs. 67%). Kurtosis beamforming may provide additional value when spikes are not clearly discernible on the sensors, and may support ECD localizations when dipoles are scattered. Kurtosis beamforming should be integrated with existing clinical protocols to assist in localizing the epileptogenic zone. Copyright © 2018 International Federation of Clinical Neurophysiology. Published by Elsevier B.V. All rights reserved.
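
    The selection criterion at the core of kurtosis beamforming is that epileptiform time courses are spiky and therefore heavy-tailed, so among candidate source locations the beamformer keeps those whose reconstructed time courses maximize kurtosis. A minimal sketch on synthetic time courses, purely illustrative of the statistic.

```python
import numpy as np

def excess_kurtosis(x):
    """Excess kurtosis of a 1-D time series (0 for Gaussian data)."""
    x = np.asarray(x, float)
    z = (x - x.mean()) / x.std()
    return float(np.mean(z ** 4) - 3.0)

# Hypothetical virtual-electrode time courses: spiky activity (sparse
# large transients riding on noise) versus plain background noise.
rng = np.random.default_rng(42)
background = rng.normal(0.0, 1.0, 20_000)
spiky = rng.normal(0.0, 1.0, 20_000)
spiky[::1000] += 15.0   # sparse large spikes
scores = [excess_kurtosis(background), excess_kurtosis(spiky)]
```

    In the full method the time courses come from beamformer virtual electrodes over a volumetric grid, and the kurtosis peaks define where virtual electrodes are inspected.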

  8. A simple semi-empirical technique for apportioning the impact of roadways on air quality in an urban neighbourhood

    NASA Astrophysics Data System (ADS)

    Elangasinghe, M. A.; Dirks, K. N.; Singhal, N.; Costello, S. B.; Longley, I.; Salmond, J. A.

    2014-02-01

    Air pollution from the transport sector has a marked effect on human health, so isolating the pollutant contribution from a roadway is important in understanding its impact on the local neighbourhood. This paper proposes a novel technique based on a semi-empirical air pollution model to quantify the impact of a roadway on the air quality of a local neighbourhood using ambient records from a single air pollution monitor. We demonstrate the proposed technique using a case study, in which we quantify the contribution from a major highway with respect to the local background concentration in Auckland, New Zealand. Comparing the diurnal variation of the model-separated background contribution with real measurements from a site upwind of the highway shows that the model estimates are reliable. Amongst all of the pollutants considered, the best estimates of the background were achieved for nitrogen oxides. Although the multi-pronged approach worked well for predominantly vehicle-related pollutants, it could not be used effectively to isolate emissions of PM10 due to the complex and less predictable influence of natural sources (such as marine aerosols). The proposed approach is useful in situations where ambient records from an upwind background station are not available (as required by other techniques) and is potentially transferable to situations such as intersections and arterial roads. Applying this technique to longer time series could help in understanding the changes in pollutant concentrations from road and background sources under different emission scenarios, for different years or seasons. Modelling results also show the potential of such hybrid semi-empirical models to contribute to our understanding of the physical parameters determining air quality and to validate emissions inventory data.

  9. Holographic Reconstruction of Photoelectron Diffraction and Its Circular Dichroism for Local Structure Probing

    NASA Astrophysics Data System (ADS)

    Matsui, Fumihiko; Matsushita, Tomohiro; Daimon, Hiroshi

    2018-06-01

    The local atomic structure around an atom of a specific element can be recorded as a photoelectron diffraction pattern. Forward focusing peaks and diffraction rings around them indicate the directions and distances from the photoelectron-emitting atom to the surrounding atoms. A state-of-the-art holographic reconstruction algorithm enables us to image the local atomic arrangement around the excited atom in real space. By using circularly polarized light as an excitation source, the angular momentum transfer from the light to the photoelectron induces parallax shifts in these diffraction patterns. As a result, stereographic images of atomic arrangements are obtained. These diffraction patterns can be used as atomic-site-resolved probes for local electronic structure investigation in combination with spectroscopy techniques. Direct three-dimensional atomic structure visualization and site-specific electronic property analysis methods are reviewed. Furthermore, circular dichroism has also been found in valence photoelectron and Auger electron diffraction patterns. The investigation of these new phenomena provides hints for the development of new techniques for local structure probing.

  10. Benefits of Atrial Substrate Modification Guided by Electrogram Similarity and Phase Mapping Techniques to Eliminate Rotors and Focal Sources Versus Conventional Defragmentation in Persistent Atrial Fibrillation.

    PubMed

    Lin, Yenn-Jiang; Lo, Men-Tzung; Chang, Shih-Lin; Lo, Li-Wei; Hu, Yu-Feng; Chao, Tze-Fan; Chung, Fa-Po; Liao, Jo-Nan; Lin, Chin-Yu; Kuo, Huan-Yu; Chang, Yi-Chung; Lin, Chen; Tuan, Ta-Chuan; Vincent Young, Hsu-Wen; Suenari, Kazuyoshi; Dan Do, Van Buu; Raharjo, Suunu Budhi; Huang, Norden E; Chen, Shih-Ann

    2016-11-01

    This prospective study compared the efficacy of atrial substrate modification guided by a nonlinear phase mapping technique with that of conventional substrate ablation. The optimal ablation strategy for persistent atrial fibrillation (AF) remains unknown. In the phase 1 study, we applied a cellular automaton technique to simulate electrical wave propagation and improve the phase mapping algorithm, involving analysis of high-similarity electrogram regions. In addition, we defined rotors and focal AF sources using the physical parameters of the divergence and curvature forces. In the phase 2 study, we enrolled 68 patients with persistent AF undergoing substrate modification into 2 groups: group-1 (n = 34) underwent ablation guided by similarity index (SI) and phase mapping techniques; group-2 (n = 34) received complex fractionated atrial electrogram ablation with commercially available software. Group-1 received real-time waveform similarity measurements in which a phase mapping algorithm was applied to localize the sources. We evaluated single-procedure freedom from AF. In group-1, we identified an average of 2.6 ± 0.89 SI regions per chamber. These regions involved rotors and focal sources in 65% and 77% of group-1 patients, respectively. Group-1 patients had shorter ablation procedure times, higher termination rates, and a significant reduction in AF recurrence compared to group-2, with a trend toward benefit for all atrial arrhythmias. Multivariate analysis showed that substrate mapping using nonlinear similarity and phase mapping was an independent predictor of freedom from AF recurrence (hazard ratio: 0.26; 95% confidence interval: 0.09 to 0.74; p = 0.01). Our study showed that for persistent AF ablation, a specified substrate modification guided by nonlinear phase mapping could eliminate localized re-entry and non-pulmonary focal sources after pulmonary vein isolation. Copyright © 2016 American College of Cardiology Foundation. Published by Elsevier Inc. All rights reserved.

  11. Apportionment of urban aerosol sources in Cork (Ireland) by synergistic measurement techniques.

    PubMed

    Dall'Osto, Manuel; Hellebust, Stig; Healy, Robert M; O'Connor, Ian P; Kourtchev, Ivan; Sodeau, John R; Ovadnevaite, Jurgita; Ceburnis, Darius; O'Dowd, Colin D; Wenger, John C

    2014-09-15

    The sources of ambient fine particulate matter (PM2.5) during wintertime at a background urban location in Cork city (Ireland) have been determined. Aerosol chemical analyses were performed by multiple techniques including on-line high resolution aerosol time-of-flight mass spectrometry (Aerodyne HR-ToF-AMS), on-line single particle aerosol time-of-flight mass spectrometry (TSI ATOFMS), on-line elemental carbon-organic carbon analysis (Sunset_EC-OC), and off-line gas chromatography/mass spectrometry and ion chromatography analysis of filter samples collected at 6-h resolution. Positive matrix factorization (PMF) has been carried out to better elucidate aerosol sources not clearly identified when analyzing results from individual aerosol techniques on their own. Two datasets have been considered: on-line measurements averaged over 2-h periods, and both on-line and off-line measurements averaged over 6-h periods. Five aerosol sources were identified by PMF in both datasets, with excellent agreement between the two solutions: (1) regional domestic solid fuel burning--"DSF_Regional," 24-27%; (2) local urban domestic solid fuel burning--"DSF_Urban," 22-23%; (3) road vehicle emissions--"Traffic," 15-20%; (4) secondary aerosols from regional anthropogenic sources--"SA_Regional" 9-13%; and (5) secondary aged/processed aerosols related to urban anthropogenic sources--"SA_Urban," 21-26%. The results indicate that, despite regulations for restricting the use of smoky fuels, solid fuel burning is the major source (46-50%) of PM2.5 in wintertime in Cork, and also likely other areas of Ireland. Whilst wood combustion is strongly associated with OC and EC, it was found that peat and coal combustion is linked mainly with OC and the aerosol from these latter sources appears to be more volatile than that produced by wood combustion. Ship emissions from the nearby port were found to be mixed with the SA_Regional factor. The PMF analysis allowed us to link the AMS cooking organic aerosol factor (AMS_PMF_COA) to oxidized organic aerosol, chloride and locally produced nitrate, indicating that AMS_PMF_COA cannot be attributed to primary cooking emissions only. Overall, there are clear benefits from factor analysis applied to results obtained from multiple techniques, which allows better association of aerosols with sources and atmospheric processes. Copyright © 2014 Elsevier B.V. All rights reserved.
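
    The factor-analytic idea behind PMF can be illustrated with a plain multiplicative-update NMF in numpy (a simplified stand-in for PMF, which additionally weights residuals by per-measurement uncertainties; the species, profiles, and contributions below are invented): the species-by-time data matrix is approximated by non-negative source profiles times non-negative time-varying contributions.

```python
import numpy as np

rng = np.random.default_rng(1)
# Invented profiles for two "sources" over 6 chemical species (rows sum to ~1)
F_true = np.array([[0.60, 0.30, 0.05, 0.02, 0.02, 0.01],    # solid-fuel-like
                   [0.05, 0.05, 0.40, 0.30, 0.10, 0.10]])   # traffic-like
G_true = rng.uniform(0, 10, size=(100, 2))        # contributions per time sample
X = G_true @ F_true + 0.01 * rng.uniform(size=(100, 6))     # measured matrix

# Multiplicative-update NMF: all factors stay non-negative by construction
k = 2
G = rng.uniform(0.1, 1.0, (100, k))
F = rng.uniform(0.1, 1.0, (k, 6))
for _ in range(500):
    G *= (X @ F.T) / (G @ F @ F.T + 1e-12)
    F *= (G.T @ X) / (G.T @ G @ F + 1e-12)

rel_err = np.linalg.norm(X - G @ F) / np.linalg.norm(X)
print(rel_err)   # small: two factors reproduce the synthetic data
```

    In practice the number of factors and their physical labels ("DSF_Urban", "Traffic", ...) come from inspecting the recovered profiles against known source signatures.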

  12. 40 CFR 35.661 - Competitive process.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... training in source reduction techniques. Such training may be provided through local engineering schools or... 40 Protection of Environment 1 2010-07-01 2010-07-01 false Competitive process. 35.661 Section 35.661 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY GRANTS AND OTHER FEDERAL ASSISTANCE...

  13. Opportunities for Live Cell FT-Infrared Imaging: Macromolecule Identification with 2D and 3D Localization

    PubMed Central

    Mattson, Eric C.; Aboualizadeh, Ebrahim; Barabas, Marie E.; Stucky, Cheryl L.; Hirschmugl, Carol J.

    2013-01-01

    Infrared (IR) spectromicroscopy, or chemical imaging, is an evolving technique that is poised to make significant contributions in the fields of biology and medicine. Recent developments in sources, detectors, measurement techniques and specimen holders have now made diffraction-limited Fourier transform infrared (FTIR) imaging of cellular chemistry in living cells a reality. The availability of bright, broadband IR sources and large-area, pixelated detectors facilitates live cell imaging, which requires rapid measurements using non-destructive probes. In this work, we review advances in the field of FTIR spectromicroscopy that have contributed to two- and three-dimensional IR imaging of live cells, and discuss several key examples that highlight the utility of this technique for studying the structure and chemistry of living cells. PMID:24256815

  14. Reduced order modeling of head related transfer functions for virtual acoustic displays

    NASA Astrophysics Data System (ADS)

    Willhite, Joel A.; Frampton, Kenneth D.; Grantham, D. Wesley

    2003-04-01

    The purpose of this work is to improve computational efficiency in virtual acoustic applications by creating and testing reduced-order models of the head related transfer functions used in localizing sound sources. State-space models of varying order were generated from zero-elevation Head Related Impulse Responses (HRIRs) using Kung's singular value decomposition (SVD) technique. The inputs to the models are the desired azimuths of the virtual sound sources (from -90 deg to +90 deg, in 10 deg increments) and the outputs are the left- and right-ear impulse responses. Trials were conducted in an anechoic chamber in which subjects were exposed to real sounds emitted by individual speakers in a numbered speaker array, to phantom sources generated from the original HRIRs, and to phantom sources generated with the different reduced-order state-space models. The error in the perceived direction of the phantom sources generated from the reduced-order models was compared to errors in localization using the original HRIRs.
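
    Kung's SVD-based realization can be sketched in a few lines of numpy (a toy SISO example with an invented second-order system standing in for an HRIR; the actual models in the study take azimuth as input and produce two-ear responses): a Hankel matrix of the impulse response is truncated by SVD, and the state-space matrices are read off its factors.

```python
import numpy as np

def kung_reduce(h, r, m):
    """Kung's SVD realization: impulse response h (h[0] treated as feedthrough)
    -> order-r discrete state-space (A, B, C) via a truncated Hankel SVD."""
    H = np.array([[h[i + j + 1] for j in range(m)] for i in range(m)])  # Hankel
    U, s, Vt = np.linalg.svd(H)
    sq = np.sqrt(s[:r])
    O = U[:, :r] * sq                   # observability factor
    Ctr = (Vt[:r, :].T * sq).T          # controllability factor
    A = np.linalg.pinv(O[:-1]) @ O[1:]  # shift-invariance of the observability rows
    return A, Ctr[:, :1], O[:1, :]

def impulse(A, B, C, n):
    x, out = B.copy(), [0.0]
    for _ in range(n):
        out.append((C @ x).item())
        x = A @ x
    return np.array(out)

# Invented 2nd-order "HRIR": a decaying oscillation, exactly order 2
th, rho = 0.3, 0.9
A0 = rho * np.array([[np.cos(th), -np.sin(th)], [np.sin(th), np.cos(th)]])
B0, C0 = np.array([[1.0], [0.0]]), np.array([[1.0, 0.0]])
h = impulse(A0, B0, C0, 60)

A, B, C = kung_reduce(h, r=2, m=25)
err = np.max(np.abs(h - impulse(A, B, C, 60)))
print(err)   # essentially zero: the reduced model reproduces the response
```

    For a measured HRIR the Hankel singular values s decay gradually, and the choice of r trades model size against localization fidelity, which is what the listening trials above evaluate.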

  15. Source mechanics for monochromatic icequakes produced during iceberg calving at Columbia Glacier, AK

    USGS Publications Warehouse

    O'Neel, Shad; Pfeffer, W.T.

    2007-01-01

    Seismograms recorded during iceberg calving contain information pertaining to source processes during calving events. However, locally variable material properties may cause signal distortions, known as site and path effects, which must be eliminated prior to commenting on source mechanics. We applied the technique of horizontal/vertical spectral ratios to passive seismic data collected at Columbia Glacier, AK, and found no dominant site or path effects. Rather, monochromatic waveforms generated by calving appear to result from source processes. We hypothesize that a fluid-filled crack source model offers a potential mechanism for observed seismograms produced by calving, and fracture-processes preceding calving.

  16. Autonomous Modelling of X-ray Spectra Using Robust Global Optimization Methods

    NASA Astrophysics Data System (ADS)

    Rogers, Adam; Safi-Harb, Samar; Fiege, Jason

    2015-08-01

    The standard approach to model fitting in X-ray astronomy is by means of local optimization methods. However, these local optimizers suffer from a number of problems, such as a tendency for the fit parameters to become trapped in local minima, and can require an involved process of detailed user intervention to guide them through the optimization process. In this work we introduce a general GUI-driven global optimization method for fitting models to X-ray data, written in MATLAB, which searches for optimal models with minimal user interaction. We directly interface with the commonly used XSPEC libraries to access the full complement of pre-existing spectral models that describe a wide range of physics appropriate for modelling astrophysical sources, including supernova remnants and compact objects. Our algorithm is powered by the Ferret genetic algorithm and Locust particle swarm optimizer from the Qubist Global Optimization Toolbox, which are robust at finding families of solutions and identifying degeneracies. This technique will be particularly instrumental for multi-parameter models and high-fidelity data. In this presentation, we provide details of the code and use our techniques to analyze X-ray data obtained from a variety of astrophysical sources.

  17. Experimental and numerical study of the impact of voltage fluctuation, flicker and power factor of a wave electric generator on local distribution

    NASA Astrophysics Data System (ADS)

    Hadi, Nik Azran Ab; Rashid, Wan Norhisyam Abd; Hashim, Nik Mohd Zarifie; Mohamad, Najmiah Radiah; Kadmin, Ahmad Fauzan

    2017-10-01

    Electricity is the most widely used energy source in the world. Engineers and technologists have cooperated to invent new low-cost, carbon-free technologies, as carbon emissions are now a major concern due to global warming. Renewable energy sources such as hydro, wind and wave are becoming widespread as a means of reducing carbon emissions; however, this effort requires novel methods, techniques and technologies compared to coal-based power. The power quality of renewable sources needs in-depth research and continued study to improve renewable energy technologies. The aim of this project is to investigate the impact of a renewable electric generator on its local distribution system. The power farm was designed to connect to the local distribution system, which was investigated and analyzed to ensure that the energy supplied to customers is clean. MATLAB tools were used for the overall simulation and analysis. At the end of the project, a summary identifying various sources of voltage fluctuation is presented in terms of voltage flicker. An analysis of the impact of wave power generation on its local distribution system is also presented to support the development of wave-generator farms.

  18. A Direct Position-Determination Approach for Multiple Sources Based on Neural Network Computation.

    PubMed

    Chen, Xin; Wang, Ding; Yin, Jiexin; Wu, Ying

    2018-06-13

    The most widely used localization technology is the two-step method that localizes transmitters by measuring one or more specified positioning parameters. Direct position determination (DPD) is a promising technique that directly localizes transmitters from sensor outputs and can offer superior localization performance. However, existing DPD algorithms such as maximum likelihood (ML)-based and multiple signal classification (MUSIC)-based estimations are computationally expensive, making it difficult to satisfy real-time demands. To solve this problem, we propose the use of a modular neural network for multiple-source DPD. In this method, the area of interest is divided into multiple sub-areas. Multilayer perceptron (MLP) neural networks are employed to detect the presence of a source in a sub-area and filter sources in other sub-areas, and radial basis function (RBF) neural networks are utilized for position estimation. Simulation results show that a number of appropriately trained neural networks can be successfully used for DPD. The performance of the proposed MLP-MLP-RBF method is comparable to the performance of the conventional MUSIC-based DPD algorithm for various signal-to-noise ratios and signal power ratios. Furthermore, the MLP-MLP-RBF network is less computationally intensive than the classical DPD algorithm and is therefore an attractive choice for real-time applications.

  19. Acceleration of 500 keV Negative Ion Beams By Tuning Vacuum Insulation Distance On JT-60 Negative Ion Source

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kojima, A.; Hanada, M.; Tanaka, Y.

    2011-09-26

    Acceleration of a 500 keV beam up to 2.8 A has been achieved on a JT-60U negative ion source with a three-stage accelerator by overcoming low voltage holding, which is one of the critical issues for realization of the JT-60SA ion source. In order to improve the voltage holding, preliminary voltage-holding tests with small-size grids with uniform and locally intense electric fields were carried out, and suggested that the voltage holding was degraded by both size and local electric field effects. Therefore, the local electric field was reduced by tuning gap lengths between the large-size grids and grid support structures of the accelerator. Moreover, a beam radiation shield, which limited extension of the minimum gap length, was also optimized so as to reduce the local electric field while maintaining the shielding effect. These modifications were based on the experimental results, and significantly increased the voltage holding from <150 kV/stage for the original configuration to 200 kV/stage. These techniques for improving voltage holding should also be applicable to other large ion source accelerators such as those for ITER.

  20. Study of atmospheric dynamics and pollution in the coastal area of English Channel using clustering technique

    NASA Astrophysics Data System (ADS)

    Sokolov, Anton; Dmitriev, Egor; Delbarre, Hervé; Augustin, Patrick; Gengembre, Cyril; Fourmenten, Marc

    2016-04-01

    The problem of atmospheric contamination by principal air pollutants was considered in the industrialized coastal region of the English Channel at Dunkirk, which is influenced by northern European metropolitan areas. MESO-NH nested models were used for the simulation of the local atmospheric dynamics and the online calculation of Lagrangian backward trajectories with 15-minute temporal resolution and a horizontal resolution down to 500 m. The one-month mesoscale numerical simulation was coupled with local pollution measurements of volatile organic compounds, particulate matter, ozone, sulphur dioxide and nitrogen oxides. Principal atmospheric pathways were determined by a clustering technique applied to the simulated backward trajectories. Six clusters were obtained that describe the local atmospheric dynamics: four correspond to winds blowing through the English Channel, one to flow coming from the south, and the largest cluster to low wind speeds; this last cluster mostly comprises sea-breeze events. The analysis of meteorological data and pollution measurements allows the principal atmospheric pathways to be related to local air contamination events. It was shown that contamination events are mostly connected with the channelling of pollution from local sources and low-turbulence states of the local atmosphere.
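
    The trajectory-clustering step can be sketched with a toy numpy k-means on flattened back-trajectories (bearings, trajectory lengths, and noise levels are invented, and the seeding is deterministic for reproducibility): trajectories sharing a mean transport direction fall into the same cluster, which is how the principal pathways are identified.

```python
import numpy as np

rng = np.random.default_rng(2)

def back_trajectory(bearing, n=12):
    """Invented toy back-trajectory: n hourly steps along a mean wind bearing."""
    steps = np.stack([np.cos(bearing) + 0.1 * rng.standard_normal(n),
                      np.sin(bearing) + 0.1 * rng.standard_normal(n)], axis=1)
    return np.cumsum(steps, axis=0).ravel()   # flatten the (x, y) path

bearings = [0.0, np.pi / 2, np.pi]            # three synoptic pathways
X = np.array([back_trajectory(b) for b in bearings for _ in range(30)])
true_lab = np.repeat([0, 1, 2], 30)

def kmeans(X, k, seed_idx, iters=50):
    C = X[seed_idx].copy()                    # deterministic seeds, one per pathway
    for _ in range(iters):
        lab = ((X[:, None, :] - C[None]) ** 2).sum(-1).argmin(axis=1)
        C = np.array([X[lab == g].mean(axis=0) for g in range(k)])
    return lab

lab = kmeans(X, 3, seed_idx=[0, 30, 60])
# fraction of trajectories assigned to their pathway's majority cluster
purity = sum(np.bincount(lab[true_lab == g]).max() for g in range(3)) / len(X)
print(purity)   # → 1.0 for these well-separated pathways
```

    Real back-trajectory clustering usually also tests several cluster counts and picks one at a break in the within-cluster variance curve.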

  1. Comparison of imaging modalities and source-localization algorithms in locating the induced activity during deep brain stimulation of the STN.

    PubMed

    Mideksa, K G; Singh, A; Hoogenboom, N; Hellriegel, H; Krause, H; Schnitzler, A; Deuschl, G; Raethjen, J; Schmidt, G; Muthuraman, M

    2016-08-01

    One of the most commonly used therapies to treat patients with Parkinson's disease (PD) is deep brain stimulation (DBS) of the subthalamic nucleus (STN). Identifying the optimal target area for the placement of the DBS electrodes has become an area of intensive research. In this study, the first aim is to investigate the capabilities of different source-analysis techniques in detecting deep sources located at the sub-cortical level, validating them using the a priori information about the location of the source, that is, the STN. Second, we aim to investigate whether EEG or MEG is better suited to mapping the DBS-induced brain activity. To do this, simultaneous EEG and MEG measurements were used to record the DBS-induced electromagnetic potentials and fields. The boundary-element method (BEM) has been used to solve the forward problem. The position of the DBS electrodes was then estimated using dipole (moving, rotating, and fixed MUSIC) and current-density-reconstruction (CDR) (minimum-norm and sLORETA) approaches. The source-localization results from the dipole approaches demonstrated that the fixed MUSIC algorithm best localizes deep focal sources, whereas the moving dipole detects not only the region of interest but also neighboring regions that are affected by stimulating the STN. The results from the CDR approaches validated the capability of sLORETA in detecting the STN compared to minimum-norm. Moreover, the source-localization results using the EEG modality outperformed those of the MEG by locating the DBS-induced activity in the STN.

  2. Improving MEG source localizations: an automated method for complete artifact removal based on independent component analysis.

    PubMed

    Mantini, D; Franciotti, R; Romani, G L; Pizzella, V

    2008-03-01

    The major limitation for the acquisition of high-quality magnetoencephalography (MEG) recordings is the presence of disturbances of physiological and technical origin: eye movements, cardiac signals, muscular contractions, and environmental noise are serious problems for MEG signal analysis. In recent years, multi-channel MEG systems have undergone rapid technological developments in terms of noise reduction, and many processing methods have been proposed for artifact rejection. Independent component analysis (ICA) has already been shown to be an effective and generally applicable technique for concurrently removing artifacts and noise from MEG recordings. However, no standardized automated system based on ICA has become available so far, because of the intrinsic difficulty in reliably categorizing the source signals obtained with this technique. In this work, approximate entropy (ApEn), a measure of data regularity, is successfully used for the classification of the signals produced by ICA, allowing for automated artifact rejection. The proposed method has been tested using MEG data sets collected during somatosensory, auditory and visual stimulation. It was demonstrated to be effective in attenuating both biological artifacts and environmental noise, in order to reconstruct clear signals that can be used for improving brain source localizations.
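
    The regularity measure used to classify ICA components can be sketched directly (a standard approximate-entropy implementation in numpy; the two test signals are invented stand-ins for a stereotyped cardiac-artifact component and an irregular one): artifact components are typically regular and score low, so thresholding ApEn separates them automatically.

```python
import numpy as np

def apen(x, m=2, r_frac=0.2):
    """Approximate entropy (Pincus): low for regular/stereotyped signals
    (e.g. a cardiac-artifact component), high for irregular ones."""
    x = np.asarray(x, dtype=float)
    r = r_frac * x.std()
    def phi(mm):
        emb = np.array([x[i:i + mm] for i in range(len(x) - mm + 1)])
        d = np.max(np.abs(emb[:, None] - emb[None]), axis=2)  # Chebyshev distance
        return np.mean(np.log((d <= r).mean(axis=1)))         # self-matches keep C > 0
    return phi(m) - phi(m + 1)

rng = np.random.default_rng(0)
t = np.arange(1000)
regular = np.sin(2 * np.pi * t / 50)       # stereotyped, cardiac-like component
irregular = rng.standard_normal(1000)      # noise-like component
print(apen(regular), apen(irregular))      # the regular component scores much lower
```

    In an automated pipeline, components with ApEn below a calibrated threshold would be removed before the data are back-projected for source localization.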

  3. Thermographic Imaging of Material Loss in Boiler Water-Wall Tubing by Application of Scanning Line Source

    NASA Technical Reports Server (NTRS)

    Cramer, K. Elliott; Winfree, William P.

    2000-01-01

    Localized wall thinning due to corrosion in utility boiler water-wall tubing is a significant inspection concern for boiler operators. Historically, conventional ultrasonics has been used for inspection of these tubes. This technique has proven to be very manpower and time intensive. This has resulted in a spot check approach to inspections, documenting thickness measurements over a relatively small percentage of the total boiler wall area. NASA Langley Research Center has developed a thermal NDE technique designed to image and quantitatively characterize the amount of material thinning present in steel tubing. The technique involves the movement of a thermal line source across the outer surface of the tubing followed by an infrared imager at a fixed distance behind the line source. Quantitative images of the material loss due to corrosion are reconstructed from measurements of the induced surface temperature variations. This paper will present a discussion of the development of the thermal imaging system as well as the techniques used to reconstruct images of flaws. The application of the thermal line source coupled with the analysis technique represents a significant improvement in the inspection speed for large structures such as boiler water-walls. A theoretical basis for the technique will be presented which explains the quantitative nature of the technique. Further, a dynamic calibration system will be presented for the technique that allows the extraction of thickness information from the temperature data. Additionally, the results of applying this technology to actual water-wall tubing samples and in situ inspections will be presented.

  4. Study of curved and planar frequency-selective surfaces with nonplanar illumination

    NASA Technical Reports Server (NTRS)

    Caroglanian, Armen; Webb, Kevin J.

    1991-01-01

    A locally planar technique (LPT) is investigated for determining the forward-scattered field from a generally shaped inductive frequency-selective surface (FSS) with nonplanar illumination. The results of an experimental study are presented to assess the LPT accuracy. The effects of a nonplanar incident field are determined by comparing the LPT numerical results with a series of experiments with the feed source placed at varying distances from the planar FSS. The limitations of the LPT model due to surface curvature are investigated in an experimental study of the scattered fields from a set of hyperbolic cylinders of different curvatures. From these comparisons, guidelines for applying the locally planar technique are developed.

  5. Acoustic emission source localization based on distance domain signal representation

    NASA Astrophysics Data System (ADS)

    Gawronski, M.; Grabowski, K.; Russek, P.; Staszewski, W. J.; Uhl, T.; Packo, P.

    2016-04-01

    Acoustic emission (AE) is a vital non-destructive testing technique and is widely used in industry for damage detection, localisation and characterization. The latter two aspects are particularly challenging, as AE data are typically noisy. What is more, elastic waves generated by an AE event propagate through a structural path and are significantly distorted. This effect is particularly prominent for thin elastic plates. In these media the dispersion phenomenon results in severe localisation and characterization issues. Traditional time-difference-of-arrival (TDOA) localisation methods typically fail when signals are highly dispersive. Hence, algorithms capable of dispersion compensation are sought. This paper presents a method based on the Time-Distance Domain Transform for accurate AE event localisation. The source localisation is found through a minimization problem. The proposed technique focuses on transforming the time signal to the distance-domain response that would be recorded at the source. Only basic elastic material properties and the plate thickness are used in the approach, avoiding arbitrary parameter tuning.
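
    For contrast with the distance-domain approach, the traditional TDOA localisation that the paper improves upon can be sketched as a grid minimization (a non-dispersive toy model; the sensor layout, wave speed, and event position are invented): the candidate point whose predicted arrival-time differences best match the measured ones is selected.

```python
import numpy as np

c = 5000.0                                    # assumed (non-dispersive) wave speed, m/s
sensors = np.array([[0.0, 0.0], [1.0, 0.0],
                    [1.0, 1.0], [0.0, 1.0]])  # AE sensors on a 1 m plate
src = np.array([0.62, 0.31])                  # event position to recover
toa = np.linalg.norm(sensors - src, axis=1) / c   # arrival times (onset unknown)

def tdoa_cost(q):
    """Mismatch between measured and predicted arrival-time differences."""
    d = np.linalg.norm(sensors - q, axis=1) / c
    res = [(toa[i] - toa[j]) - (d[i] - d[j])
           for i in range(4) for j in range(i + 1, 4)]
    return np.sum(np.square(res))

xs = np.linspace(0.0, 1.0, 101)
grid = np.array([[x, y] for x in xs for y in xs])
est = grid[np.argmin([tdoa_cost(q) for q in grid])]
print(est)   # → [0.62 0.31]
```

    Dispersion breaks this scheme because a single arrival time is ill-defined for a spreading wave packet, which is precisely what motivates the distance-domain formulation above.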

  6. Electrical source localization by LORETA in patients with epilepsy: Confirmation by postoperative MRI

    PubMed Central

    Akdeniz, Gülsüm

    2016-01-01

    Background: Few studies have compared electrical source localization (ESL) results obtained by analyzing ictal patterns in scalp electroencephalography (EEG) with the brain areas found to be responsible for seizures using other brain imaging techniques. Additionally, adequate studies have not been performed to confirm the accuracy of ESL methods. Materials and Methods: In this study, ESL was conducted using LORETA (Low Resolution Brain Electromagnetic Tomography) in 9 patients with lesions apparent on magnetic resonance imaging (MRI) and in 6 patients who did not exhibit lesions on their MRIs. EEGs of patients who underwent surgery for epilepsy and had follow-ups for at least 1 year after their operations were analyzed for ictal spike, rhythmic, paroxysmal fast, and obscured EEG activities. Epileptogenic zones identified in postoperative MRIs were then compared with the localizations obtained by the LORETA model we employed. Results: We found that the brain areas determined via ESL were in concordance with the resected brain areas for 13 of the 15 patients evaluated, and those 13 patients were post-operatively determined to be seizure-free. Conclusion: ESL, which is a noninvasive technique, may contribute to the correct delineation of epileptogenic zones in patients who will eventually undergo surgery to treat epilepsy (regardless of neuroimaging status). Moreover, ESL may aid in deciding on the number and localization of intracranial electrodes to be used in patients who are candidates for invasive recording. PMID:27011626

  7. Electrical source localization by LORETA in patients with epilepsy: Confirmation by postoperative MRI.

    PubMed

    Akdeniz, Gülsüm

    2016-01-01

    Few studies have compared electrical source localization (ESL) results obtained by analyzing ictal patterns in scalp electroencephalography (EEG) with the brain areas found to be responsible for seizures using other brain imaging techniques. Additionally, adequate studies have not been performed to confirm the accuracy of ESL methods. In this study, ESL was conducted using LORETA (Low Resolution Brain Electromagnetic Tomography) in 9 patients with lesions apparent on magnetic resonance imaging (MRI) and in 6 patients who did not exhibit lesions on their MRIs. EEGs of patients who underwent surgery for epilepsy and had follow-ups for at least 1 year after their operations were analyzed for ictal spike, rhythmic, paroxysmal fast, and obscured EEG activities. Epileptogenic zones identified in postoperative MRIs were then compared with the localizations obtained by the LORETA model we employed. We found that the brain areas determined via ESL were in concordance with the resected brain areas for 13 of the 15 patients evaluated, and those 13 patients were post-operatively determined to be seizure-free. ESL, which is a noninvasive technique, may contribute to the correct delineation of epileptogenic zones in patients who will eventually undergo surgery to treat epilepsy (regardless of neuroimaging status). Moreover, ESL may aid in deciding on the number and localization of intracranial electrodes to be used in patients who are candidates for invasive recording.

  8. Proof of Concept for an Ultrasensitive Technique to Detect and Localize Sources of Elastic Nonlinearity Using Phononic Crystals.

    PubMed

    Miniaci, M; Gliozzi, A S; Morvan, B; Krushynska, A; Bosia, F; Scalerandi, M; Pugno, N M

    2017-05-26

    The appearance of nonlinear effects in elastic wave propagation is one of the most reliable and sensitive indicators of the onset of material damage. However, these effects are usually very small and can be detected only using cumbersome digital signal processing techniques. Here, we propose and experimentally validate an alternative approach, using the filtering and focusing properties of phononic crystals to naturally select and reflect the higher harmonics generated by nonlinear effects, enabling the realization of time-reversal procedures for nonlinear elastic source detection. The proposed device demonstrates its potential as an efficient, compact, portable, passive apparatus for nonlinear elastic wave sensing and damage detection.

  9. A survey of compiler optimization techniques

    NASA Technical Reports Server (NTRS)

    Schneck, P. B.

    1972-01-01

    Major optimization techniques of compilers are described and grouped into three categories: machine dependent, architecture dependent, and architecture independent. Machine-dependent optimizations tend to be local and are performed upon short spans of generated code by using particular properties of an instruction set to reduce the time or space required by a program. Architecture-dependent optimizations are global and are performed while generating code. These optimizations consider the structure of a computer, but not its detailed instruction set. Architecture-independent optimizations are also global but are based on analysis of the program flow graph and the dependencies among statements of the source program. A conceptual review of a universal optimizer that performs architecture-independent optimizations at the source-code level is also presented.

  10. Muscle and eye movement artifact removal prior to EEG source localization.

    PubMed

    Hallez, Hans; Vergult, Anneleen; Phlypo, Ronald; Van Hese, Peter; De Clercq, Wim; D'Asseler, Yves; Van de Walle, Rik; Vanrumste, Bart; Van Paesschen, Wim; Van Huffel, Sabine; Lemahieu, Ignace

    2006-01-01

    Muscle and eye movement artifacts are very prominent in the ictal EEG of patients suffering from epilepsy, making dipole localization of ictal activity very unreliable. Recently, two techniques (BSS-CCA and pSVD) were developed to remove those artifacts. The purpose of this study is to assess whether the removal of muscle and eye movement artifacts improves EEG dipole source localization. We used a total of 8 EEG fragments, each from a different patient, first unfiltered and then filtered with BSS-CCA and pSVD. In both the filtered and unfiltered EEG fragments we estimated multiple dipoles using RAP-MUSIC. The resulting dipoles were subjected to a K-means clustering algorithm to extract the most prominent cluster. We found that the removal of muscle and eye movement artifacts results in tighter and clearer dipole clusters. Furthermore, we found that localization of the filtered EEG corresponded with the localization derived from the ictal SPECT in 7 of the 8 patients. We therefore conclude that BSS-CCA and pSVD improve localization of ictal activity, making the localization more reliable for the presurgical evaluation of the patient.
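
    The clustering step described above can be sketched in a few lines. The following is a minimal, self-contained k-means on simulated 3-D dipole positions (synthetic coordinates, not the study's data), keeping only the most populated cluster:

```python
import numpy as np

def kmeans(points, k, iters=50):
    """Plain k-means with deterministic farthest-point initialization."""
    centroids = [points[0]]
    for _ in range(1, k):
        d = np.min([np.linalg.norm(points - c, axis=1) for c in centroids], axis=0)
        centroids.append(points[d.argmax()])
    centroids = np.array(centroids, dtype=float)
    for _ in range(iters):
        dist = np.linalg.norm(points[:, None, :] - centroids[None, :, :], axis=2)
        labels = dist.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = points[labels == j].mean(axis=0)
    return labels, centroids

def prominent_cluster(points, k=2):
    """Keep only the most populated dipole cluster."""
    labels, _ = kmeans(points, k)
    counts = np.bincount(labels, minlength=k)
    return points[labels == counts.argmax()]

# two synthetic dipole groups (positions in mm): the tight 20-dipole cluster
# should survive, the 5 scattered fits should be discarded
rng = np.random.default_rng(1)
cluster = rng.normal([30.0, 0.0, 40.0], 2.0, size=(20, 3))
stray = rng.normal([-40.0, 10.0, 30.0], 2.0, size=(5, 3))
main = prominent_cluster(np.vstack([cluster, stray]), k=2)
```

    The most populated cluster stands in for the dominant ictal generator, while scattered fits (here the 5 stray points) are treated as unreliable dipole estimates.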

  11. Independent component analysis of EEG dipole source localization in resting and action state of brain

    NASA Astrophysics Data System (ADS)

    Almurshedi, Ahmed; Ismail, Abd Khamim

    2015-04-01

    EEG source localization was studied in order to determine the locations of the brain sources responsible for the potentials measured at the scalp electrodes, using EEGLAB with the Independent Component Analysis (ICA) algorithm. Neural sources generate current dipoles in different states of the brain, which give rise to the measured scalp potentials. The current dipole source locations are estimated by fitting an equivalent current dipole model using a non-linear optimization technique with a standardized boundary element head model. To fit dipole models to ICA components in an EEGLAB dataset, ICA decomposition is performed and appropriate components are selected for fitting. The topographical scalp distributions of delta, theta, alpha, and beta power spectra and the cross-coherence of EEG signals were observed. In the eyes-closed condition, the alpha band was activated over the occipital (O1, O2) and parietal (P3, P4) areas during both the resting and action states of the brain; the parieto-occipital area is therefore active in both states. However, cross-coherence shows greater coherence between the right and left hemispheres in the action state than in the resting state. These preliminary results indicate that the potentials arise from the same generators in the brain.

  12. Digital Forensics Using Local Signal Statistics

    ERIC Educational Resources Information Center

    Pan, Xunyu

    2011-01-01

    With the rapid growth of the Internet and the popularity of digital imaging devices, digital imagery has become our major information source. Meanwhile, the development of digital manipulation techniques employed by most image editing software brings new challenges to the credibility of photographic images as the definite records of events. We…

  13. Experimental testing of the noise-canceling processor.

    PubMed

    Collins, Michael D; Baer, Ralph N; Simpson, Harry J

    2011-09-01

    Signal-processing techniques for localizing an acoustic source buried in noise are tested in a tank experiment. Noise is generated using a discrete source, a bubble generator, and a sprinkler. The experiment has essential elements of a realistic scenario in matched-field processing, including complex source and noise time series in a waveguide with water, sediment, and multipath propagation. The noise-canceling processor is found to outperform the Bartlett processor and provide the correct source range for signal-to-noise ratios below -10 dB. The multivalued Bartlett processor is found to outperform the Bartlett processor but not the noise-canceling processor. © 2011 Acoustical Society of America
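
    A minimal Bartlett processor illustrates the matched-field idea the comparison rests on: correlate the measured array data with modeled replicas over a grid of candidate source positions and pick the peak. The replica model below (a single tone over a direct plus surface-reflected path, with assumed sound speed, frequency, and array geometry) is a toy stand-in for a full waveguide propagation model:

```python
import numpy as np

C, F = 1500.0, 200.0                    # assumed sound speed (m/s) and tone (Hz)
depths = np.linspace(0.0, 90.0, 10)     # vertical hydrophone array depths (m)
SRC_DEPTH = 50.0

def replica(r):
    """Normalized replica for a tonal source at range r: direct path plus a
    surface-reflected image path (Lloyd's mirror), standing in for a full
    waveguide propagation model."""
    k = 2.0 * np.pi * F / C
    d1 = np.hypot(r, depths - SRC_DEPTH)       # direct-path lengths
    d2 = np.hypot(r, depths + SRC_DEPTH)       # surface-image path lengths
    p = np.exp(1j * k * d1) / d1 - np.exp(1j * k * d2) / d2
    return p / np.linalg.norm(p)

def bartlett(data, grid):
    """Bartlett ambiguity: |w(r)^H d|^2 for each candidate range r."""
    return np.array([np.abs(replica(r).conj() @ data) ** 2 for r in grid])

rng = np.random.default_rng(0)
true_range = 500.0
data = replica(true_range) + 0.03 * (rng.normal(size=10) + 1j * rng.normal(size=10))
grid = np.arange(200.0, 1000.0, 5.0)
amb = bartlett(data, grid)
est = grid[amb.argmax()]                # peaks at (or very near) the true range
```

    The noise-canceling processor studied in the paper modifies this baseline to suppress the noise contribution; the sketch above only shows the Bartlett ambiguity surface it is compared against, at a modest noise level.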

  14. Heterodyne Spectroscopy in the Thermal Infrared Region: A Window on Physics and Chemistry

    NASA Technical Reports Server (NTRS)

    Kostiuk, Theodor

    2004-01-01

    The thermal infrared region contains molecular bands of many of the most important species in gaseous astronomical sources. True shapes and frequencies of emission and absorption spectral lines from these constituents of planetary and stellar atmospheres contain unique information on local temperature and abundance distribution, non-thermal effects, composition, local dynamics and winds. Heterodyne spectroscopy in the thermal infrared can remotely measure true line shapes in relatively cool and thin regions and enable the retrieval of detailed information about local physics and chemistry. The concept and techniques of heterodyne detection will be discussed, including examples of thermal infrared photomixers and instrumentation used in studies of several astronomical sources. The use of heterodyne detection to study non-LTE phenomena, planetary aurora, minor planetary species and gas velocities (winds) will be discussed. Future technological developments and their relation to space flight missions will also be addressed.

  15. Precise and fast spatial-frequency analysis using the iterative local Fourier transform.

    PubMed

    Lee, Sukmock; Choi, Heejoo; Kim, Dae Wook

    2016-09-19

    The use of the discrete Fourier transform has decreased since the introduction of the fast Fourier transform (fFT), which is a numerically efficient computing process. This paper presents the iterative local Fourier transform (ilFT), a set of new processing algorithms that iteratively apply the discrete Fourier transform within a local and optimal frequency domain. The new technique achieves 2^10 times higher frequency resolution than the fFT within a comparable computation time. The method's superb computing efficiency, high resolution, spectrum zoom-in capability, and overall performance are evaluated and compared to other advanced high-resolution Fourier transform techniques, such as the fFT combined with several fitting methods. The effectiveness of the ilFT is demonstrated through the data analysis of a set of Talbot self-images (1280 × 1024 pixels) obtained with an experimental setup using a grating in a diverging beam produced by a coherent point source.
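
    The iterative zoom idea can be sketched as follows: take the fFT peak as a seed, then repeatedly evaluate the DFT on an ever narrower grid around the current peak. This is a simplified reading of the local-zoom concept, not the authors' implementation:

```python
import numpy as np

def dtft(x, freqs, fs):
    """Evaluate the DTFT of x at arbitrary frequencies (Hz)."""
    n = np.arange(len(x))
    return np.exp(-2j * np.pi * np.outer(freqs, n) / fs) @ x

def ilft_peak(x, fs, iters=8, points=16):
    """Locate a spectral peak by iterative local zooming: seed with the fFT
    peak bin, then re-evaluate the DTFT on an ever narrower local grid."""
    spec = np.abs(np.fft.rfft(x))
    f0 = np.fft.rfftfreq(len(x), 1.0 / fs)[spec.argmax()]   # coarse estimate
    half = fs / len(x)                                      # one FFT bin each side
    for _ in range(iters):
        grid = np.linspace(f0 - half, f0 + half, points)
        f0 = grid[np.abs(dtft(x, grid, fs)).argmax()]
        half /= points / 2.0                                # shrink the window
    return f0

fs, f_true = 1000.0, 123.4567
t = np.arange(256) / fs
x = np.sin(2 * np.pi * f_true * t)
f_est = ilft_peak(x, fs)   # refines well below the ~3.9 Hz FFT bin spacing
```

    Each iteration shrinks the search window faster than the peak-location uncertainty, so the grid always contains the true peak; the residual error is set by spectral leakage, not grid spacing.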

  16. MEG source imaging method using fast L1 minimum-norm and its applications to signals with brain noise and human resting-state source amplitude images.

    PubMed

    Huang, Ming-Xiong; Huang, Charles W; Robb, Ashley; Angeles, AnneMarie; Nichols, Sharon L; Baker, Dewleen G; Song, Tao; Harrington, Deborah L; Theilmann, Rebecca J; Srinivasan, Ramesh; Heister, David; Diwakar, Mithun; Canive, Jose M; Edgar, J Christopher; Chen, Yu-Han; Ji, Zhengwei; Shen, Max; El-Gabalawy, Fady; Levy, Michael; McLay, Robert; Webb-Murphy, Jennifer; Liu, Thomas T; Drake, Angela; Lee, Roland R

    2014-01-01

    The present study developed a fast MEG source imaging technique based on Fast Vector-based Spatio-Temporal Analysis using a L1-minimum-norm (Fast-VESTAL) and then used the method to obtain the source amplitude images of resting-state magnetoencephalography (MEG) signals for different frequency bands. The Fast-VESTAL technique consists of two steps. First, L1-minimum-norm MEG source images were obtained for the dominant spatial modes of the sensor-waveform covariance matrix. Next, accurate source time-courses with millisecond temporal resolution were obtained using an inverse operator constructed from the spatial source images of Step 1. Using simulations, Fast-VESTAL's performance was assessed for its 1) ability to localize multiple correlated sources; 2) ability to faithfully recover source time-courses; 3) robustness to different SNR conditions including SNR with negative dB levels; 4) capability to handle correlated brain noise; and 5) statistical maps of MEG source images. An objective pre-whitening method was also developed and integrated with Fast-VESTAL to remove correlated brain noise. Fast-VESTAL's performance was then examined in the analysis of human median-nerve MEG responses. The results demonstrated that this method easily distinguished sources in the entire somatosensory network. Next, Fast-VESTAL was applied to obtain the first whole-head MEG source-amplitude images from resting-state signals in 41 healthy control subjects, for all standard frequency bands. Comparisons between resting-state MEG source images and known neurophysiology were provided. Additionally, in simulations and cases with MEG human responses, the results obtained using the conventional beamformer technique were compared with those from Fast-VESTAL, which highlighted the beamformer's problems of signal leakage and distorted source time-courses. © 2013.
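
    The L1-minimum-norm step at the core of such methods can be sketched with a generic iterative shrinkage-thresholding (ISTA) solver on a toy lead field. This is an illustration of L1 minimum-norm source estimation under assumed dimensions, not the Fast-VESTAL implementation:

```python
import numpy as np

def ista_l1(A, b, lam, iters=1000):
    """Iterative shrinkage-thresholding for min 0.5*||Ax - b||^2 + lam*||x||_1.
    A generic L1 minimum-norm solver, not the actual Fast-VESTAL code."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        z = x - A.T @ (A @ x - b) / L      # gradient step on the data-fit term
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return x

rng = np.random.default_rng(0)
A = rng.normal(size=(60, 300)) / np.sqrt(60)   # toy "lead field": 60 sensors, 300 grid sources
x_true = np.zeros(300)
x_true[[25, 140]] = [3.0, -2.0]                # two focal sources
b = A @ x_true + 0.01 * rng.normal(size=60)
x_hat = ista_l1(A, b, lam=0.05)
```

    With columns of roughly unit norm, the threshold lam trades sparsity against amplitude bias; Fast-VESTAL additionally restricts the fit to the dominant spatial modes of the sensor covariance, which this sketch omits.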

  18. Acoustic sources of opportunity in the marine environment - Applied to source localization and ocean sensing

    NASA Astrophysics Data System (ADS)

    Verlinden, Christopher M.

    Controlled acoustic sources have typically been used for imaging the ocean. These sources can be used either to locate objects or to characterize the ocean environment. The processing involves signal extraction in the presence of ambient noise, with shipping being a major component of the latter. With the advent of the Automatic Identification System (AIS), which provides accurate locations of all large commercial vessels, these major noise sources can be converted from nuisance to beacons or sources of opportunity for the purpose of studying the ocean. The source localization method presented here is similar to traditional matched field processing, but differs in that libraries of data-derived measured replicas are used in place of modeled replicas. In order to account for differing source spectra between library and target vessels, cross-correlation functions are compared instead of comparing acoustic signals directly. The library of measured cross-correlation function replicas is extrapolated using waveguide invariant theory to fill gaps between ship tracks, fully populating the search grid with estimated replicas allowing for continuous tracking. In addition to source localization, two ocean sensing techniques are discussed in this dissertation. The feasibility of estimating ocean sound speed and temperature structure, using ship noise across a drifting volumetric array of hydrophones suspended beneath buoys, in a shallow water marine environment is investigated. Using the attenuation of acoustic energy along eigenray paths to invert for ocean properties such as temperature, salinity, and pH is also explored. In each of these cases, the theory is developed, tested using numerical simulations, and validated with data from acoustic field experiments.

  19. Single-source PPG-based local pulse wave velocity measurement: a potential cuffless blood pressure estimation technique.

    PubMed

    Nabeel, P M; Jayaraj, J; Mohanasankar, S

    2017-11-30

    A novel photoplethysmograph probe employing dual photodiodes excited using a single infrared light source was developed for local pulse wave velocity (PWV) measurement, and the potential use of the proposed system in cuffless blood pressure (BP) estimation techniques was demonstrated. Initial validation measurements were performed on a phantom using a reference method. Further, an in vivo study was carried out in 35 volunteers (age = 28 ± 4.5 years). The carotid local PWV, carotid-to-finger pulse transit time (PTT_R), and pulse arrival time at the carotid artery (PAT_C) were simultaneously measured. Beat-by-beat variation of the local PWV due to BP changes was studied during post-exercise relaxation. The cuffless BP estimation accuracy of local PWV, PAT_C, and PTT_R was investigated based on inter- and intra-subject models with best-case calibration. The accuracy of the proposed system, its hardware inter-channel delay (<0.1 ms), repeatability (beat-to-beat variation = 4.15%-11.38%), and reproducibility of measurement (r = 0.96) were examined. For the phantom experiment, the measured PWV values did not differ by more than 0.74 m/s from the reference PWV. Better correlation was observed between brachial BP parameters and local PWV (r = 0.74-0.78) than PTT_R (|r| = 0.62-0.67) or PAT_C (|r| = 0.52-0.68). Cuffless BP estimation using local PWV was better than that using PTT_R or PAT_C with population-specific models, and more accurate estimates of arterial BP levels were achieved using local PWV via subject-specific models (root-mean-square error ≤ 2.61 mmHg). The proposed system is thus a reliable tool for cuffless BP measurement and local estimation of arterial wall properties.
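
    The local PWV measurement reduces to distance over transit time between the two photodiode pickups; the spacing and arrival times below are illustrative values, not the paper's:

```python
SEPARATION_MM = 23.0   # assumed spacing between the two photodiodes (mm)

def local_pwv(t1_ms, t2_ms):
    """Local PWV (m/s) from pulse arrival times (ms) at the two photodiodes;
    mm/ms is numerically identical to m/s."""
    return SEPARATION_MM / (t2_ms - t1_ms)

pwv = local_pwv(100.0, 104.2)   # 4.2 ms transit over 23 mm -> about 5.48 m/s
```

    Because the spacing is small, the sub-0.1 ms inter-channel delay quoted above matters: a timing error of that size directly biases the computed velocity.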

  20. Characterizing Methane Emissions at Local Scales with a 20 Year Total Hydrocarbon Time Series, Imaging Spectrometry, and Web Facilitated Analysis

    NASA Astrophysics Data System (ADS)

    Bradley, Eliza Swan

    Methane is an important greenhouse gas for which uncertainty in local emission strengths necessitates improved source characterizations. Although CH4 plume mapping did not motivate the NASA Airborne Visible InfraRed Imaging Spectrometer (AVIRIS) design and municipal air quality monitoring stations were not intended for studying marine geological seepage, these assets have capabilities that can make them viable for studying concentrated (high flux, highly heterogeneous) CH4 sources, such as the Coal Oil Point (COP) seep field (~0.015 Tg CH4 yr^-1) offshore Santa Barbara, California. Hourly total hydrocarbon (THC) data, spanning 1990 to 2008 from an air pollution station located near COP, were analyzed and showed geologic CH4 emissions as the dominant local source. A band ratio approach was developed and applied to high glint AVIRIS data over COP, resulting in local-scale mapping of natural atmospheric CH4 plumes. A Cluster-Tuned Matched Filter (CTMF) technique was applied to Gulf of Mexico AVIRIS data to detect CH4 venting from offshore platforms. Review of 744 platform-centered CTMF subsets was facilitated through a flexible PHP-based web portal. This dissertation demonstrates the value of investigating municipal air quality data and imaging spectrometry for gathering insight into concentrated methane source emissions and highlights how flexible web-based solutions can help facilitate remote sensing research.

  1. Atmospheric inverse modeling via sparse reconstruction

    NASA Astrophysics Data System (ADS)

    Hase, Nils; Miller, Scot M.; Maaß, Peter; Notholt, Justus; Palm, Mathias; Warneke, Thorsten

    2017-10-01

    Many applications in atmospheric science involve ill-posed inverse problems. A crucial component of many inverse problems is the proper formulation of a priori knowledge about the unknown parameters. In most cases, this knowledge is expressed as a Gaussian prior. This formulation often performs well at capturing smoothed, large-scale processes but is often ill equipped to capture localized structures like large point sources or localized hot spots. Over the last decade, scientists from a diverse array of applied mathematics and engineering fields have developed sparse reconstruction techniques to identify localized structures. In this study, we present a new regularization approach for ill-posed inverse problems in atmospheric science. It is based on Tikhonov regularization with sparsity constraint and allows bounds on the parameters. We enforce sparsity using a dictionary representation system. We analyze its performance in an atmospheric inverse modeling scenario by estimating anthropogenic US methane (CH4) emissions from simulated atmospheric measurements. Different measures indicate that our sparse reconstruction approach is better able to capture large point sources or localized hot spots than other methods commonly used in atmospheric inversions. It captures the overall signal equally well but adds details on the grid scale. This feature can be of value for any inverse problem with point or spatially discrete sources. We show an example for source estimation of synthetic methane emissions from the Barnett shale formation.
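
    The "sparsity with bounds" idea can be sketched as a projected shrinkage iteration on a toy linear inversion (random sensitivity matrix, synthetic hot spots, all values assumed); the dictionary representation used by the actual method is omitted:

```python
import numpy as np

def sparse_bounded(H, y, lam, iters=800):
    """Projected ISTA for min 0.5*||Hx - y||^2 + lam*||x||_1 subject to x >= 0.
    The soft-threshold and the lower bound combine into a single clip step."""
    L = np.linalg.norm(H, 2) ** 2
    x = np.zeros(H.shape[1])
    for _ in range(iters):
        z = x - H.T @ (H @ x - y) / L      # gradient step on the data fit
        x = np.maximum(z - lam / L, 0.0)   # shrinkage and x >= 0 in one step
    return x

rng = np.random.default_rng(2)
H = rng.normal(size=(40, 120)) / np.sqrt(40)   # toy sensitivity (Jacobian) matrix
x_true = np.zeros(120)
x_true[[10, 77]] = [5.0, 2.0]                  # two point-source "hot spots"
y = H @ x_true + 0.02 * rng.normal(size=40)
x_hat = sparse_bounded(H, y, lam=0.05)
```

    A Gaussian-prior (Tikhonov-only) solution would smear these hot spots across many grid cells; the sparsity term concentrates the recovered flux in a few cells, matching the point-source character of the truth.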

  2. Efficient image enhancement using sparse source separation in the Retinex theory

    NASA Astrophysics Data System (ADS)

    Yoon, Jongsu; Choi, Jangwon; Choe, Yoonsik

    2017-11-01

    Color constancy is the feature of the human vision system (HVS) that ensures the relative constancy of the perceived color of objects under varying illumination conditions. The Retinex theory of machine vision systems is based on the HVS. Among Retinex algorithms, the physics-based algorithms are efficient; however, they generally do not satisfy the local characteristics of the original Retinex theory because they eliminate global illumination from their optimization. We apply the sparse source separation technique to the Retinex theory to present a physics-based algorithm that satisfies the locality characteristic of the original Retinex theory. Previous Retinex algorithms have limited use in image enhancement because the total variation Retinex results in an overly enhanced image and the sparse source separation Retinex cannot completely restore the original image. In contrast, our proposed method preserves the image edge and can very nearly replicate the original image without any special operation.

  3. Non-invasive Investigation of Human Hippocampal Rhythms Using Magnetoencephalography: A Review.

    PubMed

    Pu, Yi; Cheyne, Douglas O; Cornwell, Brian R; Johnson, Blake W

    2018-01-01

    Hippocampal rhythms are believed to support crucial cognitive processes including memory, navigation, and language. Due to the location of the hippocampus deep in the brain, studying hippocampal rhythms using non-invasive magnetoencephalography (MEG) recordings has generally been assumed to be methodologically challenging. However, with the advent of whole-head MEG systems in the 1990s and development of advanced source localization techniques, simulation and empirical studies have provided evidence that human hippocampal signals can be sensed by MEG and reliably reconstructed by source localization algorithms. This paper systematically reviews simulation studies and empirical evidence of the current capacities and limitations of MEG "deep source imaging" of the human hippocampus. Overall, these studies confirm that MEG provides a unique avenue to investigate human hippocampal rhythms in cognition, and can bridge the gap between animal studies and human hippocampal research, as well as elucidate the functional role and the behavioral correlates of human hippocampal oscillations.

  5. Adaptive Sparse Representation for Source Localization with Gain/Phase Errors

    PubMed Central

    Sun, Ke; Liu, Yimin; Meng, Huadong; Wang, Xiqin

    2011-01-01

    Sparse representation (SR) algorithms can be implemented for high-resolution direction of arrival (DOA) estimation. Additionally, SR can effectively separate the coherent signal sources because the spectrum estimation is based on the optimization technique, such as the L1 norm minimization, but not on subspace orthogonality. However, in the actual source localization scenario, an unknown gain/phase error between the array sensors is inevitable. Due to this nonideal factor, the predefined overcomplete basis mismatches the actual array manifold so that the estimation performance is degraded in SR. In this paper, an adaptive SR algorithm is proposed to improve the robustness with respect to the gain/phase error, where the overcomplete basis is dynamically adjusted using multiple snapshots and the sparse solution is adaptively acquired to match with the actual scenario. The simulation results demonstrate the estimation robustness to the gain/phase error using the proposed method. PMID:22163875

  6. Improving cerebellar segmentation with statistical fusion

    NASA Astrophysics Data System (ADS)

    Plassard, Andrew J.; Yang, Zhen; Prince, Jerry L.; Claassen, Daniel O.; Landman, Bennett A.

    2016-03-01

    The cerebellum is a somatotopically organized central component of the central nervous system, well known to be involved in motor coordination and increasingly recognized for roles in cognition and planning. Recent work in multi-atlas labeling has created methods that offer the potential for fully automated 3-D parcellation of the cerebellar lobules and vermis (which are organizationally equivalent to cortical gray matter areas). This work explores the trade-offs of using different statistical fusion techniques and post hoc optimizations in two datasets with distinct imaging protocols. We offer a novel fusion technique by extending the ideas of the Selective and Iterative Method for Performance Level Estimation (SIMPLE) to a patch-based performance model. We demonstrate the effectiveness of our algorithm, Non-Local SIMPLE, for segmentation of a mixed population of healthy subjects and patients with severely abnormal cerebellar anatomy. Under the first imaging protocol, we show that Non-Local SIMPLE outperforms previous gold-standard segmentation techniques. In the second imaging protocol, we show that Non-Local SIMPLE outperforms previous gold-standard techniques but is outperformed by a non-locally weighted vote with the deeper population of atlases available. This work advances the state of the art in open source cerebellar segmentation algorithms and offers the opportunity for routinely including cerebellar segmentation in magnetic resonance imaging studies that acquire whole brain T1-weighted volumes with approximately 1 mm isotropic resolution.

  7. High temperature acoustic and hybrid microwave/acoustic levitators for materials processing

    NASA Technical Reports Server (NTRS)

    Barmatz, Martin

    1990-01-01

    The physical acoustics group at the Jet Propulsion Laboratory developed a single mode acoustic levitator technique for advanced containerless materials processing. The technique was successfully demonstrated in ground based studies to temperatures of about 1000 C in a uniform temperature furnace environment and to temperatures of about 1500 C using laser beams to locally heat the sample. Researchers are evaluating microwaves as a more efficient means than lasers for locally heating a positioned sample. Recent tests of a prototype single mode hybrid microwave/acoustic levitator successfully demonstrated the feasibility of using microwave power as a heating source. The potential advantages of combining acoustic positioning forces and microwave heating for containerless processing investigations are presented in outline form.

  8. Volcanic deformation sources associated with Fogo 2011-2012 unrest, Azores - The first modelling result

    NASA Astrophysics Data System (ADS)

    Okada, Jun; Araújo, João; Bonforte, Alessandro; Guglielmino, Francesco; Lorenzo, Maria; Ferreira, Teresa

    2016-04-01

    Volcanic deformation is often observed at active volcanoes around the world using space geodesy techniques, namely GNSS and InSAR. Judging whether an eruption is imminent is more difficult when such phenomena occur at dormant volcanoes, owing to the lack of monitored eruption experience. The eruption-triggering mechanism remains controversial in many cases, but many attempts have been made to image deformation sources beneath volcanoes using geophysical inversion techniques. In this study, we present the case of Fogo (Água de Pau) volcano, S. Miguel Island, Azores, which has been dormant for over 450 years, since its 1563-1564 eruption. In recent decades Fogo has exhibited three prominent unrest episodes (1989, 2003-2006, and 2011-2012). The lack of geochemical and hydrothermal evidence for a magmatic intrusion during those episodes discourages interpretation of the unrest as renewed volcanic activity at Fogo. However, inflation/uplift of the edifice is evident for at least the last two unrest episodes, based on GPS data by Trota et al. (2009) and Okada et al. (2015), respectively. Preliminary deformation modelling based on repeated GPS campaign data suggested a shallow expanding spheroid (Trota et al. 2009) or a single Mogi source beneath the summit caldera. We performed a more integrated inversion for the 2011-2012 episode using a genetic algorithm to optimize the source parameters. The best-fit model agrees well with the regional/local tectonic lineament, suggesting a close relation between the volcanic sources and the regional/local tectonics. The regional extensional stress (between the Eurasia and Nubia plates) may play an important role in the ascent of volcanic fluids at Fogo volcano. We do not discard the possibility that Fogo may have been preparing for eruptions through intermittent ascents of magma into the shallow crust (i.e. experiencing "failed eruptions") during the apparent dormant period.
As a local monitoring agency, CIVISA (Center for Information and Seismovolcanic Surveillance of the Azores) continues to monitor Fogo's deformation in order to track changes in the source processes (source position and geometry, volume, pressure, etc.) as well as Fogo's seismicity and geochemistry.
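
    The Mogi source mentioned above has a closed-form surface displacement, which makes a toy inversion easy to sketch. The snippet below generates synthetic uplift from an assumed source and recovers depth and volume change by brute-force grid search, a simple stand-in for the genetic-algorithm optimization of the source parameters (all numbers are illustrative):

```python
import numpy as np

def mogi_uz(r, depth, dV, nu=0.25):
    """Vertical surface displacement of a Mogi point source:
    uz = (1 - nu) * dV / pi * depth / (depth^2 + r^2)^(3/2)."""
    return (1.0 - nu) * dV / np.pi * depth / (depth**2 + r**2) ** 1.5

# synthetic campaign: uplift at 8 benchmarks from an assumed source 4 km deep
r_obs = np.array([0.0, 1000.0, 2000.0, 3000.0, 4000.0, 6000.0, 8000.0, 12000.0])
uz_obs = mogi_uz(r_obs, depth=4000.0, dV=2.0e6)   # uplift in metres, dV in m^3

# brute-force grid search over (depth, dV): a simple stand-in for the
# genetic-algorithm inversion of the source parameters
depths = np.arange(1000.0, 8001.0, 250.0)
dVs = np.arange(0.5e6, 4.01e6, 0.25e6)
misfit, d_hat, v_hat = min((float(np.sum((mogi_uz(r_obs, d, v) - uz_obs) ** 2)), d, v)
                           for d in depths for v in dVs)
```

    With nu = 0.25 the prefactor (1 - nu)/pi equals the classic 3*dV/(4*pi) form of the Mogi model; the half-width of the uplift bowl scales with source depth, which is what makes depth recoverable from the radial profile.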

  9. L'Evolution des Galaxies Infrarouges: des observations cosmologiques avec ISO à une modélisation de l'infrarouge moyen au submillimétrique

    NASA Astrophysics Data System (ADS)

    Dole, H.

    2000-10-01

    This thesis deals with the analysis of the FIRBACK deep survey, performed in the far infrared at 170 microns with the Infrared Space Observatory, whose aim is the study of the galaxies contributing to the Cosmic Infrared Background, and with the modelling of galaxy evolution in the mid-infrared to submillimeter range. The FIRBACK survey covers 3.89 square degrees in 3 high galactic latitude and low foreground emission fields (2 of which are in the northern sky). I first present the techniques of reduction, processing and calibration of the ISOPHOT cosmological data. I show that there is good agreement between PHOT and DIRBE on extended emission, thanks to the derivation of the PHOT footprint. Final maps are created, and the survey is confusion limited (sigma = 45 mJy). I then present the techniques of source extraction and the photometry simulations needed to build the final catalog of 106 sources between 180 mJy (4 sigma) and 2.4 Jy. The complementary catalog contains 90 sources between 135 and 180 mJy. Galaxy counts show a large excess with respect to local counts or models (with and without evolution), compatible only with strong evolution scenarios. The Cosmic Infrared Background (CIB) is resolved at the 4% level at 170 microns. Identifications of the sources at other wavelengths suggest that most of the sources are local, but a non-negligible fraction lies above redshift 1. I developed a phenomenological model of galaxy evolution in order to constrain galaxy evolution in the infrared and to better understand what the FIRBACK sources are. Using the local Luminosity Function (LF) and template spectra of starburst galaxies, it is possible to constrain the evolution of the LF using all the available data: deep source counts at 15, 170 and 850 microns and the CIB spectrum. I show that galaxy evolution is dominated by a population of high infrared luminosity, peaking at 2.0 × 10^11 solar luminosities.
Redshift distributions are in agreement with available observations. Predictions are possible with our model for the forthcoming space missions such as SIRTF, Planck and FIRST.

  10. Partial discharge localization in power transformers based on the sequential quadratic programming-genetic algorithm adopting acoustic emission techniques

    NASA Astrophysics Data System (ADS)

    Liu, Hua-Long; Liu, Hua-Dong

    2014-10-01

    Partial discharge (PD) in power transformers is one of the prime causes of insulation degradation and power faults. Hence, it is of great importance to study techniques for the detection and localization of PD in theory and practice. The detection and localization of PD using acoustic emission (AE) techniques, a form of non-destructive testing with a powerful locating capability and high precision, has attracted more and more attention. The localization algorithm is the key factor deciding the localization accuracy in AE localization of PD. Many kinds of localization algorithms, both intelligent and non-intelligent, exist for PD source localization using AE techniques. However, the existing algorithms suffer from defects such as premature convergence, poor local optimization ability and unsuitability for field applications. To overcome the poor local optimization ability and the easily triggered premature convergence of the fundamental genetic algorithm (GA), a new kind of improved GA is proposed, namely the sequential quadratic programming-genetic algorithm (SQP-GA). In this hybrid optimization algorithm, the sequential quadratic programming (SQP) algorithm is integrated into the fundamental GA as a basic operator, so that the local searching ability of the fundamental GA is improved effectively and premature convergence is overcome. Experimental results of numerical simulations on benchmark functions show that the hybrid SQP-GA outperforms the fundamental GA in convergence speed and optimization precision. The SQP-GA is then applied to the ultrasonic localization of PD in transformers, and an SQP-GA-based ultrasonic localization method for PD in transformers is proposed. 
    Localization results based on the SQP-GA are compared with those of the GA and of other intelligent and non-intelligent algorithms. The results of both simulated examples and on-site experiments demonstrate that the SQP-GA-based localization method effectively prevents the results from being trapped in local optima, is highly feasible and well suited to field applications, and improves the precision of localization, yielding satisfactory localization performance.
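The hybrid scheme described above can be illustrated with a short sketch: a plain GA whose elite individual is polished each generation by an SQP step. Here SciPy's SLSQP solver stands in for the SQP operator, and the benchmark function, population settings and mutation parameters are illustrative assumptions, not the paper's actual configuration.

```python
import numpy as np
from scipy.optimize import minimize

def rastrigin(x):
    # Classic multimodal benchmark: global minimum 0 at the origin.
    x = np.asarray(x)
    return 10 * x.size + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))

def sqp_ga(f, dim=2, bounds=(-5.12, 5.12), pop_size=40, gens=60, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    pop = rng.uniform(lo, hi, size=(pop_size, dim))
    best_x, best_f = None, np.inf
    for _ in range(gens):
        fit = np.array([f(ind) for ind in pop])
        order = np.argsort(fit)
        pop, fit = pop[order], fit[order]
        # SQP operator: polish the current elite with SLSQP (local search),
        # supplying the local optimization ability the plain GA lacks.
        res = minimize(f, pop[0], method="SLSQP", bounds=[(lo, hi)] * dim)
        if res.fun < fit[0]:
            pop[0], fit[0] = res.x, res.fun
        if fit[0] < best_f:
            best_x, best_f = pop[0].copy(), fit[0]
        # Truncation selection from the fitter half, arithmetic crossover,
        # Gaussian mutation applied to ~20% of the children.
        parents = pop[rng.integers(0, pop_size // 2, size=(pop_size, 2))]
        w = rng.random((pop_size, 1))
        children = w * parents[:, 0] + (1 - w) * parents[:, 1]
        children += rng.normal(0, 0.3, children.shape) * (rng.random((pop_size, 1)) < 0.2)
        children[0] = pop[0]  # elitism: keep the polished best individual
        pop = np.clip(children, lo, hi)
    return best_x, best_f
```

Calling `sqp_ga(rastrigin)` returns the best point and fitness found; the SLSQP polish guarantees the reported elite sits at a local minimum, while the GA layer keeps exploring other basins.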

  11. Crosscutting Airborne Remote Sensing Technologies for Oil and Gas and Earth Science Applications

    NASA Technical Reports Server (NTRS)

    Aubrey, A. D.; Frankenberg, C.; Green, R. O.; Eastwood, M. L.; Thompson, D. R.; Thorpe, A. K.

    2015-01-01

    Airborne imaging spectroscopy has evolved dramatically since the 1980s as a robust remote sensing technique used to generate 2-dimensional maps of surface properties over large spatial areas. Traditional applications for passive airborne imaging spectroscopy include interrogation of surface composition, such as mapping of vegetation diversity and surface geological composition. Two recent applications are particularly relevant to the needs of both the oil and gas as well as government sectors: quantification of surficial hydrocarbon thickness in aquatic environments and mapping atmospheric greenhouse gas components. These techniques provide valuable capabilities for characterizing petroleum seepage in addition to detecting and quantifying fugitive emissions. New empirical data providing insight into the source strength of anthropogenic methane will be reviewed, with particular emphasis on the evolving constraints enabled by new methane remote sensing techniques. Contemporary studies attribute high-strength point sources as significantly contributing to the national methane inventory and underscore the need for high performance remote sensing technologies that provide quantitative leak detection. Imaging sensors that map spatial distributions of methane anomalies provide effective techniques to detect, localize, and quantify fugitive leaks. Airborne remote sensing instruments provide the unique combination of high spatial resolution (<1 m) and large coverage required to directly attribute methane emissions to individual emission sources. This capability cannot currently be achieved using spaceborne sensors. In this study, results from recent NASA remote sensing field experiments focused on point-source leak detection will be highlighted. This includes existing quantitative capabilities for oil and methane using state-of-the-art airborne remote sensing instruments. 
While these capabilities are of interest to NASA for assessment of environmental impact and global climate change, industry similarly seeks to detect and localize leaks of both oil and methane across operating fields. In some cases, higher sensitivities desired for upstream and downstream applications can only be provided by new airborne remote sensing instruments tailored specifically for a given application. There exists a unique opportunity for alignment of efforts between commercial and government sectors to advance the next generation of instruments to provide more sensitive leak detection capabilities, including those for quantitative source strength determination.

  12. A review of second law techniques applicable to basic thermal science research

    NASA Astrophysics Data System (ADS)

    Drost, M. Kevin; Zamorski, Joseph R.

    1988-11-01

    This paper reports the results of a review of second law analysis techniques that can contribute to basic research in the thermal sciences. The review demonstrated that second law analysis has a role in basic thermal science research. Unlike traditional techniques, second law analysis accurately identifies the sources and locations of thermodynamic losses. This allows the development of innovative solutions to thermal science problems by directing research to the key technical issues. Two classes of second law techniques were identified as particularly useful. First, system and component investigations can provide information on the source and nature of irreversibilities on a macroscopic scale. This information will help to identify new research topics and will support the evaluation of current research efforts. Second, the differential approach can provide information on the causes and the spatial and temporal distribution of local irreversibilities. This information enhances the understanding of fluid mechanics, thermodynamics, and heat and mass transfer, and may suggest innovative methods for reducing irreversibilities.

  13. CSI-EPT in Presence of RF-Shield for MR-Coils.

    PubMed

    Arduino, Alessandro; Zilberti, Luca; Chiampi, Mario; Bottauscio, Oriano

    2017-07-01

    Contrast source inversion electric properties tomography (CSI-EPT) is a recently developed technique for electric properties tomography that recovers the electric properties distribution starting from measurements performed by magnetic resonance imaging scanners. This method is an optimal control approach based on the contrast source inversion technique, distinguished from other electric properties tomography techniques by its capability to also recover the local specific absorption rate distribution, essential for online dosimetry. Up to now, CSI-EPT has only been described in terms of integral equations, limiting its applicability to a homogeneous unbounded background. In order to extend the method to the presence of a shield in the domain, as in the recurring case of shielded radio frequency coils, a more general formulation of CSI-EPT, based on a functional viewpoint, is introduced here. Two different implementations of CSI-EPT are proposed for a 2-D transverse magnetic model problem, one dealing with an unbounded domain and one considering the presence of a perfectly conductive shield. The two implementations are applied to the same virtual measurements obtained by numerically simulating a shielded radio frequency coil. The results are compared in terms of both electric properties recovery and local specific absorption rate estimation, in order to investigate the need for accurate modeling of the underlying physical problem.

  14. Performance analysis of the Microsoft Kinect sensor for 2D Simultaneous Localization and Mapping (SLAM) techniques.

    PubMed

    Kamarudin, Kamarulzaman; Mamduh, Syed Muhammad; Shakaff, Ali Yeon Md; Zakaria, Ammar

    2014-12-05

    This paper presents a performance analysis of two open-source, laser scanner-based Simultaneous Localization and Mapping (SLAM) techniques (i.e., Gmapping and Hector SLAM) using a Microsoft Kinect to replace the laser sensor. Furthermore, the paper proposes a new system integration approach whereby a Linux virtual machine is used to run the open source SLAM algorithms. The experiments were conducted in two different environments: a small room with no features and a typical office corridor with desks and chairs. Using the data logged from real-time experiments, each SLAM technique was simulated and tested with different parameter settings. The results show that the system is able to achieve real-time SLAM operation. The system implementation offers a simple and reliable way to compare the performance of Windows-based SLAM algorithms with the algorithms typically implemented in a Robot Operating System (ROS). The results also indicate that certain modifications to the default laser scanner-based parameters are able to improve the map accuracy. However, the limited field of view and range of the Kinect's depth sensor often cause the map to be inaccurate, especially in featureless areas; the Kinect sensor is therefore not a direct replacement for a laser scanner, but rather offers a feasible alternative for 2D SLAM tasks.
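Both SLAM back-ends compared above ultimately maintain an occupancy grid built from range scans. The following is a minimal, hypothetical sketch of the underlying log-odds grid update; the increments, resolution and ray-walking scheme are illustrative assumptions, not Gmapping's or Hector SLAM's actual implementation.

```python
import math
import numpy as np

L_OCC, L_FREE = 0.85, -0.4  # log-odds increments (hypothetical tuning)

def integrate_scan(grid, pose, angles, ranges, resolution=0.05):
    """Update a log-odds occupancy grid with one 2D range scan.

    grid: 2D float array of log-odds, cell (0, 0) at the world origin
    pose: (x, y, theta) of the sensor in metres/radians
    """
    x, y, theta = pose
    for a, r in zip(angles, ranges):
        # Endpoint of the beam in world coordinates.
        ex = x + r * math.cos(theta + a)
        ey = y + r * math.sin(theta + a)
        # Walk along the ray in fixed steps, marking traversed cells free.
        steps = max(1, int(r / resolution))
        for s in range(steps):
            px = x + (s / steps) * (ex - x)
            py = y + (s / steps) * (ey - y)
            i, j = int(py / resolution), int(px / resolution)
            if 0 <= i < grid.shape[0] and 0 <= j < grid.shape[1]:
                grid[i, j] += L_FREE
        # Mark the endpoint cell as occupied.
        i, j = int(ey / resolution), int(ex / resolution)
        if 0 <= i < grid.shape[0] and 0 <= j < grid.shape[1]:
            grid[i, j] += L_OCC
    return grid
```

The Kinect's narrow field of view enters this picture directly: fewer beams per scan means fewer cells updated, which is one reason featureless areas map poorly.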

  15. Performance Analysis of the Microsoft Kinect Sensor for 2D Simultaneous Localization and Mapping (SLAM) Techniques

    PubMed Central

    Kamarudin, Kamarulzaman; Mamduh, Syed Muhammad; Shakaff, Ali Yeon Md; Zakaria, Ammar

    2014-01-01

    This paper presents a performance analysis of two open-source, laser scanner-based Simultaneous Localization and Mapping (SLAM) techniques (i.e., Gmapping and Hector SLAM) using a Microsoft Kinect to replace the laser sensor. Furthermore, the paper proposes a new system integration approach whereby a Linux virtual machine is used to run the open source SLAM algorithms. The experiments were conducted in two different environments: a small room with no features and a typical office corridor with desks and chairs. Using the data logged from real-time experiments, each SLAM technique was simulated and tested with different parameter settings. The results show that the system is able to achieve real-time SLAM operation. The system implementation offers a simple and reliable way to compare the performance of Windows-based SLAM algorithms with the algorithms typically implemented in a Robot Operating System (ROS). The results also indicate that certain modifications to the default laser scanner-based parameters are able to improve the map accuracy. However, the limited field of view and range of the Kinect's depth sensor often cause the map to be inaccurate, especially in featureless areas; the Kinect sensor is therefore not a direct replacement for a laser scanner, but rather offers a feasible alternative for 2D SLAM tasks. PMID:25490595

  16. Multiphoton imaging with high peak power VECSELs

    NASA Astrophysics Data System (ADS)

    Mirkhanov, Shamil; Quarterman, Adrian H.; Swift, Samuel; Praveen, Bavishna B.; Smyth, Conor J. C.; Wilcox, Keith G.

    2016-03-01

    Multiphoton imaging (MPI) has become one of the key non-invasive light microscopy techniques. It allows deep tissue imaging with high resolution and less photo-damage than conventional confocal microscopy. MPI is a type of laser-scanning microscopy that employs localized nonlinear excitation, so that fluorescence is excited only within the scanned focal volume. For many years, Ti:sapphire femtosecond lasers have been the leading light sources for MPI applications. However, recent developments in laser sources and new types of fluorophores indicate that longer wavelength excitation could be a good alternative for these applications. Mode-locked VECSELs have the potential to be low cost, compact light sources for MPI systems, with the additional advantage of broad wavelength coverage through use of different semiconductor material systems. Here, we use a femtosecond fiber laser to investigate the effects that average power and repetition rate have on MPI image quality, to allow us to optimize our mode-locked VECSELs for MPI.

  17. Comparison of Highly Resolved Model-Based Exposure Metrics for Traffic-Related Air Pollutants to Support Environmental Health Studies

    EPA Science Inventory

    Human exposure to air pollution in many studies is represented by ambient concentrations from space-time kriging of observed values. Space-time kriging techniques based on a limited number of ambient monitors may fail to capture the concentration from local sources. Further, beca...

  18. Deblurring

    NASA Technical Reports Server (NTRS)

    Gevins, A.; Le, J.; Leong, H.; McEvoy, L. K.; Smith, M. E.

    1999-01-01

    In most instances, traditional EEG methodology provides insufficient spatial detail to identify relationships between brain electrical events and structures and functions visualized by magnetic resonance imaging or positron emission tomography. This article describes a method called Deblurring for increasing the spatial detail of the EEG and for fusing neurophysiologic and neuroanatomic data. Deblurring estimates potentials near the outer convexity of the cortex using a realistic finite element model of the structure of a subject's head determined from their magnetic resonance images. Deblurring is not a source localization technique and thus makes no assumptions about the number or type of generator sources. The validity of Deblurring has been initially tested by comparing deblurred data with potentials measured with subdural grid recordings. Results suggest that deblurred topographic maps, registered with a subject's magnetic resonance imaging and rendered in three dimensions, provide better spatial detail than has heretofore been obtained with scalp EEG recordings. Example results are presented from research studies of somatosensory stimulation, movement, language, attention and working memory. Deblurred ictal EEG data are also presented, indicating that this technique may have future clinical application as an aid to seizure localization and surgical planning.

  19. Consistent modelling of wind turbine noise propagation from source to receiver.

    PubMed

    Barlas, Emre; Zhu, Wei Jun; Shen, Wen Zhong; Dag, Kaya O; Moriarty, Patrick

    2017-11-01

    The unsteady nature of wind turbine noise is a major reason for annoyance. The variation of far-field sound pressure levels is not only caused by the continuous change in wind turbine noise source levels but also by the unsteady flow field and the ground characteristics between the turbine and receiver. To take these phenomena into account, a consistent numerical technique that models the sound propagation from the source to receiver is developed. Large eddy simulation with an actuator line technique is employed for the flow modelling and the corresponding flow fields are used to simulate sound generation and propagation. The local blade relative velocity, angle of attack, and turbulence characteristics are input to the sound generation model. Time-dependent blade locations and the velocity between the noise source and receiver are considered within a quasi-3D propagation model. Long-range noise propagation of a 5 MW wind turbine is investigated. Sound pressure level time series evaluated at the source time are studied for varying wind speeds, surface roughness, and ground impedances within a 2000 m radius from the turbine.

  20. Consistent modelling of wind turbine noise propagation from source to receiver

    DOE PAGES

    Barlas, Emre; Zhu, Wei Jun; Shen, Wen Zhong; ...

    2017-11-28

    The unsteady nature of wind turbine noise is a major reason for annoyance. The variation of far-field sound pressure levels is not only caused by the continuous change in wind turbine noise source levels but also by the unsteady flow field and the ground characteristics between the turbine and receiver. To take these phenomena into account, a consistent numerical technique that models the sound propagation from the source to receiver is developed. Large eddy simulation with an actuator line technique is employed for the flow modelling and the corresponding flow fields are used to simulate sound generation and propagation. The local blade relative velocity, angle of attack, and turbulence characteristics are input to the sound generation model. Time-dependent blade locations and the velocity between the noise source and receiver are considered within a quasi-3D propagation model. Long-range noise propagation of a 5 MW wind turbine is investigated. Sound pressure level time series evaluated at the source time are studied for varying wind speeds, surface roughness, and ground impedances within a 2000 m radius from the turbine.

  1. Consistent modelling of wind turbine noise propagation from source to receiver

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barlas, Emre; Zhu, Wei Jun; Shen, Wen Zhong

    The unsteady nature of wind turbine noise is a major reason for annoyance. The variation of far-field sound pressure levels is not only caused by the continuous change in wind turbine noise source levels but also by the unsteady flow field and the ground characteristics between the turbine and receiver. To take these phenomena into account, a consistent numerical technique that models the sound propagation from the source to receiver is developed. Large eddy simulation with an actuator line technique is employed for the flow modelling and the corresponding flow fields are used to simulate sound generation and propagation. The local blade relative velocity, angle of attack, and turbulence characteristics are input to the sound generation model. Time-dependent blade locations and the velocity between the noise source and receiver are considered within a quasi-3D propagation model. Long-range noise propagation of a 5 MW wind turbine is investigated. Sound pressure level time series evaluated at the source time are studied for varying wind speeds, surface roughness, and ground impedances within a 2000 m radius from the turbine.

  2. Maximizing the spatial representativeness of NO2 monitoring data using a combination of local wind-based sectoral division and seasonal and diurnal correction factors.

    PubMed

    Donnelly, Aoife; Naughton, Owen; Misstear, Bruce; Broderick, Brian

    2016-10-14

    This article describes a new methodology for increasing the spatial representativeness of individual monitoring sites. Air pollution levels at a given point are influenced by emission sources in the immediate vicinity. Since emission sources are rarely uniformly distributed around a site, concentration levels will inevitably be most affected by the sources in the prevailing upwind direction. The methodology provides a means of capturing this effect and providing additional information regarding source/pollution relationships. The methodology allows for the division of the air quality data from a given monitoring site into a number of sectors or wedges based on wind direction and estimation of annual mean values for each sector, thus optimising the information that can be obtained from a single monitoring station. The method corrects for short-term data, diurnal and seasonal variations in concentrations (which can produce uneven weighting of data within each sector) and uneven frequency of wind directions. Significant improvements in correlations between the air quality data and the spatial air quality indicators were obtained after application of the correction factors. This suggests the application of these techniques would be of significant benefit in land-use regression modelling studies. Furthermore, the method was found to be very useful for estimating long-term mean values and wind direction sector values using only short-term monitoring data. The methods presented in this article can result in cost savings through minimising the number of monitoring sites required for air quality studies while also capturing a greater degree of variability in spatial characteristics. In this way, more reliable, but also more expensive monitoring techniques can be used in preference to a higher number of low-cost but less reliable techniques. 
The methods described in this article have applications in local air quality management, source receptor analysis, land-use regression mapping and modelling and population exposure studies.
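The sector-division and correction idea can be sketched as follows, assuming hourly data with columns `no2` and `wd` (wind direction in degrees); the eight-sector split and the ratio-based diurnal correction here are simplified stand-ins for the article's full seasonal and diurnal correction factors, and the function name is hypothetical.

```python
import numpy as np
import pandas as pd

def sector_means(df, n_sectors=8):
    """Mean NO2 per wind-direction sector, corrected for the diurnal
    cycle so that unevenly sampled hours do not bias a sector.

    df: DataFrame with columns 'no2' and 'wd' and a DatetimeIndex.
    """
    out = df.copy()
    # Diurnal correction factor: overall mean / hour-of-day mean.
    hourly = out.groupby(out.index.hour)["no2"].transform("mean")
    out["no2_corr"] = out["no2"] * out["no2"].mean() / hourly
    # Divide the wind rose into equal sectors (sector 0 centred on north).
    width = 360.0 / n_sectors
    out["sector"] = ((out["wd"] + width / 2) % 360 // width).astype(int)
    return out.groupby("sector")["no2_corr"].mean()
```

Each sector mean then serves as a separate data point in a land-use regression, multiplying the spatial information obtained from a single monitoring station.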

  3. Estimating source parameters from deformation data, with an application to the March 1997 earthquake swarm off the Izu Peninsula, Japan

    NASA Astrophysics Data System (ADS)

    Cervelli, P.; Murray, M. H.; Segall, P.; Aoki, Y.; Kato, T.

    2001-06-01

    We have applied two Monte Carlo optimization techniques, simulated annealing and random cost, to the inversion of deformation data for fault and magma chamber geometry. These techniques involve an element of randomness that permits them to escape local minima and ultimately converge to the global minimum of misfit space. We have tested the Monte Carlo algorithms on two synthetic data sets. We have also compared them to one another in terms of their efficiency and reliability. We have applied the bootstrap method to estimate confidence intervals for the source parameters, including the correlations inherent in the data. Additionally, we present methods that use the information from the bootstrapping procedure to visualize the correlations between the different model parameters. We have applied these techniques to GPS, tilt, and leveling data from the March 1997 earthquake swarm off the Izu Peninsula, Japan. Using the two Monte Carlo algorithms, we have inferred two sources, a dike and a fault, that fit the deformation data and the patterns of seismicity and that are consistent with the regional stress field.
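As an illustration of the Monte Carlo inversion idea, here is a hedged sketch of simulated annealing applied to a toy deformation source (a two-parameter Mogi-type point source); the cooling schedule, step size and parameterization are illustrative assumptions, not the authors' setup.

```python
import math
import random

def mogi_uz(r, depth, strength):
    # Vertical surface displacement of a Mogi-type point source at
    # radial distance r (all quantities in consistent arbitrary units).
    return strength * depth / (depth**2 + r**2) ** 1.5

def misfit(params, r_obs, uz_obs):
    depth, strength = params
    return sum((mogi_uz(r, depth, strength) - u) ** 2
               for r, u in zip(r_obs, uz_obs))

def anneal(r_obs, uz_obs, x0=(1.0, 1.0), steps=20000, t0=1.0, seed=42):
    """Simulated annealing: perturbations are always accepted when they
    lower the misfit, and occasionally accepted when they raise it,
    which lets the walk escape local minima of the misfit surface."""
    rng = random.Random(seed)
    x, fx = list(x0), misfit(x0, r_obs, uz_obs)
    best, fbest = list(x), fx
    for k in range(steps):
        t = t0 * (1.0 - k / steps) + 1e-9  # linear cooling schedule
        cand = [max(1e-3, xi + rng.gauss(0, 0.1)) for xi in x]
        fc = misfit(cand, r_obs, uz_obs)
        if fc < fx or rng.random() < math.exp(-(fc - fx) / t):
            x, fx = cand, fc
            if fx < fbest:
                best, fbest = list(x), fx
    return best, fbest
```

Running the same search from many random starting points (or bootstrapped data sets, as in the paper) then yields the parameter scatter from which confidence intervals can be read off.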

  4. Present Kinematic Regime and Recent Seismicity of Gulf Suez, Egypt

    NASA Astrophysics Data System (ADS)

    Mohamed, G.-E. A.; Abd El-Aal, A. K.

    2018-01-01

    In this study we investigate the recent seismicity and present kinematic regime of the northern and middle zones of the Gulf of Suez, as inferred from moment tensor solutions and focal mechanisms of local earthquakes that occurred in this region. On 18 and 22 July 2014, two moderate-size earthquakes of local magnitudes 4.2 and 4.1 struck the northern zone of the Gulf of Suez near Suez City. These events were instrumentally recorded by the Egyptian National Seismic Network (ENSN). The earthquakes were felt at Suez City and in the greater Cairo metropolitan zone, but no losses were reported. The source mechanisms and source parameters of these events were determined from near-source waveform data recorded at very broadband stations of ENSN, supported by P-wave polarity data from short period stations. The inversion method and software used here account for the effect of the source time function, which is ignored in most program suites for moment tensor inversion with near-source seismograms. The inversion results indicate that the estimated seismic moments of the two earthquakes are 0.6621 × 10^15 and 0.4447 × 10^15 N m, corresponding to moment magnitudes Mw 3.8 and 3.7, respectively. The fault plane solutions obtained from both the waveform inversion and the first-arrival polarities indicate the dominance of normal faulting. We also evaluated the stress field in the northern and middle zones of the Gulf of Suez using a multiple inverse method. The principal strain axis shows that the deformation is taken up mainly as stretching in the E-W and NE-SW directions.

  5. Regional Body-Wave Attenuation Using a Coda Source Normalization Method: Application to MEDNET Records of Earthquakes in Italy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Walter, W R; Mayeda, K; Malagnini, L

    2007-02-01

    We develop a new methodology to determine apparent attenuation for the regional seismic phases Pn, Pg, Sn, and Lg using coda-derived source spectra. The local-to-regional coda methodology (Mayeda, 1993; Mayeda and Walter, 1996; Mayeda et al., 2003) is a very stable way to obtain source spectra from sparse networks using as few as one station, even if direct waves are clipped. We develop a two-step process to isolate the frequency-dependent Q. First, we correct the observed direct wave amplitudes for an assumed geometrical spreading. Next, an apparent Q, combining path and site attenuation, is determined from the difference between the spreading-corrected amplitude and the independently determined source spectra derived from the coda methodology. We apply the technique to 50 earthquakes with magnitudes greater than 4.0 in central Italy as recorded by MEDNET broadband stations around the Mediterranean at local-to-regional distances. This is an ideal test region due to its high attenuation, complex propagation, and availability of many moderate sized earthquakes. We find that a power law attenuation of the form Q(f) = Q0 f^γ fits all the phases quite well over the 0.5 to 8 Hz band. At most stations, the measured apparent Q values are quite repeatable from event to event. Finding the attenuation function in this manner guarantees a close match between inferred source spectra from direct waves and coda techniques. This is important if coda and direct wave amplitudes are to produce consistent seismic results.
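Once apparent Q values have been measured at several frequencies, fitting the power-law form Q(f) = Q0 f^γ reduces to a linear regression in log-log space, since log Q = log Q0 + γ log f. A small sketch (the function name and interface are hypothetical):

```python
import numpy as np

def fit_power_law_q(freqs, q_apparent):
    """Fit Q(f) = Q0 * f**gamma by least squares in log-log space,
    where log Q = log Q0 + gamma * log f is linear in log f."""
    gamma, log_q0 = np.polyfit(np.log(freqs), np.log(q_apparent), 1)
    return np.exp(log_q0), gamma
```

For example, apparent-Q estimates sampled over the 0.5 to 8 Hz band feed directly into `fit_power_law_q` to recover the station's Q0 and γ.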

  6. Multi-Disciplinary Approach to Trace Contamination of Streams and Beaches

    USGS Publications Warehouse

    Nickles, James

    2008-01-01

    Concentrations of fecal-indicator bacteria in urban streams and ocean beaches in and around Santa Barbara occasionally can exceed public-health standards for recreation. The U.S. Geological Survey (USGS), working with the City of Santa Barbara, has used multi-disciplinary science to trace the sources of the bacteria. This research is helping local agencies take steps to improve recreational water quality. The USGS used an approach that combined traditional hydrologic and microbiological data, with state-of-the-art genetic, molecular, and chemical tracer analysis. This research integrated physical data on streamflow, ground water, and near-shore oceanography, and made extensive use of modern geophysical and isotopic techniques. Using those techniques, the USGS was able to evaluate the movement of water and the exchange of ground water with near-shore ocean water. The USGS has found that most fecal bacteria in the urban streams came from storm-drain discharges, with the highest concentrations occurring during storm flow. During low streamflow, the concentrations varied as much as three-fold, owing to variable contribution of non-point sources such as outdoor water use and urban runoff to streamflow. Fecal indicator bacteria along ocean beaches were from both stream discharge to the ocean and from non-point sources such as bird fecal material that accumulates in kelp and sand at the high-tide line. Low levels of human-specific Bacteroides, suggesting fecal material from a human source, were consistently detected on area beaches. One potential source, a local sewer line buried beneath the beach, was found not to be responsible for the fecal bacteria.

  7. iElectrodes: A Comprehensive Open-Source Toolbox for Depth and Subdural Grid Electrode Localization.

    PubMed

    Blenkmann, Alejandro O; Phillips, Holly N; Princich, Juan P; Rowe, James B; Bekinschtein, Tristan A; Muravchik, Carlos H; Kochen, Silvia

    2017-01-01

    The localization of intracranial electrodes is a fundamental step in the analysis of invasive electroencephalography (EEG) recordings in research and clinical practice. The conclusions reached from the analysis of these recordings rely on the accuracy of electrode localization in relationship to brain anatomy. However, currently available techniques for localizing electrodes from magnetic resonance (MR) and/or computerized tomography (CT) images are time consuming and/or limited to particular electrode types or shapes. Here we present iElectrodes, an open-source toolbox that provides robust and accurate semi-automatic localization of both subdural grids and depth electrodes. Using pre- and post-implantation images, the method takes 2-3 min to localize the coordinates in each electrode array and automatically number the electrodes. The proposed pre-processing pipeline allows one to work in a normalized space and to automatically obtain anatomical labels of the localized electrodes without neuroimaging experts. We validated the method with data from 22 patients implanted with a total of 1,242 electrodes. We show that localization distances were within 0.56 mm of those achieved by experienced manual evaluators. iElectrodes provided additional advantages in terms of robustness (even with severe perioperative cerebral distortions), speed (less than half the operator time compared to expert manual localization), simplicity, utility across multiple electrode types (surface and depth electrodes) and all brain regions.

  8. iElectrodes: A Comprehensive Open-Source Toolbox for Depth and Subdural Grid Electrode Localization

    PubMed Central

    Blenkmann, Alejandro O.; Phillips, Holly N.; Princich, Juan P.; Rowe, James B.; Bekinschtein, Tristan A.; Muravchik, Carlos H.; Kochen, Silvia

    2017-01-01

    The localization of intracranial electrodes is a fundamental step in the analysis of invasive electroencephalography (EEG) recordings in research and clinical practice. The conclusions reached from the analysis of these recordings rely on the accuracy of electrode localization in relationship to brain anatomy. However, currently available techniques for localizing electrodes from magnetic resonance (MR) and/or computerized tomography (CT) images are time consuming and/or limited to particular electrode types or shapes. Here we present iElectrodes, an open-source toolbox that provides robust and accurate semi-automatic localization of both subdural grids and depth electrodes. Using pre- and post-implantation images, the method takes 2–3 min to localize the coordinates in each electrode array and automatically number the electrodes. The proposed pre-processing pipeline allows one to work in a normalized space and to automatically obtain anatomical labels of the localized electrodes without neuroimaging experts. We validated the method with data from 22 patients implanted with a total of 1,242 electrodes. We show that localization distances were within 0.56 mm of those achieved by experienced manual evaluators. iElectrodes provided additional advantages in terms of robustness (even with severe perioperative cerebral distortions), speed (less than half the operator time compared to expert manual localization), simplicity, utility across multiple electrode types (surface and depth electrodes) and all brain regions. PMID:28303098

  9. Local measurements of the diffusion constant in multiple scattering media: Application to human trabecular bone imaging

    NASA Astrophysics Data System (ADS)

    Aubry, Alexandre; Derode, Arnaud; Padilla, Frédéric

    2008-03-01

    We present local measurements of the diffusion constant for ultrasonic waves undergoing multiple scattering. The experimental setup uses a coherent array of programmable transducers. By achieving Gaussian beamforming at emission and reception, an array of virtual sources and receivers located in the near field is constructed. A matrix treatment is proposed to separate the incoherent intensity from the coherent backscattering peak. Local measurements of the diffusion constant D are then achieved. This technique is applied to a real case: a sample of human trabecular bone for which the ultrasonic characterization of multiple scattering is an issue.

  10. Localization of quenches and mechanical disturbances in the Mu2e transport solenoid prototype using acoustic emission technique

    DOE PAGES

    Marchevsky, M.; Ambrosio, G.; Lamm, M.; ...

    2016-02-12

    Acoustic emission (AE) detection is a noninvasive technique allowing the localization of mechanical events and quenches in superconducting magnets. Application of the AE technique is especially advantageous in situations where magnet integrity can be jeopardized by the use of voltage taps or inductive pickup coils. As the prototype module of the transport solenoid (TS) for the Mu2e experiment at Fermilab represents such a special case, we have developed a dedicated six-channel AE detection system and accompanying software aimed at localizing mechanical events during the coil cold testing. The AE sensors, based on transversely polarized piezoceramic washers combined with cryogenic preamplifiers, were mounted at the outer surface of the solenoid aluminum shell, with a 60° angular step around the circumference. Acoustic signals were simultaneously acquired at a rate of 500 kS/s, prefiltered, and sorted based on their arrival time. Next, based on the arrival timing, angular and axial coordinates of the AE sources within the magnet structure were calculated. Finally, we present AE measurement results obtained during cooldown, spot heater firing, and spontaneous quenching of the Mu2e TS module prototype and discuss their relevance for mechanical stability assessment and quench localization.
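The arrival-time localization step described in this record can be sketched as follows. This is a hedged toy reconstruction, not the Fermilab system: the shell radius, wave speed, two-ring sensor layout, grid resolution, and noise-free timing are all assumptions made for illustration.

```python
import numpy as np

R = 0.4                                    # shell radius, m (assumed)
V = 5000.0                                 # wave speed in the shell, m/s (assumed)
sensor_phi = np.deg2rad(np.arange(0, 360, 60))   # sensors every 60 degrees
sensor_z = np.tile([0.0, 0.3], 3)          # assumed two axial rings (breaks z symmetry)

def travel_time(phi, z):
    # Unroll the cylinder: arc length around the circumference plus axial offset.
    dphi = np.angle(np.exp(1j * (phi - sensor_phi)))  # wrapped angle difference
    return np.hypot(R * dphi, z - sensor_z) / V

def locate(arrivals):
    """Grid search over (phi, z); the unknown origin time t0 is fitted in
    closed form (least squares) at every candidate grid point."""
    best, best_err = None, np.inf
    for phi in np.linspace(0.0, 2 * np.pi, 181):
        for z in np.linspace(-0.5, 0.5, 101):
            tt = travel_time(phi, z)
            t0 = np.mean(arrivals - tt)
            err = np.sum((arrivals - (t0 + tt)) ** 2)
            if err < best_err:
                best, best_err = (phi, z), err
    return best

# Synthetic event at phi = 120 deg, z = 0.2 m, with a 1 ms origin time.
arrivals = travel_time(np.deg2rad(120), 0.2) + 1e-3
phi_hat, z_hat = locate(arrivals)
print(f"{np.rad2deg(phi_hat):.0f} deg, z = {z_hat:.2f} m")  # 120 deg, z = 0.20 m
```

A real system would add arrival-time picking from the raw waveforms and account for anisotropic wave speeds in the coil structure.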

  11. Hyperspectral and multispectral bioluminescence optical tomography for small animal imaging.

    PubMed

    Chaudhari, Abhijit J; Darvas, Felix; Bading, James R; Moats, Rex A; Conti, Peter S; Smith, Desmond J; Cherry, Simon R; Leahy, Richard M

    2005-12-07

    For bioluminescence imaging studies in small animals, it is important to be able to accurately localize the three-dimensional (3D) distribution of the underlying bioluminescent source. The spectrum of light produced by the source that escapes the subject varies with the depth of the emission source because of the wavelength-dependence of the optical properties of tissue. Consequently, multispectral or hyperspectral data acquisition should help in the 3D localization of deep sources. In this paper, we describe a framework for fully 3D bioluminescence tomographic image acquisition and reconstruction that exploits spectral information. We describe regularized tomographic reconstruction techniques that use semi-infinite slab or FEM-based diffusion approximations of photon transport through turbid media. Singular value decomposition analysis was used for data dimensionality reduction and to illustrate the advantage of using hyperspectral rather than achromatic data. Simulation studies in an atlas-mouse geometry indicated that sub-millimeter resolution may be attainable given accurate knowledge of the optical properties of the animal. A fixed arrangement of mirrors and a single CCD camera were used for simultaneous acquisition of multispectral imaging data over most of the surface of the animal. Phantom studies conducted using this system demonstrated our ability to accurately localize deep point-like sources and show that a resolution of 1.5 to 2.2 mm for depths up to 6 mm can be achieved. We also include an in vivo study of a mouse with a brain tumour expressing firefly luciferase. Co-registration of the reconstructed 3D bioluminescent image with magnetic resonance images indicated good anatomical localization of the tumour.
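The role of singular value decomposition in the dimensionality argument above can be illustrated with a toy example. The matrices below are random stand-ins, not the paper's diffusion-based forward models; the point is only that stacking per-wavelength measurements raises the effective rank of the system relative to summing them into a single achromatic measurement.

```python
import numpy as np

rng = np.random.default_rng(0)
n_det, n_src, n_bands = 40, 100, 8

# One synthetic sensitivity matrix per wavelength band; the per-band column
# scaling mimics wavelength-dependent attenuation (purely illustrative).
bands = [rng.standard_normal((n_det, n_src)) * rng.uniform(0.2, 1.0, n_src)
         for _ in range(n_bands)]

achromatic = sum(bands)            # bands merged into a single measurement set
multispectral = np.vstack(bands)   # bands kept separate (stacked row blocks)

def effective_rank(A, tol=1e-2):
    # number of singular values within a factor `tol` of the largest
    s = np.linalg.svd(A, compute_uv=False)
    return int(np.sum(s / s[0] > tol))

print(effective_rank(achromatic), effective_rank(multispectral))
```

The stacked system retains many more usable singular modes, which is the linear-algebra reason hyperspectral data improves depth localization.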

  12. Source apportionment of formaldehyde during TexAQS 2006 using a source-oriented chemical transport model

    NASA Astrophysics Data System (ADS)

    Zhang, Hongliang; Li, Jingyi; Ying, Qi; Guven, Birnur Buzcu; Olaguer, Eduardo P.

    2013-02-01

    In this study, a source-oriented version of the Community Multiscale Air Quality (CMAQ) model was developed and used to quantify the contributions of five major local emission source types in Southeast Texas (vehicles, industry, natural gas combustion, wildfires, and biogenic sources), as well as upwind sources, to regional primary and secondary formaldehyde (HCHO) concentrations. Predicted HCHO concentrations agree well with observations at two urban sites (the Moody Tower (MT) site at the University of Houston and the Haden Road #3 (HRM-3) site operated by the Texas Commission on Environmental Quality). However, the model underestimates concentrations at an industrial site (Lynchburg Ferry). Throughout most of Southeast Texas, primary HCHO accounts for approximately 20-30% of total HCHO, while the remainder is due to secondary HCHO (30-50%) and upwind sources (20-50%). Biogenic sources, natural gas combustion, and vehicles are important sources of primary HCHO in the urban Houston area, accounting for 10-20%, 10-30%, and 20-60% of total primary HCHO, respectively. Biogenic sources, industry, and vehicles are the top three sources of secondary HCHO, accounting for 30-50%, 10-30%, and 5-15% of overall secondary HCHO, respectively. It was also found that over 70% of PAN in the Houston area is due to upwind sources, and only 30% is formed locally. The model-predicted source contributions to HCHO at the MT site generally agree with source apportionment results obtained from the Positive Matrix Factorization (PMF) technique.

  13. Aerosol Pollution from Small Combustors in a Village

    PubMed Central

    Zwozdziak, A.; Samek, L.; Sowka, I.; Furman, L.; Skrętowicz, M.

    2012-01-01

    Urban air pollution is widely recognized. Recently, a few projects have examined air quality in rural areas (e.g., the AUPHEP project in Austria and the WOODUSE project in Denmark). Here we present results from the International Cooperation Project RER/2/005, targeted at studying the effect of local combustion processes on air quality in the village of Brzezina in the countryside north-west of Wroclaw (south-western Poland). We identified the potential emission sources and quantified their contributions. Ambient aerosol monitoring (PM10 and elemental concentrations) was performed during four measurement cycles, in summer 2009 and 2010 and in winter 2010 and 2011. Receptor modeling techniques, namely factor analysis-multiple linear regression analysis (FA-MLRA) and the potential source localization function (PSLF), were used. Burning different types of fuel, along with domestic refuse, resulted not only in increased PM10 mass concentrations but also in elevated concentrations of various other elements (As, Pb, Zn). Local combustion sources contributed up to 80% of PM10 mass in winter. The effect of other sources was small, from 6 to 20%, depending on the season. Both PM10 and elemental concentrations in the rural settlement were comparable to concentrations at urban sites in summer and were much higher in winter, which can pose a significant health risk to its inhabitants. PMID:22629226

  14. Characterizing local traffic contributions to particulate air pollution in street canyons using mobile monitoring techniques

    NASA Astrophysics Data System (ADS)

    Zwack, Leonard M.; Paciorek, Christopher J.; Spengler, John D.; Levy, Jonathan I.

    2011-05-01

    Traffic within urban street canyons can contribute significantly to ambient concentrations of particulate air pollution. In these settings, it is challenging to separate within-canyon source contributions from urban and regional background concentrations given the highly variable and complex emissions and dispersion characteristics. In this study, we used continuous mobile monitoring of traffic-related particulate air pollutants to assess the contribution to concentrations, above background, of traffic in the street canyons of midtown Manhattan. Concentrations of both ultrafine particles (UFP) and fine particles (PM2.5) were measured at street level using portable instruments. Statistical modeling techniques accounting for autocorrelation were used to investigate the presence of spatial heterogeneity of pollutant concentrations as well as to quantify the contribution of within-canyon traffic sources. Measurements were also made within Central Park, to examine the impact of offsets from major roadways in this urban environment. On average, an approximate 11% increase in concentrations of UFP and an 8% increase in concentrations of PM2.5 over urban background was estimated during high-traffic periods in street canyons as opposed to low-traffic periods. Estimates were 8% and 5%, respectively, after accounting for temporal autocorrelation. Within Central Park, UFP concentrations were 40% higher than background (5% after accounting for temporal autocorrelation) within the first 100 m from the nearest roadway, with a smaller but statistically significant increase for PM2.5. Our findings demonstrate the viability of a mobile monitoring protocol coupled with spatiotemporal modeling techniques in characterizing local source contributions in a setting with street canyons.

  15. Hybrid optimization and Bayesian inference techniques for a non-smooth radiation detection problem

    DOE PAGES

    Stefanescu, Razvan; Schmidt, Kathleen; Hite, Jason; ...

    2016-12-12

    In this paper, we propose several algorithms to recover the location and intensity of a radiation source located in a simulated 250 × 180 m block of an urban center, based on synthetic measurements. Radioactive decay and detection are Poisson random processes, so we employ likelihood functions based on this distribution. Owing to the domain geometry and the proposed response model, the negative logarithm of the likelihood is only piecewise continuously differentiable, and it has multiple local minima. To address these difficulties, we investigate three hybrid algorithms composed of mixed optimization techniques. For global optimization, we consider simulated annealing, particle swarm, and a genetic algorithm, which rely solely on objective function evaluations; that is, they do not evaluate the gradient of the objective function. By employing early stopping criteria for the global optimization methods, a pseudo-optimum point is obtained. This is subsequently used as the initial value by the deterministic implicit filtering method, which is able to find local extrema of non-smooth functions, to finish the search in a narrow domain. These new hybrid techniques, combining global optimization and implicit filtering, address difficulties associated with the non-smooth response and are shown to significantly decrease computational time relative to the global optimization methods alone. To quantify uncertainties associated with the source location and intensity, we employ the delayed rejection adaptive Metropolis and DiffeRential Evolution Adaptive Metropolis algorithms. Finally, marginal densities of the source properties are obtained, and the means of the chains compare accurately with the estimates produced by the hybrid algorithms.
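The hybrid global-then-local idea in this record can be sketched with stand-ins: a toy non-smooth, multimodal objective instead of the Poisson negative log-likelihood, simulated annealing with occasional restart jumps for the global stage, and a shrinking-stencil pattern search standing in for implicit filtering. Domain bounds, the objective, and all tuning constants are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
LO, HI = np.array([0.0, -4.0]), np.array([5.0, 1.0])  # assumed search domain

def f(p):
    # piecewise-smooth with many local minima; global minimum 0 at (3, -1)
    u, v = p[0] - 3.0, p[1] + 1.0
    return abs(u) + abs(v) + 0.4 * (abs(np.sin(4 * u)) + abs(np.sin(4 * v)))

def anneal(n_iter=6000):
    """Gradient-free global stage; returns a pseudo-optimum (best-so-far)."""
    x = (LO + HI) / 2
    fx = f(x)
    best, fbest = x.copy(), fx
    for k in range(n_iter):
        T = 1.5 * (0.01 / 1.5) ** (k / n_iter)       # geometric cooling
        if rng.random() < 0.15:                      # restart-style jump
            y = rng.uniform(LO, HI)
        else:
            y = np.clip(x + rng.normal(0.0, max(T, 0.05), 2), LO, HI)
        fy = f(y)
        if fy < fx or rng.random() < np.exp(-(fy - fx) / T):  # Metropolis rule
            x, fx = y, fy
            if fx < fbest:
                best, fbest = x.copy(), fx
    return best

def pattern_search(x, step=0.25, tol=1e-5):
    """Derivative-free local polish; tolerates the non-smooth kinks."""
    x = x.copy()
    fx = f(x)
    dirs = np.vstack([np.eye(2), -np.eye(2)])
    while step > tol:
        for d in dirs:
            y = x + step * d
            fy = f(y)
            if fy < fx:
                x, fx = y, fy
                break
        else:
            step *= 0.5                              # no improving move: shrink
    return x, fx

x_hat, f_hat = pattern_search(anneal())
print(np.round(x_hat, 3), round(f_hat, 4))  # near (3, -1), objective near 0
```

The division of labor mirrors the paper's: the stochastic stage only needs to land in the right basin, after which the deterministic stage finishes cheaply and accurately.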

  16. Electrical source imaging of interictal spikes using multiple sparse volumetric priors for presurgical epileptogenic focus localization

    PubMed Central

    Strobbe, Gregor; Carrette, Evelien; López, José David; Montes Restrepo, Victoria; Van Roost, Dirk; Meurs, Alfred; Vonck, Kristl; Boon, Paul; Vandenberghe, Stefaan; van Mierlo, Pieter

    2016-01-01

    Electrical source imaging of interictal spikes observed in EEG recordings of patients with refractory epilepsy provides useful information to localize the epileptogenic focus during the presurgical evaluation. However, the selection of the time points or time epochs of the spikes in order to estimate the origin of the activity remains a challenge. In this study, we consider a Bayesian EEG source imaging technique for distributed sources, i.e. the multiple volumetric sparse priors (MSVP) approach. The approach allows estimation of the time courses of the intensity of the sources corresponding to a specific time epoch of the spike. Based on averaged presurgical interictal spikes in six patients who were successfully treated with surgery, we estimated the time courses of the source intensities for three different time epochs: (i) an epoch starting 50 ms before the spike peak and ending at 50% of the spike peak during the rising phase of the spike, (ii) an epoch starting 50 ms before the spike peak and ending at the spike peak and (iii) an epoch containing the full spike time period starting 50 ms before the spike peak and ending 230 ms after the spike peak. To identify the primary source of the spike activity, the source with the maximum energy from 50 ms before the spike peak till 50% of the spike peak was subsequently selected for each of the time windows. For comparison, the activity at the spike peaks and at 50% of the peaks was localized using the LORETA inversion technique and an ECD approach. Both patient-specific spherical forward models and patient-specific 5-layered finite difference models were considered to evaluate the influence of the forward model. Based on the resected zones in each of the patients, extracted from post-operative MR images, we compared the distances to the resection border of the estimated activity.
    Using the spherical models, the distances to the resection border for the MSVP approach and each of the different time epochs were in the same range as for the LORETA and ECD techniques. We found distances smaller than 23 mm, with robust results for all the patients. For the finite difference models, we found that the distances to the resection border for the MSVP inversions of the full spike time epochs were generally smaller than for the MSVP inversions of the time epochs before the spike peak. The results also suggest that inversions using the finite difference models resulted in slightly smaller distances to the resection border than the spherical models. These results are promising because the MSVP approach allows one to study the network of estimated source intensities and to characterize the spatial extent of the underlying sources. PMID:26958464

  17. Contributed Review: Source-localization algorithms and applications using time of arrival and time difference of arrival measurements

    DOE PAGES

    Li, Xinya; Deng, Zhiqun Daniel; Rauchenstein, Lynn T.; ...

    2016-04-01

    Locating the position of fixed or mobile sources (i.e., transmitters) based on measurements received from sensors is an important research area that has attracted considerable interest. In this paper, we present localization algorithms using time of arrival (TOA) and time difference of arrival (TDOA) measurements to achieve high accuracy under line-of-sight conditions. The circular (TOA) and hyperbolic (TDOA) location systems both use nonlinear equations that relate the locations of the sensors and tracked objects. These nonlinear equations pose accuracy challenges in the presence of measurement errors, and efficiency challenges owing to the high computational burden. Least squares-based and maximum likelihood-based algorithms have become the most popular categories of location estimators. We also summarize the advantages and disadvantages of various positioning algorithms. By improving measurement techniques and localization algorithms, localization applications can be extended into the signal-processing-related domains of radar, sonar, the Global Positioning System, wireless sensor networks, underwater animal tracking, mobile communications, and multimedia.

  18. Contributed Review: Source-localization algorithms and applications using time of arrival and time difference of arrival measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Xinya; Deng, Zhiqun Daniel; Rauchenstein, Lynn T.

    Locating the position of fixed or mobile sources (i.e., transmitters) based on measurements received from sensors is an important research area that has attracted considerable interest. In this paper, we present localization algorithms using time of arrival (TOA) and time difference of arrival (TDOA) measurements to achieve high accuracy under line-of-sight conditions. The circular (TOA) and hyperbolic (TDOA) location systems both use nonlinear equations that relate the locations of the sensors and tracked objects. These nonlinear equations pose accuracy challenges in the presence of measurement errors, and efficiency challenges owing to the high computational burden. Least squares-based and maximum likelihood-based algorithms have become the most popular categories of location estimators. We also summarize the advantages and disadvantages of various positioning algorithms. By improving measurement techniques and localization algorithms, localization applications can be extended into the signal-processing-related domains of radar, sonar, the Global Positioning System, wireless sensor networks, underwater animal tracking, mobile communications, and multimedia.
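The circular (TOA) case mentioned in this record admits a classic linear least-squares solution: with a known emission time, squared-range equations are differenced against a reference sensor, which cancels the quadratic term in the unknown position. The sketch below is a minimal noise-free illustration with invented geometry; the hyperbolic (TDOA) case is handled similarly with the emission time as an extra unknown.

```python
import numpy as np

sensors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
source = np.array([3.0, 7.0])                      # unknown we want to recover
ranges = np.linalg.norm(sensors - source, axis=1)  # d_i = c * (t_i - t_emit)

# For i >= 1:  2 (s_i - s_0)^T x = ||s_i||^2 - ||s_0||^2 - d_i^2 + d_0^2
A = 2.0 * (sensors[1:] - sensors[0])
b = (np.sum(sensors[1:] ** 2, axis=1) - np.sum(sensors[0] ** 2)
     - ranges[1:] ** 2 + ranges[0] ** 2)
est, *_ = np.linalg.lstsq(A, b, rcond=None)
print(est)  # [3. 7.]
```

With noisy range measurements the same overdetermined system is solved in the least-squares sense, which is where the accuracy challenges discussed above arise.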

  19. The localization of focal heart activity via body surface potential measurements: tests in a heterogeneous torso phantom

    NASA Astrophysics Data System (ADS)

    Wetterling, F.; Liehr, M.; Schimpf, P.; Liu, H.; Haueisen, J.

    2009-09-01

    The non-invasive localization of focal heart activity via body surface potential measurements (BSPM) could greatly benefit the understanding and treatment of arrhythmic heart diseases. However, the in vivo validation of source localization algorithms is rather difficult with currently available measurement techniques. In this study, we used a physical torso phantom composed of different conductive compartments and seven dipoles, which were placed in the anatomical position of the human heart in order to assess the performance of the Recursively Applied and Projected Multiple Signal Classification (RAP-MUSIC) algorithm. Electric potentials were measured on the torso surface for single dipoles with and without further uncorrelated or correlated dipole activity. The localization error averaged 11 ± 5 mm over 22 dipoles, which shows the ability of RAP-MUSIC to distinguish an uncorrelated dipole from surrounding source activity. For the first time, real computational modelling errors could be included within the validation procedure due to the physically modelled heterogeneities. In conclusion, the introduced heterogeneous torso phantom can be used to validate state-of-the-art algorithms under nearly realistic measurement conditions.

  20. Impact of head models in N170 component source imaging: results in control subjects and ADHD patients

    NASA Astrophysics Data System (ADS)

    Beltrachini, L.; Blenkmann, A.; von Ellenrieder, N.; Petroni, A.; Urquina, H.; Manes, F.; Ibáñez, A.; Muravchik, C. H.

    2011-12-01

    A major goal of event-related potential studies is source localization: identifying the loci of neural activity that give rise to a particular voltage distribution measured on the surface of the scalp. In this paper we evaluate the effect of the head model adopted when estimating the source of the N170 component in attention deficit hyperactivity disorder (ADHD) patients and control subjects, considering face and word stimuli. The standardized low-resolution brain electromagnetic tomography algorithm (sLORETA) is used to compare a three-shell spherical head model with a fully realistic model based on the ICBM-152 atlas. We compare their variance in source estimation and analyze the impact on N170 source localization. Results show that the often-used three-shell spherical model may lead to erroneous solutions, especially in ADHD patients, so its use is not recommended. Our results also suggest that N170 sources are mainly located in the right occipital fusiform gyrus for face stimuli and in the left occipital fusiform gyrus for word stimuli, for both control subjects and ADHD patients. We also found a notable decrease in the estimated N170 source amplitude in ADHD patients, making it a plausible marker of the disease.

  1. Data Sources for Administrators Attempting Local Reforms: A Preliminary Study of the Power Structure of a Selected Community in East Tennessee.

    ERIC Educational Resources Information Center

    Allen, Charlie Joe

    Using techniques of the reputational method to study community power structure, this report identifies components of power structure in a Tennessee school district, demonstrates that proven methodologies can facilitate educational leaders' reform efforts, and serves as a pilot study for further investigation. Researchers investigated the district…

  2. Cataloging tremor at Kilauea Volcano, Hawaii

    NASA Astrophysics Data System (ADS)

    Thelen, W. A.; Wech, A.

    2013-12-01

    Tremor is a ubiquitous seismic feature on Kilauea volcano, emanating from at least three distinct sources. At depth, intermittent tremor and earthquakes thought to be associated with the underlying plumbing system of Kilauea (Aki and Koyanagi, 1981) occur approximately 40 km below and 40 km SW of the summit. At the summit of the volcano, nearly continuous tremor is recorded close to a persistently degassing lava lake, which has been present since 2008. Much of this tremor is correlated with spattering at the lake surface, but tremor also occurs in the absence of spattering and was observed at the summit of the volcano prior to the appearance of the lava lake, predominantly in association with inflation/deflation events. The third known source of tremor is in the area of Pu`u `O`o, a vent that has been active since 1983. The exact source location and depth are poorly constrained for each of these sources. Consistently tracking the occurrence and location of tremor in these areas through time will improve our understanding of the plumbing geometry beneath Kilauea volcano and help identify precursory patterns in tremor leading to changes in eruptive activity. The continuous and emergent nature of tremor precludes the use of traditional earthquake techniques for automatic detection and location of seismicity. We implement the method of Wech and Creager (2008) to both detect and localize tremor seismicity in the three regions described above. The technique uses an envelope cross-correlation method in 5-minute windows that maximizes tremor signal coherency among seismic stations. The catalog is currently being built in near-realtime, with plans to extend the analysis to the past as time and continuous data availability permit. This automated detection and localization method has relatively poor depth constraints due to the construction of the envelope function.
Nevertheless, the epicenters distinguish activity among the different source regions and serve as starting points for more sophisticated location techniques using cross-correlation and/or amplitude-based locations. The resulting timelines establish a quantitative baseline of behavior for each source to better understand and forecast Kilauea activity.
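The envelope cross-correlation idea underlying the method of Wech and Creager (2008) can be sketched in a toy two-station example: tremor has no pickable phases, but its amplitude envelope is coherent across stations, so envelopes are cross-correlated to recover inter-station lags. The sampling rate, window length, burst shape, and noise level below are all made up, and a simple moving average of the absolute signal stands in for a proper Hilbert-transform envelope.

```python
import numpy as np

rng = np.random.default_rng(1)
fs, n = 100.0, 6000                        # 100 Hz, 60 s window (paper uses 5 min)
t = np.arange(n)
modulator = np.exp(-((t - n / 2) ** 2) / (2 * (2 * fs) ** 2))  # ~2 s wide burst
burst = rng.standard_normal(n) * modulator  # tremor-like band of modulated noise
delay = 150                                 # true inter-station lag: 1.5 s
sta1 = burst + 0.05 * rng.standard_normal(n)
sta2 = np.roll(burst, delay) + 0.05 * rng.standard_normal(n)

def envelope(x, win=51):
    # crude envelope: moving average of |x|
    return np.convolve(np.abs(x), np.ones(win) / win, mode="same")

e1 = envelope(sta1)
e1 -= e1.mean()
e2 = envelope(sta2)
e2 -= e2.mean()
lag = np.argmax(np.correlate(e2, e1, mode="full")) - (n - 1)
print(lag / fs)  # close to the true 1.5 s lag
```

In the full method, lags from many station pairs are combined in a grid search that maximizes overall coherency, which is also why depth is poorly constrained: enveloping discards the high-frequency timing detail that depth resolution would need.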

  3. Obsessive-compulsive dimension localized using low-resolution brain electromagnetic tomography (LORETA).

    PubMed

    Sherlin, Leslie; Congedo, Marco

    2005-10-21

    Electroencephalographic mapping techniques have been used to show differences between normal subjects and those diagnosed with various mental disorders. To date, there is no other research using the techniques of low-resolution brain electromagnetic tomography (LORETA) with the obsessive-compulsive disorder (OCD) population. The current investigation compares current source density measures of persons with OCD symptoms to an age-matched control group. The main finding is excess current source density in the Beta frequencies in the cingulate gyrus. This Beta activity is primarily located in the middle cingulate gyrus as well as adjacent frontal parieto-occipital regions. Lower frequency Beta is prominent more anteriorly in the cingulate gyrus whereas higher frequency Beta is seen more posteriorly. These preliminary findings indicate the utility of LORETA as a clinical and diagnostic tool.

  4. Laser Scanning Systems and Techniques in Rockfall Source Identification and Risk Assessment: A Critical Review

    NASA Astrophysics Data System (ADS)

    Fanos, Ali Mutar; Pradhan, Biswajeet

    2018-04-01

    Rockfall poses a risk to people, their property, and transportation routes in mountainous and hilly regions. The phenomenon shows various characteristics, such as wide spatial distribution, sudden occurrence, variable magnitude, high lethality, and randomness. Prediction of rockfall, both spatially and temporally, is therefore a challenging task. A digital terrain model (DTM) is one of the most significant elements in rockfall source identification and risk assessment, and light detection and ranging (LiDAR) is the most advanced and effective technique for deriving high-resolution, accurate DTMs. This paper presents a critical overview of the rockfall phenomenon (definition, triggering factors, motion modes, and modeling) and of the LiDAR technique in terms of data pre-processing, DTM generation, and the factors that can be obtained from this technique for rockfall source identification and risk assessment. It also reviews the existing methods used to evaluate rockfall trajectories and their characteristics (frequency, velocity, bouncing height, and kinetic energy), probability, susceptibility, hazard, and risk. Detailed consideration is given to quantitative methodologies in addition to qualitative ones. Various methods are discussed with respect to their application scales (local and regional). Additionally, attention is given to the latest improvements, particularly the consideration of the intensity of the phenomena and the magnitude of events at chosen sites.

  5. Photogrammetry-Based Head Digitization for Rapid and Accurate Localization of EEG Electrodes and MEG Fiducial Markers Using a Single Digital SLR Camera.

    PubMed

    Clausner, Tommy; Dalal, Sarang S; Crespo-García, Maité

    2017-01-01

    The performance of EEG source reconstruction has benefited from the increasing use of advanced head modeling techniques that take advantage of MRI together with the precise positions of the recording electrodes. The prevailing technique for registering EEG electrode coordinates involves electromagnetic digitization. However, the procedure adds several minutes to experiment preparation and typical digitizers may not be accurate enough for optimal source reconstruction performance (Dalal et al., 2014). Here, we present a rapid, accurate, and cost-effective alternative method to register EEG electrode positions, using a single digital SLR camera, photogrammetry software, and computer vision techniques implemented in our open-source toolbox, janus3D. Our approach uses photogrammetry to construct 3D models from multiple photographs of the participant's head wearing the EEG electrode cap. Electrodes are detected automatically or semi-automatically using a template. The rigid facial features from these photo-based models are then surface-matched to MRI-based head reconstructions to facilitate coregistration to MRI space. This method yields a final electrode coregistration error of 0.8 mm, while a standard technique using an electromagnetic digitizer yielded an error of 6.1 mm. The technique furthermore reduces preparation time, and could be extended to a multi-camera array, which would make the procedure virtually instantaneous. In addition to EEG, the technique could likewise capture the position of the fiducial markers used in magnetoencephalography systems to register head position.
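The coregistration step described above rests on rigid alignment of one coordinate frame to another. janus3D matches full facial surfaces, but the core operation can be illustrated on corresponding landmark sets with the Kabsch algorithm; all coordinates and the ground-truth transform below are synthetic.

```python
import numpy as np

def rigid_align(P, Q):
    """Kabsch algorithm: find rotation R and translation t minimizing
    sum_i || R @ P[i] + t - Q[i] ||^2 for corresponding point sets."""
    Pc, Qc = P - P.mean(0), Q - Q.mean(0)
    U, _, Vt = np.linalg.svd(Pc.T @ Qc)
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = Q.mean(0) - R @ P.mean(0)
    return R, t

rng = np.random.default_rng(2)
mri_pts = rng.uniform(-80, 80, (6, 3))           # landmarks in MRI space (mm)

# ground-truth transform: 30-degree rotation about z plus a translation
th = np.deg2rad(30)
R_true = np.array([[np.cos(th), -np.sin(th), 0.0],
                   [np.sin(th),  np.cos(th), 0.0],
                   [0.0, 0.0, 1.0]])
photo_pts = (mri_pts - np.array([5.0, -2.0, 10.0])) @ R_true  # photo-space set

R, t = rigid_align(photo_pts, mri_pts)
err = np.linalg.norm(photo_pts @ R.T + t - mri_pts, axis=1).mean()
print(round(err, 6))  # 0.0 (noise-free landmarks align exactly)
```

With noisy or partial correspondences, the same objective is typically solved iteratively (e.g., ICP-style), which is closer to the surface matching a real pipeline performs.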

  6. Photogrammetry-Based Head Digitization for Rapid and Accurate Localization of EEG Electrodes and MEG Fiducial Markers Using a Single Digital SLR Camera

    PubMed Central

    Clausner, Tommy; Dalal, Sarang S.; Crespo-García, Maité

    2017-01-01

    The performance of EEG source reconstruction has benefited from the increasing use of advanced head modeling techniques that take advantage of MRI together with the precise positions of the recording electrodes. The prevailing technique for registering EEG electrode coordinates involves electromagnetic digitization. However, the procedure adds several minutes to experiment preparation and typical digitizers may not be accurate enough for optimal source reconstruction performance (Dalal et al., 2014). Here, we present a rapid, accurate, and cost-effective alternative method to register EEG electrode positions, using a single digital SLR camera, photogrammetry software, and computer vision techniques implemented in our open-source toolbox, janus3D. Our approach uses photogrammetry to construct 3D models from multiple photographs of the participant's head wearing the EEG electrode cap. Electrodes are detected automatically or semi-automatically using a template. The rigid facial features from these photo-based models are then surface-matched to MRI-based head reconstructions to facilitate coregistration to MRI space. This method yields a final electrode coregistration error of 0.8 mm, while a standard technique using an electromagnetic digitizer yielded an error of 6.1 mm. The technique furthermore reduces preparation time, and could be extended to a multi-camera array, which would make the procedure virtually instantaneous. In addition to EEG, the technique could likewise capture the position of the fiducial markers used in magnetoencephalography systems to register head position. PMID:28559791

  7. MUSIC for localization of thunderstorm cells

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mosher, J.C.; Lewis, P.S.; Rynne, T.M.

    1993-12-31

    Lightning represents an event detectable optically, electrically, and acoustically, and several systems are already in place to monitor such activity. Unfortunately, such detection of lightning can occur too late, since operations need to be protected in advance of the first lightning strike. Additionally, the bolt itself can traverse several kilometers before striking the ground, leaving a large region of uncertainty as to the center of the storm and its possible strike regions. NASA Kennedy Space Center has in place an array of electric field mills that monitor the (effectively) DC electric field. Prior to the first lightning strike, the surface electric fields rise as the storm generator within a thundercloud begins charging. Extending methods we developed for an analogous source localization problem in magnetoencephalography, we present Cramer-Rao lower bounds and MUSIC scans for fitting a point-charge source model to the electric field mill data. Such techniques can allow for the identification and localization of charge centers in cloud structures.
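A MUSIC scan of the kind mentioned above can be sketched as follows: estimate the data covariance across the mill array, split it into signal and noise subspaces, then scan candidate source locations for lead-field vectors nearly orthogonal to the noise subspace. Everything here is a toy stand-in: the mill layout, the generic 1/r² gain model, and the two synthetic charge centers are invented, not KSC data.

```python
import numpy as np

rng = np.random.default_rng(3)
mills = rng.uniform(0, 10, (12, 2))     # field-mill positions (km), illustrative

def gain(src):
    d2 = np.sum((mills - src) ** 2, axis=1) + 0.5   # +0.5: crude altitude term
    return 1.0 / d2                                  # toy 1/r^2 lead field

src_a, src_b = np.array([3.0, 7.0]), np.array([8.0, 2.0])
A = np.column_stack([gain(src_a), gain(src_b)])      # 12 x 2 mixing matrix
S = rng.standard_normal((2, 500)) * np.array([[5.0], [3.0]])  # charge waveforms
X = A @ S + 0.01 * rng.standard_normal((12, 500))    # mill time series

C = X @ X.T / 500
w, V = np.linalg.eigh(C)                # eigenvalues ascending
En = V[:, :-2]                          # noise subspace: all but 2 largest modes

grid = np.linspace(0, 10, 101)
music = np.empty((101, 101))
for i, gx in enumerate(grid):
    for j, gy in enumerate(grid):
        a = gain(np.array([gx, gy]))
        a /= np.linalg.norm(a)
        music[i, j] = 1.0 / np.sum((En.T @ a) ** 2)  # MUSIC pseudospectrum

ix, iy = np.unravel_index(np.argmax(music), music.shape)
est = np.array([grid[ix], grid[iy]])
print(est)  # near one of the two true charge centres
```

The pseudospectrum peaks sharply at locations whose gain pattern lies in the signal subspace, which is how the scan separates multiple simultaneous charge centers.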

  8. Localization with Sparse Acoustic Sensor Network Using UAVs as Information-Seeking Data Mules

    DTIC Science & Technology

    2013-05-01

    technique to differentiate among several sources. 2.2. AoA Estimation. AoA Models: The kth of N_AOA AoA sensors produces an angular measurement modeled...squares sense: θ̂ = arg min_φ Σ_{i=1}^{3} (τ̂_{i0} − e_φᵀ r_i)² (9). The minimization was done by gridding the one-dimensional angular space and finding the optimum...Latitude E5500 laptop running FreeBSD and custom Java applications to process and store the raw audio signals. Power Source: The laptop was powered for an
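The snippet's eq. (9) — bearing estimation by gridding the one-dimensional angular space and minimizing squared TDOA residuals under a far-field model — can be sketched as follows. The sensor geometry, sound speed, and noise-free delays are assumptions for illustration.

```python
import numpy as np

c = 343.0                                             # speed of sound, m/s
r = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, 0.5]])   # sensor offsets from ref (m)

def unit(phi):
    # far-field direction-of-arrival unit vector e_phi
    return np.array([np.cos(phi), np.sin(phi)])

true_phi = np.deg2rad(40)
tau = r @ unit(true_phi) / c                          # ideal TDOAs tau_i0

# eq. (9): grid the 1-D angular space, minimize sum of squared residuals
phis = np.linspace(0, 2 * np.pi, 3600, endpoint=False)
cost = [np.sum((tau - r @ unit(p) / c) ** 2) for p in phis]
phi_hat = phis[int(np.argmin(cost))]
print(round(np.rad2deg(phi_hat), 1))  # 40.0
```

With noisy delay estimates the minimum is no longer exactly zero, but the same grid-and-refine strategy applies.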

  9. Reconstructed Image Spatial Resolution of Multiple Coincidences Compton Imager

    NASA Astrophysics Data System (ADS)

    Andreyev, Andriy; Sitek, Arkadiusz; Celler, Anna

    2010-02-01

    We study the multiple coincidences Compton imager (MCCI) which is based on a simultaneous acquisition of several photons emitted in cascade from a single nuclear decay. Theoretically, this technique should provide a major improvement in localization of a single radioactive source as compared to a standard Compton camera. In this work, we investigated the performance and limitations of MCCI using Monte Carlo computer simulations. Spatial resolutions of the reconstructed point source have been studied as a function of the MCCI parameters, including geometrical dimensions and detector characteristics such as materials, energy and spatial resolutions.

  10. Noninvasive Electromagnetic Source Imaging and Granger Causality Analysis: An Electrophysiological Connectome (eConnectome) Approach

    PubMed Central

    Sohrabpour, Abbas; Ye, Shuai; Worrell, Gregory A.; Zhang, Wenbo

    2016-01-01

    Objective: Combined source imaging techniques and directional connectivity analysis can provide useful information about the underlying brain networks in a non-invasive fashion. Source imaging techniques have previously been used successfully either to determine the source of activity or to extract source time-courses for Granger causality analysis. In this work, we utilize source imaging algorithms both to find the network nodes (regions of interest) and to extract the activation time series for further Granger causality analysis. The aim of this work is to find network nodes objectively from noninvasive electromagnetic signals, extract activation time-courses, and apply Granger analysis to the extracted series to study brain networks under realistic conditions. Methods: Source imaging methods are used to identify network nodes and extract time-courses, and Granger causality analysis is then applied to delineate the directional functional connectivity of underlying brain networks. Computer simulation studies in which the underlying network (nodes and connectivity pattern) is known were performed; additionally, this approach has been evaluated in partial epilepsy patients to study epilepsy networks from inter-ictal and ictal signals recorded by EEG and/or MEG. Results: Localization errors of network nodes are less than 5 mm, with normalized connectivity errors of ~20%, in estimating underlying brain networks in simulation studies. Additionally, two focal epilepsy patients were studied, and the identified nodes driving the epileptic network were concordant with clinical findings from intracranial recordings or surgical resection. Conclusion: Our study indicates that combining source imaging algorithms with Granger causality analysis can identify underlying networks precisely (both in terms of network node location and internodal connectivity). Significance: The combined source imaging and Granger analysis technique is an effective tool for studying normal or pathological brain conditions. PMID:27740473

  11. Noninvasive Electromagnetic Source Imaging and Granger Causality Analysis: An Electrophysiological Connectome (eConnectome) Approach.

    PubMed

    Sohrabpour, Abbas; Ye, Shuai; Worrell, Gregory A; Zhang, Wenbo; He, Bin

    2016-12-01

    Combined source-imaging techniques and directional connectivity analysis can provide useful information about the underlying brain networks in a noninvasive fashion. Source-imaging techniques have previously been used successfully either to determine the source of activity or to extract source time-courses for Granger causality analysis. In this work, we utilize source-imaging algorithms both to find the network nodes [regions of interest (ROI)] and to extract the activation time series for further Granger causality analysis. The aim of this work is to find network nodes objectively from noninvasive electromagnetic signals, extract activation time-courses, and apply Granger analysis to the extracted series to study brain networks under realistic conditions. Source-imaging methods are used to identify network nodes and extract time-courses, and Granger causality analysis is then applied to delineate the directional functional connectivity of underlying brain networks. Computer simulation studies in which the underlying network (nodes and connectivity pattern) is known were performed; additionally, this approach has been evaluated in partial epilepsy patients to study epilepsy networks from interictal and ictal signals recorded by EEG and/or magnetoencephalography (MEG). Localization errors of network nodes are less than 5 mm, with normalized connectivity errors of ∼20%, in estimating underlying brain networks in simulation studies. Additionally, two focal epilepsy patients were studied, and the identified nodes driving the epileptic network were concordant with clinical findings from intracranial recordings or surgical resection. Our study indicates that combining source-imaging algorithms with Granger causality analysis can identify underlying networks precisely (both in terms of network node location and internodal connectivity). The combined source imaging and Granger analysis technique is an effective tool for studying normal or pathological brain conditions.
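The Granger causality step common to the two records above can be sketched as a residual-variance comparison between restricted and full autoregressive models. This minimal two-channel version is illustrative only, not the eConnectome implementation; the coupling coefficients in the toy network are assumptions.

```python
import numpy as np

def granger_log_ratio(x, y, p=2):
    """Does x Granger-cause y? Fit an AR(p) model of y with and without
    lagged x terms and return the log ratio of residual sums of squares;
    values well above zero suggest an x -> y predictive influence."""
    n = len(y)
    Y = y[p:]
    lag_y = np.column_stack([y[p - k: n - k] for k in range(1, p + 1)])
    lag_x = np.column_stack([x[p - k: n - k] for k in range(1, p + 1)])
    ones = np.ones((n - p, 1))
    rss = []
    for D in (np.hstack([ones, lag_y]),              # restricted model
              np.hstack([ones, lag_y, lag_x])):      # full model
        beta, *_ = np.linalg.lstsq(D, Y, rcond=None)
        rss.append(np.sum((Y - D @ beta) ** 2))
    return np.log(rss[0] / rss[1])

# Toy network: x drives y with one sample of delay, but not vice versa.
rng = np.random.default_rng(2)
T = 600
x = rng.standard_normal(T)
y = np.zeros(T)
for t in range(1, T):
    y[t] = 0.5 * y[t - 1] + 0.8 * x[t - 1] + 0.3 * rng.standard_normal()
g_xy = granger_log_ratio(x, y)   # clearly positive: x predicts y
g_yx = granger_log_ratio(y, x)   # near zero: y does not predict x
```

In the papers' pipeline, `x` and `y` would be the activation time-courses extracted at two source-imaging network nodes, and a statistical threshold would replace the raw log ratio.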

  12. Towards 3D Noise Source Localization using Matched Field Processing

    NASA Astrophysics Data System (ADS)

    Umlauft, J.; Walter, F.; Lindner, F.; Flores Estrella, H.; Korn, M.

    2017-12-01

    Matched Field Processing (MFP) is an array-processing and beamforming method, initially developed in ocean acoustics, that locates noise sources in range, depth, and azimuth. In this study, we discuss the applicability of MFP to geophysical problems on the exploration scale and its suitability as a monitoring tool for near-surface processes. First, we used synthetic seismograms to analyze the resolution and sensitivity of MFP in a 3D environment. The inversion shows how the localization accuracy is affected by the array design, pre-processing techniques, the velocity model, and the considered wave-field characteristics. Hence, we can formulate guidelines for improved MFP handling. Additionally, we present field datasets acquired in two different environmental settings and in the presence of different source types. Small-scale, dense-aperture arrays (Ø < 1 km) were installed on a natural CO2 degassing field (Czech Republic) and on a glacier site (Switzerland). The located noise sources form distinct three-dimensional zones and channel-like structures (several 100 m depth range), which could be linked to the expected environmental processes taking place at each test site. Furthermore, fast spatio-temporal variations (hours to days) of the source distribution could be successfully monitored.
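A minimal Bartlett-style MFP sketch illustrates the ambiguity-surface idea: correlate the measured field with a normalized model replica at every candidate source point. A free-space Green's function is assumed here for simplicity; real applications, including this one, use replicas from a velocity model.

```python
import numpy as np

def bartlett_mfp(d, sensors, grid_pts, k):
    """Bartlett matched-field processor: correlate the measured field d with
    a normalized replica of the (here: free-space) Green's function computed
    for every candidate source point, returning the ambiguity surface."""
    P = np.empty(len(grid_pts))
    for i, g in enumerate(grid_pts):
        R = np.linalg.norm(sensors - g, axis=1)
        w = np.exp(1j * k * R) / R          # replica field at the sensors
        w = w / np.linalg.norm(w)
        P[i] = np.abs(w.conj() @ d) ** 2
    return P

# Toy scene: 12 irregularly spaced sensors on a line, source at (30, 40).
rng = np.random.default_rng(3)
k = 2 * np.pi / 5.0                         # wavenumber for a 5 m wavelength
xs_s = np.array([0., 3., 9., 14., 22., 27., 33., 41., 46., 50., 53., 55.])
sensors = np.column_stack([xs_s, np.zeros(12)])
src = np.array([30.0, 40.0])
R_true = np.linalg.norm(sensors - src, axis=1)
d = np.exp(1j * k * R_true) / R_true
d = d + 1e-4 * (rng.standard_normal(12) + 1j * rng.standard_normal(12))

grid_pts = np.array([[x, y] for y in np.arange(5.0, 60.0, 1.0)
                            for x in np.arange(0.0, 60.0, 1.0)])
P = bartlett_mfp(d, sensors, grid_pts, k)
best = grid_pts[int(np.argmax(P))]          # peak of the ambiguity surface
```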

  13. Theoretical considerations for mapping activation in human cardiac fibrillation

    NASA Astrophysics Data System (ADS)

    Rappel, Wouter-Jan; Narayan, Sanjiv M.

    2013-06-01

    Defining mechanisms for cardiac fibrillation is challenging because, in contrast to other arrhythmias, fibrillation exhibits complex non-repeatability in spatiotemporal activation but paradoxically exhibits conserved spatial gradients in rate, dominant frequency, and electrical propagation. Unlike animal models, in which fibrillation can be mapped at high spatial and temporal resolution using optical dyes or arrays of contact electrodes, mapping of cardiac fibrillation in patients is constrained practically to lower resolutions or smaller fields-of-view. In many animal models, atrial fibrillation is maintained by localized electrical rotors and focal sources. However, until recently, few studies had revealed localized sources in human fibrillation, so that the impact of mapping constraints on the ability to identify rotors or focal sources in humans was not described. Here, we determine the minimum spatial and temporal resolutions theoretically required to detect rigidly rotating spiral waves and focal sources, then extend these requirements for spiral waves in computer simulations. Finally, we apply our results to clinical data acquired during human atrial fibrillation using a novel technique termed focal impulse and rotor mapping (FIRM). Our results provide theoretical justification and clinical demonstration that FIRM meets the spatio-temporal resolution requirements to reliably identify rotors and focal sources for human atrial fibrillation.

  14. Wavelet-based localization of oscillatory sources from magnetoencephalography data.

    PubMed

    Lina, J M; Chowdhury, R; Lemay, E; Kobayashi, E; Grova, C

    2014-08-01

    Transient brain oscillatory activities recorded with electroencephalography (EEG) or magnetoencephalography (MEG) are characteristic features in physiological and pathological processes. This study is aimed at describing, evaluating, and illustrating with clinical data a new method for localizing the sources of oscillatory cortical activity recorded by MEG. The method combines time-frequency representation and an entropic regularization technique in a common framework, assuming that brain activity is sparse in time and space. Spatial sparsity relies on the assumption that brain activity is organized among cortical parcels. Sparsity in time is achieved by transposing the inverse problem in the wavelet representation, for both data and sources. We propose an estimator of the wavelet coefficients of the sources based on the maximum entropy on the mean (MEM) principle. The full dynamics of the sources is obtained from the inverse wavelet transform, and principal component analysis of the reconstructed time courses is applied to extract oscillatory components. This methodology is evaluated using realistic simulations of single-trial signals, combining fast and sudden discharges (spikes) along with bursts of oscillating activity. The method is finally illustrated with a clinical application using MEG data acquired on a patient with a right orbitofrontal epilepsy.

  15. Instrumentation for localized superconducting cavity diagnostics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Conway, Z. A.; Ge, M.; Iwashita, Y.

    2017-01-12

    Superconducting accelerator cavities are now routinely operated at levels approaching the theoretical limit of niobium. To achieve these operating levels more information than is available from the RF excitation signal is required to characterize and determine fixes for the sources of performance limitations. This information is obtained using diagnostic techniques which complement the analysis of the RF signal. In this paper we describe the operation and select results from three of these diagnostic techniques: the use of large scale thermometer arrays, second sound wave defect location and high precision cavity imaging with the Kyoto camera.

  16. Localization and cooperative communication methods for cognitive radio

    NASA Astrophysics Data System (ADS)

    Duval, Olivier

    We study localization of nearby nodes and cooperative communication for cognitive radios. Cognitive radios sensing their environment to estimate the channel gain between nodes can cooperate and adapt their transmission power to maximize the capacity of the communication between two nodes. We study the end-to-end capacity of a cooperative relaying scheme using orthogonal frequency-division multiplexing (OFDM) modulation, under power constraints for both the base station and the relay station. The relay uses amplify-and-forward and decode-and-forward cooperative relaying techniques to retransmit messages on a subset of the available subcarriers. The power used in the base station and relay station transmitters is allocated to maximize the overall system capacity. The subcarrier selection and power allocation are obtained from convex optimization formulations and an iterative algorithm. Additionally, decode-and-forward relaying schemes are allowed to pair source and relayed subcarriers to further increase the capacity of the system. The proposed techniques outperform non-selective relaying schemes over a range of relay power budgets. Cognitive radios can be used for opportunistic access of the radio spectrum by detecting spectrum holes left unused by licensed primary users. We introduce a spectrum hole detection approach that combines blind modulation classification, angle-of-arrival estimation, and number-of-sources detection. We perform eigenspace analysis to determine the number of sources and estimate their angles of arrival (AOA). In addition, we classify detected sources as primary or secondary users by their distinct second-order one-conjugate cyclostationarity features. Extensive simulations indicate that the proposed system identifies and locates individual sources correctly, even at -4 dB signal-to-noise ratio (SNR).
In environments with a high density of scatterers, several wireless channels experience non-line-of-sight (NLOS) conditions, increasing the localization error even when the AOA estimate is accurate. We present a real-time localization solver (RTLS) for time-of-arrival (TOA) estimates using ray-tracing methods on a map of the wall geometry, and compare its performance with classical TOA trilateration localization methods. Extensive simulations and field trials in indoor environments show that our method increases the coverage area from 1.9% of the floor to 82.3% and improves accuracy by a factor of 10 compared with trilateration. We implemented our ray-tracing model in C++ using the CGAL computational geometry algorithm library. We illustrate the real-time property of our RTLS, which performs most ray-tracing tasks in a preprocessing phase, with time- and space-complexity analyses and profiling of our software.
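The classical TOA trilateration used as the baseline above can be sketched as a linearized least-squares fix. This is the textbook formulation, not the dissertation's RTLS code; the anchor layout is an illustrative assumption.

```python
import numpy as np

def trilaterate(anchors, dists):
    """Closed-form least-squares position fix from ranges to known anchors:
    subtracting the first anchor's circle equation from the others yields a
    linear system 2(x_i - x_0) . p = |x_i|^2 - |x_0|^2 + d_0^2 - d_i^2."""
    x0, d0 = anchors[0], dists[0]
    A = 2.0 * (anchors[1:] - x0)
    b = (d0 ** 2 - dists[1:] ** 2
         + np.sum(anchors[1:] ** 2, axis=1) - np.sum(x0 ** 2))
    p, *_ = np.linalg.lstsq(A, b, rcond=None)
    return p

# Toy check: four anchors at the corners of a 10 m square, exact ranges.
anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
p_true = np.array([3.0, 4.0])
dists = np.linalg.norm(anchors - p_true, axis=1)
p_hat = trilaterate(anchors, dists)
```

NLOS bias enters through the measured `dists`: a positive range error on one anchor pulls the least-squares solution away from the true position, which is the failure mode the ray-tracing RTLS is designed to avoid.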

  17. Reconstructing the Sky Location of Gravitational-Wave Detected Compact Binary Systems: Methodology for Testing and Comparison

    NASA Technical Reports Server (NTRS)

    Sidery, T.; Aylott, B.; Christensen, N.; Farr, B.; Farr, W.; Feroz, F.; Gair, J.; Grover, K.; Graff, P.; Hanna, C.; et al.

    2014-01-01

    The problem of reconstructing the sky position of compact binary coalescences detected via gravitational waves is a central one for future observations with the ground-based network of gravitational-wave laser interferometers, such as Advanced LIGO and Advanced Virgo. Different techniques for sky localization have been independently developed. They can be divided into two broad categories: fully coherent Bayesian techniques, which are high latency and aimed at in-depth studies of all the parameters of a source, including sky position, and "triangulation-based" techniques, which exploit the data products from the search stage of the analysis to provide an almost real-time approximation of the posterior probability density function of the sky location of a detection candidate. These techniques have previously been applied to data collected during the last science runs of gravitational-wave detectors operating in the so-called initial configuration. Here, we develop and analyze methods for assessing the self-consistency of parameter estimation methods and carrying out fair comparisons between different algorithms, addressing issues of efficiency and optimality. These methods are general, and can be applied to parameter estimation problems other than sky localization. We apply these methods to two existing sky localization techniques representing the two above-mentioned categories, using a set of simulated inspiral-only signals from compact binary systems with a total mass of 20 solar masses or less and nonspinning components. We compare the relative advantages and costs of the two techniques and show that sky location uncertainties are on average a factor of approximately 20 smaller for fully coherent techniques than for the specific variant of the triangulation-based technique used during the last science runs, at the expense of a factor of approximately 1000 longer processing time.

  18. Reconstructing the sky location of gravitational-wave detected compact binary systems: Methodology for testing and comparison

    NASA Astrophysics Data System (ADS)

    Sidery, T.; Aylott, B.; Christensen, N.; Farr, B.; Farr, W.; Feroz, F.; Gair, J.; Grover, K.; Graff, P.; Hanna, C.; Kalogera, V.; Mandel, I.; O'Shaughnessy, R.; Pitkin, M.; Price, L.; Raymond, V.; Röver, C.; Singer, L.; van der Sluys, M.; Smith, R. J. E.; Vecchio, A.; Veitch, J.; Vitale, S.

    2014-04-01

    The problem of reconstructing the sky position of compact binary coalescences detected via gravitational waves is a central one for future observations with the ground-based network of gravitational-wave laser interferometers, such as Advanced LIGO and Advanced Virgo. Different techniques for sky localization have been independently developed. They can be divided in two broad categories: fully coherent Bayesian techniques, which are high latency and aimed at in-depth studies of all the parameters of a source, including sky position, and "triangulation-based" techniques, which exploit the data products from the search stage of the analysis to provide an almost real-time approximation of the posterior probability density function of the sky location of a detection candidate. These techniques have previously been applied to data collected during the last science runs of gravitational-wave detectors operating in the so-called initial configuration. Here, we develop and analyze methods for assessing the self consistency of parameter estimation methods and carrying out fair comparisons between different algorithms, addressing issues of efficiency and optimality. These methods are general, and can be applied to parameter estimation problems other than sky localization. We apply these methods to two existing sky localization techniques representing the two above-mentioned categories, using a set of simulated inspiral-only signals from compact binary systems with a total mass of ≤20M⊙ and nonspinning components. We compare the relative advantages and costs of the two techniques and show that sky location uncertainties are on average a factor ≈20 smaller for fully coherent techniques than for the specific variant of the triangulation-based technique used during the last science runs, at the expense of a factor ≈1000 longer processing time.

  19. Fatigue crack monitoring with coupled piezoelectric film acoustic emission sensors

    NASA Astrophysics Data System (ADS)

    Zhou, Changjiang

    Fatigue-induced cracking is a common problem in civil infrastructure reaching its original design life. A number of high-profile accidents involving fatigue damage in structures have been reported in the past. Such incidents often happen without prior warning due to the lack of proper crack-monitoring techniques. In order to detect and monitor fatigue cracks, the acoustic emission (AE) technique has been receiving growing interest recently. AE can provide continuous and real-time monitoring data on damage progression in structures. A piezoelectric film AE sensor measures stress-wave-induced strain in the ultrasonic frequency range, and its feasibility for AE signal monitoring has been demonstrated recently. However, extensive work on AE monitoring system development based on piezoelectric film AE sensors, and on sensor characterization on full-scale structures with fatigue cracks, has not been done. A lack of theoretical formulations for understanding the AE signals also hinders the use of piezoelectric film AE sensors. Additionally, crack detection and source localization with AE signals is a very important area yet to be explored for this new type of AE sensor. This dissertation presents the results of both analytical and experimental study of the signal characteristics of surface stress-wave-induced AE strain signals measured by piezoelectric film AE sensors in the near field, and an AE source localization method based on sensor couple theory. Based on moment tensor theory, a generalized expression for the AE strain signal is formulated. A special case involving the response of the piezoelectric film AE sensor to a surface load is also studied, which could potentially be used for calibration of this type of sensor. A new concept of sensor-couple-theory-based AE source localization is proposed and validated with both simulated and experimental data from fatigue tests and field monitoring.
Two series of fatigue tests were conducted to perform fatigue crack monitoring on large-scale steel test specimens using piezoelectric film AE sensors. Continuous monitoring of fatigue crack growth in steel structures is demonstrated on these fatigue test specimens. The use of the piezoelectric film AE sensor for field monitoring of an existing fatigue crack is also demonstrated on a real steel I-girder bridge located in Maryland. The sensor-couple-theory-based AE source localization is validated using a limited number of piezoelectric film AE sensor data from both the fatigue test specimens and the field-monitored bridge. Through both laboratory fatigue tests and field monitoring of steel structures with active fatigue cracks, the signal characteristics of the piezoelectric film AE sensor have been studied in a real-world environment.

  20. SOURCE PULSE ENHANCEMENT BY DECONVOLUTION OF AN EMPIRICAL GREEN'S FUNCTION.

    USGS Publications Warehouse

    Mueller, Charles S.

    1985-01-01

    Observations of the earthquake source-time function are enhanced if path, recording-site, and instrument complexities can be removed from seismograms. Assuming that a small earthquake has a simple source, its seismogram can be treated as an empirical Green's function and deconvolved from the seismogram of a larger and/or more complex earthquake by spectral division. When the deconvolution is well posed, the quotient spectrum represents the apparent source-time function of the larger event. This study shows that with high-quality locally recorded earthquake data it is feasible to Fourier transform the quotient and obtain a useful result in the time domain. In practice, the deconvolution can be stabilized by one of several simple techniques. An application of the method is given.
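One common stabilization of the spectral division is the water-level method, sketched below. The synthetic wavelet, the boxcar source function, and the `level` parameter are illustrative assumptions, not the paper's data or choice of technique.

```python
import numpy as np

def waterlevel_deconvolve(big, small, level=0.01):
    """Deconvolve an empirical Green's function (the small event's record)
    from a larger event's record by spectral division, stabilized with a
    water level: divisor spectral amplitudes below a floor are raised to
    that floor so the quotient stays well posed."""
    n = len(big)
    B = np.fft.rfft(big, n)
    S = np.fft.rfft(small, n)
    floor = level * np.max(np.abs(S))
    S_stab = np.where(np.abs(S) < floor,
                      floor * np.exp(1j * np.angle(S)), S)
    return np.fft.irfft(B / S_stab, n)

# Toy check: the "large" event is a boxcar source time function convolved
# with the small event's record; the quotient should recover the boxcar.
t = np.arange(256)
small = np.exp(-t / 20.0) * np.sin(t / 3.0)   # simple-source seismogram
stf = np.zeros(256)
stf[10:30] = 1.0                              # apparent source time function
big = np.convolve(stf, small)[:256]
recovered = waterlevel_deconvolve(big, small)
```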

  1. Mapping Epileptic Activity: Sources or Networks for the Clinicians?

    PubMed Central

    Pittau, Francesca; Mégevand, Pierre; Sheybani, Laurent; Abela, Eugenio; Grouiller, Frédéric; Spinelli, Laurent; Michel, Christoph M.; Seeck, Margitta; Vulliemoz, Serge

    2014-01-01

    Epileptic seizures of focal origin are classically considered to arise from a focal epileptogenic zone and then spread to other brain regions. This is a key concept for semiological electro-clinical correlations, localization of relevant structural lesions, and selection of patients for epilepsy surgery. Recent developments in neuroimaging and electrophysiology, and combinations thereof, have been validated as contributory tools for focus localization. In parallel, these techniques have revealed that widespread networks of brain regions, rather than a single epileptogenic region, are implicated in focal epileptic activity. Sophisticated multimodal imaging and analysis strategies of brain connectivity patterns have been developed to characterize the spatio-temporal relationships within these networks by combining the strength of both techniques to optimize spatial and temporal resolution with whole-brain coverage and directional connectivity. In this paper, we review the potential clinical contribution of these functional mapping techniques, as well as invasive electrophysiology in human beings and animal models, for characterizing network connectivity. PMID:25414692

  2. Is it safe to use local anesthesia with adrenaline in hand surgery? WALANT technique.

    PubMed

    Pires Neto, Pedro José; Moreira, Leonardo de Andrade; Las Casas, Priscilla Pires de

    2017-01-01

    In the past it was taught that local anesthetics should not be used with adrenaline for procedures in the extremities. This dogma has been transmitted from generation to generation; neither its truth nor the source of the doubt has been questioned. In many situations the benefit of use was not appreciated, because it was often thought unnecessary to prolong the anesthetic effect, since the procedures were mostly of short duration. After the publication of studies by Canadian surgeons, it came to be understood that the benefits went beyond the duration of anesthesia. The WALANT technique allows a surgical field without bleeding, the possibility of exchanging information with the patient during the procedure, reduction of waste material, reduction of costs, and improvement of safety. Thus, after passing through the initial phase of doubt about this technique, the authors verified its benefits and the patients' satisfaction in being able to return home immediately after the procedures.

  3. DC Microgrids–Part I: A Review of Control Strategies and Stabilization Techniques

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dragicevic, Tomislav; Lu, Xiaonan; Vasquez, Juan

    2015-01-01

    This paper presents a review of control strategies, stability analysis, and stabilization techniques for dc microgrids (MGs). Overall control is systematically classified into local and coordinated control levels according to the respective functionalities in each level. As opposed to local control, which relies only on local measurements, some line of communication between units needs to be made available in order to achieve coordinated control. Depending on the communication method, three basic coordinated control strategies can be distinguished, i.e., decentralized, centralized, and distributed control. Decentralized control can be regarded as an extension of local control since it is also based exclusively on local measurements. In contrast, centralized and distributed control strategies rely on digital communication technologies. A number of approaches using these three coordinated control strategies to achieve various control objectives are reviewed in this paper. Moreover, properties of dc MG dynamics and stability are discussed. This paper illustrates that tightly regulated point-of-load converters tend to reduce the stability margins of the system since they introduce negative impedances, which can potentially oscillate with lightly damped power supply input filters. It is also demonstrated how the stability of the whole system is determined by the ratio of the source and load impedances, referred to as the minor loop gain. Several prominent specifications for the minor loop gain are reviewed. Finally, a number of active stabilization techniques are presented.

  4. Non-contact local temperature measurement inside an object using an infrared point detector

    NASA Astrophysics Data System (ADS)

    Hisaka, Masaki

    2017-04-01

    Local temperature measurement in deep areas of objects is an important technique in biomedical measurement. We have investigated a non-contact method for measuring the temperature inside an object using a point detector for infrared (IR) light. An IR point detector with a pinhole was constructed, and the radiant IR light emitted from the local interior of the object is photodetected only at the position of the pinhole, which is located in an imaging relation with it. We measured the thermal structure of the filament inside a miniature bulb using the IR point detector, and investigated the temperature dependence at approximately human body temperature using a glass plate positioned in front of the heat source.

  5. Squids in the Study of Cerebral Magnetic Field

    NASA Astrophysics Data System (ADS)

    Romani, G. L.; Narici, L.

    The following sections are included: * INTRODUCTION * HISTORICAL OVERVIEW * NEUROMAGNETIC FIELDS AND AMBIENT NOISE * DETECTORS * Room temperature sensors * SQUIDs * DETECTION COILS * Magnetometers * Gradiometers * Balancing * Planar gradiometers * Choice of the gradiometer parameters * MODELING * Current pattern due to neural excitations * Action potentials and postsynaptic currents * The current dipole model * Neural population and detected fields * Spherically bounded medium * SPATIAL CONFIGURATION OF THE SENSORS * SOURCE LOCALIZATION * Localization procedure * Experimental accuracy and reproducibility * SIGNAL PROCESSING * Analog Filtering * Bandpass filters * Line rejection filters * DATA ANALYSIS * Analysis of evoked/event-related responses * Simple average * Selected average * Recursive techniques * Similarity analysis * Analysis of spontaneous activity * Mapping and localization * EXAMPLES OF NEUROMAGNETIC STUDIES * Neuromagnetic measurements * Studies on the normal brain * Clinical applications * Epilepsy * Tinnitus * CONCLUSIONS * ACKNOWLEDGEMENTS * REFERENCES

  6. Integral-moment analysis of the BATSE gamma-ray burst intensity distribution

    NASA Technical Reports Server (NTRS)

    Horack, John M.; Emslie, A. Gordon

    1994-01-01

    We have applied the technique of integral-moment analysis to the intensity distribution of the first 260 gamma-ray bursts observed by the Burst and Transient Source Experiment (BATSE) on the Compton Gamma Ray Observatory. This technique provides direct measurement of properties such as the mean, variance, and skewness of the convolved luminosity-number density distribution, as well as the associated uncertainties. Using this method, one obtains insight into the nature of the source distributions unavailable through computation of traditional single parameters such as V/V(sub max). If the luminosity function of the gamma-ray bursts is strongly peaked, giving bursts only a narrow range of luminosities, these results are then direct probes of the radial distribution of sources, regardless of whether the bursts are a local phenomenon, are distributed in a galactic halo, or are at cosmological distances. Accordingly, an integral-moment analysis of the intensity distribution of the gamma-ray bursts provides the most complete analytic description of the source distribution available from the data, and offers the most comprehensive test of the compatibility of a given hypothesized distribution with observation.
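The low-order moments such an analysis measures can be illustrated on a synthetic sample with known moments. This is a generic sketch of mean, variance, and standardized skewness, not the paper's integral-moment estimator, which operates on the binned intensity distribution with its uncertainties.

```python
import numpy as np

def central_moments(x):
    """Sample mean, variance, and standardized skewness -- the low-order
    summary statistics an integral-moment analysis extracts from an
    observed intensity distribution."""
    mu = x.mean()
    var = ((x - mu) ** 2).mean()
    skew = ((x - mu) ** 3).mean() / var ** 1.5
    return mu, var, skew

# Toy check on a distribution with known moments: a unit exponential
# sample has mean 1, variance 1, and skewness 2.
rng = np.random.default_rng(4)
mu, var, skew = central_moments(rng.exponential(size=100_000))
```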

  7. Parallel tiled Nussinov RNA folding loop nest generated using both dependence graph transitive closure and loop skewing.

    PubMed

    Palkowski, Marek; Bielecki, Wlodzimierz

    2017-06-02

    RNA secondary structure prediction is a compute-intensive task that lies at the core of several search algorithms in bioinformatics. Fortunately, RNA folding approaches, such as Nussinov base-pair maximization, involve mathematical operations over affine control loops whose iteration space can be represented by the polyhedral model. Polyhedral compilation techniques have proven to be a powerful tool for the optimization of dense array codes. However, the classical affine loop nest transformations used with these techniques do not effectively optimize the dynamic-programming codes of RNA structure prediction. The purpose of this paper is to present a novel approach for generating a parallel tiled Nussinov RNA loop nest with significantly higher performance than that of known related code. This effect is achieved by improving code locality and parallelizing calculations. To improve code locality, we apply our previously published technique of automatic loop nest tiling to all three loops of the Nussinov loop nest. This approach first forms original rectangular 3D tiles and then corrects them to establish their validity by means of applying the transitive closure of a dependence graph. To produce parallel code, we apply the loop skewing technique to the tiled Nussinov loop nest. The technique is implemented as part of the publicly available polyhedral source-to-source TRACO compiler. The generated code was run on modern Intel multi-core processors and coprocessors. We present the speed-up factor of the generated parallel Nussinov RNA code and demonstrate that it is considerably faster than related codes in which only the two outer loops of the Nussinov loop nest are tiled.
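The underlying Nussinov recurrence and its triangular loop nest, the target of the tiling and skewing described above, can be sketched as follows. This is an untiled, sequential reference version for clarity, not TRACO output; the `min_loop` constraint is a common convention we assume here.

```python
def nussinov(seq, min_loop=1):
    """Nussinov base-pair maximization: N[i][j] holds the maximum number of
    pairs in seq[i..j]. The triangular O(n^3) loop nest below is the affine
    iteration space that polyhedral tiling and skewing techniques target."""
    pairs = {("A", "U"), ("U", "A"), ("G", "C"), ("C", "G"),
             ("G", "U"), ("U", "G")}
    n = len(seq)
    N = [[0] * n for _ in range(n)]
    for span in range(min_loop + 1, n):        # subsequence length - 1
        for i in range(n - span):
            j = i + span
            best = max(N[i + 1][j], N[i][j - 1])          # i or j unpaired
            if (seq[i], seq[j]) in pairs:
                best = max(best, N[i + 1][j - 1] + 1)     # i pairs with j
            for k in range(i + 1, j):                     # bifurcation
                best = max(best, N[i][k] + N[k + 1][j])
            N[i][j] = best
    return N[0][n - 1]
```

The dependences of `N[i][j]` on `N[i+1][j]`, `N[i][j-1]`, `N[i+1][j-1]`, and all bifurcation splits are exactly what makes naive rectangular tiles invalid and motivates the transitive-closure correction in the paper.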

  8. Refinement of Methods for Evaluation of Near-Hypersingular Integrals in BEM Formulations

    NASA Technical Reports Server (NTRS)

    Fink, Patricia W.; Khayat, Michael A.; Wilton, Donald R.

    2006-01-01

    In this paper, we present advances in singularity cancellation techniques applied to integrals in BEM formulations that are nearly hypersingular. Significant advances have been made recently in singularity cancellation techniques applied to 1/R-type kernels [M. Khayat, D. Wilton, IEEE Trans. Antennas and Prop., 53, pp. 3180-3190, 2005], as well as to the gradients of these kernels [P. Fink, D. Wilton, and M. Khayat, Proc. ICEAA, pp. 861-864, Torino, Italy, 2005] on curved subdomains. In these approaches, the source triangle is divided into three tangent subtriangles with a common vertex at the normal projection of the observation point onto the source element or the extended surface containing it. The geometry of a typical tangent subtriangle and its local rectangular coordinate system with origin at the projected observation point is shown in Fig. 1. Whereas singularity cancellation techniques for 1/R-type kernels are now nearing maturity, the efficient handling of near-hypersingular kernels still needs attention. For example, in the gradient reference above, techniques are presented for computing the normal component of the gradient relative to the plane containing the tangent subtriangle. These techniques, summarized in the transformations in Table 1, are applied at the sub-triangle level and correspond particularly to the case in which the normal projection of the observation point lies within the boundary of the source element. They are found to be highly efficient as z approaches zero. Here, we extend the approach to cover two instances not previously addressed. First, we consider the case in which the normal projection of the observation point lies external to the source element. For such cases, we find that simple modifications to the transformations of Table 1 permit significant savings in computational cost.
Second, we present techniques that permit accurate computation of the tangential components of the gradient; i.e., tangent to the plane containing the source element.

  9. Application of the MCNP5 code to the Modeling of vaginal and intra-uterine applicators used in intracavitary brachytherapy: a first approach

    NASA Astrophysics Data System (ADS)

    Gerardy, I.; Rodenas, J.; Van Dycke, M.; Gallardo, S.; Tondeur, F.

    2008-02-01

    Brachytherapy is a radiotherapy treatment in which encapsulated radioactive sources are introduced within a patient. Depending on the technique used, such sources can produce high, medium or low local dose rates. The Monte Carlo method is a powerful tool to simulate sources and devices in order to help physicists in treatment planning. In multiple types of gynaecological cancer, intracavitary brachytherapy (HDR Ir-192 source) is used in combination with other treatments to give an additional local dose to the tumour. Different types of applicators are used in order to increase the dose imparted to the tumour and to limit the effect on healthy surrounding tissues. The aim of this work is to model both applicator and HDR source in order to evaluate the dose at a reference point as well as the effect of the materials constituting the applicators on the near-field dose. The MCNP5 code, based on the Monte Carlo method, has been used for the simulation. Dose calculations have been performed with the *F8 energy deposition tally, taking into account photons and electrons. Results from simulation have been compared with experimental in-phantom dose measurements. Differences between calculations and measurements are lower than 5%. The importance of the source position has been underlined.

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schnapauff, Dirk, E-mail: dirk.schnapauff@charite.de; Denecke, Timm; Grieser, Christian

    Purpose: This study was designed to investigate the clinical outcome of patients with irresectable, intrahepatic cholangiocarcinoma (IHC) treated with computed tomography (CT)-guided HDR-brachytherapy (CT-HDRBT) for local tumor ablation. Methods: Fifteen consecutive patients with histologically proven cholangiocarcinoma were selected for this retrospective study. Patients were treated by high-dose-rate internal brachytherapy (HDRBT) using an Iridium-192 source in afterloading technique through CT-guided, percutaneously placed catheters. A total of 27 brachytherapy treatments were performed in these patients between 2006 and 2009. Median tumor-enclosing target dose was 20 Gy, and mean target volume of the irradiated tumors was 131 (±90) ml (range, 10-257 ml). Follow-up consisted of clinical visits and magnetic resonance imaging of the liver every third month. Statistical evaluation included survival analysis using the Kaplan-Meier method. Results: After a median follow-up of 18 (range, 1-27) months after local ablation, 6 of the 15 patients are still alive; 4 of them did not receive further chemotherapy and are regarded as disease-free. The median local tumor control achieved was 10 months; median local tumor control, including repetitive local ablation, was 11 months. Median survival was 14 months after local ablation and 21 months after primary diagnosis. Conclusion: In view of current clinical data on the clinical outcome of cholangiocarcinoma, locally ablative treatment with CT-HDRBT represents a promising and safe technique for patients who are not eligible for tumor resection.

  11. Ray-based acoustic localization of cavitation in a highly reverberant environment.

    PubMed

    Chang, Natasha A; Dowling, David R

    2009-05-01

    Acoustic detection and localization of cavitation have inherent advantages over optical techniques because cavitation bubbles are natural sound sources, and acoustic transduction of cavitation sounds does not require optical access to the region of cavitating flow. In particular, near cavitation inception, cavitation bubbles may be visually small and occur infrequently, but may still emit audible sound pulses. In this investigation, direct-path acoustic recordings of cavitation events are made with 16 hydrophones mounted on the periphery of a water tunnel test section containing a low-cavitation-event-rate vortical flow. These recordings are used to localize the events in three dimensions via cross correlations to obtain arrival time differences. Here, bubble localization is hindered by reverberation, background noise, and the fact that both the pulse emission time and waveform are unknown. These hindrances are partially mitigated by a signal-processing scheme that incorporates straight-ray acoustic propagation and Monte-Carlo techniques for compensating ray-path, sound-speed, and hydrophone-location uncertainties. The acoustic localization results are compared to simultaneous optical localization results from dual-camera high-speed digital-video recordings. For 53 bubbles and a peak-signal to noise ratio frequency of 6.7 kHz, the root-mean-square spatial difference between optical and acoustic bubble location results was 1.94 cm. Parametric dependences in acoustic localization performance are also presented.
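
    The arrival-time-difference step can be sketched as a discrete cross-correlation; the pulse signals below are synthetic, and a real implementation would interpolate for sub-sample delays and account for the uncertainties described above.

```python
# Estimate the delay (in samples) of signal b relative to signal a by
# brute-force cross-correlation -- the first step of TDOA localization.
def xcorr_delay(a, b):
    n = len(a)
    best_lag, best_val = 0, float("-inf")
    for lag in range(-n + 1, n):
        s = sum(a[i] * b[i + lag] for i in range(n) if 0 <= i + lag < n)
        if s > best_val:
            best_val, best_lag = s, lag
    return best_lag

pulse = [0, 1, 3, 1, 0, 0, 0, 0]
delayed = [0, 0, 0, 1, 3, 1, 0, 0]   # same pulse, two samples later
lag = xcorr_delay(pulse, delayed)
```

    With arrival-time differences from several hydrophone pairs, each pair constrains the source to a hyperboloid, and the intersection gives the three-dimensional bubble location.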

  12. Bio-inspired UAV routing, source localization, and acoustic signature classification for persistent surveillance

    NASA Astrophysics Data System (ADS)

    Burman, Jerry; Hespanha, Joao; Madhow, Upamanyu; Pham, Tien

    2011-06-01

    A team consisting of Teledyne Scientific Company, the University of California at Santa Barbara and the Army Research Laboratory* is developing technologies in support of automated data exfiltration from heterogeneous battlefield sensor networks to enhance situational awareness for dismounts and command echelons. Unmanned aerial vehicles (UAV) provide an effective means to autonomously collect data from a sparse network of unattended ground sensors (UGSs) that cannot communicate with each other. UAVs are used to reduce the system reaction time by generating autonomous collection routes that are data-driven. Bio-inspired techniques for search provide a novel strategy to detect, capture and fuse data. A fast and accurate method has been developed to localize an event by fusing data from a sparse number of UGSs. This technique uses a bio-inspired algorithm based on chemotaxis or the motion of bacteria seeking nutrients in their environment. A unique acoustic event classification algorithm was also developed based on using swarm optimization. Additional studies addressed the problem of routing multiple UAVs, optimally placing sensors in the field and locating the source of gunfire at helicopters. A field test was conducted in November of 2009 at Camp Roberts, CA. The field test results showed that a system controlled by bio-inspired software algorithms can autonomously detect and locate the source of an acoustic event with very high accuracy and visually verify the event. In nine independent test runs of a UAV, the system autonomously located the position of an explosion nine times with an average accuracy of 3 meters. The time required to perform source localization using the UAV was on the order of a few minutes based on UAV flight times. 
In June 2011, additional field tests of the system will be performed and will include multiple acoustic events, optimal sensor placement based on acoustic phenomenology and the use of the International Technology Alliance (ITA) Sensor Network Fabric (IBM).

  13. Real-time analysis application for identifying bursty local areas related to emergency topics.

    PubMed

    Sakai, Tatsuhiro; Tamura, Keiichi

    2015-01-01

    Since social media started getting more attention from users on the Internet, it has become one of the most important information sources in the world. With its increasing popularity, data posted on social media sites are rapidly becoming collective intelligence, a term used to refer to new media that is displacing traditional media. In this paper, we focus on geotagged tweets on the Twitter site. These geotagged tweets are referred to as georeferenced documents because they include not only a short text message but also the posting time and location. Many researchers have been tackling the development of new data mining techniques for georeferenced documents to identify and analyze emergency topics, such as natural disasters, weather, diseases, and other incidents. In particular, the utilization of geotagged tweets to identify and analyze natural disasters has recently received much attention from administrative agencies because some case studies have achieved compelling results. In this paper, we propose a novel real-time analysis application for identifying bursty local areas related to emergency topics. The aim of our new application is to provide a platform that can identify and analyze the localities of emergency topics. The proposed application is composed of three core computational intelligence techniques: the Naive Bayes classifier, spatiotemporal clustering, and burst detection. Moreover, we have implemented two types of application interface: a Web application interface and an Android application interface. To evaluate the proposed application, we implemented a real-time weather observation system embedding the proposed application, using actual geotagged tweets crawled from the Twitter site. The weather observation system successfully detected bursty local areas related to observed emergency weather topics.
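
    A minimal sketch of the burst-detection idea, assuming a simple mean-plus-k-sigma threshold on per-window tweet counts; the abstract does not specify the actual burst detection technique, and the classifier and clustering steps are omitted here.

```python
# Flag time windows whose tweet count exceeds the historical mean by
# k standard deviations -- a toy stand-in for burst detection.
def bursty_windows(counts, k=2.0):
    n = len(counts)
    mean = sum(counts) / n
    sd = (sum((c - mean) ** 2 for c in counts) / n) ** 0.5
    return [i for i, c in enumerate(counts) if sd > 0 and c > mean + k * sd]

hourly = [3, 4, 2, 5, 3, 4, 30, 4, 3]   # hour 6 spikes (e.g. heavy rain)
alerts = bursty_windows(hourly)
```

    In the application, such flagged windows would be intersected with the spatiotemporal clusters of classified tweets to localize the bursty area.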

  14. Multi-scale monitoring of a marine geologic methane source in the Santa Barbara Channel using imaging spectrometry, ARCTAS-CARB in situ sampling and coastal hourly total hydrocarbon measurements

    NASA Astrophysics Data System (ADS)

    Bradley, E. S.; Leifer, I.; Roberts, D.; Dennison, P. E.; Margolis, J.; Moritsch, M.; Diskin, G. S.; Sachse, G. W.

    2009-12-01

    The Coal Oil Point (COP) hydrocarbon seep field off the coast of Santa Barbara, CA is one of the most active and best-studied marine geologic methane sources in the world and contributes to elevated terrestrial methane concentrations downwind. In this study, we investigate the spatiotemporal variability of this local source and the influence of meteorological conditions on transport and concentration. A methane plume emanating from Trilogy Seep was mapped with the Airborne Visible/Infrared Imaging Spectrometer at 7.5 m resolution using a short-wave infrared band ratio technique. The mapped plume structure agrees with the local wind speed and direction and is orthogonal to the surface currents. ARCTAS-CARB aircraft in situ sampling of lower-troposphere methane is compared to sub-hour total hydrocarbon concentration (THC) measurements from the Santa Barbara Air Pollution Control District (SBAPCD) station located near COP. Hourly SBAPCD THC values from 1980-2008 demonstrate a decrease in seep source strength until the late 1990s, followed by a consistent increase. The occurrence of elevated SBAPCD THC values for onshore wind conditions, as well as numerous positive outliers as high as 17 ppm, suggests that seep field emissions are both quasi-steady state and transient, direct (bubble) and diffuse (outgassing). As demonstrated for the COP seeps, the combination of imaging spectrometry, aircraft in situ sampling, and ground-based monitoring provides a powerful approach for understanding local methane sources and transport processes.

  15. Comparative study of shear wave-based elastography techniques in optical coherence tomography

    NASA Astrophysics Data System (ADS)

    Zvietcovich, Fernando; Rolland, Jannick P.; Yao, Jianing; Meemon, Panomsak; Parker, Kevin J.

    2017-03-01

    We compare five optical coherence elastography techniques able to estimate the shear speed of waves generated by one and two sources of excitation. The first two techniques make use of one piezoelectric actuator in order to produce a continuous shear wave propagation or a tone-burst propagation (TBP) of 400 Hz over a gelatin tissue-mimicking phantom. The remaining techniques utilize a second actuator located on the opposite side of the region of interest in order to create three types of interference patterns: crawling waves, swept crawling waves, and standing waves, depending on the selection of the frequency difference between the two actuators. We evaluated accuracy, contrast to noise ratio, resolution, and acquisition time for each technique during experiments. Numerical simulations were also performed in order to support the experimental findings. Results suggest that in the presence of strong internal reflections, single source methods are more accurate and less variable when compared to the two-actuator methods. In particular, TBP reports the best performance with an accuracy error <4.1%. Finally, the TBP was tested in a fresh chicken tibialis anterior muscle with a localized thermally ablated lesion in order to evaluate its performance in biological tissue.

  16. Case Studies in the Organizational Communication Course: Applying Textbook Concepts to Real Life Organizations.

    ERIC Educational Resources Information Center

    Byers, Peggy Yuhas

    This paper urges the use of case study discussions in the organizational communication class as an effective instructional technique. The paper presents a variety of formats for bringing case studies to life for students in the organizational communication course. It first discusses sources for case studies such as the World Wide Web, local and…

  17. A distributed transmit beamforming synchronization strategy for multi-element radar systems

    NASA Astrophysics Data System (ADS)

    Xiao, Manlin; Li, Xingwen; Xu, Jikang

    2017-02-01

    Distributed transmit beamforming has recently been discussed as an energy-efficient technique in wireless communication systems. Common to various techniques is that the destination node transmits a beacon signal or feedback to help source nodes synchronize their signals. However, this approach is not appropriate for a radar system, since the destination is a non-cooperative target at an unknown location. In this paper, we propose a novel synchronization strategy for a distributed multi-element beamforming radar system. Source nodes estimate parameters of beacon signals transmitted from the others to obtain their local synchronization information. The channel information of the phase propagation delay is transmitted to nodes via the reflected beacon signals as well. Next, each node generates appropriate parameters to form a beamforming signal at the target. The transmit beamforming signals of all nodes combine coherently at the target, compensating for the different propagation delays. We analyse the influence of the local oscillator accuracy and the parameter estimation errors on the performance of the proposed synchronization scheme. The results of numerical simulations illustrate that this synchronization scheme is effective in enabling transmit beamforming in a distributed multi-element radar system.

  18. Waveform inversion of acoustic waves for explosion yield estimation

    DOE PAGES

    Kim, K.; Rodgers, A. J.

    2016-07-08

    We present a new waveform inversion technique to estimate the energy of near-surface explosions using atmospheric acoustic waves. Conventional methods often employ air blast models based on a homogeneous atmosphere, where the acoustic wave propagation effects (e.g., refraction and diffraction) are not taken into account, and therefore, their accuracy decreases with increasing source-receiver distance. In this study, three-dimensional acoustic simulations are performed with a finite difference method in realistic atmospheres and topography, and the modeled acoustic Green's functions are incorporated into the waveform inversion for the acoustic source time functions. The strength of the acoustic source is related to explosion yield based on a standard air blast model. The technique was applied to local explosions (<10 km) and provided reasonable yield estimates (<~30% error) in the presence of realistic topography and atmospheric structure. In conclusion, the presented method can be extended to explosions recorded at far distance provided proper meteorological specifications.
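
    The linear core of such an inversion, recovering a source time function by least squares once the Green's functions have been modeled, can be sketched as below; the Green's function and signals here are synthetic, not output of the finite difference simulations described.

```python
# Least-squares recovery of a source time function from a recorded
# waveform, given a known (here synthetic) Green's function.
import numpy as np

def invert_stf(record, green, nsrc):
    # Build the convolution matrix G so that record ~= G @ stf.
    nrec = len(record)
    G = np.zeros((nrec, nsrc))
    for j in range(nsrc):
        seg = green[: nrec - j]
        G[j : j + len(seg), j] = seg
    stf, *_ = np.linalg.lstsq(G, record, rcond=None)
    return stf

green = np.array([1.0, 0.5, 0.25, 0.0, 0.0])   # synthetic Green's function
true_stf = np.array([0.0, 2.0, 1.0])           # "unknown" source time function
record = np.convolve(true_stf, green)          # noise-free synthetic record
est = invert_stf(record, green, 3)
```

    In the method above, the integral of the recovered source time function would then be mapped to yield through the standard air blast model.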

  20. Locating single-point sources from arrival times containing large picking errors (LPEs): the virtual field optimization method (VFOM)

    NASA Astrophysics Data System (ADS)

    Li, Xi-Bing; Wang, Ze-Wei; Dong, Long-Jun

    2016-01-01

    Microseismic monitoring systems using local location techniques tend to be timely, automatic and stable. One basic requirement of these systems is the automatic picking of arrival times. However, arrival times generated by automated techniques always contain large picking errors (LPEs), which may make the location solution unreliable and cause the integrated system to be unstable. To overcome the LPE issue, we propose the virtual field optimization method (VFOM) for locating single-point sources. In contrast to existing approaches, the VFOM optimizes a continuous, virtually established objective function to search the space for the common intersection of the hyperboloids determined by sensor pairs, rather than minimizing the residual between the model-calculated and measured arrivals. The results of numerical examples and in-situ blasts show that the VFOM can obtain more precise and stable solutions than traditional methods when the input data contain LPEs. Furthermore, we discuss the impact of LPEs on objective functions to determine the LPE-tolerance mechanism, velocity sensitivity and stopping criteria of the VFOM. The proposed method is also capable of locating acoustic sources using passive techniques such as passive sonar detection and acoustic emission.

  1. Beamforming array techniques for acoustic emission monitoring of large concrete structures

    NASA Astrophysics Data System (ADS)

    McLaskey, Gregory C.; Glaser, Steven D.; Grosse, Christian U.

    2010-06-01

    This paper introduces a novel method of acoustic emission (AE) analysis which is particularly suited for field applications on large plate-like reinforced concrete structures, such as walls and bridge decks. Similar to phased-array signal processing techniques developed for other non-destructive evaluation methods, this technique adapts beamforming tools developed for passive sonar and seismological applications for use in AE source localization and signal discrimination analyses. Instead of relying on the relatively weak P-wave, this method uses the energy-rich Rayleigh wave and requires only a small array of 4-8 sensors. Tests on an in-service reinforced concrete structure demonstrate that the azimuth of an artificial AE source can be determined via this method for sources located up to 3.8 m from the sensor array, even when the P-wave is undetectable. The beamforming array geometry also allows additional signal processing tools to be implemented, such as the VESPA process (VElocity SPectral Analysis), whereby the arrivals of different wave phases are identified by their apparent velocity of propagation. Beamforming AE can reduce sampling rate and time synchronization requirements between spatially distant sensors which in turn facilitates the use of wireless sensor networks for this application.
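
    Delay-and-sum beamforming over candidate azimuths can be sketched as follows; the array geometry, wave speed, and pulse signals are invented for illustration, and a real AE implementation would work with the dispersive Rayleigh wave rather than idealized pulses.

```python
# Delay-and-sum beamforming: steer over candidate azimuths, delay each
# channel by the plane-wave arrival time at its sensor, and pick the
# azimuth that maximizes the summed-output energy.
import math

def beamform_azimuth(signals, positions, c, dt, angles):
    n = len(signals[0])
    best_angle, best_energy = angles[0], -1.0
    for th in angles:
        ux, uy = math.cos(th), math.sin(th)          # propagation direction
        out = [0.0] * n
        for sig, (x, y) in zip(signals, positions):
            shift = int(round((x * ux + y * uy) / (c * dt)))  # sample delay
            for i in range(n):
                if 0 <= i + shift < n:
                    out[i] += sig[i + shift]
        energy = sum(v * v for v in out)
        if energy > best_energy:
            best_energy, best_angle = energy, th
    return best_angle

def pulse_at(idx, n=8):
    return [1.0 if i == idx else 0.0 for i in range(n)]

positions = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]     # small 3-sensor array
signals = [pulse_at(3), pulse_at(4), pulse_at(3)]    # wave moving along +x
angles = [0.0, math.pi / 2, math.pi, 3 * math.pi / 2]
az = beamform_azimuth(signals, positions, 1.0, 1.0, angles)
```

    Because only relative delays within the compact array matter, this is the property that relaxes the time-synchronization requirements between spatially distant sensors noted above.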

  2. Complex time series analysis of PM10 and PM2.5 for a coastal site using artificial neural network modelling and k-means clustering

    NASA Astrophysics Data System (ADS)

    Elangasinghe, M. A.; Singhal, N.; Dirks, K. N.; Salmond, J. A.; Samarasinghe, S.

    2014-09-01

    This paper uses artificial neural networks (ANN), combined with k-means clustering, to understand the complex time series of PM10 and PM2.5 concentrations at a coastal location of New Zealand based on data from a single site. Out of available meteorological parameters from the network (wind speed, wind direction, solar radiation, temperature, relative humidity), key factors governing the pattern of the time series concentrations were identified through input sensitivity analysis performed on the trained neural network model. The transport pathways of particulate matter under these key meteorological parameters were further analysed through bivariate concentration polar plots and k-means clustering techniques. The analysis shows that the external sources such as marine aerosols and local sources such as traffic and biomass burning contribute equally to the particulate matter concentrations at the study site. These results are in agreement with the results of receptor modelling by the Auckland Council based on Positive Matrix Factorization (PMF). Our findings also show that contrasting concentration-wind speed relationships exist between marine aerosols and local traffic sources resulting in very noisy and seemingly large random PM10 concentrations. The inclusion of cluster rankings as an input parameter to the ANN model showed a statistically significant (p < 0.005) improvement in the performance of the ANN time series model and also showed better performance in picking up high concentrations. For the presented case study, the correlation coefficient between observed and predicted concentrations improved from 0.77 to 0.79 for PM2.5 and from 0.63 to 0.69 for PM10 and reduced the root mean squared error (RMSE) from 5.00 to 4.74 for PM2.5 and from 6.77 to 6.34 for PM10. 
The techniques presented here enable the user to obtain an understanding of potential sources and their transport characteristics prior to the implementation of costly chemical analysis techniques or advanced air dispersion models.
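
    The k-means step can be illustrated with a toy one-dimensional clustering of wind speeds into calm and windy regimes; the values are invented, and the actual study clusters multivariate concentration-meteorology data.

```python
# Two-cluster 1-D k-means with extremal initialization: alternate between
# assigning points to the nearest centroid and recomputing centroid means.
def kmeans_1d(xs, iters=20):
    c0, c1 = min(xs), max(xs)
    g0, g1 = list(xs), []
    for _ in range(iters):
        g0 = [x for x in xs if abs(x - c0) <= abs(x - c1)]
        g1 = [x for x in xs if abs(x - c0) > abs(x - c1)]
        if not g0 or not g1:
            break
        c0, c1 = sum(g0) / len(g0), sum(g1) / len(g1)
    return (c0, c1), (g0, g1)

winds = [0.5, 0.8, 1.1, 0.9, 6.0, 7.2, 6.8, 0.7]   # calm vs windy hours
(c_calm, c_windy), (calm, windy) = kmeans_1d(winds)
```

    Feeding the resulting cluster label back into the ANN as an input is the "cluster ranking" refinement that improved the model performance quoted above.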

  3. Direct Position Determination of Multiple Non-Circular Sources with a Moving Coprime Array.

    PubMed

    Zhang, Yankui; Ba, Bin; Wang, Daming; Geng, Wei; Xu, Haiyun

    2018-05-08

    Direct position determination (DPD) is currently a hot topic in wireless localization research as it is more accurate than traditional two-step positioning. However, current DPD algorithms are all based on uniform arrays, which have insufficient degrees of freedom and limited estimation accuracy. To improve the DPD accuracy, this paper introduces a coprime array to the position model of multiple non-circular sources with a moving array. To maximize the advantages of this coprime array, we reconstruct the covariance matrix by vectorization, apply a spatial smoothing technique, and converge the subspace data from each measuring position to establish the cost function. Finally, we obtain the position coordinates of the multiple non-circular sources. The complexity of the proposed method is computed and compared with that of other methods, and the Cramér-Rao lower bound of DPD for multiple sources with a moving coprime array is derived. Theoretical analysis and simulation results show that the proposed algorithm is not only applicable to circular sources, but can also improve the positioning accuracy of non-circular sources. Compared with existing two-step positioning algorithms and DPD algorithms based on uniform linear arrays, the proposed technique offers a significant improvement in positioning accuracy with a slight increase in complexity.

  4. Local wisdom of Ngata Toro community in utilizing forest resources as a learning source of biology

    NASA Astrophysics Data System (ADS)

    Yuliana; Sriyati, Siti; Sanjaya, Yayan

    2017-08-01

    Indonesian society is pluralistic, with different cultures and local potencies in each region. Some local communities still adhere to traditions, passed down through generations, of managing natural resources wisely. Applying these values of local wisdom is necessary to teach students to better respect the culture and local potential of their region. There are many ways of developing student character by exploring local wisdom and implementing it as a learning resource. This study aims at revealing the values of local wisdom of the Ngata Toro indigenous people of Central Sulawesi Province in managing the forest as a source of learning biology. The research was conducted through in-depth interviews, non-participant observation, documentation studies, and field notes. The data were analyzed with triangulation techniques using qualitative interaction analysis: data collection, data reduction, and data display. The Ngata Toro local community manages the forest by dividing it into several zones (wana ngkiki, wana, pangale, pahawa pongko, oma, and balingkea), accompanied by rules for result-based forest conservation and sustainable utilization. Identifying the purpose of zonation and regulation of the forest reveals values such as environmental conservation, balance, sustainability, and mutual cooperation. These values are implemented as a biological learning resource derived from the competency standard of analyzing the utilization and conservation of the environment.

  5. Fine particle receptor modeling in the atmosphere of Mexico City.

    PubMed

    Vega, Elizabeth; Lowenthal, Douglas; Ruiz, Hugo; Reyes, Elizabeth; Watson, John G; Chow, Judith C; Viana, Mar; Querol, Xavier; Alastuey, Andrés

    2009-12-01

    Source apportionment analyses were carried out by means of receptor modeling techniques to determine the contribution of major fine particulate matter (PM2.5) sources found at six sites in Mexico City. Thirty-six source profiles were determined within Mexico City to establish the fingerprints of particulate matter sources. Additionally, the profiles under the same source category were averaged using cluster analysis and the fingerprints of 10 sources were included. Before application of the chemical mass balance (CMB), several tests were carried out to determine the best combination of source profiles and species used for the fitting. CMB results showed significant spatial variations in source contributions among the six sites that are influenced by local soil types and land use. On average, 24-hr PM2.5 concentrations were dominated by mobile source emissions (45%), followed by secondary inorganic aerosols (16%) and geological material (17%). Industrial emissions representing oil combustion and incineration contributed less than 5%, and their contribution was higher at the industrial areas of Tlalnepantla (11%) and Xalostoc (8%). Other sources such as cooking, biomass burning, and oil fuel combustion were identified at lower levels. A second receptor model (principal component analysis, [PCA]) was subsequently applied to three of the monitoring sites for comparison purposes. Although differences were obtained between source contributions, results evidence the advantages of the combined use of different receptor modeling techniques for source apportionment, given the complementary nature of their results. Further research is needed in this direction to reach a better agreement between the estimated source contributions to the particulate matter mass.
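
    The CMB fitting step is, at its core, a least-squares solve of ambient species concentrations against source profiles; the profiles and contributions below are invented for illustration, and the effective-variance weighting used in practice is omitted.

```python
# Chemical mass balance as a linear model: ambient = profiles @ contributions,
# solved for the source contributions by least squares.
import numpy as np

# Rows: chemical species; columns: source profiles (mass fractions).
profiles = np.array([
    [0.60, 0.05],   # elemental carbon: mobile vs geological (made up)
    [0.30, 0.10],   # organic carbon
    [0.05, 0.70],   # crustal elements
])
contributions_true = np.array([10.0, 5.0])   # ug/m3 from each source
ambient = profiles @ contributions_true      # synthetic ambient concentrations
est, *_ = np.linalg.lstsq(profiles, ambient, rcond=None)
```

    With real data, the fit quality and collinearity diagnostics guide the choice of which profiles and fitting species to retain, as described above.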

  6. Dynamic Response of a Magnetized Plasma to an External Source: Application to Space and Solid State Plasmas

    NASA Astrophysics Data System (ADS)

    Zhou, Huai-Bei

    This dissertation examines the dynamic response of a magnetoplasma to an external time-dependent current source. To achieve this goal, a new method combining analytic and numerical techniques was developed to study the dynamic response of a 3-D magnetoplasma to a time-dependent current source imposed across the magnetic field. The set of cold electron and/or ion plasma equations and Maxwell's equations are first solved analytically in (k, omega) space; inverse Laplace and 3-D complex Fast Fourier Transform (FFT) techniques are subsequently used to numerically transform the radiation fields and plasma currents from (k, omega) space to (r, t) space. The dynamic responses of the electron plasma and of the compensated two-component plasma to external current sources are studied separately. The results show that the electron plasma responds to a time-varying current source imposed across the magnetic field by exciting whistler/helicon waves and forming an expanding local current loop, induced by field-aligned plasma currents. The current loop consists of two anti-parallel field-aligned current channels concentrated at the ends of the imposed current and a cross-field current region connecting these channels. The latter is driven by an electron Hall drift. A compensated two-component plasma responds to the same current source as follows: (a) for slow time scales tau > Omega_i^(-1), it generates Alfven waves and forms a non-local current loop in which the ion polarization currents dominate the cross-field current; (b) for fast time scales tau < Omega_i^(-1), the dynamic response of the compensated two-component plasma is the same as that of the electron plasma. The characteristics of the current closure region are determined by the background plasma density, the magnetic field, and the time scale of the current source. This study has applications to a diverse range of space and solid state plasma problems.
These problems include current closure in emf-inducing tethered satellite systems (TSS), generation of ELF/VLF waves by ionospheric heating, current closure and quasineutrality in thin magnetopause transitions, and short electromagnetic pulse generation in solid-state plasmas. The cross-field current in TSS builds up on a time scale corresponding to the whistler waves and results in local current closure. Amplitude-modulated HF ionospheric heating generates ELF/VLF waves by forming a horizontal magnetic dipole; the dipole is formed by the current closure in the modified region. For thin transitions, the time-dependent cross-field polarization field at the magnetopause could be neutralized by the formation of field-aligned current loops that close by a cross-field electron Hall current. A moving current source in a solid-state plasma results in microwave emission if the speed of the source exceeds the local phase velocity of the helicon or Alfven waves. Detailed analysis of the above problems is presented in the thesis.

  7. Reconstructing source terms from atmospheric concentration measurements: Optimality analysis of an inversion technique

    NASA Astrophysics Data System (ADS)

    Turbelin, Grégory; Singh, Sarvesh Kumar; Issartel, Jean-Pierre

    2014-12-01

    In the event of an accidental or intentional contaminant release in the atmosphere, it is imperative, for managing emergency response, to diagnose the release parameters of the source from measured data. Reconstruction of the source information exploiting measured data is called an inverse problem. To solve such a problem, several techniques are currently being developed. The first part of this paper provides a detailed description of one of them, known as the renormalization method. This technique, proposed by Issartel (2005), has been derived using an approach different from that of standard inversion methods and gives a linear solution to the continuous Source Term Estimation (STE) problem. In the second part of this paper, the discrete counterpart of this method is presented. By using matrix notation, common in data assimilation and suitable for numerical computing, it is shown that the discrete renormalized solution belongs to a family of well-known inverse solutions (minimum weighted norm solutions), which can be computed by using the concept of generalized inverse operator. It is shown that, when the weight matrix satisfies the renormalization condition, this operator satisfies the criteria used in geophysics to define good inverses. Notably, by means of the Model Resolution Matrix (MRM) formalism, we demonstrate that the renormalized solution fulfils optimal properties for the localization of single point sources. Throughout the article, the main concepts are illustrated with data from a wind tunnel experiment conducted at the Environmental Flow Research Centre at the University of Surrey, UK.
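The discrete solution described above, a minimum weighted-norm generalized inverse, can be sketched in a few lines of linear algebra. The weights here are computed with a multiplicative fixed point of the kind familiar from optimal experimental design; this is an illustration of the structure of the solution, not the authors' implementation, and the matrix A and measurements mu are random stand-ins:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 5, 200                         # 5 detectors, 200 grid cells
A = rng.random((m, n))                # adjoint (sensitivity) matrix; row i ~ a_i(x)

# Multiplicative fixed point for the renormalizing weights f (illustrative):
# at convergence, a_i^T H_f^{-1} a_i <= m with equality on the support of f,
# where H_f = A diag(f) A^T.  The trace identity sum_i f_i q_i = m keeps sum(f) = 1.
f = np.full(n, 1.0 / n)
for _ in range(5000):
    Hf = (A * f) @ A.T
    q = np.einsum('im,mi->i', A.T @ np.linalg.inv(Hf), A)   # q_i = a_i^T H_f^{-1} a_i
    f *= q / m

# Minimum weighted-norm (generalized inverse) estimate of the source field:
mu = A @ rng.random(n)                # synthetic measurement vector
s_hat = f * (A.T @ np.linalg.solve((A * f) @ A.T, mu))
```

By construction the estimate reproduces the measurements exactly (A @ s_hat equals mu), while the weights concentrate the reconstruction on the cells the detectors actually "see".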

  8. Is the difference between chemical and numerical estimates of baseflow meaningful?

    NASA Astrophysics Data System (ADS)

    Cartwright, Ian; Gilfedder, Ben; Hofmann, Harald

    2014-05-01

    Both chemical and numerical techniques are commonly used to calculate baseflow inputs to gaining rivers. In general the chemical methods yield lower estimates of baseflow than the numerical techniques. In part, this may be due to the techniques assuming two components (event water and baseflow) whereas there may also be multiple transient stores of water. Bank return waters, interflow, or waters stored on floodplains are delayed components that may be geochemically similar to the surface water from which they are derived; numerical techniques may record these components as baseflow whereas chemical mass balance studies are likely to aggregate them with the surface water component. This study compares baseflow estimates using chemical mass balance, local minimum methods, and recursive digital filters in the upper reaches of the Barwon River, southeast Australia. While more sophisticated techniques exist, these methods of estimating baseflow are readily applied with the available data and have been used widely elsewhere. During the early stages of high-discharge events, chemical mass balance overestimates groundwater inflows, probably due to flushing of saline water from wetlands and marshes, soils, or the unsaturated zone. Overall, however, estimates of baseflow from the local minimum and recursive digital filters are higher than those from chemical mass balance using Cl calculated from continuous electrical conductivity. Between 2001 and 2011, the baseflow contribution to the upper Barwon River calculated using chemical mass balance is between 12 and 25% of annual discharge. Recursive digital filters predict higher baseflow contributions of 19 to 52% of annual discharge. These estimates are similar to those from the local minimum method (16 to 45% of annual discharge). These differences most probably reflect how the different techniques characterise the transient water sources in this catchment. 
The local minimum and recursive digital filters aggregate much of the water from delayed sources as baseflow. However, as many of these delayed transient water stores (such as bank return flow, floodplain storage, or interflow) have Cl concentrations that are similar to surface runoff, chemical mass balance calculations aggregate them with the surface runoff component. The difference between the estimates is greatest following periods of high discharge in winter, implying that these transient stores of water feed the river for several weeks to months at that time. Cl vs. discharge variations during individual flow events also demonstrate that inflows of high-salinity older water occur on the rising limbs of hydrographs, followed by inflows of low-salinity water from the transient stores as discharge falls. The use of complementary techniques allows a better understanding of the different components of water that contribute to river flow, which is important for the management and protection of water resources.
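The two families of estimates compared above can be reproduced in a few lines. Below is a minimal sketch assuming a discharge series q and tracer concentrations c; the filter parameter 0.925 and the end-member concentrations are conventional placeholder values, not numbers from this study:

```python
import numpy as np

def digital_filter_baseflow(q, alpha=0.925):
    """One-parameter recursive digital filter (Lyne-Hollick form), single forward pass."""
    q = np.asarray(q, dtype=float)
    quick = np.zeros_like(q)
    for k in range(1, len(q)):
        quick[k] = alpha * quick[k - 1] + 0.5 * (1 + alpha) * (q[k] - q[k - 1])
        quick[k] = min(max(quick[k], 0.0), q[k])   # quickflow bounded by [0, q]
    return q - quick                                # baseflow = total - quickflow

def cmb_baseflow(q, c, c_baseflow, c_runoff):
    """Two-component chemical mass balance on a conservative tracer (e.g. Cl)."""
    frac = np.clip((c - c_runoff) / (c_baseflow - c_runoff), 0.0, 1.0)
    return frac * q
```

In practice the filter is usually run in several forward and backward passes, and the end members must be chosen from local data; the systematic gap between the two estimates for the same discharge record is exactly the difference the abstract discusses.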

  9. Magnetoencephalography recording and analysis.

    PubMed

    Velmurugan, Jayabal; Sinha, Sanjib; Satishchandra, Parthasarathy

    2014-03-01

Magnetoencephalography (MEG) non-invasively measures the magnetic field generated by the excitatory postsynaptic electrical activity of the apical dendrites of pyramidal cells. Such a tiny magnetic field is measured with biomagnetometer sensors coupled to Superconducting Quantum Interference Devices (SQUIDs) inside a magnetically shielded room (MSR). Subjects are usually screened for the presence of ferromagnetic materials, and then the head position indicator coils, electroencephalography (EEG) electrodes (if measured simultaneously), and fiducials are digitized using a 3D digitizer, which aids in movement correction and in transferring the MEG data from head coordinates to device and voxel coordinates, thereby enabling more accurate co-registration and localization. MEG data pre-processing involves filtering the data for environmental and subject interference, and artefact identification and rejection. Magnetic resonance imaging (MRI) is processed for correction and for identifying fiducials. After choosing and computing the appropriate head model (spherical or realistic; boundary/finite element model), the interictal/ictal epileptiform discharges are selected and modeled by an appropriate source modeling technique (clinically, the most commonly used is the single equivalent current dipole, or ECD, model). The equivalent current dipole (ECD) source localization of the modeled interictal epileptiform discharge (IED) is considered physiologically valid or acceptable based on waveform morphology, isofield pattern, and dipole parameters (localization, dipole moment, confidence volume, goodness of fit). Thus, MEG source localization can aid clinicians in sublobar localization, lateralization, and grid placement by delineating the irritative/seizure onset zone. It also accurately localizes eloquent cortex, such as the visual and language areas.
MEG also aids in diagnosing and delineating multiple novel findings in other neuropsychiatric disorders, including Alzheimer's disease, parkinsonism, traumatic brain injury, autistic disorders, and so on.
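The core of the ECD fit described above can be illustrated with a toy forward model. A real MEG fit uses a spherical or realistic head model; the sketch below instead uses a current dipole in an unbounded homogeneous medium, a grid search over location, a linear least-squares fit of the dipole moment, and a goodness-of-fit metric. All geometry and values are illustrative assumptions:

```python
import numpy as np

MU0_4PI = 1e-7  # mu_0 / 4*pi

def dipole_field(r_q, q, sensors):
    """B-field of a current dipole q at r_q (unbounded homogeneous conductor)."""
    d = sensors - r_q
    return MU0_4PI * np.cross(q, d) / (np.linalg.norm(d, axis=1) ** 3)[:, None]

def fit_ecd(b_meas, sensors, grid):
    """Grid search over location; the moment is linear, so fit it by least squares."""
    best = (-np.inf, None, None)
    for r_q in grid:
        # lead matrix: columns are fields of unit dipoles along x, y, z
        L = np.column_stack([dipole_field(r_q, e, sensors).ravel()
                             for e in np.eye(3)])
        q_hat, *_ = np.linalg.lstsq(L, b_meas.ravel(), rcond=None)
        resid = b_meas.ravel() - L @ q_hat
        gof = 1.0 - resid @ resid / (b_meas.ravel() @ b_meas.ravel())
        if gof > best[0]:
            best = (gof, r_q, q_hat)
    return best

# synthetic "measurement": 5x5 sensor plane above a known dipole
gx, gy = np.meshgrid(np.linspace(-0.08, 0.08, 5), np.linspace(-0.08, 0.08, 5))
sensors = np.column_stack([gx.ravel(), gy.ravel(), np.full(25, 0.10)])
true_r, true_q = np.array([0.01, 0.02, 0.03]), np.array([0.0, 10e-9, 5e-9])
b = dipole_field(true_r, true_q, sensors)

grid = [np.array([x, y, z])
        for x in np.linspace(-0.03, 0.03, 7)
        for y in np.linspace(-0.02, 0.04, 7)
        for z in np.linspace(0.0, 0.06, 7)]
gof, r_hat, q_hat = fit_ecd(b, sensors, grid)
```

Because the synthetic data come from the same forward model, the fit recovers the true location with a goodness of fit of essentially 1; on real data the goodness of fit, confidence volume, and dipole moment are the acceptance criteria listed in the abstract.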

  10. Laser-assisted solar cell metallization processing

    NASA Technical Reports Server (NTRS)

    Dutta, S.

    1984-01-01

Laser-assisted processing techniques utilized to produce the fine-line, thin metal grid structures required to fabricate high-efficiency solar cells are examined. Two basic techniques for metal deposition are investigated: (1) photochemical decomposition of liquid- or gas-phase organometallic compounds, utilizing either a focused, CW ultraviolet laser (System 1) or a mask and ultraviolet flood illumination, such as that provided by a repetitively pulsed, defocused excimer laser (System 2), for pattern definition; and (2) thermal deposition of metals from organometallic solutions or vapors, utilizing a focused, CW laser beam as a local heat source to draw the metallization pattern.

  11. Automatic streak endpoint localization from the cornerness metric

    NASA Astrophysics Data System (ADS)

    Sease, Brad; Flewelling, Brien; Black, Jonathan

    2017-05-01

Streaked point sources are a common occurrence when imaging unresolved space objects from both ground- and space-based platforms. Effective localization of streak endpoints is a key component of traditional techniques in space situational awareness related to orbit estimation and attitude determination. To further that goal, this paper derives a general detection and localization method for streak endpoints based on the cornerness metric. Corner detection involves searching an image for strong bi-directional gradients; these locations typically correspond to robust structural features in an image. In the case of unresolved imagery, regions with a high cornerness score correspond directly to the endpoints of streaks. This paper explores three approaches for global extraction of streak endpoints and applies them to an attitude and rate estimation routine.
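A minimal version of the cornerness computation can be written directly from image gradients. This is a generic Harris-style response applied to a synthetic streak, not the paper's specific detector; the smoothing radius and the constant k = 0.04 are conventional choices:

```python
import numpy as np

def smooth(a, r=1):
    """Box average over a (2r+1)^2 window (wrap-around edges; fine for a sketch)."""
    out = np.zeros_like(a)
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            out += np.roll(np.roll(a, dy, axis=0), dx, axis=1)
    return out / (2 * r + 1) ** 2

def cornerness(img, k=0.04, r=2):
    """Harris response: det(M) - k*trace(M)^2 of the smoothed gradient tensor M."""
    gy, gx = np.gradient(img.astype(float))
    sxx, syy, sxy = smooth(gx * gx, r), smooth(gy * gy, r), smooth(gx * gy, r)
    return sxx * syy - sxy ** 2 - k * (sxx + syy) ** 2

# synthetic streak on row 32 with endpoints at columns 20 and 44
img = np.zeros((64, 64))
img[32, 20:45] = 1.0
resp = cornerness(smooth(img, 1))        # mild pre-smoothing widens the streak
peak = np.unravel_index(np.argmax(resp), resp.shape)
```

Along the streak body one gradient direction dominates, so the response is near zero or negative; only at the two endpoints are both gradient directions strong, which is why the global maxima of the response sit at the endpoints.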

  12. Locally linear embedding: dimension reduction of massive protostellar spectra

    NASA Astrophysics Data System (ADS)

    Ward, J. L.; Lumsden, S. L.

    2016-09-01

We present the results of the application of locally linear embedding (LLE) to reduce the dimensionality of dereddened and continuum-subtracted near-infrared spectra using a combination of models and real spectra of massive protostars selected from the Red MSX Source survey data base. A brief comparison is also made with two other dimension reduction techniques, principal component analysis (PCA) and Isomap, using the same set of spectra, as well as with a more advanced form of LLE, Hessian locally linear embedding. We find that whilst LLE certainly has its limitations, it significantly outperforms both PCA and Isomap in classification of spectra based on the presence/absence of emission lines and provides a valuable tool for classification and analysis of large spectral data sets.
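The standard LLE algorithm is compact enough to sketch: find each spectrum's nearest neighbours, solve for the locally linear reconstruction weights, then take the bottom eigenvectors of (I - W)^T (I - W). This is textbook LLE (Roweis & Saul), not the authors' pipeline, and the data below are a synthetic stand-in for spectra:

```python
import numpy as np

def lle(X, n_neighbors=10, n_components=2, reg=1e-3):
    """Basic locally linear embedding (Roweis & Saul 2000)."""
    n = X.shape[0]
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d2, np.inf)
    nbrs = np.argsort(d2, axis=1)[:, :n_neighbors]
    W = np.zeros((n, n))
    for i in range(n):
        Z = X[nbrs[i]] - X[i]                          # neighbours centred on point i
        C = Z @ Z.T
        C += reg * np.trace(C) * np.eye(n_neighbors)   # regularise ill-conditioned C
        w = np.linalg.solve(C, np.ones(n_neighbors))
        W[i, nbrs[i]] = w / w.sum()                    # reconstruction weights sum to 1
    M = (np.eye(n) - W).T @ (np.eye(n) - W)
    vals, vecs = np.linalg.eigh(M)
    return vecs[:, 1:n_components + 1]                 # skip the constant eigenvector

# toy "spectra": a one-parameter family embedded in 50 dimensions
t = np.linspace(0, 1, 80)
basis = np.random.default_rng(3).standard_normal((3, 50))
X = np.column_stack([np.cos(3 * t), np.sin(3 * t), t]) @ basis
Y = lle(X, n_neighbors=8, n_components=1)
```

The 80 toy points lie on a one-dimensional curve, so a single embedding coordinate suffices; for real spectra X would hold the dereddened, continuum-subtracted flux vectors.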

  13. Evidence for expression of eosinophil-associated IL-12 messenger RNA and immunoreactivity in bronchial asthma.

    PubMed

    Nutku, E; Gounni, A S; Olivenstein, R; Hamid, Q

    2000-08-01

Eosinophils are a source of cytokines within the airways of asthmatic individuals that may exert an important immunoregulatory influence. We examined IL-12 messenger (m)RNA and protein expression in eosinophils from peripheral blood and bronchoalveolar lavage (BAL) fluid obtained from subjects with atopic asthma (n = 7), patients with chronic bronchitis (n = 5), and nonatopic healthy control subjects (n = 7). To further define this IL-12(+) population of eosinophils with respect to the expression of other cytokines, we colocalized IL-12 and IL-5 within peripheral blood eosinophils. To detect IL-12 mRNA and protein expression, we used in situ hybridization and immunocytochemistry techniques. The double-immunocytochemistry technique was used to localize IL-12 protein to BAL eosinophils and to colocalize IL-5 and IL-12 in peripheral blood eosinophils. IL-12 mRNA and immunoreactive protein were localized to peripheral blood eosinophils. BAL fluid-derived eosinophils from asthmatic subjects were also reactive to IL-12. The percentage of peripheral blood eosinophils expressing mRNA for IL-12 was significantly lower in asthmatic subjects than in eosinophils obtained from patients with chronic bronchitis (P < .001) and control patients (P < .05). Colocalization studies demonstrated that the percentage of IL-12(+) eosinophils that were also IL-5(+) was 72% in asthmatic subjects but only 11% in control subjects (P < .001). These results suggest that eosinophils are a potential source of IL-12, and that eosinophil-derived IL-12 may contribute to and modulate local allergic inflammatory responses.

  14. Optimal use of EEG recordings to target active brain areas with transcranial electrical stimulation.

    PubMed

    Dmochowski, Jacek P; Koessler, Laurent; Norcia, Anthony M; Bikson, Marom; Parra, Lucas C

    2017-08-15

    To demonstrate causal relationships between brain and behavior, investigators would like to guide brain stimulation using measurements of neural activity. Particularly promising in this context are electroencephalography (EEG) and transcranial electrical stimulation (TES), as they are linked by a reciprocity principle which, despite being known for decades, has not led to a formalism for relating EEG recordings to optimal stimulation parameters. Here we derive a closed-form expression for the TES configuration that optimally stimulates (i.e., targets) the sources of recorded EEG, without making assumptions about source location or distribution. We also derive a duality between TES targeting and EEG source localization, and demonstrate that in cases where source localization fails, so does the proposed targeting. Numerical simulations with multiple head models confirm these theoretical predictions and quantify the achieved stimulation in terms of focality and intensity. We show that constraining the stimulation currents automatically selects optimal montages that involve only a few (4-7) electrodes, with only incremental loss in performance when targeting focal activations. The proposed technique allows brain scientists and clinicians to rationally target the sources of observed EEG and thus overcomes a major obstacle to the realization of individualized or closed-loop brain stimulation. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.
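The duality between localization and targeting can be demonstrated on a toy lead field. Here A is a random stand-in for a volume-conductor lead field (electrodes x sources), and the targeting step (spatially decorrelating the observed topography across electrodes, with a small regulariser) is one plausible realisation of the reciprocity idea, not the paper's exact closed-form expression:

```python
import numpy as np

rng = np.random.default_rng(1)
n_elec, n_src = 64, 400
A = rng.standard_normal((n_elec, n_src)) / np.sqrt(n_src)   # toy lead field

s = np.zeros(n_src)
s[123] = 1.0                       # focal brain activation
v = A @ s                          # observed EEG topography

# EEG source localization (minimum-norm inverse) ...
s_hat = A.T @ np.linalg.solve(A @ A.T, v)

# ... and its targeting dual: injected currents that focus the field on the source
lam = 1e-3
current = np.linalg.solve(A @ A.T + lam * np.eye(n_elec), v)
current -= current.mean()          # injected TES currents must sum to zero
field = A.T @ current              # field each source location receives (reciprocity)
```

With this construction the minimum-norm localizer and the targeting currents both peak at the true source index, illustrating the claim that targeting succeeds exactly where localization does.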

  15. Optimal use of EEG recordings to target active brain areas with transcranial electrical stimulation

    PubMed Central

    Dmochowski, Jacek P.; Koessler, Laurent; Norcia, Anthony M.; Bikson, Marom; Parra, Lucas C.

    2018-01-01

    To demonstrate causal relationships between brain and behavior, investigators would like to guide brain stimulation using measurements of neural activity. Particularly promising in this context are electroencephalography (EEG) and transcranial electrical stimulation (TES), as they are linked by a reciprocity principle which, despite being known for decades, has not led to a formalism for relating EEG recordings to optimal stimulation parameters. Here we derive a closed-form expression for the TES configuration that optimally stimulates (i.e., targets) the sources of recorded EEG, without making assumptions about source location or distribution. We also derive a duality between TES targeting and EEG source localization, and demonstrate that in cases where source localization fails, so does the proposed targeting. Numerical simulations with multiple head models confirm these theoretical predictions and quantify the achieved stimulation in terms of focality and intensity. We show that constraining the stimulation currents automatically selects optimal montages that involve only a few (4–7) electrodes, with only incremental loss in performance when targeting focal activations. The proposed technique allows brain scientists and clinicians to rationally target the sources of observed EEG and thus overcomes a major obstacle to the realization of individualized or closed-loop brain stimulation. PMID:28578130

  16. Research on Localization Algorithms Based on Acoustic Communication for Underwater Sensor Networks

    PubMed Central

    Fan, Liying; Wu, Shan; Yan, Xueting

    2017-01-01

Water bodies, as a significant and highly valuable part of the Earth, have made Underwater Sensor Networks (UWSNs) a hot research topic, and various applications can be realized based on them. Our paper concentrates on localization algorithms based on acoustic communication for UWSNs, providing an in-depth survey. We first introduce acoustic communication, network architecture, and routing techniques in UWSNs. The localization algorithms are then classified along five aspects, namely computation algorithm, spatial coverage, range measurement, the state of the nodes, and communication between nodes, a classification that differs from all other survey papers. Moreover, we collect many pioneering papers and make a comprehensive comparison. In addition, some challenges and open issues are raised in our paper. PMID:29301369
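Range-based acoustic localization, one of the categories surveyed, reduces to a small least-squares problem once travel times are converted to distances using a nominal sound speed (about 1500 m/s in seawater). A minimal sketch with made-up anchor geometry:

```python
import numpy as np

C_SOUND = 1500.0                                  # nominal sound speed in water, m/s

def locate(anchors, toa):
    """Linearised least-squares position fix from one-way times of arrival."""
    r = C_SOUND * np.asarray(toa)                 # ranges from travel times
    a = np.asarray(anchors, dtype=float)
    # subtract the first range equation to linearise |x - a_i|^2 = r_i^2
    A = 2.0 * (a[1:] - a[0])
    b = (r[0] ** 2 - r[1:] ** 2) + (a[1:] ** 2).sum(axis=1) - (a[0] ** 2).sum()
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x

anchors = np.array([[0, 0, 0], [100, 0, 0], [0, 100, 0], [0, 0, 50]], dtype=float)
node = np.array([30.0, 40.0, 20.0])               # unknown sensor position
toa = np.linalg.norm(anchors - node, axis=1) / C_SOUND
```

With noise-free times of arrival this recovers the node position exactly; real deployments must additionally handle clock offsets, sound-speed stratification, and node mobility, which is where the surveyed algorithms differ.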

  17. Plasmonic superfocusing on metallic tips for near-field optical imaging and spectroscopy

    NASA Astrophysics Data System (ADS)

    Neacsu, Catalin C.; Olmon, Rob; Berweger, Samuel; Kappus, Alexandria; Kirchner, Friedrich; Ropers, Claus; Saraf, Lax; Raschke, Markus B.

    2008-03-01

Realization of localized light sources through nonlocal excitation is important in the context of plasmon photonics, molecular sensing, and in particular near-field optical techniques. Here, the efficient conversion of propagating surface plasmons, launched on the shaft of a scanning probe tip, into a localized plasmon at the apex provides a true nanoconfined light source. Focused ion beam milling is used to generate periodic surface nanostructures on the tip shaft that allow for tailoring the plasmon excitation. Using ultrashort visible and mid-IR transients, the dynamics of the propagation and subsequent scattered emission are characterized. The strong field enhancement and spatial field confinement at the apex are demonstrated by studying the coupling of the tip in near-field interaction with a flat sample surface. This is used in scattering near-field spectroscopic imaging (s-SNOM) to probe surface nanostructures with spatial resolution down to 10 nm.

  18. Source localization of brain activity using helium-free interferometer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dammers, Jürgen, E-mail: J.Dammers@fz-juelich.de; Chocholacs, Harald; Eich, Eberhard

    2014-05-26

To detect extremely small magnetic fields generated by the human brain, currently all commercial magnetoencephalography (MEG) systems are equipped with low-temperature (low-T_c) superconducting quantum interference device (SQUID) sensors that use liquid helium for cooling. The limited and increasingly expensive supply of helium, which has seen dramatic price increases recently, has become a real problem for such systems and the situation shows no signs of abating. MEG research in the long run is now endangered. In this study, we report a MEG source localization utilizing a single, highly sensitive SQUID cooled with liquid nitrogen only. Our findings confirm that localization of neuromagnetic activity is indeed possible using high-T_c SQUIDs. We believe that our findings secure the future of this exquisitely sensitive technique and have major implications for brain research and the development of cost-effective multi-channel, high-T_c SQUID-based MEG systems.

  19. A novel background field removal method for MRI using projection onto dipole fields (PDF).

    PubMed

    Liu, Tian; Khalidov, Ildar; de Rochefort, Ludovic; Spincemaille, Pascal; Liu, Jing; Tsiouris, A John; Wang, Yi

    2011-11-01

    For optimal image quality in susceptibility-weighted imaging and accurate quantification of susceptibility, it is necessary to isolate the local field generated by local magnetic sources (such as iron) from the background field that arises from imperfect shimming and variations in magnetic susceptibility of surrounding tissues (including air). Previous background removal techniques have limited effectiveness depending on the accuracy of model assumptions or information input. In this article, we report an observation that the magnetic field for a dipole outside a given region of interest (ROI) is approximately orthogonal to the magnetic field of a dipole inside the ROI. Accordingly, we propose a nonparametric background field removal technique based on projection onto dipole fields (PDF). In this PDF technique, the background field inside an ROI is decomposed into a field originating from dipoles outside the ROI using the projection theorem in Hilbert space. This novel PDF background removal technique was validated on a numerical simulation and a phantom experiment and was applied in human brain imaging, demonstrating substantial improvement in background field removal compared with the commonly used high-pass filtering method. Copyright © 2011 John Wiley & Sons, Ltd.
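The projection idea can be demonstrated numerically on a small grid: fit the total field inside the ROI with a basis of unit dipole fields placed outside the ROI, then subtract the fit. Grid size, source placement, amplitudes, and the coarse exterior sub-lattice below are arbitrary illustrations, not the paper's implementation:

```python
import numpy as np

N = 16
z, y, x = np.meshgrid(*[np.arange(N) - N // 2] * 3, indexing='ij')

def dipole(cz, cy, cx):
    """Unit dipole kernel (3*cos^2(theta) - 1)/(4*pi*r^3), B0 along z."""
    dz, dy, dx = z - cz, y - cy, x - cx
    r2 = (dz ** 2 + dy ** 2 + dx ** 2).astype(float)
    r2[r2 == 0] = np.inf                       # zero out the singular voxel
    return (3 * dz ** 2 / r2 - 1) / (4 * np.pi * r2 ** 1.5)

roi = x ** 2 + y ** 2 + z ** 2 < 5 ** 2        # spherical region of interest

local = dipole(0, 0, 1)                        # source inside the ROI
background = 200 * dipole(0, 0, 6)             # strong source outside the ROI
total = local + background

# basis of exterior dipoles on a coarse sub-lattice (keeps the fit overdetermined)
ext = [(k, j, i) for k in range(-8, 8, 2) for j in range(-8, 8, 2)
       for i in range(-8, 8, 2) if k * k + j * j + i * i >= 5 ** 2]
D = np.column_stack([dipole(*p)[roi] for p in ext])

coef, *_ = np.linalg.lstsq(D, total[roi], rcond=1e-6)
local_est = total[roi] - D @ coef              # background-removed field in the ROI

err_pdf = np.linalg.norm(local_est - local[roi]) / np.linalg.norm(local[roi])
err_raw = np.linalg.norm(background[roi]) / np.linalg.norm(local[roi])
```

The residual error of the projection is small precisely because, as the abstract observes, fields of exterior dipoles are approximately orthogonal to those of interior sources over the ROI; without the removal step the background dominates the local field by an order of magnitude.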

  20. Targeted Nanoparticle Thermometry: A Method to Measure Local Temperature at the Nanoscale Point Where Water Vapor Nucleation Occurs.

    PubMed

    Alaulamie, Arwa A; Baral, Susil; Johnson, Samuel C; Richardson, Hugh H

    2017-01-01

An optical nanothermometer technique, based on laser trapping, moving, and targeted attachment of an erbium oxide nanoparticle cluster, is developed to measure local temperature. The authors apply this new nanoscale temperature-measuring technique (with resolution limited by the size of the nanoparticles) to measure the temperature of vapor nucleation in water. Vapor nucleation is observed after superheating water above the boiling point for degassed and nondegassed water. The average nucleation temperature for water without gas is 560 K, but this temperature is lowered by 100 K when gas is introduced into the water. The authors are able to measure the temperature inside the bubble during bubble formation and find that it spikes to over 1000 K because the heat source (optically heated nanorods) is no longer connected to liquid water and heat dissipation is greatly reduced. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  1. Recent advances in radiation cancer therapy

    NASA Astrophysics Data System (ADS)

    Ma, C.-M. Charlie

    2007-03-01

This paper presents recent advances in radiation therapy techniques for the treatment of cancer. Significant improvements in imaging techniques such as CT, MRI, MRS, PET, and ultrasound have brought marked advances in tumor target and critical-structure delineation for treatment planning, and in patient setup and target localization for accurate dose delivery in radiation therapy of cancer. Recent developments of novel treatment modalities, including intensity-modulated x-ray therapy (IMXT), energy- and intensity-modulated electron therapy (MERT), and intensity-modulated proton therapy (IMPT), together with the use of advanced image guidance, have enabled precise dose delivery for dose escalation and hypofractionation studies that may result in better local control and quality of life. Particle acceleration using laser-induced plasmas has great potential for new cost-effective radiation sources that may have a great impact on the management of cancer using radiation therapy.

  2. Islands Climatology at Local Scale. Downscaling with CIELO model

    NASA Astrophysics Data System (ADS)

    Azevedo, Eduardo; Reis, Francisco; Tomé, Ricardo; Rodrigues, Conceição

    2016-04-01

Islands with horizontal scales of the order of tens of km, as is the case for the Atlantic islands of Macaronesia, are subscale orographic features for Global Climate Models (GCMs), since the horizontal scales of these models are too coarse to give a detailed representation of the islands' topography. Even Regional Climate Models (RCMs) reveal limitations when forced to reproduce the climate of small islands, mainly in the way they flatten and lower the islands' elevation, reducing the models' capacity to reproduce the important local mechanisms that drive a very deep local climate differentiation. Important local thermodynamic mechanisms, like the Foehn effect or the influence of topography on the radiation balance, have a prominent role in the climatic spatial differentiation. Advective transport of air, and the consequent adiabatic cooling induced by orography, transforms the state parameters of the air and thereby shapes the spatial configuration of the pressure, temperature, and humidity fields. The same mechanism is at the origin of the orographic cloud cover that, besides its direct role as a water source through the reinforcement of precipitation, acts as a filter to direct solar radiation and as a source of long-wave radiation affecting the local energy balance. Also, the saturation (or near-saturation) conditions it provides constitute a barrier to water vapour diffusion in the mechanisms of evapotranspiration. Topographic factors like slope, aspect, and orographic masking also have significant importance in the local energy balance. Therefore, the simulation of local-scale climate (past, present, and future) in these archipelagos requires the use of downscaling techniques to adjust locally the outputs obtained at coarser scales.
This presentation will discuss and analyse the evolution of the CIELO model (acronym for Clima Insular à Escala LOcal), a statistical/dynamical downscaling technique developed at the University of the Azores, which has been improved since its original version and is currently a downscaling tool widely and successfully applied on different islands of Macaronesia. Recently, the CIELO model has been tested against data from the Eastern North Atlantic (ENA) ARM facility on Graciosa Island (established and supported by the U.S. Department of Energy with the collaboration of the local government and the University of the Azores).
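The Foehn mechanism invoked above is easy to quantify with textbook lapse rates. The numbers here (standard lapse rates, a 2000 m ridge, a simple lifting-condensation-level rule of thumb) are generic illustrative values, not CIELO model output:

```python
# Parcel forced over a 2000 m ridge: dry ascent to the lifting condensation
# level (LCL), moist ascent above it, dry descent on the lee side.
GAMMA_DRY = 9.8e-3     # dry adiabatic lapse rate, K/m
GAMMA_MOIST = 5.0e-3   # typical saturated lapse rate, K/m

T0, TD0 = 15.0, 10.0                  # windward surface temperature / dew point, deg C
ridge = 2000.0                        # ridge height, m

lcl = 125.0 * (T0 - TD0)              # rule of thumb: ~125 m per deg C of dew-point spread
t_summit = T0 - GAMMA_DRY * lcl - GAMMA_MOIST * (ridge - lcl)
t_lee = t_summit + GAMMA_DRY * ridge  # all-dry descent (moisture rained out)
```

The parcel reaches the summit at about 2.0 deg C and arrives leeward at about 21.6 deg C, several degrees warmer than the windward side: exactly the kind of local differentiation a flattened GCM orography cannot produce.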

  3. Seismic envelope-based detection and location of ground-coupled airwaves from volcanoes in Alaska

    USGS Publications Warehouse

    Fee, David; Haney, Matt; Matoza, Robin S.; Szuberla, Curt A.L.; Lyons, John; Waythomas, Christopher F.

    2016-01-01

    Volcanic explosions and other infrasonic sources frequently produce acoustic waves that are recorded by seismometers. Here we explore multiple techniques to detect, locate, and characterize ground‐coupled airwaves (GCA) on volcano seismic networks in Alaska. GCA waveforms are typically incoherent between stations, thus we use envelope‐based techniques in our analyses. For distant sources and planar waves, we use f‐k beamforming to estimate back azimuth and trace velocity parameters. For spherical waves originating within the network, we use two related time difference of arrival (TDOA) methods to detect and localize the source. We investigate a modified envelope function to enhance the signal‐to‐noise ratio and emphasize both high energies and energy contrasts within a spectrogram. We apply these methods to recent eruptions from Cleveland, Veniaminof, and Pavlof Volcanoes, Alaska. Array processing of GCA from Cleveland Volcano on 4 May 2013 produces robust detection and wave characterization. Our modified envelopes substantially improve the short‐term average/long‐term average ratios, enhancing explosion detection. We detect GCA within both the Veniaminof and Pavlof networks from the 2007 and 2013–2014 activity, indicating repeated volcanic explosions. Event clustering and forward modeling suggests that high‐resolution localization is possible for GCA on typical volcano seismic networks. These results indicate that GCA can be used to help detect, locate, characterize, and monitor volcanic eruptions, particularly in difficult‐to‐monitor regions. We have implemented these GCA detection algorithms into our operational volcano‐monitoring algorithms at the Alaska Volcano Observatory.
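Because GCA waveforms are incoherent between stations, the beamforming runs on envelopes: delay-and-sum over a grid of back azimuths and trace velocities, keeping the parameters that maximize the stacked power. The sketch below uses a synthetic four-station network and a Gaussian pulse; geometry and values are illustrative, not from the Alaska networks:

```python
import numpy as np

fs = 100.0
t = np.arange(0, 20, 1 / fs)
stations = np.array([[0, 0], [500, 0], [0, 500], [400, 400]], dtype=float)  # E, N (m)

def plane_wave_delays(baz_deg, v, xy):
    """Relative arrival times of a plane wave from back azimuth baz (deg) at speed v."""
    baz = np.radians(baz_deg)
    u = -np.array([np.sin(baz), np.cos(baz)]) / v      # slowness vector
    return xy @ u

# synthetic GCA envelopes: a narrow pulse crossing the network at ~340 m/s from 60 deg
true_d = plane_wave_delays(60.0, 340.0, stations)
env = np.exp(-0.5 * ((t[None, :] - 8.0 - true_d[:, None]) / 0.05) ** 2)

def beamform(env, xy, bazs, vels):
    """Delay-and-sum over a slowness grid; returns the best (back azimuth, velocity)."""
    best, best_power = None, -np.inf
    for baz in bazs:
        for v in vels:
            d = plane_wave_delays(baz, v, xy)
            k = np.round((d - d.min()) * fs).astype(int)
            stack = sum(np.roll(env[i], -k[i]) for i in range(len(xy)))
            if stack.max() > best_power:
                best_power, best = stack.max(), (baz, v)
    return best

baz_hat, v_hat = beamform(env, stations, np.arange(0.0, 360.0, 2.0),
                          np.arange(300.0, 401.0, 20.0))
```

The recovered trace velocity near the speed of sound in air (rather than a seismic velocity) is the signature that distinguishes a ground-coupled airwave from an ordinary seismic arrival.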

  4. Detection of small earthquakes with dense array data: example from the San Jacinto fault zone, southern California

    NASA Astrophysics Data System (ADS)

    Meng, Haoran; Ben-Zion, Yehuda

    2018-01-01

We present a technique to detect small earthquakes not included in standard catalogues using data from a dense seismic array. The technique is illustrated with continuous waveforms recorded during a test day by 1108 vertical geophones in a tight array on the San Jacinto fault zone. Waveforms are first stacked without time shift in nine non-overlapping subarrays to increase the signal-to-noise ratio. The nine envelope functions of the stacked records are then multiplied with each other to suppress signals associated with sources affecting only some of the nine subarrays. Running a short-term moving average/long-term moving average (STA/LTA) detection algorithm on the product leads to 723 triggers in the test day. Using a local P-wave velocity model derived for the surface layer from Betsy gunshot data, 5-s-long waveforms of all sensors around each STA/LTA trigger are beamformed for various incident directions. Of the 723 triggers, 220 are found to have localized energy sources, and 103 of these are confirmed as earthquakes by verifying their observation at 4 or more stations of the regional seismic network. This demonstrates the general validity of the method and allows further processing of the validated events using standard techniques. The number of validated events in the test day is >5 times larger than that in the standard catalogue. Using these events as templates can lead to additional detections of many more earthquakes.
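The stack-envelope-product step can be sketched directly. Here three sub-arrays (instead of nine) see a weak common event, while a strong local disturbance hits only one sub-array; multiplying the envelopes suppresses the local signal because it is absent from the other sub-arrays. Timing, amplitudes, and noise levels are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)
fs, n = 100.0, 3000
t = np.arange(n) / fs

def envelope(x):
    """Magnitude of the analytic signal (Hilbert transform via FFT)."""
    X = np.fft.fft(x)
    h = np.zeros(len(x))
    h[0] = 1.0
    h[1:(len(x) + 1) // 2] = 2.0
    if len(x) % 2 == 0:
        h[len(x) // 2] = 1.0
    return np.abs(np.fft.ifft(X * h))

def wavelet(t0, f=8.0, width=0.15):
    return np.sin(2 * np.pi * f * (t - t0)) * np.exp(-0.5 * ((t - t0) / width) ** 2)

quake = 0.5 * wavelet(20.0)          # real event: coherent across every sub-array
burst = 5.0 * wavelet(8.0)           # local disturbance: hits sub-array 0 only

stacks = []
for k in range(3):                   # three sub-arrays of 10 geophones each
    traces = [quake + (burst if k == 0 else 0.0)
              + 0.05 * rng.standard_normal(n) for _ in range(10)]
    stacks.append(np.mean(traces, axis=0))   # plain stack, no time shifts

product = np.prod([envelope(s) for s in stacks], axis=0)
i_max = int(np.argmax(product))
```

An STA/LTA trigger run on `product`, as in the abstract, then fires on the common event at 20 s while the much larger single-sub-array burst at 8 s is suppressed by orders of magnitude.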

  5. Sound Is Sound: Film Sound Techniques and Infrasound Data Array Processing

    NASA Astrophysics Data System (ADS)

    Perttu, A. B.; Williams, R.; Taisne, B.; Tailpied, D.

    2017-12-01

A multidisciplinary collaboration between earth scientists and a sound designer/composer was established to explore the possibilities of audification analysis of infrasound array data. Through the process of audification of the infrasound, we began to experiment with techniques and processes borrowed from cinema to manipulate the noise content of the signal. The results posed the question: "Would the accuracy of infrasound data array processing be enhanced by employing these techniques?" A new area of research was thus born from this collaboration, highlighting the value of these interactions and the unintended paths that can arise from them. Using a reference event database, infrasound data were processed using these new techniques and the results were compared with existing techniques to assess whether there was any improvement to the detection capability of the array. With just under one thousand volcanoes, and a high probability of eruption, Southeast Asia offers a unique opportunity to develop and test techniques for regional monitoring of volcanoes with different technologies. While these volcanoes are monitored locally (e.g. seismometer, infrasound, geodetic, and geochemistry networks) and remotely (e.g. satellite and infrasound), there are challenges and limitations to the current monitoring capability. Not only is there a high fraction of cloud cover in the region, making plume observation via satellite more difficult; there have also been examples of local monitoring networks and telemetry being destroyed early in an eruptive sequence. The success of local infrasound studies in identifying explosions at volcanoes, and calculating plume heights from these signals, has led to an interest in retrieving source parameters for the purpose of ash modeling with a regional network independent of cloud cover.

  6. Microwave Dielectric Heating of Drops in Microfluidic Devices†

    PubMed Central

    Issadore, David; Humphry, Katherine J.; Brown, Keith A.; Sandberg, Lori; Weitz, David; Westervelt, Robert M.

    2010-01-01

    We present a technique to locally and rapidly heat water drops in microfluidic devices with microwave dielectric heating. Water absorbs microwave power more efficiently than polymers, glass, and oils due to its permanent molecular dipole moment that has a large dielectric loss at GHz frequencies. The relevant heat capacity of the system is a single thermally isolated picoliter drop of water and this enables very fast thermal cycling. We demonstrate microwave dielectric heating in a microfluidic device that integrates a flow-focusing drop maker, drop splitters, and metal electrodes to locally deliver microwave power from an inexpensive, commercially available 3.0 GHz source and amplifier. The temperature of the drops is measured by observing the temperature dependent fluorescence intensity of cadmium selenide nanocrystals suspended in the water drops. We demonstrate characteristic heating times as short as 15 ms to steady-state temperatures as large as 30°C above the base temperature of the microfluidic device. Many common biological and chemical applications require rapid and local control of temperature, such as PCR amplification of DNA, and can benefit from this new technique. PMID:19495453

  7. Ion heating and short wavelength fluctuations in a helicon plasma source

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Scime, E. E.; Carr, J. Jr.; Galante, M.

    2013-03-15

For typical helicon source parameters, the driving antenna can couple to two plasma modes: the weakly damped helicon wave and the strongly damped, short wavelength slow wave. Here, we present direct measurements, obtained with two different techniques, of few-hundred-kHz, short wavelength fluctuations that are parametrically driven by the primary antenna and localized to the edge of the plasma. The short wavelength fluctuations appear for plasma source parameters such that the driving frequency is approximately equal to the lower hybrid frequency. Measurements of the steady-state ion temperature and fluctuation amplitude radial profiles suggest that the anomalously high ion temperatures observed at the edge of helicon sources result from damping of the short wavelength fluctuations. Additional measurements of the time evolution of the ion temperature and fluctuation profiles in pulsed helicon source plasmas support the same conclusion.

  8. Detection and monitoring of pollutant sources with Lidar/Dial techniques

    NASA Astrophysics Data System (ADS)

    Gaudio, P.; Gelfusa, M.; Malizia, A.; Parracino, S.; Richetta, M.; De Leo, L.; Perrimezzi, C.; Bellecci, C.

    2015-11-01

It is well known that air pollution due to anthropogenic sources can have adverse effects on humans and the ecosystem. In recent years, therefore, surveying large regions of the atmosphere automatically has become a strategic objective of various public health organizations for early detection of pollutant sources in urban and industrial areas. The Lidar and Dial techniques are well-established laser-based methods for remote sensing of the atmosphere. They are often implemented to probe almost any level of the atmosphere and to acquire information for validating theoretical models on different topics of atmospheric physics. They can also be used for environmental surveying by monitoring particles, aerosols, and molecules. The aim of the present work is to demonstrate the potential of these methods to detect pollutants emitted from local sources (such as particulates and/or chemical compounds) and to evaluate their concentration. This is exemplified with experimental data acquired during a measurement campaign in an industrial area in the south of Italy using a simulated pollutant source. For this purpose, two mobile systems, a Lidar and a Dial, have been developed by the authors. The paper presents the operating principles of the systems and the results of the experimental campaign.

  9. Machine learning based Intelligent cognitive network using fog computing

    NASA Astrophysics Data System (ADS)

    Lu, Jingyang; Li, Lun; Chen, Genshe; Shen, Dan; Pham, Khanh; Blasch, Erik

    2017-05-01

In this paper, a Cognitive Radio Network (CRN) based on artificial intelligence is proposed to distribute the limited radio spectrum resources more efficiently. The CRN framework can analyze time-sensitive signal data close to the signal source using fog computing with different types of machine learning techniques. Depending on the computational capabilities of the fog nodes, different features and machine learning techniques are chosen to optimize spectrum allocation. In addition, the computing nodes send a periodic signal summary, which is much smaller than the original signal, to the cloud so that the overall spectrum resource allocation strategies can be dynamically updated. By applying fog computing, the system is more adaptive to the local environment and robust to spectrum changes. As most of the signal data is processed at the fog level, this further strengthens system security by reducing the communication burden on the network.
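The division of labor described here — lightweight features computed at the fog node, compact summaries forwarded to the cloud — can be sketched as below. This is a minimal illustration under assumptions, not the paper's system: the energy-detection rule, the feature choices, and all constants are hypothetical.

```python
import numpy as np

def fog_node_features(signal, nfft=256):
    """Cheap spectral features a fog node might compute near the
    signal source (feature choice is illustrative)."""
    spec = np.abs(np.fft.rfft(signal[:nfft])) ** 2
    energy = spec.sum()
    # spectral flatness: near 1 for noise, near 0 for a narrowband user
    flatness = np.exp(np.mean(np.log(spec + 1e-12))) / (spec.mean() + 1e-12)
    return energy, flatness

def channel_busy(signal, threshold, nfft=256):
    """Energy-detection spectrum sensing at the fog node."""
    energy, _ = fog_node_features(signal, nfft)
    return energy > threshold

def summarize_for_cloud(signals, nfft=256):
    """Periodic summary sent to the cloud: one feature pair per signal,
    much smaller than the raw samples."""
    return [fog_node_features(s, nfft) for s in signals]
```

A more capable fog node could replace the fixed threshold with a trained classifier over the same features; the summary sent upstream stays the same size either way.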

  10. Environmental monitoring using autonomous vehicles: a survey of recent searching techniques.

    PubMed

    Bayat, Behzad; Crasta, Naveena; Crespi, Alessandro; Pascoal, António M; Ijspeert, Auke

    2017-06-01

Autonomous vehicles are becoming an essential tool in a wide range of environmental applications that include ambient data acquisition, remote sensing, and mapping of the spatial extent of pollutant spills. Among these applications, pollution source localization has drawn increasing attention due to its scientific and commercial interest and to the emergence of a new breed of robotic vehicles capable of operating in harsh environments without human supervision. The aim is to find the location of a region that is the source of a given substance of interest (e.g. a chemical pollutant at sea or a gas leak in air) using a group of cooperative autonomous vehicles. Motivated by fast-paced advances in this challenging area, this paper surveys recent advances in the searching techniques that are at the core of environmental monitoring strategies using autonomous vehicles. Copyright © 2017 Elsevier Ltd. All rights reserved.
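One family of searching techniques covered by such surveys is cooperative gradient climbing: a small formation of vehicles samples the field, fits a local plane to estimate the gradient, and steps uphill toward the source. The sketch below is an illustration under assumptions — the plume model, formation geometry, and step schedule are all hypothetical, not any specific algorithm from the survey.

```python
import numpy as np

def concentration(p, src=(6.0, 2.0)):
    """Hypothetical radially decaying plume used as a stand-in field."""
    return 1.0 / (1.0 + np.sum((np.asarray(p) - src) ** 2))

def estimate_gradient(points, values):
    """Fit a plane c + g.x through the cooperative samples; g is the
    gradient estimate shared by the vehicle group."""
    A = np.column_stack([np.ones(len(points)), points])
    coef, *_ = np.linalg.lstsq(A, values, rcond=None)
    return coef[1:]

def seek_source(start, step=0.5, decay=0.96, iters=150):
    """Gradient-climbing search with a small triangular formation."""
    center = np.array(start, float)
    offsets = 0.3 * np.array([[1, 0], [-0.5, 0.866], [-0.5, -0.866]])
    for _ in range(iters):
        pts = center + offsets
        g = estimate_gradient(pts, [concentration(p) for p in pts])
        norm = np.linalg.norm(g)
        if norm < 1e-12:          # field locally flat: stop
            break
        center = center + step * g / norm
        step *= decay             # shrink steps to settle near the peak
    return center
```

The decaying step size is one simple way to trade exploration against convergence; real strategies in the survey handle currents, vehicle dynamics, and communication constraints that this sketch ignores.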

  11. Submicron x-ray diffraction and its applications to problems in materials and environmental science

    NASA Astrophysics Data System (ADS)

    Tamura, N.; Celestre, R. S.; MacDowell, A. A.; Padmore, H. A.; Spolenak, R.; Valek, B. C.; Meier Chang, N.; Manceau, A.; Patel, J. R.

    2002-03-01

The availability of high-brilliance third-generation synchrotron sources, together with progress in achromatic focusing optics, allows us to add submicron spatial resolution to the conventional, century-old x-ray diffraction technique. The new capabilities include the possibility of mapping, in situ, grain orientations, crystalline phase distribution, and full strain/stress tensors at a very local level by combining white and monochromatic x-ray microbeam diffraction. This is particularly relevant for the high-technology industry, where the understanding of material properties at a microstructural level is becoming increasingly important. After describing the latest advances in submicron x-ray diffraction techniques at the Advanced Light Source, we give some examples of their application in materials science for the measurement of strain/stress in metallic thin films and interconnects. Their use in the field of environmental science is also discussed.

  12. Pyroelectric Energy Scavenging Techniques for Self-Powered Nuclear Reactor Wireless Sensor Networks

    DOE PAGES

    Hunter, Scott Robert; Lavrik, Nickolay V; Datskos, Panos G; ...

    2014-11-01

Recent advances in technologies for harvesting waste thermal energy from ambient environments present an opportunity to implement truly wireless sensor nodes in nuclear power plants. These sensors could continue to operate during extended station blackouts and during periods when operation of the plant's internal power distribution system has been disrupted. The energy required to power the wireless sensors must be generated using energy harvesting techniques from locally available energy sources, and the energy consumption within the sensor circuitry must therefore be low to minimize power and hence the size requirements of the energy harvester. Harvesting electrical energy from thermal energy sources can be achieved using pyroelectric or thermoelectric conversion techniques. Recent modeling and experimental studies have shown that pyroelectric techniques can be cost competitive with thermoelectrics in self-powered wireless sensor applications and, using new temperature cycling techniques, have the potential to be several times as efficient as thermoelectrics under comparable operating conditions. The development of a new thermal energy harvester concept, based on temperature-cycled pyroelectric thermal-to-electrical energy conversion, is outlined. This paper outlines the modeling of cantilever and pyroelectric structures and single-element devices that demonstrate the potential of this technology for the development of high-efficiency thermal-to-electrical energy conversion devices.

  13. Pyroelectric Energy Scavenging Techniques for Self-Powered Nuclear Reactor Wireless Sensor Networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hunter, Scott Robert; Lavrik, Nickolay V; Datskos, Panos G

Recent advances in technologies for harvesting waste thermal energy from ambient environments present an opportunity to implement truly wireless sensor nodes in nuclear power plants. These sensors could continue to operate during extended station blackouts and during periods when operation of the plant's internal power distribution system has been disrupted. The energy required to power the wireless sensors must be generated using energy harvesting techniques from locally available energy sources, and the energy consumption within the sensor circuitry must therefore be low to minimize power and hence the size requirements of the energy harvester. Harvesting electrical energy from thermal energy sources can be achieved using pyroelectric or thermoelectric conversion techniques. Recent modeling and experimental studies have shown that pyroelectric techniques can be cost competitive with thermoelectrics in self-powered wireless sensor applications and, using new temperature cycling techniques, have the potential to be several times as efficient as thermoelectrics under comparable operating conditions. The development of a new thermal energy harvester concept, based on temperature-cycled pyroelectric thermal-to-electrical energy conversion, is outlined. This paper outlines the modeling of cantilever and pyroelectric structures and single-element devices that demonstrate the potential of this technology for the development of high-efficiency thermal-to-electrical energy conversion devices.

  14. An Investigation of the Relationship Between fMRI and ERP Source Localized Measurements of Brain Activity during Face Processing

    PubMed Central

    Richards, Todd; Webb, Sara Jane; Murias, Michael; Merkle, Kristen; Kleinhans, Natalia M.; Johnson, L. Clark; Poliakov, Andrew; Aylward, Elizabeth; Dawson, Geraldine

    2013-01-01

Brain activity patterns during face processing have been extensively explored with functional magnetic resonance imaging (fMRI) and event-related potentials (ERPs). ERP source localization adds a spatial dimension to the ERP time series recordings, which allows for a more direct comparison and integration with fMRI findings. The goals for this study were (1) to compare the spatial descriptions of neuronal activity during face processing obtained with fMRI and ERP source localization using low-resolution electromagnetic tomography (LORETA), and (2) to use the combined information from source localization and fMRI to explore how the temporal sequence of brain activity during face processing is summarized in fMRI activation maps. fMRI and high-density ERP data were acquired in separate sessions for 17 healthy adult males for a face and object processing task. LORETA statistical maps for the comparison of viewing faces and viewing houses were coregistered and compared to fMRI statistical maps for the same conditions. The spatial locations of face processing-sensitive activity measured by fMRI and LORETA were found to overlap in a number of areas including the bilateral fusiform gyri, the right superior, middle and inferior temporal gyri, and the bilateral precuneus. Both the fMRI and LORETA solutions additionally demonstrated activity in regions that did not overlap. fMRI and LORETA statistical maps of face processing-sensitive brain activity were found to converge spatially primarily at LORETA solution latencies that were within 18 ms of the N170 latency. The combination of data from these techniques suggested that electrical brain activity at the latency of the N170 is highly represented in fMRI statistical maps. PMID:19322649

  15. Directional Hearing and Sound Source Localization in Fishes.

    PubMed

    Sisneros, Joseph A; Rogers, Peter H

    2016-01-01

Evidence suggests that the capacity for sound source localization is common to mammals, birds, reptiles, and amphibians, but surprisingly it is not known whether fish locate sound sources in the same manner (e.g., combining binaural and monaural cues) or what computational strategies they use for successful source localization. Directional hearing and sound source localization in fishes continue to be important topics in neuroethology and in the hearing sciences, but the empirical and theoretical work on these topics has been contradictory and obscure for decades. This chapter reviews the previous behavioral work on directional hearing and sound source localization in fishes, including the most recent experiments on sound source localization by the plainfin midshipman fish (Porichthys notatus), which has proven to be an exceptional species for fish studies of sound localization. In addition, the theoretical models of directional hearing and sound source localization for fishes are reviewed, including a new model that uses a time-averaged intensity approach for source localization and has wide applicability with regard to source type, acoustic environment, and time waveform.

  16. Enhanced Analysis Techniques for an Imaging Neutron and Gamma Ray Spectrometer

    NASA Astrophysics Data System (ADS)

    Madden, Amanda C.

The presence of gamma rays and neutrons is a strong indicator of the presence of Special Nuclear Material (SNM). The imaging Neutron and gamma ray SPECTrometer (NSPECT), developed by the University of New Hampshire and Michigan Aerospace Corporation, detects the fast neutrons and prompt gamma rays from fissile material and the gamma rays from radioactive material. The instrument operates as a double-scatter device, requiring a neutron or a gamma ray to interact twice in the instrument. While this detection requirement decreases the efficiency of the instrument, it offers superior background rejection and the ability to measure the energy and momentum of the incident particle. These measurements create energy spectra and images of the emitting source for source identification and localization. The dual-species instrument provides superior detection compared with a single species alone. In realistic detection scenarios, few particles are detected from a potential threat due to source shielding, detection at a distance, high background, and weak sources. This contributes to a small signal-to-noise ratio, and threat detection becomes difficult. To address these difficulties, several enhanced data analysis tools were developed. A Receiver Operating Characteristic (ROC) curve helps set instrumental alarm thresholds as well as identify the presence of a source, and analysis of a dual-species ROC curve provides superior detection capabilities. Bayesian analysis helps to detect and identify the presence of a source through model comparisons, and helps create a background-corrected count spectrum for enhanced spectroscopy. Development of an instrument response using simulations and numerical analyses will help perform spectral and image deconvolution.
This thesis outlines the principles of operation of the NSPECT instrument using the double-scatter technique, traditional analysis techniques, and enhanced analysis techniques as applied to data from the NSPECT instrument, and describes how these techniques can be used for superior detection of radioactive and fissile materials.
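The ROC analysis described above can be illustrated with a toy count-rate alarm. The Poisson rates, sample counts, and threshold sweep below are hypothetical; the sketch only shows how an alarm threshold trades false-positive rate against detection probability.

```python
import numpy as np

def roc_curve(background, with_source):
    """Empirical ROC: sweep an alarm threshold over observed counts and
    record false-positive rate vs true-positive rate."""
    thresholds = np.unique(np.concatenate([background, with_source]))
    fpr = np.array([(background >= t).mean() for t in thresholds])
    tpr = np.array([(with_source >= t).mean() for t in thresholds])
    return fpr, tpr

rng = np.random.default_rng(1)
bg = rng.poisson(20, 5000)     # counts, background only (rate assumed)
sig = rng.poisson(28, 5000)    # counts with a weak source present
fpr, tpr = roc_curve(bg, sig)
# area under the curve; fpr decreases as the threshold rises
auc = np.sum((fpr[:-1] - fpr[1:]) * (tpr[:-1] + tpr[1:]) / 2)
```

A dual-species instrument would compute one such curve per channel (neutron and gamma) and alarm on the joint statistic, which is the sense in which a dual-species ROC can outperform either channel alone.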

  17. MTSAT: Full Disk - NOAA GOES Geostationary Satellite Server

    Science.gov Websites


  18. Methods and apparatus for producing cryogenic inertially driven fusion targets

    DOEpatents

    Miller, John R.

    1981-01-01

    A new technique for producing uniform layers of solid DT on microballoon surfaces. Local heating of the target, typically by means of a focused laser, within an isothermal freezing cell containing a low pressure cryogenic exchange gas such as helium, vaporizes the DT fuel. Removal of the laser heating source causes the DT gas to rapidly condense and freeze in a layer which exhibits a good degree of uniformity.

  19. Mobile indoor localization using Kalman filter and trilateration technique

    NASA Astrophysics Data System (ADS)

    Wahid, Abdul; Kim, Su Mi; Choi, Jaeho

    2015-12-01

In this paper, an indoor localization method based on Kalman-filtered RSSI is presented. The indoor communications environment, however, is rather harsh for mobile devices, since a substantial number of objects distort the RSSI signals; fading and interference are the main sources of this distortion. A Kalman filter is adopted to filter the RSSI signals, and the trilateration method is applied to obtain robust and accurate coordinates of the mobile station. From indoor experiments using WiFi stations, we have found that the proposed algorithm can provide higher accuracy with relatively lower power consumption in comparison to a conventional method.
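The two stages of such a method — smoothing the RSSI stream with a scalar Kalman filter, then trilaterating from the distances implied by a path-loss model — can be sketched as follows. The noise variances, reference power, and path-loss exponent are illustrative assumptions, not values from the paper.

```python
import numpy as np

def kalman_filter_rssi(rssi, q=0.008, r=4.0):
    """Scalar Kalman filter over an RSSI time series.
    q (process) and r (measurement) variances are illustrative tunings."""
    x, p, out = rssi[0], 1.0, []
    for z in rssi:
        p += q                      # predict
        k = p / (p + r)             # Kalman gain
        x += k * (z - x)            # update with the new measurement
        p *= 1.0 - k
        out.append(x)
    return np.array(out)

def rssi_to_distance(rssi, tx_power=-40.0, n=2.5):
    """Log-distance path-loss model: rssi = tx_power - 10*n*log10(d).
    tx_power and exponent n are assumed per-site calibration values."""
    return 10.0 ** ((tx_power - rssi) / (10.0 * n))

def trilaterate(anchors, dists):
    """Least-squares trilateration from >= 3 anchors and ranges,
    linearized by subtracting the first anchor's circle equation."""
    anchors = np.asarray(anchors, float)
    dists = np.asarray(dists, float)
    a0, d0 = anchors[0], dists[0]
    A = 2.0 * (anchors[1:] - a0)
    b = (d0 ** 2 - dists[1:] ** 2
         + np.sum(anchors[1:] ** 2, axis=1) - np.sum(a0 ** 2))
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos
```

In use, each WiFi anchor's RSSI stream is filtered first, the last filtered value is converted to a range, and the ranges are fed to `trilaterate`; with more than three anchors the least-squares step averages out residual ranging error.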

  20. Wedge MUSIC: a novel approach to examine experimental differences of brain source connectivity patterns from EEG/MEG data.

    PubMed

    Ewald, Arne; Avarvand, Forooz Shahbazi; Nolte, Guido

    2014-11-01

We introduce a novel method to estimate bivariate synchronization, i.e. interacting brain sources at a specific frequency or band, from MEG or EEG data, robust to artifacts of volume conduction. The data-driven calculation is based solely on the imaginary part of the cross-spectrum, as opposed to the imaginary part of coherency. In principle, the method quantifies how strongly a synchronization between a distinct pair of brain sources is present in the data. As input to the method, all pairs of pre-defined locations inside the brain can be used, which is computationally expensive. Alternatively, reference sources can be used that have been identified by any source reconstruction technique in a prior analysis step. We introduce different variants of the method and evaluate their performance in simulations. As a particular advantage of the proposed methodology, we demonstrate that the novel approach is capable of investigating differences in brain source interactions between experimental conditions or with respect to a certain baseline. For measured data, we first show the application on resting-state MEG data, where we find locally synchronized sources in the motor cortex based on the sensorimotor idle rhythms. Finally, we show an example on EEG motor imagery data where we contrast hand and foot movements. Here, we also find local interactions in the expected brain areas. Copyright © 2014. Published by Elsevier Inc.
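The property the method builds on — that instantaneous (volume-conduction-like) mixing contributes only to the real part of the cross-spectrum, while time-lagged interaction survives in the imaginary part — can be checked numerically. The sketch below is not Wedge MUSIC itself, only a demonstration of that robustness property; all signal parameters are arbitrary.

```python
import numpy as np

def cross_spectrum(x, y, fs, nseg=256):
    """Welch-style averaged cross-spectrum between two sensor signals."""
    nwin = len(x) // nseg
    win = np.hanning(nseg)
    acc = np.zeros(nseg // 2 + 1, dtype=complex)
    for m in range(nwin):
        xs = np.fft.rfft(win * x[m * nseg:(m + 1) * nseg])
        ys = np.fft.rfft(win * y[m * nseg:(m + 1) * nseg])
        acc += xs * np.conj(ys)
    return np.fft.rfftfreq(nseg, 1.0 / fs), acc / nwin

rng = np.random.default_rng(0)
fs, n, lag = 256, 256 * 64, 4
s = np.sin(2 * np.pi * 10.0 * np.arange(n + lag) / fs)   # 10 Hz source
a = s[:n] + 0.5 * rng.standard_normal(n)            # sensor 1
b = s[lag:n + lag] + 0.5 * rng.standard_normal(n)   # lagged interaction
c = 0.7 * s[:n] + 0.5 * rng.standard_normal(n)      # instantaneous mixing
f, S_ab = cross_spectrum(a, b, fs)
_, S_ac = cross_spectrum(a, c, fs)
i10 = int(np.argmin(np.abs(f - 10.0)))
```

At 10 Hz both pairs show a large real cross-spectrum, but only the lagged pair retains a substantial imaginary part, which is why statistics built on Im(S) are insensitive to zero-lag mixing.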

  1. Reduction of Helicopter Blade-Vortex Interaction Noise by Active Rotor Control Technology

    NASA Technical Reports Server (NTRS)

    Yu, Yung H.; Gmelin, Bernd; Splettstoesser, Wolf; Brooks, Thomas F.; Philippe, Jean J.; Prieur, Jean

    1997-01-01

Helicopter blade-vortex interaction noise is one of the most severe noise sources and is very important for both community annoyance and military detection. Research over the decades has substantially improved basic physical understanding of the mechanisms generating rotor blade-vortex interaction noise and of techniques for controlling it, particularly using active rotor control technology. This paper reviews active rotor control techniques currently available for rotor blade-vortex interaction noise reduction, including higher harmonic pitch control, individual blade control, and on-blade control technologies. The basic physical mechanisms of each active control technique are reviewed in terms of the noise reduction mechanism and the controlling aerodynamic or structural parameters of a blade. Active rotor control techniques using smart structures/materials are also discussed, including distributed smart actuators to induce local torsional or flapping deformations. Published by Elsevier Science Ltd.

  2. Development of XAFS Into a Structure Determination Technique

    NASA Astrophysics Data System (ADS)

    Stern, E. A.

After the detection of the diffraction of x-rays by M. Laue in 1912, the technique was applied to structure determination by Bragg within a year. On the other hand, although the edge steps in x-ray absorption were discovered even earlier by Barkla, and both the near-edge (XANES) and extended x-ray fine structure (EXAFS) past the edge were detected by 1929, it still took over 40 years to realize the structural information contained in this x-ray absorption fine structure (XAFS). To understand this delay, a brief historical review of the development of the scientific ideas that transformed XAFS into the premier technique for local structure determination is given. The development includes advances both in theoretical understanding and calculational capabilities and in experimental facilities, especially synchrotron radiation sources. The present state of the XAFS technique and its capabilities are summarized.

  3. Weighted image de-fogging using luminance dark prior

    NASA Astrophysics Data System (ADS)

    Kansal, Isha; Kasana, Singara Singh

    2017-10-01

In this work, the weighted image de-fogging process based upon the dark channel prior is modified by using a luminance dark prior. The dark channel prior estimates the transmission using three colour channels, whereas the luminance dark prior does the same using only the Y component of the YUV colour space. For each pixel in a patch of ? size, the luminance dark prior uses ? pixels rather than the ? pixels used in the DCP technique, which speeds up the de-fogging process. To estimate the transmission map, a weighted approach based upon a difference prior is used, which mitigates halo artefacts at the time of transmission estimation. The major drawback of the weighted technique is that it does not maintain the constancy of the transmission in a local patch even when there are no significant depth disruptions, due to which the de-fogged image looks over-smoothed and has low contrast. Apart from this, in some images the weighted transmission still carries faintly visible halo artefacts. Therefore, a Gaussian filter is used to blur the estimated weighted transmission map, which enhances the contrast of the de-fogged images. In addition, a novel approach is proposed to remove the pixels belonging to bright light source(s) during the atmospheric light estimation process, based upon a histogram of the YUV colour space. The proposed technique is compared with existing techniques, and this comparison shows that it performs better.
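A minimal sketch of the luminance-dark-prior step is given below: the min-filter is taken over a patch of the Y channel only, and the transmission follows the usual t = 1 − ω·dark/A form. The patch size, ω, and airlight value are illustrative assumptions, and the weighted refinement and Gaussian smoothing stages of the paper are omitted.

```python
import numpy as np

def luminance_dark_prior(img_rgb, patch=3):
    """Dark prior computed on the Y (luminance) channel only, so each
    patch needs one min-scan instead of three (illustrative sketch)."""
    y = (0.299 * img_rgb[..., 0] + 0.587 * img_rgb[..., 1]
         + 0.114 * img_rgb[..., 2])
    pad = patch // 2
    yp = np.pad(y, pad, mode="edge")
    dark = np.empty_like(y)
    for i in range(y.shape[0]):
        for j in range(y.shape[1]):
            dark[i, j] = yp[i:i + patch, j:j + patch].min()
    return dark

def estimate_transmission(dark, airlight, omega=0.95):
    """Transmission map from the prior: t = 1 - omega * dark / A.
    omega and the airlight value are assumed constants."""
    return 1.0 - omega * dark / airlight
```

The per-pixel loop is written for clarity; a production version would use a separable or sliding-window minimum filter.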

  4. Advanced techniques and technology for efficient data storage, access, and transfer

    NASA Technical Reports Server (NTRS)

    Rice, Robert F.; Miller, Warner

    1991-01-01

Advanced techniques for efficiently representing most forms of data are being implemented in practical hardware and software form through the joint efforts of three NASA centers. These techniques adapt to local statistical variations to continually provide near-optimum coding efficiency when representing data without error. Demonstrated in several earlier space applications, these techniques are the basis of initial NASA data compression standards specifications. Since the techniques clearly apply to most NASA science data, NASA invested in the development of both hardware and software implementations for general use. This investment includes high-speed single-chip very large scale integration (VLSI) coding and decoding modules as well as machine-transferable software routines. The hardware chips were tested in the laboratory at data rates as high as 700 Mbits/s. A coding module's definition includes a predictive preprocessing stage and a powerful adaptive coding stage. The function of the preprocessor is to optimally process incoming data into a standard form data source that the second stage can handle. The built-in preprocessor of the VLSI coder chips is ideal for high-speed sampled data applications such as imaging and high-quality audio, but the second-stage adaptive coder can also be used separately with any source that can be externally preprocessed into the 'standard form'. This generic functionality assures that the applicability of these techniques and their recent high-speed implementations should be equally broad outside of NASA.
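The two-stage structure — a predictive preprocessor that maps data into a non-negative "standard form", followed by an adaptive coder — can be sketched with a Golomb-Rice coder, the code family these modules build on. The delta predictor, block size, and parameter range below are simplifications for illustration, not the flight hardware's exact algorithm.

```python
def preprocess(samples):
    """Predictive stage: delta-predict, then zigzag-map signed residuals
    to non-negative integers (the coder's 'standard form')."""
    prev, out = 0, []
    for s in samples:
        d, prev = s - prev, s
        out.append(2 * d if d >= 0 else -2 * d - 1)
    return out

def rice_encode(values, k):
    """Golomb-Rice code: unary quotient, then k remainder bits."""
    bits = []
    for v in values:
        q, r = v >> k, v & ((1 << k) - 1)
        bits.append("1" * q + "0" + format(r, f"0{k}b") if k else "1" * v + "0")
    return "".join(bits)

def rice_decode(bits, k, count):
    """Invert rice_encode for `count` values."""
    vals, i = [], 0
    for _ in range(count):
        q = 0
        while bits[i] == "1":
            q, i = q + 1, i + 1
        i += 1                                  # skip the terminating 0
        r = int(bits[i:i + k], 2) if k else 0
        i += k
        vals.append((q << k) | r)
    return vals

def adaptive_encode(samples, block=16):
    """Adaptive stage: per block, pick the k that codes it shortest."""
    mapped, out = preprocess(samples), []
    for i in range(0, len(mapped), block):
        blk = mapped[i:i + block]
        k = min(range(8), key=lambda kk: len(rice_encode(blk, kk)))
        out.append((k, rice_encode(blk, k)))
    return out
```

Re-choosing k per block is what lets the coder track local statistical variations: smooth data yields small residuals and a small k, while busy data pushes k up so the unary quotients stay short.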

  5. MHODE: a local-homogeneity theory for improved source-parameter estimation of potential fields

    NASA Astrophysics Data System (ADS)

    Fedi, Maurizio; Florio, Giovanni; Paoletti, Valeria

    2015-08-01

We describe a multihomogeneity theory for source-parameter estimation of potential fields. Similar to what happens for random source models, where the monofractal scaling law has been generalized into a multifractal law, we propose to generalize the homogeneity law into a multihomogeneity law. This allows a theoretically correct approach to studying real-world potential fields, which are inhomogeneous and so do not show scale invariance except in the asymptotic regions (very near to or very far from their sources). Since the scaling properties of inhomogeneous fields change with the scale of observation, we show that they may be better studied at a set of scales than at a single scale, and that a multihomogeneous model is needed to explain their complex scaling behaviour. To perform this task, we first introduce fractional-degree homogeneous fields, to show that: (i) homogeneous potential fields may have fractional or integer degree; (ii) the source distributions for a fractional degree are not confined to a bounded region, similarly to some integer-degree models, such as the infinite line mass; and (iii) differently from the integer-degree case, the fractional-degree source distributions are no longer uniform density functions. Using this enlarged set of homogeneous fields, real-world anomaly fields are studied at different scales by a simple search, in any local window W, for the best homogeneous field of either integer or fractional degree, yielding a multiscale set of local homogeneity degrees and depth estimates which we call a multihomogeneous model. This defines a new technique of source-parameter estimation (Multi-HOmogeneity Depth Estimation, MHODE) that permits retrieval of the source parameters of complex sources. We test the method with inhomogeneous fields of finite sources, such as faults or cylinders, and show its effectiveness in a real-case example as well.
These applications show the usefulness of the new concepts, multihomogeneity and fractional homogeneity degree, in obtaining valid estimates of the source parameters in a consistent theoretical framework, thus overcoming the limitations that global homogeneity imposes on widespread methods such as Euler deconvolution.
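The homogeneity degree at the center of this abstract can be made concrete numerically: for a field homogeneous about its source, f(x0 + t·dx, z0 + t·dz) = t^n · f(x0 + dx, z0 + dz), so n can be read off by comparing the field at two scales. The point-source field below (constants dropped) is only an illustration of that definition, not the MHODE algorithm itself.

```python
import numpy as np

def field_point(x, z):
    """Vertical attraction of a point source at (0, 5), constants
    dropped: homogeneous of degree -2 about the source position."""
    r2 = x ** 2 + (z - 5.0) ** 2
    return (z - 5.0) / r2 ** 1.5

def homogeneity_degree(f, x, z, x0, z0, t=1.01):
    """Estimate n from f(x0 + t*dx, z0 + t*dz) ~ t**n * f(x0 + dx, z0 + dz),
    scaling about an assumed source position (x0, z0)."""
    dx, dz = x - x0, z - z0
    return np.log(f(x0 + t * dx, z0 + t * dz) / f(x, z)) / np.log(t)
```

For an inhomogeneous real-world field, repeating this estimate over a set of windows and scales yields the multiscale set of local degrees the abstract calls a multihomogeneous model.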

  6. Use of Provocative Angiography to Localize Site in Recurrent Gastrointestinal Bleeding

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Johnston, Ciaran, E-mail: ciaranjohnston@yahoo.co.uk; Tuite, David; Pritchard, Ruth

    2007-09-15

Background. While the source of most cases of lower gastrointestinal bleeding may be diagnosed with modern radiological and endoscopic techniques, approximately 5% of patients remain who have negative endoscopic and radiological investigations. Clinical Problem. These patients require repeated hospital admissions and blood transfusions, and may proceed to exploratory laparotomy and intraoperative endoscopy. The personal and financial costs are significant. Method of Diagnosis and Decision Making. The technique of adding pharmacologic agents (anticoagulants, vasodilators, fibrinolytics) during standard angiographic protocols to induce a prohemorrhagic state is termed provocative angiography. It is best employed when significant bleeding would otherwise necessitate emergency surgery. Treatment. This practice frequently identifies a bleeding source (reported success rates range from 29 to 80%), which may then be treated at the same session. We report the case of a patient with chronic lower gastrointestinal hemorrhage and a consistently negative endoscopic and radiological workup, in whom an occult source of bleeding was identified only after a provocative angiographic protocol was instituted, and who underwent subsequent therapeutic coil embolization of the bleeding vessel.

  7. An innovative technique to synthesize C-doped MgB2 by using chitosan as the carbon source

    NASA Astrophysics Data System (ADS)

    Bovone, G.; Vignolo, M.; Bernini, C.; Kawale, S.; Siri, A. S.

    2014-02-01

Here, we report a new technique to synthesize carbon-doped MgB2 powder. Chitosan was innovatively used as the carbon source during the synthesis of boron from boron oxide. This allowed the introduction of local defects, which later served as pinning centers in MgB2, into the boron lattice itself, avoiding the traditional and time-consuming ways of ex situ MgB2 doping (e.g. ball milling). Two volume percentages of C-doping have been tried, and their effects on the superconducting properties, evaluated by magnetic and transport measurements, are discussed here. Morphological analysis by scanning electron microscopy revealed a nanometric grain distribution in the boron and MgB2 powders. Mono-filamentary MgB2 wires have been fabricated by an ex situ powder-in-tube technique using the thus-prepared carbon-doped MgB2 and pure MgB2 powders. Transport property measurements on these wires were made and compared with MgB2 wire produced using commercial boron.

  8. Solution of the three-dimensional Helmholtz equation with nonlocal boundary conditions

    NASA Technical Reports Server (NTRS)

    Hodge, Steve L.; Zorumski, William E.; Watson, Willie R.

    1995-01-01

    The Helmholtz equation is solved within a three-dimensional rectangular duct with a nonlocal radiation boundary condition at the duct exit plane. This condition accurately models the acoustic admittance at an arbitrarily-located computational boundary plane. A linear system of equations is constructed with second-order central differences for the Helmholtz operator and second-order backward differences for both local admittance conditions and the gradient term in the nonlocal radiation boundary condition. The resulting matrix equation is large, sparse, and non-Hermitian. The size and structure of the matrix makes direct solution techniques impractical; as a result, a nonstationary iterative technique is used for its solution. The theory behind the nonstationary technique is reviewed, and numerical results are presented for radiation from both a point source and a planar acoustic source. The solutions with the nonlocal boundary conditions are invariant to the location of the computational boundary, and the same nonlocal conditions are valid for all solutions. The nonlocal conditions thus provide a means of minimizing the size of three-dimensional computational domains.
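A one-dimensional analogue illustrates the boundary-condition idea: in 1-D the exact radiation condition u′ = ±iku is local, and discretizing it with second-order one-sided differences lets one check the abstract's invariance claim, that the solution does not depend on where the computational boundary is placed. In 1-D the complex non-Hermitian system is small enough to solve directly; it is the 3-D version that forces the nonstationary iterative solver. The grid and wavenumber below are arbitrary choices for the sketch.

```python
import numpy as np

def solve_helmholtz_1d(L, h, k, x_src):
    """Second-order FD solution of u'' + k^2 u = delta(x - x_src) on
    [0, L] with outgoing-wave (radiation) conditions at both ends."""
    n = int(round(L / h)) + 1
    A = np.zeros((n, n), dtype=complex)
    b = np.zeros(n, dtype=complex)
    for j in range(1, n - 1):               # interior: central differences
        A[j, j - 1] = A[j, j + 1] = 1 / h ** 2
        A[j, j] = -2 / h ** 2 + k ** 2
    # second-order one-sided differences for u' = -iku (left), u' = iku (right)
    A[0, [0, 1, 2]] = [-3 / (2 * h) + 1j * k, 4 / (2 * h), -1 / (2 * h)]
    A[-1, [-1, -2, -3]] = [3 / (2 * h) - 1j * k, -4 / (2 * h), 1 / (2 * h)]
    b[int(round(x_src / h))] = 1 / h        # discrete point source
    return np.linalg.solve(A, b)
```

Solving on [0, 1] and again on [0, 1.5] with the same source gives essentially the same field on the shared interval, and the amplitude matches the outgoing Green's function magnitude 1/(2k), which is the 1-D version of the boundary-location invariance reported in the paper.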

  9. Analysis of memory use for improved design and compile-time allocation of local memory

    NASA Technical Reports Server (NTRS)

    Mcniven, Geoffrey D.; Davidson, Edward S.

    1986-01-01

    Trace analysis techniques are used to study memory referencing behavior for the purpose of designing local memories and determining how to allocate them for data and instructions. In an attempt to assess the inherent behavior of the source code, the trace analysis system described here reduced the effects of the compiler and host architecture on the trace by using a technique called flattening. The variables in the trace, their associated single-assignment values, and references are histogrammed on the basis of various parameters describing memory referencing behavior. Bounds are developed specifying the amount of memory space required to store all live values in a particular histogram class. The reduction achieved in main memory traffic by allocating local memory is specified for each class.
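    The histogramming of single-assignment values by reference count can be sketched as follows; the trace format and counting scheme are simplified assumptions for illustration, not the paper's actual system:

    ```python
    from collections import defaultdict

    # Hypothetical trace of (variable, operation) events: "w" writes a new
    # single-assignment value, "r" reads the currently live value.
    trace = [("a", "w"), ("a", "r"), ("b", "w"), ("a", "r"), ("b", "r"),
             ("a", "w"), ("a", "r"), ("b", "r"), ("b", "w")]

    refs_per_value = []       # read count of each completed single-assignment value
    live = {}                 # variable -> read count of its current value
    for var, op in trace:
        if op == "w":
            if var in live:
                refs_per_value.append(live[var])   # previous value dies here
            live[var] = 0
        else:
            live[var] += 1
    refs_per_value.extend(live.values())           # values still live at end

    # Histogram classes: how many values received 0, 1, 2, ... references.
    hist = defaultdict(int)
    for count in refs_per_value:
        hist[count] += 1
    ```

    Classes with few references per value are the candidates where local-memory allocation buys the least main-memory traffic reduction, which is the kind of bound the abstract describes.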

  10. Las Vegas Basin Seismic Response Project: Measured Shallow Soil Velocities

    NASA Astrophysics Data System (ADS)

    Luke, B. A.; Louie, J.; Beeston, H. E.; Skidmore, V.; Concha, A.

    2002-12-01

    The Las Vegas valley in Nevada is a deep (up to 5 km) alluvial basin filled with interlayered gravels, sands, and clays. The climate is arid. The water table ranges from a few meters to many tens of meters deep. Laterally extensive thin carbonate-cemented lenses are commonly found across parts of the valley. Lenses range beyond 2 m in thickness, and occur at depths exceeding 200 m. Shallow seismic datasets have been collected at approximately ten sites around the Las Vegas valley, to characterize shear and compression wave velocities in the near surface. Purposes for the surveys include modeling of ground response to dynamic loads, both natural and manmade, quantification of soil stiffness to aid structural foundation design, and non-intrusive materials identification. Borehole-based measurement techniques used include downhole and crosshole, to depths exceeding 100 m. Surface-based techniques used include refraction and three different methods involving inversion of surface-wave dispersion datasets. This latter group includes two active-source techniques, the Spectral Analysis of Surface Waves (SASW) method and the Multi-Channel Analysis of Surface Waves (MASW) method; and a new passive-source technique, the Refraction Microtremor (ReMi) method. Depths to halfspace for the active-source measurements ranged beyond 50 m. The passive-source method constrains shear wave velocities to 100 m depths. As expected, the stiff cemented layers profoundly affect local velocity gradients. Scale effects are evident in comparisons of (1) very local measurements typified by borehole methods, to (2) the broader coverage of the SASW and MASW measurements, to (3) the still broader and deeper resolution made possible by the ReMi measurements. The cemented layers appear as sharp spikes in the downhole datasets and are problematic in crosshole measurements due to refraction. The refraction method is useful only to locate the depth to the uppermost cemented layer.
The surface-wave methods, on the other hand, can process velocity inversions. With the broader coverage of the active-source surface wave measurements, through careful inversion that takes advantage of prior information to the greatest extent possible, multiple, shallow, stiff layers can be resolved. Data from such broader-coverage methods also provide confidence regarding continuity of the cemented layers. For the ReMi measurements, which provide the broadest coverage of all methods used, the more generalized shallow profile is sometimes characterized by a strong stiffness inversion at a depth of approximately 10 m. We anticipate that this impedance contrast represents the vertical extent of the multiple layered deposits of cemented media.

  11. Workshop on Measurement Needs for Local-Structure Determination in Inorganic Materials

    PubMed Central

    Levin, Igor; Vanderah, Terrell

    2008-01-01

    The functional responses (e.g., dielectric, magnetic, catalytic, etc.) of many industrially-relevant materials are controlled by their local structure—a term that refers to the atomic arrangements on a scale ranging from atomic (sub-nanometer) to several nanometers. Thus, accurate knowledge of local structure is central to understanding the properties of nanostructured materials, thereby placing the problem of determining atomic positions on the nanoscale—the so-called “nanostructure problem”—at the center of modern materials development. Today, multiple experimental techniques exist for probing local atomic arrangements; nonetheless, finding accurate, comprehensive, and robust structural solutions for nanostructured materials still remains a formidable challenge because any one of these methods yields only a partial view of the local structure. The primary goal of this 2-day NIST-sponsored workshop was to bring together experts in the key experimental and theoretical areas relevant to local-structure determination to devise a strategy for the collaborative effort required to develop a comprehensive measurement solution on the local scale. The participants unanimously agreed that solving the nanostructure problem—an ultimate frontier in materials characterization—necessitates a coordinated interdisciplinary effort that transcends the existing capabilities of any single institution, including national laboratories, centers, and user facilities. The discussions converged on an institute dedicated to local structure determination as the most viable organizational platform for successfully addressing the nanostructure problem. The proposed “institute” would provide an intellectual infrastructure for local structure determination by (1) developing and maintaining relevant computer software integrated in an open-source global optimization framework (Fig. 2), (2) connecting industrial and academic users with experts in measurement techniques, (3) developing and maintaining pertinent databases, and (4) providing necessary education and training. PMID:27096131

  12. Randomly iterated search and statistical competency as powerful inversion tools for deformation source modeling: Application to volcano interferometric synthetic aperture radar data

    NASA Astrophysics Data System (ADS)

    Shirzaei, M.; Walter, T. R.

    2009-10-01

    Modern geodetic techniques provide valuable and near real-time observations of volcanic activity. Characterizing the source of deformation based on these observations has become of major importance in related monitoring efforts. We investigate two random search approaches, simulated annealing (SA) and genetic algorithm (GA), and utilize them in an iterated manner. The iterated approach helps to prevent GA in general and SA in particular from getting trapped in local minima, and it also increases redundancy for exploring the search space. We apply a statistical competency test for estimating the confidence interval of the inversion source parameters, considering their internal interaction through the model, the effect of the model deficiency, and the observational error. Here, we present and test this new randomly iterated search and statistical competency (RISC) optimization method together with GA and SA for the modeling of data associated with volcanic deformations. Following synthetic and sensitivity tests, we apply the improved inversion techniques to two episodes of activity in the Campi Flegrei volcanic region in Italy, observed by the interferometric synthetic aperture radar technique. Inversion of these data allows derivation of deformation source parameters and their associated quality so that we can compare the two inversion methods. The RISC approach was found to be an efficient method in terms of computation time and search results and may be applied to other optimization problems in volcanic and tectonic environments.
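    The iterated random search idea, restarting simulated annealing many times and keeping the best model ever visited so a single run trapped in a local minimum does not dominate, can be sketched as below. The objective, bounds, cooling schedule, and step size are all illustrative assumptions, not the RISC method's actual settings:

    ```python
    import math
    import random

    def iterated_sa(objective, bounds, n_restarts=20, steps=400, seed=1):
        """Restarted simulated annealing: many independent cooling runs,
        keeping the best model ever accepted, which reduces the chance of
        reporting a local minimum and adds redundancy in the search."""
        rng = random.Random(seed)
        best_x, best_f = None, float("inf")
        for _ in range(n_restarts):
            x = [rng.uniform(lo, hi) for lo, hi in bounds]
            f = objective(x)
            for step in range(steps):
                T = 0.99 ** step                  # geometric cooling from T = 1
                cand = [min(hi, max(lo, xi + rng.gauss(0.0, 0.1 * (hi - lo))))
                        for xi, (lo, hi) in zip(x, bounds)]
                fc = objective(cand)
                # Metropolis rule: always accept improvements, sometimes
                # accept worse models to escape local minima.
                if fc < f or rng.random() < math.exp(-(fc - f) / T):
                    x, f = cand, fc
                    if f < best_f:
                        best_x, best_f = list(x), f
        return best_x, best_f

    # Rastrigin misfit: a stand-in with many local minima and global minimum 0
    # at the origin (the real application would use a geodetic forward model).
    def misfit(v):
        return sum(vi * vi - 10.0 * math.cos(2.0 * math.pi * vi) + 10.0
                   for vi in v)

    params, value = iterated_sa(misfit, [(-5.12, 5.12)] * 2)
    ```

    The statistical competency test of the RISC method, estimating confidence intervals on the recovered parameters, would sit on top of repeated runs like these and is not shown here.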

  13. Impedance measurement of non-locally reactive samples and the influence of the assumption of local reaction.

    PubMed

    Brandão, Eric; Mareze, Paulo; Lenzi, Arcanjo; da Silva, Andrey R

    2013-05-01

    In this paper, the measurement of the absorption coefficient of non-locally reactive sample layers of thickness d1 backed by a rigid wall is investigated. The investigation is carried out with the aid of real and theoretical experiments, which assume a monopole sound source radiating sound above an infinite non-locally reactive layer. A literature search revealed that the number of papers devoted to this matter is rather limited in comparison to those which address the measurement of locally reactive samples. Furthermore, the majority of papers published describe the use of two or more microphones whereas this paper focuses on the measurement with the pressure-particle velocity sensor (PU technique). For these reasons, the assumption that the sample is locally reactive is initially explored, so that the associated measurement errors can be quantified. Measurements in the impedance tube and in a semi-anechoic room are presented to validate the theoretical experiment. For samples with a high non-local reaction behavior, for which the measurement errors tend to be high, two different algorithms are proposed in order to minimize the associated errors.
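    Under the local-reaction assumption the abstract sets out to test, a surface impedance measured at the sample face maps directly to an absorption coefficient. A minimal normal-incidence sketch with hypothetical values (the impedance, and air's characteristic impedance at roughly room conditions, are illustrative; the paper's actual geometry involves a monopole above the layer):

    ```python
    # Normal-incidence absorption from a measured surface impedance Z,
    # assuming local reaction (illustrative values, not measured data).
    rho_c = 413.3                  # characteristic impedance of air, Pa*s/m
    Z = complex(900.0, -600.0)     # hypothetical measured surface impedance

    R = (Z - rho_c) / (Z + rho_c)  # pressure reflection coefficient
    alpha = 1.0 - abs(R) ** 2      # absorption coefficient
    ```

    For a non-locally reactive layer, Z itself varies with incidence angle, which is exactly why applying this locally reactive formula introduces the errors the paper quantifies.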

  14. Numerical Simulation of Dispersion from Urban Greenhouse Gas Sources

    NASA Astrophysics Data System (ADS)

    Nottrott, Anders; Tan, Sze; He, Yonggang; Winkler, Renato

    2017-04-01

    Cities are characterized by complex topography, inhomogeneous turbulence, and variable pollutant source distributions. These features create a scale separation between local sources and urban scale emissions estimates known as the Grey-Zone. Modern computational fluid dynamics (CFD) techniques provide a quasi-deterministic, physically based toolset to bridge the scale separation gap between source level dynamics, local measurements, and urban scale emissions inventories. CFD has the capability to represent complex building topography and capture detailed 3D turbulence fields in the urban boundary layer. This presentation discusses the application of OpenFOAM to urban CFD simulations of natural gas leaks in cities. OpenFOAM is open-source software for advanced numerical simulation of engineering and environmental fluid flows. When combined with free or low-cost computer-aided drawing and GIS, OpenFOAM generates a detailed, 3D representation of urban wind fields. OpenFOAM was applied to model scalar emissions from various components of the natural gas distribution system, to study the impact of urban meteorology on mobile greenhouse gas measurements. The numerical experiments demonstrate that CH4 concentration profiles are highly sensitive to the relative location of emission sources and buildings. Sources separated by distances of 5-10 meters showed significant differences in vertical dispersion of plumes, due to building wake effects. The OpenFOAM flow fields were combined with an inverse, stochastic dispersion model to quantify and visualize the sensitivity of point sensors to upwind sources in various built environments. The Boussinesq approximation was applied to investigate the effects of canopy layer temperature gradients and convection on sensor footprints.
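    For contrast with the CFD approach above, the classical flat-terrain baseline it improves upon is the Gaussian plume model, which has no building wakes at all. A sketch with illustrative neutral-stability dispersion coefficients (not the abstract's method, and the coefficient forms are assumptions):

    ```python
    import math

    def gaussian_plume(Q, u, x, y, z, H=0.0):
        """Steady-state Gaussian plume concentration (kg/m^3) at (x, y, z)
        downwind of a point source of strength Q (kg/s) at height H (m)
        in mean wind speed u (m/s). Illustrative sigma-y/sigma-z curves."""
        sy = 0.08 * x / math.sqrt(1.0 + 0.0001 * x)
        sz = 0.06 * x / math.sqrt(1.0 + 0.0015 * x)
        lateral = math.exp(-y ** 2 / (2.0 * sy ** 2))
        vertical = (math.exp(-(z - H) ** 2 / (2.0 * sz ** 2))
                    + math.exp(-(z + H) ** 2 / (2.0 * sz ** 2)))  # ground image
        return Q / (2.0 * math.pi * u * sy * sz) * lateral * vertical

    # CH4 concentration 50 m downwind of a 1 g/s leak in a 3 m/s wind.
    c = gaussian_plume(Q=1e-3, u=3.0, x=50.0, y=0.0, z=2.0)
    ```

    The abstract's point is precisely that at 5-10 m source separations in a built environment, wake effects make this smooth picture break down, motivating CFD.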

  15. Multiscale Methods for Nuclear Reactor Analysis

    NASA Astrophysics Data System (ADS)

    Collins, Benjamin S.

    The ability to accurately predict local pin powers in nuclear reactors is necessary to understand the mechanisms that cause fuel pin failure during steady state and transient operation. In the research presented here, methods are developed to improve the local solution using high order methods with boundary conditions from a low order global solution. Several different core configurations were tested to determine the improvement in the local pin powers compared to standard techniques that use diffusion theory and pin power reconstruction (PPR). Two different multiscale methods were developed and analyzed: the post-refinement multiscale method and the embedded multiscale method. The post-refinement multiscale methods use the global solution to determine boundary conditions for the local solution. The local solution is solved using either a fixed boundary source or an albedo boundary condition; this solution is "post-refinement" and thus has no impact on the global solution. The embedded multiscale method allows the local solver to change the global solution to provide an improved global and local solution. The post-refinement multiscale method is assessed using three core designs. When the local solution has more energy groups, the fixed source method has some difficulties near the interface; however, the albedo method works well for all cases. In order to remedy the issue with boundary condition errors for the fixed source method, a buffer region is used to act as a filter, which decreases the sensitivity of the solution to the boundary condition. Both the albedo and fixed source methods benefit from the use of a buffer region. Unlike the post-refinement method, the embedded multiscale method alters the global solution. The ability to change the global solution allows for refinement in areas where the errors in the few group nodal diffusion are typically large.
The embedded method is shown to improve the global solution when it is applied to a MOX/LEU assembly interface, the fuel/reflector interface, and assemblies where control rods are inserted. The embedded method also allows for multiple solution levels to be applied in a single calculation. The addition of intermediate levels to the solution improves the accuracy of the method. Both multiscale methods considered here have benefits and drawbacks, but both can provide improvements over the current PPR methodology.
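    The post-refinement idea, a coarse global solve supplying fixed boundary values to a fine local solve that cannot feed back, can be sketched on a one-group 1D diffusion analogue. The equation, coefficients, and grids below are illustrative assumptions, far simpler than the multigroup transport setting of the thesis:

    ```python
    import numpy as np

    def solve_diffusion(n, a=0.0, b=1.0, ua=0.0, ub=0.0, D=1.0, sig=5.0, S=1.0):
        """Second-order finite-difference solve of -D u'' + sig*u = S on [a, b]
        with Dirichlet boundary values ua, ub (one-group diffusion analogue)."""
        x = np.linspace(a, b, n)
        h = x[1] - x[0]
        A = np.zeros((n, n))
        rhs = np.full(n, float(S))
        A[0, 0] = A[-1, -1] = 1.0
        rhs[0], rhs[-1] = ua, ub
        for i in range(1, n - 1):
            A[i, i - 1] = A[i, i + 1] = -D / h ** 2
            A[i, i] = 2.0 * D / h ** 2 + sig
        return x, np.linalg.solve(A, rhs)

    # Global coarse solve over the full domain...
    xg, ug = solve_diffusion(21)
    # ...then a "post-refinement" local solve on [0.4, 0.6] whose fixed
    # boundary values come from the coarse global solution. Nothing flows
    # back from the local solve to the global one.
    ua = float(np.interp(0.4, xg, ug))
    ub = float(np.interp(0.6, xg, ug))
    xl, ul = solve_diffusion(81, a=0.4, b=0.6, ua=ua, ub=ub)
    ```

    The embedded method would instead let the refined local result update the global solution; the buffer-region trick corresponds to solving on a slightly wider interval than the region of interest so boundary-value errors decay before they reach it.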

  16. Accounting for Limited Detection Efficiency and Localization Precision in Cluster Analysis in Single Molecule Localization Microscopy

    PubMed Central

    Shivanandan, Arun; Unnikrishnan, Jayakrishnan; Radenovic, Aleksandra

    2015-01-01

    Single Molecule Localization Microscopy techniques like PhotoActivated Localization Microscopy, with their sub-diffraction limit spatial resolution, have been popularly used to characterize the spatial organization of membrane proteins, by means of quantitative cluster analysis. However, such quantitative studies remain challenged by the techniques’ inherent sources of errors such as a limited detection efficiency of less than 60%, due to incomplete photo-conversion, and a limited localization precision in the range of 10-30 nm, varying across the detected molecules, mainly depending on the number of photons collected from each. We provide analytical methods to estimate the effect of these errors in cluster analysis and to correct for them. These methods, based on the Ripley’s L(r) – r or Pair Correlation Function popularly used by the community, can facilitate potentially breakthrough results in quantitative biology by providing a more accurate and precise quantification of protein spatial organization. PMID:25794150
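    The Ripley's L(r) - r statistic mentioned above compares the observed point pattern against complete spatial randomness (CSR): values near zero indicate CSR, positive values clustering. A naive sketch without edge correction (the estimator, box size, and point count are illustrative; the paper's corrected statistics are more involved):

    ```python
    import numpy as np

    def ripley_L_minus_r(points, r, area):
        """Naive Ripley's L(r) - r for 2D points (no edge correction)."""
        n = len(points)
        # All pairwise distances; the diagonal (self-distances) is excluded.
        d = np.sqrt(((points[:, None, :] - points[None, :, :]) ** 2).sum(-1))
        np.fill_diagonal(d, np.inf)
        K = area * (d < r).sum() / (n * (n - 1))   # Ripley's K estimate
        return np.sqrt(K / np.pi) - r              # variance-stabilized form

    rng = np.random.default_rng(0)
    pts = rng.uniform(0.0, 10.0, size=(500, 2))    # CSR points in a 10x10 box
    val = ripley_L_minus_r(pts, r=1.0, area=100.0)
    # For CSR, L(r) - r is near zero (slightly negative here because the
    # missing edge correction undercounts pairs near the boundary).
    ```

    Limited detection efficiency thins the point pattern, and localization error blurs it; the paper's contribution is to correct L(r) - r and the pair correlation function for exactly those two effects.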

  17. Improved Detection of Local Earthquakes in the Vienna Basin (Austria), using Subspace Detectors

    NASA Astrophysics Data System (ADS)

    Apoloner, Maria-Theresia; Caffagni, Enrico; Bokelmann, Götz

    2016-04-01

    The Vienna Basin in Eastern Austria is densely populated and highly-developed; it is also a region of low to moderate seismicity, yet the seismological network coverage is relatively sparse. This demands improving our capability of earthquake detection by testing new methods, enlarging the existing local earthquake catalogue. This contributes to imaging tectonic fault zones for better understanding seismic hazard, also through improved earthquake statistics (b-value, magnitude of completeness). Detection of low-magnitude earthquakes or events for which the highest amplitudes slightly exceed the signal-to-noise-ratio (SNR), may be possible by using standard methods like the short-term over long-term average (STA/LTA). However, due to sparse network coverage and high background noise, such a technique may not detect all potentially recoverable events. Yet, earthquakes originating from the same source region and relatively close to each other, should be characterized by similarity in seismic waveforms, at a given station. Therefore, waveform similarity can be exploited by using specific techniques such as correlation-template based (also known as matched filtering) or subspace detection methods (based on the subspace theory). Matching techniques basically require a reference or template event, usually characterized by high waveform coherence in the array receivers, and high SNR, which is cross-correlated with the continuous data. Instead, subspace detection methods overcome in principle the necessity of defining template events as single events, but use a subspace extracted from multiple events. This approach theoretically should be more robust in detecting signals that exhibit a strong variability (e.g. because of source or magnitude). In this study we scan the continuous data recorded in the Vienna Basin with a subspace detector to identify additional events. 
This will allow us to estimate the increase of the seismicity rate in the local earthquake catalogue, therefore providing an evaluation of network performance and efficiency of the method.
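    The correlation-template (matched filtering) baseline that subspace detection generalizes can be sketched as a sliding normalized cross-correlation; the synthetic template, noise level, and event position below are illustrative assumptions, not Vienna Basin data:

    ```python
    import numpy as np

    def matched_filter(data, template):
        """Sliding normalized cross-correlation of a template against data;
        output values lie in [-1, 1], peaks mark candidate detections."""
        nt = len(template)
        t = (template - template.mean()) / template.std()
        cc = np.empty(len(data) - nt + 1)
        for i in range(len(cc)):
            w = data[i:i + nt]
            s = w.std()
            cc[i] = np.dot(w - w.mean(), t) / (nt * s) if s > 0 else 0.0
        return cc

    rng = np.random.default_rng(42)
    # Tapered wavelet as a stand-in for a template event waveform.
    template = np.sin(2 * np.pi * 5 * np.linspace(0, 1, 100)) * np.hanning(100)
    data = rng.normal(0.0, 0.3, 2000)      # background noise
    data[700:800] += template              # low-SNR event buried in the noise
    cc = matched_filter(data, template)
    detection = int(np.argmax(cc))         # near sample 700
    ```

    A subspace detector replaces the single template with an orthonormal basis spanning several events and projects each data window onto that subspace, which is what makes it robust to source and magnitude variability.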

  18. Yield and depth Estimation of Selected NTS Nuclear and SPE Chemical Explosions Using Source Equalization by modeling Local and Regional Seismograms (Invited)

    NASA Astrophysics Data System (ADS)

    Saikia, C. K.; Roman-nieves, J. I.; Woods, M. T.

    2013-12-01

    Source parameters of nuclear and chemical explosions are often estimated by matching either the corner frequency and spectral level of a single event or the spectral ratio when spectra from two events are available with known source parameters for one. In this study, we propose an alternative method in which waveforms from two or more events can be simultaneously equalized by setting the differential of the processed seismograms at one station from any two individual events to zero. The method involves convolving the equivalent Mueller-Murphy displacement source time function (MMDSTF) of one event with the seismogram of the second event and vice versa, and then computing their difference seismogram. MMDSTF is computed at the elastic radius including both near and far-field terms. For this method to yield accurate source parameters, an inherent assumption is that Green's functions for any paired events from the source to a receiver are the same. In the frequency limit of the seismic data, this is a reasonable assumption, concluded based on the comparison of Green's functions computed for flat-earth models at various source depths ranging from 100 m to 1 km. Frequency domain analysis of the initial P wave is, however, sensitive to the depth phase interaction, and if tracked meticulously can help estimate the event depth. We applied this method to the local waveforms recorded from the three SPE shots and precisely determined their yields. These high-frequency seismograms exhibit significant lateral path effects in spectrogram analysis and 3D numerical computations, but the source equalization technique is independent of any variation as long as their instrument characteristics are well preserved. We are currently estimating the uncertainty in the derived source parameters assuming the yields of the SPE shots as unknown. We also collected regional waveforms from 95 NTS explosions at regional stations ALQ, ANMO, CMB, COR, JAS, LON, PAS, PFO and RSSD.
We are currently employing a station based analysis using the equalization technique to estimate depth and yields of many relative to those of the announced explosions; and to develop their relationship with the Mw and Mo for the NTS explosions.

  19. Apparatus for producing cryogenic inertially driven fusion targets

    DOEpatents

    Miller, John R.

    1981-01-01

    A new technique is described for producing uniform layers of solid DT on microballoon surfaces. Local heating of the target, typically by means of a focused laser, within an isothermal freezing cell containing a low pressure cryogenic exchange gas such as helium, vaporizes the DT fuel contained within the microballoon. Removal of the laser heating source causes the DT gas to rapidly condense and freeze in a layer which exhibits a good degree of uniformity.

  20. SIRU utilization. Volume 1: Theory, development and test evaluation

    NASA Technical Reports Server (NTRS)

    Musoff, H.

    1974-01-01

    The theory, development, and test evaluations of the Strapdown Inertial Reference Unit (SIRU) are discussed. The statistical failure detection and isolation, single position calibration, and self alignment techniques are emphasized. Circuit diagrams of the system components are provided. Mathematical models are developed to show the performance characteristics of the subsystems. Specific areas of the utilization program are identified as: (1) error source propagation characteristics and (2) local level navigation performance demonstrations.

  1. Nexus of Crime and Terrorism: The Case of the Abu Sayyaf Group

    DTIC Science & Technology

    2016-12-01

    money laundering, counterfeiting, or bomb-making techniques.” Also, these alliances can occur to get “operational support” such as access to...local and foreign terrorist groups. Rommel Banlaoi asserts that in 2001, the Philippine congress approved the Anti-Money Laundering Act as one of...Terror and Responses Any threat or nefarious organization around the world will not survive without money or sources of financing. Jennifer Hesterman

  2. Data in support of the detection of genetically modified organisms (GMOs) in food and feed samples.

    PubMed

    Alasaad, Noor; Alzubi, Hussein; Kader, Ahmad Abdul

    2016-06-01

    Food and feed samples were randomly collected from different sources, including local and imported materials from the Syrian local market. These included maize, barley, soybean, fresh food samples and raw material. GMO detection was conducted by PCR and nested PCR-based techniques using specific primers for the foreign DNA most commonly used in genetic transformation procedures, i.e., the 35S promoter, T-nos, epsps, the cryIA(b) gene and the nptII gene. The results revealed for the first time in Syria the presence of GM foods and feeds carrying the glyphosate-resistance trait, with the P35S promoter and NOS terminator detected in the imported soybean samples at high frequency (5 of the 6 imported soybean samples), while tests on the local samples were negative. Tests also revealed GMOs in two imported maize samples, detecting the presence of the 35S promoter and nos terminator. Nested PCR results using two sets of primers confirmed our data. The methods applied in this data brief are based on DNA analysis by Polymerase Chain Reaction (PCR). This technique is specific, practical, reproducible and sensitive enough to detect down to 0.1% GMO in food and/or feedstuffs. Furthermore, all of the techniques mentioned are economical and can be applied in Syria and other developing countries. For all these reasons, the DNA-based analysis methods were chosen and preferred over protein-based analysis.

  3. Near surface characterisation with passive seismic data - a case study from the La Barge basin (Wyoming)

    NASA Astrophysics Data System (ADS)

    Behm, M.; Snieder, R.; Tomic, J.

    2012-12-01

    In regions where active source seismic data are inadequate for imaging purposes due to energy penetration and recovery, cost and logistical concerns, or regulatory restrictions, analysis of natural source and ambient seismic data may provide an alternative. In this study, we investigate the feasibility of using locally-generated seismic noise and teleseismic events in the 2-10 Hz band to obtain a subsurface model. We apply different techniques to 3-component data recorded during the LaBarge Passive Seismic Experiment, a local deployment in southwestern Wyoming in a producing hydrocarbon basin. Fifty-five broadband instruments with an inter-station distance of 250 m recorded continuous seismic data between November 2008 and June 2009. The consistency and high quality of the data set make it an ideal test ground to determine the value of passive seismology techniques for exploration purposes. The near surface is targeted by interferometric analysis of ambient noise. Our results indicate that traffic noise from a state highway generates coherent Rayleigh and Love waves that can then be inverted for laterally varying velocities. The results correlate well with surface geology, and are thought to represent the average over the upper few hundred meters. The autocorrelation functions (ACF) of teleseismic body waves provide information on the uppermost part (1 to 5 km depth) of the crust. ACFs from P-waves correlate with the shallow structure as known from active source studies. The analysis of S-waves exhibits a pronounced azimuthal dependency, which might be used to gain insights into anisotropy.
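    The ACF idea above rests on a simple fact: a trace containing a delayed echo of the incoming wavefield shows a peak in its autocorrelation at the two-way travel time. A toy sketch (sampling rate, echo lag, and reflection strength are all illustrative assumptions):

    ```python
    import numpy as np

    rng = np.random.default_rng(7)
    fs = 100.0                    # Hz, hypothetical sampling rate
    n = 4096
    w = rng.normal(size=n)        # white incoming teleseismic wavefield
    lag = 150                     # 1.5 s two-way travel time -> 150 samples
    trace = w.copy()
    trace[lag:] += 0.5 * w[:-lag]     # echo from a shallow interface

    # Normalized autocorrelation, non-negative lags only.
    ac = np.correlate(trace, trace, mode="full")[n - 1:]
    ac = ac / ac[0]
    # Search away from the zero-lag peak; the maximum marks the echo lag.
    peak = int(np.argmax(ac[50:400])) + 50
    ```

    In practice the teleseismic source wavelet is not white, so the ACF must be spectrally whitened or deconvolved before such lags become interpretable as structure.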

  4. Monitoring fossil fuel sources of methane in Australia

    NASA Astrophysics Data System (ADS)

    Loh, Zoe; Etheridge, David; Luhar, Ashok; Hibberd, Mark; Thatcher, Marcus; Noonan, Julie; Thornton, David; Spencer, Darren; Gregory, Rebecca; Jenkins, Charles; Zegelin, Steve; Leuning, Ray; Day, Stuart; Barrett, Damian

    2017-04-01

    CSIRO has been active in identifying and quantifying methane emissions from a range of fossil fuel sources in Australia over the past decade. We present here a history of the development of our work in this domain. While we have principally focused on optimising the use of long term, fixed location, high precision monitoring, paired with both forward and inverse modelling techniques suitable for either local or regional scales, we have also incorporated mobile ground surveys and flux calculations from plumes in some contexts. We initially developed leak detection methodologies for geological carbon storage at a local scale using a Bayesian probabilistic approach coupled to a backward Lagrangian particle dispersion model (Luhar et al., JGR, 2014), and single point monitoring with sector analysis (Etheridge et al., in prep.). We have since expanded our modelling techniques to regional scales using both forward and inverse approaches to constrain methane emissions from coal mining and coal seam gas (CSG) production. The Surat Basin (Queensland, Australia) is a region of rapidly expanding CSG production, in which we have established a pair of carefully located, well-intercalibrated monitoring stations. These data sets provide an almost continuous record of (i) background air arriving at the Surat Basin, and (ii) the signal resulting from methane emissions within the Basin, i.e. total downwind methane concentration (comprising emissions including natural geological seeps, agricultural and biogenic sources and fugitive emissions from CSG production) minus background or upwind concentration. We will present our latest results on monitoring from the Surat Basin and their application to estimating methane emissions.

  5. Diffusion of surgical techniques in early stage breast cancer: variables related to adoption and implementation of sentinel lymph node biopsy.

    PubMed

    Vanderveen, Kimberly A; Paterniti, Debora A; Kravitz, Richard L; Bold, Richard J

    2007-05-01

    How physicians acquire and adopt new technologies for cancer diagnosis and treatment is poorly understood, yet understanding this process is critical to the dissemination of evidence-based practices. Sentinel lymph node biopsy (SLNB) has recently become a standard technique for axillary staging in early breast cancer and is an ideal platform for studying medical technology diffusion. We sought to describe the timing of SLNB adoption and patterns of surgeon interactions with the following educational sources: local university training program, surgical literature, national meetings/courses, national specialty centers, and other local surgeons. A cross-sectional survey that used semistructured interviews was used to assess timing of adoption, practice patterns, and learning sources for SLNB among surgical oncologists and general surgeons in a single metropolitan area. A total of 44 eligible surgeons were identified; 38 (86%) participated. All surgical oncologists (11 of 11) and most general surgeons (26 of 27) had implemented SLNB. Surgical oncologists were older (mean 51 vs. 48 years, P = .02) and had used SLNB longer (6.1 vs. 3.3 years, P = .01) than general surgeons. By use of social network diagrams, surgical oncologists and the university training program were shown to be key intermediaries between general surgeons and national specialty centers. Surgeons in group practice tended to use more learning sources than solo practitioners. Surgical oncologists and university-based surgeons play key educational roles in disseminating new cancer treatments and therefore have a professional responsibility to educate other community physicians to increase the use of the most current, evidence-based practices.

  6. Auditory spatial representations of the world are compressed in blind humans.

    PubMed

    Kolarik, Andrew J; Pardhan, Shahina; Cirstea, Silvia; Moore, Brian C J

    2017-02-01

    Compared to sighted listeners, blind listeners often display enhanced auditory spatial abilities such as localization in azimuth. However, less is known about whether blind humans can accurately judge distance in extrapersonal space using auditory cues alone. Using virtualization techniques, we show that auditory spatial representations of the world beyond the peripersonal space of blind listeners are compressed compared to those for normally sighted controls. Blind participants overestimated the distance to nearby sources and underestimated the distance to remote sound sources, in both reverberant and anechoic environments, and for speech, music, and noise signals. Functions relating judged and actual virtual distance were well fitted by compressive power functions, indicating that the absence of visual information regarding the distance of sound sources may prevent accurate calibration of the distance information provided by auditory signals.
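    A compressive power function, judged distance = k * (actual distance)^a with exponent a < 1, becomes a straight line in log-log space, so it can be fitted by simple linear regression. A sketch with hypothetical response data shaped like the reported pattern (the numbers are illustrative, not the study's measurements):

    ```python
    import numpy as np

    # Fit judged = k * actual^a in log-log space.
    actual = np.array([1.0, 2.0, 4.0, 8.0, 16.0])     # virtual source distance, m
    judged = np.array([1.3, 2.1, 3.2, 4.9, 7.4])      # hypothetical judgments, m

    a, log_k = np.polyfit(np.log(actual), np.log(judged), 1)
    k = np.exp(log_k)
    # a < 1 indicates compression: nearby sources are overestimated and
    # remote sources underestimated, as the abstract describes.
    ```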

  7. Neutrino Astronomy at the South Pole: latest Results from AMANDA-II

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Desiati, Paolo

    2006-07-11

    AMANDA-II is the largest neutrino telescope collecting data at the moment, and its main goal is to search for sources of high energy extra-terrestrial neutrinos. The detection of such sources could give non-controversial evidence for the acceleration of charged hadrons in cosmic objects like Supernova Remnants, Micro-quasars, Active Galactic Nuclei or Gamma Ray Bursts. No significant excess has been found in searching for neutrinos from both point-like and non-localized sources. However AMANDA-II has significantly improved analysis techniques for better signal-to-noise optimization. The km3-scale IceCube telescope will enlarge the observable energy range and improve the sensitivities of high energy neutrino searches due to its 30 times larger effective area.

  8. Monochromatic body waves excited by great subduction zone earthquakes

    NASA Astrophysics Data System (ADS)

    Ihmlé, Pierre F.; Madariaga, Raúl

    Large quasi-monochromatic body waves were excited by the 1995 Chile Mw=8.1 and by the 1994 Kurile Mw=8.3 events. They are observed on vertical/radial component seismograms following the direct P and Pdiff arrivals, at all azimuths. We devise a slant stack algorithm to characterize the source of the oscillations. This technique aims at locating near-source isotropic scatterers using broadband data from global networks. For both events, we find that the oscillations emanate from the trench. We show that these monochromatic waves are due to localized oscillations of the water column. Their period corresponds to the gravest 1D mode of a water layer for vertically traveling compressional waves. We suggest that these monochromatic body waves may yield additional constraints on the source process of great subduction zone earthquakes.
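    The gravest 1D mode of a water layer (free surface above, near-rigid seafloor below) for vertically traveling compressional waves is the quarter-wavelength resonance, T = 4H/c. A quick sketch with illustrative values (the depth is a hypothetical trench value, not one from the paper):

    ```python
    # Quarter-wavelength resonance of the water column: T = 4H / c.
    c = 1500.0     # P-wave speed in seawater, m/s
    H = 5000.0     # hypothetical water depth near the trench, m
    T = 4.0 * H / c    # period of the gravest 1D mode, s
    f = 1.0 / T        # corresponding frequency, Hz
    ```

    This is why the oscillation period ties the observed monochromatic waves to the water depth at the trench: deeper water gives a proportionally longer period.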

  9. Quantitative body fluid proteomics in medicine - A focus on minimal invasiveness.

    PubMed

    Csősz, Éva; Kalló, Gergő; Márkus, Bernadett; Deák, Eszter; Csutak, Adrienne; Tőzsér, József

    2017-02-05

    Identification of new biomarkers specific for various pathological conditions is an important field in medical sciences. Body fluids have emerging potential in biomarker studies, especially those that are continuously available and can be collected by non-invasive means. Changes in the protein composition of body fluids such as tears, saliva, sweat, etc. may provide information on both local and systemic conditions of medical relevance. In this review, our aim is to discuss the quantitative proteomics techniques used in biomarker studies, and to present advances in quantitative body fluid proteomics of non-invasively collectable body fluids with relevance to biomarker identification. The advantages and limitations of the widely used quantitative proteomics techniques are also presented. Based on the reviewed literature, we suggest an ideal pipeline for body fluid analyses aiming at biomarker discovery: starting from identification of biomarker candidates by shotgun quantitative proteomics or protein arrays, through verification of potential biomarkers by targeted mass spectrometry, to the antibody-based validation of biomarkers. The importance of body fluids as a rich source of biomarkers is discussed. Quantitative proteomics is a challenging part of proteomics applications. The body fluids collected by non-invasive means have high relevance in medicine; they are good sources for biomarkers used in establishing the diagnosis, follow-up of disease progression and predicting high-risk groups. The review presents the most widely used quantitative proteomics techniques in body fluid analysis and lists the potential biomarkers identified in tears, saliva, sweat, nasal mucus and urine for local and systemic diseases. Copyright © 2016 Elsevier B.V. All rights reserved.

  10. Localizing the sources of two independent noises: Role of time varying amplitude differences

    PubMed Central

    Yost, William A.; Brown, Christopher A.

    2013-01-01

    Listeners localized the free-field sources of either one or two simultaneous and independently generated noise bursts. Listeners' localization performance was better when localizing one rather than two sound sources. With two sound sources, localization performance was better when the listener was provided prior information about the location of one of them. Listeners also localized two simultaneous noise bursts that had sinusoidal amplitude modulation (AM) applied, in which the modulation envelope was in-phase across the two source locations or was 180° out-of-phase. The AM was employed to investigate a hypothesis as to what process listeners might use to localize multiple sound sources. The results supported the hypothesis that localization of two sound sources might be based on temporal-spectral regions of the combined waveform in which the sound from one source was more intense than that from the other source. The interaural information extracted from such temporal-spectral regions might provide reliable estimates of the sound source location that produced the more intense sound in that temporal-spectral region. PMID:23556597

  11. Localizing the sources of two independent noises: role of time varying amplitude differences.

    PubMed

    Yost, William A; Brown, Christopher A

    2013-04-01

    Listeners localized the free-field sources of either one or two simultaneous and independently generated noise bursts. Listeners' localization performance was better when localizing one rather than two sound sources. With two sound sources, localization performance was better when the listener was provided prior information about the location of one of them. Listeners also localized two simultaneous noise bursts that had sinusoidal amplitude modulation (AM) applied, in which the modulation envelope was in-phase across the two source locations or was 180° out-of-phase. The AM was employed to investigate a hypothesis as to what process listeners might use to localize multiple sound sources. The results supported the hypothesis that localization of two sound sources might be based on temporal-spectral regions of the combined waveform in which the sound from one source was more intense than that from the other source. The interaural information extracted from such temporal-spectral regions might provide reliable estimates of the sound source location that produced the more intense sound in that temporal-spectral region.

  12. Optical detection of gold nanoparticles in a prostate-shaped porcine phantom.

    PubMed

    Grabtchak, Serge; Tonkopi, Elena; Whelan, William M

    2013-07-01

    Gold nanoparticles can be used as molecular contrast agents that bind specifically to cancer sites and thus delineate tumor regions. Imaging gold nanoparticles deeply embedded in tissues with optical techniques poses significant challenges due to multiple scattering of optical photons, which blurs the obtained images. Both diagnostic and therapeutic applications can benefit from a minimally invasive technique that can identify, localize, and quantify the payloads of gold nanoparticles deeply embedded in biological tissues. An optical radiance technique is applied to map localized inclusions of gold nanorods in the 650- to 900-nm spectral range in a porcine phantom that mimics prostate geometry. Optical radiance describes the variation in the angular density of photons impinging on a selected point in the tissue from various directions. The inclusions are formed by immersing a capillary filled with gold nanorods in the phantom at increasing distances from the detecting fiber. The technique allows the isolation of the spectroscopic signatures of the inclusions from the background and the identification of inclusion locations in the angular domain. Detection of ∼4×10¹⁰ gold nanoparticles or 0.04 mg Au/mL (detector-inclusion separation 10 mm, source-detector separation 15 mm) in the porcine tissue is demonstrated. These encouraging results indicate the promising potential of radiance spectroscopy for early prostate cancer diagnostics with gold nanoparticles.

  13. The Application of Coherent Local Time for Optical Time Transfer and the Quantification of Systematic Errors in Satellite Laser Ranging

    NASA Astrophysics Data System (ADS)

    Schreiber, K. Ulrich; Kodet, Jan

    2018-02-01

    Highly precise time and stable reference frequencies are fundamental requirements for space geodesy. Satellite laser ranging (SLR) is one of these techniques, and it differs from all other applications, such as Very Long Baseline Interferometry (VLBI), Global Navigation Satellite Systems (GNSS) and Doppler Orbitography and Radiopositioning Integrated by Satellite (DORIS), in that it is an optical two-way measurement technique. This means there is no need for a clock synchronization process between the two ends of the distance covered by the measurement. Under the assumption of isotropy for the speed of light, SLR establishes the only practical realization of the Einstein synchronization process so far. It is therefore a powerful time transfer technique. However, in order to transfer time between two remote clocks, it is also necessary to tightly control all possible signal delays in the ranging process. This paper discusses the role of time and frequency in SLR as well as the error sources before it addresses the transfer of time between ground and space. The need for improved signal-delay control led to a major redesign of the local time and frequency distribution at the Geodetic Observatory Wettzell. Closure measurements can now be used to identify and remove systematic errors in SLR measurements.

  14. Tropospheric ozone using an emission tagging technique in the CAM-Chem and WRF-Chem models

    NASA Astrophysics Data System (ADS)

    Lupascu, A.; Coates, J.; Zhu, S.; Butler, T. M.

    2017-12-01

    Tropospheric ozone is a short-lived climate-forcing pollutant. High concentrations of ozone can affect human health (cardiorespiratory effects and increased mortality due to long-term exposure) and damage crops. Attributing ozone concentrations to the contributions from different sources indicates the effects of locally emitted or transported precursors on ozone levels in specific regions. This information could be an important component of the design of emission-reduction strategies, indicating which emission sources could be targeted for effective reductions and thus reducing the burden of ozone pollution. Using a "tagging" approach within the CAM-Chem (global) and WRF-Chem (regional) models, we can quantify the contribution of individual NOx and VOC precursor emissions to air quality. When precursor emissions of NOx are tagged, the largest contributors to ozone levels are anthropogenic sources, while for precursor emissions of VOCs, biogenic sources and methane account for more than 50% of ozone levels. Further, we have extended the NOx tagging method to investigate continental source-region contributions to ozone concentrations over various receptor regions across the globe, with a zoom over Europe. In general, summertime maximum ozone in most receptor regions is largely attributable to local emissions of anthropogenic NOx and biogenic VOC. During the rest of the year, especially during springtime, ozone in most receptor regions shows stronger influences from anthropogenic emissions of NOx and VOC in remote source regions.

  15. PIGC™ - A low cost fugitive emissions and methane detection system using advanced gas filter correlation techniques for local and wide area monitoring

    NASA Astrophysics Data System (ADS)

    Lachance, R. L.; Gordley, L. L.; Marshall, B. T.; Fisher, J.; Paxton, G.; Gubeli, J. F.

    2015-12-01

    Currently there is no efficient and affordable way to monitor gas releases over small to large areas. We have demonstrated the ability to accurately measure key greenhouse and pollutant gases with low-cost solar observations using the breakthrough sensor technology called the "Pupil Imaging Gas Correlation", PIGC™, which provides size and complexity reduction while offering exceptional resolution and coverage for various gas-sensing applications. It is a practical implementation of the well-known Gas Filter Correlation Radiometry (GFCR) technique used for the HALOE and MOPITT satellite instruments flown on successful NASA missions in the early 2000s. This strong space heritage brings performance and reliability to the ground instrument design. A methane (CH4) abundance sensitivity of 0.5% or better of the ambient column with uncooled microbolometers has been demonstrated with 1-second direct solar observations. These under-$10k sensors can be deployed in precisely balanced autonomous grids to monitor the flow of chosen gases and infer their source locations. Measurable gases include CH4, 13CO2, N2O, NO, NH3, CO, H2S, HCN, HCl, HF, HDO and others. A single instrument operates in a dual mode, at no additional cost: continuous (real-time 24/7) local-area perimeter monitoring for the detection of leaks for safety and security needs, looking at an artificial light source (for example a simple 60 W light bulb placed 100 m away), while simultaneously allowing solar observation for quasi-continuous wide-area total atmospheric column scanning (3-D) for environmental monitoring (fixed and mobile configurations). The second mode of operation continuously quantifies the concentration and flux of specific gases over different ground locations, determining the amount of targeted gas being released from the area or entering the area from outside locations, allowing better tracking of plumes and identification of sources. 
This paper reviews the measurement technique, performance demonstration and grid deployment strategy.

  16. PREFACE: REXS 2013 - Workshop on Resonant Elastic X-ray Scattering in Condensed Matter

    NASA Astrophysics Data System (ADS)

    Beutier, G.; Mazzoli, C.; Yakhou, F.; Brown, S. D.; Bombardi, A.; Collins, S. P.

    2014-05-01

    The aim of this workshop was to bring together experts in experimental and theoretical aspects of resonant elastic x-ray scattering, along with researchers who are new to the field, to discuss important recent results and the fundamentals of the technique. The meeting was a great success, with the first day dedicated to students and new researchers in the field, who received introductory lectures and tutorials. All conference delegates were invited either to make an oral presentation or to present a poster, accompanied by a short talk. The first two papers selected for the REXS13 proceedings (Grenier & Joly and Helliwell) give a basic background to the theory of REXS and applications across a wide range of scientific areas. The remainder of the papers report on some of the latest scientific results obtained by applying the REXS technique to contemporary problems in condensed matter, materials and x-ray physics. It is hoped that these proceedings provide a snapshot of the current status of a vibrant and diverse scientific technique that will be of value not just to those who attended the workshop but also to any other reader with an interest in the subject. 
REXS13 International Scientific Advisory Committee: M Altarelli, European XFEL, Germany; F de Bergevin, European Synchrotron Radiation Facility, France; J Garcia-Ruiz, Universidad de Zaragoza, Spain; A I Goldman, Iowa State University, USA; M Goldmann, Institut Nanosciences, France; T Schulli, European Synchrotron Radiation Facility, France; C R Natoli, Laboratori Nazionali di Frascati, Italy; G Materlik, Diamond Light Source, UK; L Paolasini, European Synchrotron Radiation Facility, France; U Staub, Paul Scherrer Institut, Switzerland; K Finkelstein, Cornell University, USA; Y Murakami, Photon Factory, Japan. 
REXS13 Local Scientific Committee: G Beutier, CNRS Grenoble, France; C Mazzoli, Politecnico di Milano, Italy; F Yakhou, European Synchrotron Radiation Facility, France; S D Brown, XMaS UK CRG, France; A Bombardi, Diamond Light Source, UK; S P Collins, Diamond Light Source, UK. http://www.rexs2013.org/

  17. Source apportion of atmospheric particulate matter: a joint Eulerian/Lagrangian approach.

    PubMed

    Riccio, A; Chianese, E; Agrillo, G; Esposito, C; Ferrara, L; Tirimberio, G

    2014-12-01

    PM2.5 samples were collected during an annual monitoring campaign (January 2012-January 2013) in the urban area of Naples, one of the major cities in Southern Italy. Samples were collected by means of a standard gravimetric sampler (Tecora Echo model) and characterized chemically by ion chromatography. As a result, 143 samples together with their ionic composition have been collected. We extend traditional source apportionment techniques, usually based on multivariate factor analysis, by interpreting the chemical analysis results within a Lagrangian framework. The Hybrid Single-Particle Lagrangian Integrated Trajectory (HYSPLIT) model was used, providing linkages to the source regions in the upwind areas. Results were analyzed in order to quantify the relative weight of different source types/areas. Model results suggested that PM concentrations are strongly affected not only by local emissions but also by transboundary emissions, especially from Eastern and Northern European countries and African Saharan dust episodes.

  18. SQUID (superconducting quantum interference device) arrays for simultaneous magnetic measurements: Calibration and source localization performance

    NASA Astrophysics Data System (ADS)

    Kaufman, Lloyd; Williamson, Samuel J.; Costa Ribeiro, P.

    1988-02-01

    Recently developed small arrays of SQUID-based magnetic sensors can, if appropriately placed, locate the position of a confined biomagnetic source without moving the array. The authors present a technique with a relative accuracy of about 2 percent for calibrating such sensors having detection coils with the geometry of a second-order gradiometer. The effects of calibration error and magnetic noise on the accuracy of locating an equivalent current dipole source in the human brain are investigated for 5- and 7-sensor probes and for a pair of 7-sensor probes. With a noise level of 5 percent of peak signal, uncertainties of about 20 percent in source strength and depth for a 5-sensor probe are reduced to 8 percent for a pair of 7-sensor probes, and uncertainties of about 15 mm in lateral position are reduced to 1 mm, for the configuration considered.

  19. A new statistical tool for NOAA local climate studies

    NASA Astrophysics Data System (ADS)

    Timofeyeva, M. M.; Meyers, J. C.; Hollingshead, A.

    2011-12-01

    The National Weather Service (NWS) Local Climate Analysis Tool (LCAT) is evolving out of a need to support and enhance National Oceanic and Atmospheric Administration (NOAA) NWS field offices' ability to efficiently access, manipulate, and interpret local climate data and to characterize climate variability and change impacts. LCAT will enable NOAA's staff to conduct regional and local climate studies using state-of-the-art station and reanalysis gridded data and various statistical techniques for climate analysis. The analysis results will be used in climate services to guide local decision makers in weather- and climate-sensitive actions and to deliver information to the general public. LCAT will augment current climate reference materials with information pertinent to the local and regional levels as it applies to diverse variables appropriate to each locality. The main emphasis of LCAT is to enable studies of extreme meteorological and hydrological events such as tornadoes, floods, droughts, severe storms, etc. LCAT will close a critical gap in NWS local climate services because it will allow addressing climate variables beyond average temperature and total precipitation. NWS external partners and government agencies will benefit from LCAT outputs that could be easily incorporated into their own analysis and/or delivery systems. To date, we have identified five existing requirements for local climate: (1) local impacts of climate change; (2) local impacts of climate variability; (3) drought studies; (4) attribution of severe meteorological and hydrological events; and (5) climate studies for water resources. The methodologies for the first three requirements will be included in the first-phase LCAT implementation. 
The local rate of climate change is defined as the slope of the mean trend estimated from an ensemble of three trend techniques: (1) hinge, (2) Optimal Climate Normals (running mean over optimal time periods), and (3) exponentially-weighted moving average. Root-mean-squared error is used to determine the trend that fits the observations with the least error. Studies of climate variability impacts on local extremes use composite techniques applied to various definitions of local variables, from specified percentiles to critical thresholds. Drought studies combine the visual capabilities of Google maps with statistical estimates of drought severity indices. The development process will be linked to local office interactions with users to ensure the tool meets their needs and that adequate training is provided. A rigorous internal and tiered peer-review process will be implemented to ensure that the studies are scientifically sound before they are published and submitted to the local studies catalog (database) and, eventually, to external sources such as the Climate Portal.
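As a rough illustration of the ensemble idea (not the actual LCAT implementation), the sketch below fits two of the named trend techniques, a trailing running mean standing in for Optimal Climate Normals and an exponentially-weighted moving average, to synthetic anomalies and selects the one with the lowest root-mean-squared error:

```python
import numpy as np

def ewma(x, alpha=0.3):
    """Exponentially-weighted moving average of a 1-D series."""
    out = np.empty_like(x, dtype=float)
    out[0] = x[0]
    for i in range(1, len(x)):
        out[i] = alpha * x[i] + (1 - alpha) * out[i - 1]
    return out

def running_mean(x, window=10):
    """Trailing running mean (a simple stand-in for Optimal Climate Normals)."""
    out = np.empty_like(x, dtype=float)
    for i in range(len(x)):
        out[i] = x[max(0, i - window + 1): i + 1].mean()
    return out

def rmse(fit, obs):
    return float(np.sqrt(np.mean((fit - obs) ** 2)))

# Synthetic annual-mean temperature anomalies with a small warming trend.
rng = np.random.default_rng(0)
years = np.arange(1950, 2011)
obs = 0.015 * (years - 1950) + rng.normal(0.0, 0.15, years.size)

fits = {"ewma": ewma(obs), "running_mean": running_mean(obs)}
best = min(fits, key=lambda name: rmse(fits[name], obs))
print({name: round(rmse(f, obs), 3) for name, f in fits.items()}, "best:", best)
```

The real tool would add the hinge fit and select the best-fitting trend the same way, by least RMSE against the observations.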

  20. As above, so below? Towards understanding inverse models in BCI

    NASA Astrophysics Data System (ADS)

    Lindgren, Jussi T.

    2018-02-01

    Objective. In brain-computer interfaces (BCI), measurements of the user’s brain activity are classified into commands for the computer. With EEG-based BCIs, the origins of the classified phenomena are often considered to be spatially localized in the cortical volume and mixed in the EEG. We investigate if more accurate BCIs can be obtained by reconstructing the source activities in the volume. Approach. We contrast the physiology-driven source reconstruction with data-driven representations obtained by statistical machine learning. We explain these approaches in a common linear dictionary framework and review the different ways to obtain the dictionary parameters. We consider the effect of source reconstruction on some major difficulties in BCI classification, namely information loss, feature selection and nonstationarity of the EEG. Main results. Our analysis suggests that the approaches differ mainly in their parameter estimation. Physiological source reconstruction may thus be expected to improve BCI accuracy if machine learning is not used or where it produces less optimal parameters. We argue that the considered difficulties of surface EEG classification can remain in the reconstructed volume and that data-driven techniques are still necessary. Finally, we provide some suggestions for comparing approaches. Significance. The present work illustrates the relationships between source reconstruction and machine learning-based approaches for EEG data representation. The provided analysis and discussion should help in understanding, applying, comparing and improving such techniques in the future.
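The linear dictionary framework discussed above can be sketched with a minimum-norm inverse, one common physiology-driven way to obtain source estimates from a known lead field. The lead field and source configuration here are random placeholders for illustration, not a real head model:

```python
import numpy as np

# Linear dictionary view of EEG: sensor data x is a mixture x = A @ s of
# source activities s. The lead-field dictionary A is assumed known here;
# in practice it would come from a physiological head model.
rng = np.random.default_rng(1)
n_sensors, n_sources = 32, 100
A = rng.normal(size=(n_sensors, n_sources))   # placeholder lead field
s_true = np.zeros(n_sources)
s_true[[10, 55]] = [1.0, -0.5]                # two active sources
x = A @ s_true + 0.01 * rng.normal(size=n_sensors)

# Minimum-norm estimate: s_hat = A^T (A A^T + lambda*I)^-1 x
lam = 1e-2
s_hat = A.T @ np.linalg.solve(A @ A.T + lam * np.eye(n_sensors), x)

# The under-determined reconstruction is spatially smeared, illustrating why
# classification difficulties can persist in the reconstructed volume.
print("largest reconstructed |activity| at source index:",
      int(np.argmax(np.abs(s_hat))))
```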

  1. Recent Advances in Active Infrared Thermography for Non-Destructive Testing of Aerospace Components.

    PubMed

    Ciampa, Francesco; Mahmoodi, Pooya; Pinto, Fulvio; Meo, Michele

    2018-02-16

    Active infrared thermography is a fast and accurate non-destructive evaluation technique that is of particular relevance to the aerospace industry for the inspection of aircraft and helicopters' primary and secondary structures, aero-engine parts, and spacecraft components and their subsystems. This review provides an exhaustive summary of the most recent active thermographic methods used for aerospace applications, organized according to their physical principle and thermal excitation sources. Besides traditional optically stimulated thermography, which uses external optical radiation such as flashes, heaters and laser systems, novel hybrid thermographic techniques are also investigated. These include ultrasonic stimulated thermography, which uses ultrasonic waves and the local damage resonance effect to enhance the reliability and sensitivity to micro-cracks, eddy current stimulated thermography, which uses cost-effective eddy current excitation to generate induction heating, and microwave thermography, which uses electromagnetic radiation at the microwave frequency bands to provide rapid detection of cracks and delamination. All these techniques are analysed here and numerous examples are provided for different damage scenarios and aerospace components in order to identify the strengths and limitations of each thermographic technique. Moreover, alternative strategies to current external thermal excitation sources, here named material-based thermography methods, are examined in this paper. These novel thermographic techniques rely on thermoresistive internal heating and offer a fast, low-power, accurate and reliable assessment of damage in aerospace composites.

  2. Recent Advances in Active Infrared Thermography for Non-Destructive Testing of Aerospace Components

    PubMed Central

    Mahmoodi, Pooya; Pinto, Fulvio; Meo, Michele

    2018-01-01

    Active infrared thermography is a fast and accurate non-destructive evaluation technique that is of particular relevance to the aerospace industry for the inspection of aircraft and helicopters' primary and secondary structures, aero-engine parts, and spacecraft components and their subsystems. This review provides an exhaustive summary of the most recent active thermographic methods used for aerospace applications, organized according to their physical principle and thermal excitation sources. Besides traditional optically stimulated thermography, which uses external optical radiation such as flashes, heaters and laser systems, novel hybrid thermographic techniques are also investigated. These include ultrasonic stimulated thermography, which uses ultrasonic waves and the local damage resonance effect to enhance the reliability and sensitivity to micro-cracks, eddy current stimulated thermography, which uses cost-effective eddy current excitation to generate induction heating, and microwave thermography, which uses electromagnetic radiation at the microwave frequency bands to provide rapid detection of cracks and delamination. All these techniques are analysed here and numerous examples are provided for different damage scenarios and aerospace components in order to identify the strengths and limitations of each thermographic technique. Moreover, alternative strategies to current external thermal excitation sources, here named material-based thermography methods, are examined in this paper. These novel thermographic techniques rely on thermoresistive internal heating and offer a fast, low-power, accurate and reliable assessment of damage in aerospace composites. PMID:29462953

  3. Evolution of consumer information preferences with market maturity in solar PV adoption

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reeves, D. Cale; Rai, Varun; Margolis, Robert

    Residential adoption of solar photovoltaics (PV) is spreading rapidly, supported by policy initiatives at the federal, state, and local levels. Potential adopters navigate increasingly complex decision-making landscapes in their path to adoption. Much is known about the individual-level drivers of solar PV diffusion that steer adopters through this process, but relatively little is known about the evolution of these drivers as solar PV markets mature. By understanding the evolution of emerging solar PV markets over time, stakeholders in the diffusion of solar PV can increase policy effectiveness and reduce costs. This analysis uses survey data to compare two adjacent markets across a range of relevant characteristics, then models changes in the importance of local vs cosmopolitan information sources by combining theory relating market maturity to adopter behavior with event-history techniques. In younger markets, earlier, innovative adoptions that are tied to a preference for cosmopolitan information sources are more prevalent than expected, suggesting a frustrated demand for solar PV that segues into adoptions fueled by local information preferences contemporary with similar adoptions in older markets. Furthermore, the analysis concludes with policy recommendations to leverage changing consumer information preferences as markets mature.

  4. Evolution of consumer information preferences with market maturity in solar PV adoption

    DOE PAGES

    Reeves, D. Cale; Rai, Varun; Margolis, Robert

    2017-07-04

    Residential adoption of solar photovoltaics (PV) is spreading rapidly, supported by policy initiatives at the federal, state, and local levels. Potential adopters navigate increasingly complex decision-making landscapes in their path to adoption. Much is known about the individual-level drivers of solar PV diffusion that steer adopters through this process, but relatively little is known about the evolution of these drivers as solar PV markets mature. By understanding the evolution of emerging solar PV markets over time, stakeholders in the diffusion of solar PV can increase policy effectiveness and reduce costs. This analysis uses survey data to compare two adjacent markets across a range of relevant characteristics, then models changes in the importance of local vs cosmopolitan information sources by combining theory relating market maturity to adopter behavior with event-history techniques. In younger markets, earlier, innovative adoptions that are tied to a preference for cosmopolitan information sources are more prevalent than expected, suggesting a frustrated demand for solar PV that segues into adoptions fueled by local information preferences contemporary with similar adoptions in older markets. Furthermore, the analysis concludes with policy recommendations to leverage changing consumer information preferences as markets mature.

  5. Localization of marine mammals near Hawaii using an acoustic propagation model

    NASA Astrophysics Data System (ADS)

    Tiemann, Christopher O.; Porter, Michael B.; Frazer, L. Neil

    2004-06-01

    Humpback whale songs were recorded on six widely spaced receivers of the Pacific Missile Range Facility (PMRF) hydrophone network near Hawaii during March of 2001. These recordings were used to test a new approach to localizing the whales that exploits the time-difference of arrival (time lag) of their calls as measured between receiver pairs in the PMRF network. The usual technique for estimating source position uses the intersection of hyperbolic curves of constant time lag, but a drawback of this approach is its assumption of a constant wave speed and straight-line propagation to associate acoustic travel time with range. In contrast to hyperbolic fixing, the algorithm described here uses an acoustic propagation model to account for waveguide and multipath effects when estimating travel time from hypothesized source positions. A comparison between predicted and measured time lags forms an ambiguity surface, or visual representation of the most probable whale position in a horizontal plane around the array. This is an important benefit because it allows for automated peak extraction to provide a location estimate. Examples of whale localizations using real and simulated data in algorithms of increasing complexity are provided.
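The ambiguity-surface idea can be sketched under the simple constant-wave-speed, straight-line assumption that the abstract contrasts with its model-based approach: predict time lags for each hypothesized position on a grid, compare with measured lags, and take the best-matching cell. Geometry and sound speed below are illustrative, not PMRF values:

```python
import numpy as np

# Toy ambiguity-surface localization from time lags between receiver pairs,
# assuming straight-line propagation at a constant sound speed.
c = 1500.0                                      # sound speed, m/s
receivers = np.array([[0.0, 0.0], [4000.0, 0.0], [0.0, 4000.0]])
source = np.array([2500.0, 1200.0])             # true (unknown) source

def lags(pos):
    """Time lags between all receiver pairs for a hypothesized position."""
    t = np.linalg.norm(receivers - pos, axis=1) / c
    return np.array([t[0] - t[1], t[0] - t[2], t[1] - t[2]])

measured = lags(source)

# Evaluate the lag mismatch over a horizontal grid; the minimum of the
# resulting ambiguity surface marks the most probable source position.
xs = ys = np.arange(0.0, 4001.0, 100.0)
surface = np.array([[np.sum((lags(np.array([x, y])) - measured) ** 2)
                     for x in xs] for y in ys])
iy, ix = np.unravel_index(np.argmin(surface), surface.shape)
print("estimated source:", xs[ix], ys[iy])       # -> 2500.0 1200.0
```

The model-based version in the paper replaces the straight-line travel times in `lags` with travel times from an acoustic propagation model, which accounts for waveguide and multipath effects.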

  6. Evolution of consumer information preferences with market maturity in solar PV adoption

    NASA Astrophysics Data System (ADS)

    Cale Reeves, D.; Rai, Varun; Margolis, Robert

    2017-07-01

    Residential adoption of solar photovoltaics (PV) is spreading rapidly, supported by policy initiatives at the federal, state, and local levels. Potential adopters navigate increasingly complex decision-making landscapes in their path to adoption. Much is known about the individual-level drivers of solar PV diffusion that steer adopters through this process, but relatively little is known about the evolution of these drivers as solar PV markets mature. By understanding the evolution of emerging solar PV markets over time, stakeholders in the diffusion of solar PV can increase policy effectiveness and reduce costs. This analysis uses survey data to compare two adjacent markets across a range of relevant characteristics, then models changes in the importance of local vs cosmopolitan information sources by combining theory relating market maturity to adopter behavior with event-history techniques. In younger markets, earlier, innovative adoptions that are tied to a preference for cosmopolitan information sources are more prevalent than expected, suggesting a frustrated demand for solar PV that segues into adoptions fueled by local information preferences contemporary with similar adoptions in older markets. The analysis concludes with policy recommendations to leverage changing consumer information preferences as markets mature.

  7. Development of an Acoustic Localization Method for Cavitation Experiments in Reverberant Environments

    NASA Astrophysics Data System (ADS)

    Ranjeva, Minna; Thompson, Lee; Perlitz, Daniel; Bonness, William; Capone, Dean; Elbing, Brian

    2011-11-01

    Cavitation is a major concern for the US Navy since it can cause ship damage and produce unwanted noise. The ability to precisely locate cavitation onset in laboratory-scale experiments is essential for designs that minimize this undesired phenomenon. Cavitation onset is more accurately determined acoustically than visually. However, if other parts of the model begin to cavitate prior to the component of interest, the acoustic data are contaminated with spurious noise. Consequently, cavitation onset is widely determined by optically locating the event of interest. The current research effort aims at developing an acoustic localization scheme for reverberant environments such as water tunnels. Currently, cavitation bubbles are induced in a static water tank with a laser, allowing the localization techniques to be refined with the bubble at a known location. The source is located using acoustic data collected with hydrophones and analyzed with signal processing techniques. To verify the accuracy of the acoustic scheme, the events are simultaneously monitored visually with a high-speed camera. Once refined, the technique will be tested in a water tunnel. This research was sponsored by the Naval Engineering Education Center (NEEC).

  8. Characterization of Localized Filament Corrosion Products at the Anodic Head on a Model Mg-Zn-Zr Alloy Surface

    DOE PAGES

    Rossouw, David; Fu, Dong; Leonard, Donovan N.; ...

    2017-02-15

    In this study, localized filament corrosion products at the anodic head on a model Mg-1%Zn-0.4%Zr alloy surface were characterized by electron microscopy of site-specific lamellae prepared by focused ion beam milling. It is revealed that the anodic head propagates underneath a largely intact, thin, and dense MgO surface film and comprises dense aggregates of nano-crystalline MgO within a nano-porous Mg(OH)2 network. In conclusion, the findings contribute new, supporting direct-imaging insight into the source of the enhanced H2 evolution that accompanies anodic dissolution of Mg and its alloys.

  9. Characterization of Localized Filament Corrosion Products at the Anodic Head on a Model Mg-Zn-Zr Alloy Surface

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rossouw, David; Fu, Dong; Leonard, Donovan N.

    In this study, localized filament corrosion products at the anodic head on a model Mg-1%Zn-0.4%Zr alloy surface were characterized by electron microscopy of site-specific lamellae prepared by focused ion beam milling. It is revealed that the anodic head propagates underneath a largely intact, thin, and dense MgO surface film and comprises dense aggregates of nano-crystalline MgO within a nano-porous Mg(OH)2 network. In conclusion, the findings contribute new, supporting direct-imaging insight into the source of the enhanced H2 evolution that accompanies anodic dissolution of Mg and its alloys.

  10. A Theoretical and Experimental Study of Acoustic Propagation in Multisectioned Circular Ducts. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Wyerman, B. R.

    1976-01-01

    The propagation of plane waves and higher order acoustic modes in a circular multisectioned duct was studied. A unique source array consisting of two concentric rings of sources, providing phase and amplitude control in the radial, as well as circumferential direction, was developed to generate plane waves and both spinning and nonspinning higher order modes. Measurements of attenuation and radial mode shapes were taken with finite length liners inserted between the hard wall sections of an anechoically terminated duct. Materials tested as liners included a glass fiber material and both sintered fiber metals and perforated sheet metals with a honeycomb backing. The fundamental acoustic properties of these materials were studied with emphasis on the attenuation of sound by the liners and the determination of local versus extended reaction behavior for the boundary condition. A search technique was developed to find the complex eigenvalues for a liner under the assumption of a locally reacting boundary condition.

  11. Understanding dynamic friction through spontaneously evolving laboratory earthquakes

    PubMed Central

    Rubino, V.; Rosakis, A. J.; Lapusta, N.

    2017-01-01

    Friction plays a key role in how ruptures unzip faults in the Earth’s crust and release waves that cause destructive shaking. Yet dynamic friction evolution is one of the biggest uncertainties in earthquake science. Here we report on novel measurements of evolving local friction during spontaneously developing mini-earthquakes in the laboratory, enabled by our ultrahigh-speed full-field imaging technique. The technique captures the evolution of displacements, velocities and stresses of dynamic ruptures, whose rupture speeds range from sub-Rayleigh to supershear. The observed friction has a complex evolution, featuring initial velocity strengthening followed by substantial velocity weakening. Our measurements are consistent with rate-and-state friction formulations supplemented with flash heating, but not with widely used slip-weakening friction laws. This study develops a new approach for measuring the local evolution of dynamic friction and has important implications for understanding earthquake hazard, since laws governing the frictional resistance of faults are vital ingredients in physically based predictive models of the earthquake source. PMID:28660876
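    The rate-and-state formulation the abstract refers to has a standard steady-state form (Dieterich-Ruina): mu_ss = mu0 + (a - b) ln(v/v0), with velocity weakening when b > a. A minimal sketch with illustrative parameter values (mu0, a, b, v0, dc are assumptions, and the flash-heating supplement mentioned in the abstract is not modeled):

```python
import numpy as np

def rss_friction(v, mu0=0.6, a=0.010, b=0.015, v0=1e-6, dc=1e-5):
    """Steady-state rate-and-state friction coefficient at slip rate v.
    With b > a the (a - b) term is negative, so friction drops as the
    slip rate rises: velocity weakening."""
    return mu0 + (a - b) * np.log(v / v0)

slow = rss_friction(1e-6)   # at the reference rate, mu_ss = mu0
fast = rss_friction(1e-3)   # three decades faster: lower friction
print(slow, fast)
```

    This reproduces only the steady-state limit; the transient strengthening-then-weakening the experiments observe requires the full state-variable evolution, which is beyond this sketch.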

  12. A simple method for EEG guided transcranial electrical stimulation without models.

    PubMed

    Cancelli, Andrea; Cottone, Carlo; Tecchio, Franca; Truong, Dennis Q; Dmochowski, Jacek; Bikson, Marom

    2016-06-01

    There is longstanding interest in using EEG measurements to inform transcranial Electrical Stimulation (tES) but adoption is lacking because users need a simple and adaptable recipe. The conventional approach is to use anatomical head-models for both source localization (the EEG inverse problem) and current flow modeling (the tES forward model), but this approach is computationally demanding, requires an anatomical MRI, and makes strict assumptions about the target brain regions. We evaluate techniques whereby tES dose is derived from EEG without the need for an anatomical head model, target assumptions, difficult case-by-case conjecture, or many stimulation electrodes. We developed a simple two-step approach to EEG-guided tES that, based on the topography of the EEG, (1) selects the locations to be used for stimulation and (2) determines the current applied to each electrode. Each step is performed based solely on the EEG with no need for head models or source localization. Cortical dipoles represent idealized brain targets. EEG-guided tES strategies are verified using a finite element method simulation of the EEG generated by a dipole, oriented either tangential or radial to the scalp surface, and then simulating the tES-generated electric field produced by each model-free technique. These model-free approaches are compared to a 'gold standard' numerically optimized dose of tES that assumes perfect understanding of the dipole location and head anatomy. We vary the number of electrodes from a few to over three hundred, with focality or intensity as the optimization criterion. Model-free approaches evaluated include (1) voltage-to-voltage, (2) voltage-to-current, (3) Laplacian, and two ad hoc techniques: (4) dipole sink-to-sink and (5) sink-to-concentric. Our results demonstrate that simple ad hoc approaches can achieve reasonable targeting for the case of a cortical dipole, remarkably with only 2-8 electrodes and no need for a model of the head.
    Our approach is verified directly only for a theoretically localized source, but may potentially be applied to an arbitrary EEG topography. For its simplicity and linearity, our recipe for model-free EEG-guided tES lends itself to broad adoption and can be applied to static (tDCS), time-variant (e.g., tACS, tRNS, tPCS), or closed-loop tES.
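    One reading of the voltage-to-current recipe is to inject stimulation current at each electrode in proportion to the recorded scalp voltage there. A minimal sketch; the mean-referencing (to enforce zero net injected current) and the 2 mA scaling are assumptions of this illustration, not the paper's exact rule:

```python
import numpy as np

def voltage_to_current(eeg_topo, peak_current_ma=2.0):
    """Map an EEG scalp topography to tES electrode currents:
    proportional to the mean-referenced voltages, scaled so the largest
    injected magnitude is peak_current_ma, and summing to zero (all
    current entering the head must also leave it)."""
    v = np.asarray(eeg_topo, dtype=float)
    v = v - v.mean()                        # zero net current
    return v / np.max(np.abs(v)) * peak_current_ma

topo = [4.0, 1.0, -1.0, -2.0, -2.0]   # hypothetical 5-electrode topography (uV)
currents = voltage_to_current(topo)
print(currents)
```

    This keeps the two-step character of the method: electrode selection is implicit (non-zero entries), and the current per electrode follows directly from the EEG with no head model.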

  13. A simple method for EEG guided transcranial electrical stimulation without models

    NASA Astrophysics Data System (ADS)

    Cancelli, Andrea; Cottone, Carlo; Tecchio, Franca; Truong, Dennis Q.; Dmochowski, Jacek; Bikson, Marom

    2016-06-01

    Objective. There is longstanding interest in using EEG measurements to inform transcranial Electrical Stimulation (tES) but adoption is lacking because users need a simple and adaptable recipe. The conventional approach is to use anatomical head-models for both source localization (the EEG inverse problem) and current flow modeling (the tES forward model), but this approach is computationally demanding, requires an anatomical MRI, and makes strict assumptions about the target brain regions. We evaluate techniques whereby tES dose is derived from EEG without the need for an anatomical head model, target assumptions, difficult case-by-case conjecture, or many stimulation electrodes. Approach. We developed a simple two-step approach to EEG-guided tES that, based on the topography of the EEG, (1) selects the locations to be used for stimulation and (2) determines the current applied to each electrode. Each step is performed based solely on the EEG with no need for head models or source localization. Cortical dipoles represent idealized brain targets. EEG-guided tES strategies are verified using a finite element method simulation of the EEG generated by a dipole, oriented either tangential or radial to the scalp surface, and then simulating the tES-generated electric field produced by each model-free technique. These model-free approaches are compared to a ‘gold standard’ numerically optimized dose of tES that assumes perfect understanding of the dipole location and head anatomy. We vary the number of electrodes from a few to over three hundred, with focality or intensity as the optimization criterion. Main results. Model-free approaches evaluated include (1) voltage-to-voltage, (2) voltage-to-current, (3) Laplacian, and two ad hoc techniques: (4) dipole sink-to-sink and (5) sink-to-concentric. Our results demonstrate that simple ad hoc approaches can achieve reasonable targeting for the case of a cortical dipole, remarkably with only 2-8 electrodes and no need for a model of the head.
    Significance. Our approach is verified directly only for a theoretically localized source, but may potentially be applied to an arbitrary EEG topography. For its simplicity and linearity, our recipe for model-free EEG-guided tES lends itself to broad adoption and can be applied to static (tDCS), time-variant (e.g., tACS, tRNS, tPCS), or closed-loop tES.

  14. MRI tools for assessment of microstructure and nephron function of the kidney.

    PubMed

    Xie, Luke; Bennett, Kevin M; Liu, Chunlei; Johnson, G Allan; Zhang, Jeff Lei; Lee, Vivian S

    2016-12-01

    MRI can provide excellent detail of renal structure and function. Recently, novel MR contrast mechanisms and imaging tools have been developed to evaluate microscopic kidney structures including the tubules and glomeruli. Quantitative MRI can assess local tubular function and is able to determine the concentrating mechanism of the kidney noninvasively in real time. Measuring single nephron function is now a near possibility. In parallel to advancing imaging techniques for kidney microstructure is a need to carefully understand the relationship between the local source of MRI contrast and the underlying physiological change. The development of these imaging markers can impact the accurate diagnosis and treatment of kidney disease. This study reviews the novel tools to examine kidney microstructure and local function and demonstrates the application of these methods in renal pathophysiology. Copyright © 2016 the American Physiological Society.

  15. Development of Advanced Signal Processing and Source Imaging Methods for Superparamagnetic Relaxometry

    PubMed Central

    Huang, Ming-Xiong; Anderson, Bill; Huang, Charles W.; Kunde, Gerd J.; Vreeland, Erika C.; Huang, Jeffrey W.; Matlashov, Andrei N.; Karaulanov, Todor; Nettles, Christopher P.; Gomez, Andrew; Minser, Kayla; Weldon, Caroline; Paciotti, Giulio; Harsh, Michael; Lee, Roland R.; Flynn, Edward R.

    2017-01-01

    Superparamagnetic Relaxometry (SPMR) is a highly sensitive technique for the in vivo detection of tumor cells and may improve early-stage detection of cancers. SPMR employs superparamagnetic iron oxide nanoparticles (SPION). After a brief magnetizing pulse is used to align the SPION, SPMR measures the time decay of the SPION using Superconducting Quantum Interference Device (SQUID) sensors. Substantial research has been carried out in developing the SQUID hardware and in improving the properties of the SPION. However, little research has been done on the pre-processing of sensor signals and post-processing source modeling in SPMR. In the present study, we illustrate new pre-processing tools that were developed to: 1) remove trials contaminated with artifacts, 2) evaluate and ensure that a single decay process associated with bound SPION exists in the data, 3) automatically detect and correct flux jumps, and 4) accurately fit the sensor signals with different decay models. Furthermore, we developed an automated approach based on a multi-start dipole imaging technique to obtain the locations and magnitudes of multiple magnetic sources without initial guesses from the users. A regularization process was implemented to solve the ambiguity issue related to the SPMR source variables. A procedure based on a reduced chi-square cost function was introduced to objectively obtain the adequate number of dipoles that describe the data. The new pre-processing tools and multi-start source imaging approach have been successfully evaluated using phantom data. In conclusion, these tools and the multi-start source modeling approach substantially enhance the accuracy and sensitivity of detecting and localizing sources from SPMR signals. Furthermore, the multi-start approach with regularization provided robust and accurate solutions under poor SNR conditions, comparable to an SPMR detection sensitivity on the order of 1000 cells.
    We believe such algorithms will help establish industrial standards for SPMR as the technique is applied in pre-clinical and clinical settings. PMID:28072579
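    The reduced chi-square rule for choosing the number of dipoles can be sketched generically: fit models of increasing order and accept the smallest one whose chi-square per degree of freedom reaches an adequacy threshold. The synthetic "source" vectors, one parameter per source, and the threshold of 1 below are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def reduced_chi_square(data, model, sigma, n_params):
    """chi^2 per degree of freedom; values near 1 indicate an adequate fit."""
    dof = data.size - n_params
    return np.sum(((data - model) / sigma) ** 2) / dof

def pick_model_order(data, models, sigma, params_per_source, threshold=1.0):
    """Return the smallest number of sources whose fit is adequate.
    models[k-1] is the best-fit prediction using k sources."""
    for k, model in enumerate(models, start=1):
        if reduced_chi_square(data, model, sigma, k * params_per_source) <= threshold:
            return k
    return len(models)

# Toy sensor data composed of two "source" field patterns.
s1 = np.array([3.0, 0.0, 1.0, 0.0])
s2 = np.array([0.0, 2.0, 0.0, 2.0])
data = s1 + s2
models = [s1, s1 + s2]        # idealized best 1-source and 2-source fits
k = pick_model_order(data, models, sigma=0.5, params_per_source=1)
print(k)
```

    The one-source fit leaves a large residual (reduced chi-square well above 1), so the rule correctly settles on two sources; real dipole fits would carry several parameters per source.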

  16. Searching the Gamma-Ray Sky for Counterparts to Gravitational Wave Sources Fermi Gamma-Ray Burst Monitor and Large Area Telescope Observations of LVT151012 and GW151226

    NASA Technical Reports Server (NTRS)

    Racusin, J. L.; Burns, E.; Goldstein, A.; Connaughton, V.; Wilson-Hodge, C. A.; Jenke, P.; Blackburn, L.; Briggs, M. S.; Broida, J.; Camp, J.; hide

    2017-01-01

    We present the Fermi Gamma-ray Burst Monitor (GBM) and Large Area Telescope (LAT) observations of the LIGO binary black hole merger event GW151226 and candidate LVT151012. At the time of the LIGO triggers on LVT151012 and GW151226, GBM was observing 68% and 83% of the localization regions, and LAT was observing 47% and 32%, respectively. No candidate electromagnetic counterparts were detected by either the GBM or LAT. We present a detailed analysis of the GBM and LAT data over a range of timescales from seconds to years, using automated pipelines and new techniques for characterizing the flux upper bounds across large areas of the sky. Due to the partial GBM and LAT coverage of the large LIGO localization regions at the trigger times for both events, differences in source distances and masses, as well as the uncertain degree to which emission from these sources could be beamed, these non-detections cannot be used to constrain the variety of theoretical models recently applied to explain the candidate GBM counterpart to GW150914.

  17. GIS-based multielement source analysis of dustfall in Beijing: A study of 40 major and trace elements.

    PubMed

    Luo, Nana; An, Li; Nara, Atsushi; Yan, Xing; Zhao, Wenji

    2016-06-01

    Dust, an important carrier of inorganic and organic pollutants, is something humans are exposed to daily without any protection. It adversely affects our health, especially through its chemical elements and ions. In this research, we investigated the chemical characteristics of dustfall in Beijing, specifically in terms of 40 major and trace elements, and present semi-quantitative evaluations of the relative local and remote contributions. In total, 58 samples were collected in Beijing and nearby cities during the 2013-2014 winter heating period. Using multiple statistical methods and GIS techniques, we obtained the relative similarities among certain elements and identified their pollution sources (local or from nearby cities). More interestingly, the relative contributions of nearby cities could be calculated with the HYSPLIT4 backward-trajectory model. In addition, correlation analysis of the 40 elements in dust and soil indicated that traffic restricted interchange between them; the city center, with the heaviest traffic, had the most significant influence. Finally, the resulting source apportionment was examined and refined using land-use data and terrain information. We hope this work provides a strong basis for environmental protection and risk assessment. Copyright © 2016 Elsevier Ltd. All rights reserved.

  18. Searching the Gamma-Ray Sky for Counterparts to Gravitational Wave Sources: Fermi Gamma-Ray Burst Monitor and Large Area Telescope Observations of LVT151012 and GW151226

    DOE PAGES

    Racusin, J. L.; Burns, E.; Goldstein, A.; ...

    2017-01-19

    Here, we present the Fermi Gamma-ray Burst Monitor (GBM) and Large Area Telescope (LAT) observations of the LIGO binary black hole merger event GW151226 and candidate LVT151012. At the time of the LIGO triggers on LVT151012 and GW151226, GBM was observing 68% and 83% of the localization regions, and LAT was observing 47% and 32%, respectively. No candidate electromagnetic counterparts were detected by either the GBM or LAT. We present a detailed analysis of the GBM and LAT data over a range of timescales from seconds to years, using automated pipelines and new techniques for characterizing the flux upper bounds across large areas of the sky. Finally, due to the partial GBM and LAT coverage of the large LIGO localization regions at the trigger times for both events, differences in source distances and masses, as well as the uncertain degree to which emission from these sources could be beamed, these non-detections cannot be used to constrain the variety of theoretical models recently applied to explain the candidate GBM counterpart to GW150914.

  19. SEARCHING THE GAMMA-RAY SKY FOR COUNTERPARTS TO GRAVITATIONAL WAVE SOURCES: FERMI GAMMA-RAY BURST MONITOR AND LARGE AREA TELESCOPE OBSERVATIONS OF LVT151012 AND GW151226

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Racusin, J. L.; Camp, J.; Singer, L.

    2017-01-20

    We present the Fermi Gamma-ray Burst Monitor (GBM) and Large Area Telescope (LAT) observations of the LIGO binary black hole merger event GW151226 and candidate LVT151012. At the time of the LIGO triggers on LVT151012 and GW151226, GBM was observing 68% and 83% of the localization regions, and LAT was observing 47% and 32%, respectively. No candidate electromagnetic counterparts were detected by either the GBM or LAT. We present a detailed analysis of the GBM and LAT data over a range of timescales from seconds to years, using automated pipelines and new techniques for characterizing the flux upper bounds across large areas of the sky. Due to the partial GBM and LAT coverage of the large LIGO localization regions at the trigger times for both events, differences in source distances and masses, as well as the uncertain degree to which emission from these sources could be beamed, these non-detections cannot be used to constrain the variety of theoretical models recently applied to explain the candidate GBM counterpart to GW150914.

  20. Sound source localization method in an environment with flow based on Amiet-IMACS

    NASA Astrophysics Data System (ADS)

    Wei, Long; Li, Min; Qin, Sheng; Fu, Qiang; Yang, Debin

    2017-05-01

    A sound source localization method is proposed to localize and analyze sound sources in an environment with airflow. It combines the improved mapping of acoustic correlated sources (IMACS) method and Amiet's method, and is called Amiet-IMACS. It can localize uncorrelated and correlated sound sources with airflow. To implement this approach, Amiet's method is used to correct the sound propagation path in 3D, which improves the accuracy of the array manifold matrix and decreases the position error of the localized source. Then, the mapping of acoustic correlated sources (MACS) method, which is a high-resolution sound source localization algorithm, is improved by self-adjusting the constraint parameter at each iteration to increase convergence speed. A sound source localization experiment using a pair of loudspeakers in an anechoic wind tunnel under different flow speeds is conducted. The experiment exhibits the advantage of Amiet-IMACS in localizing the sound source position more accurately than IMACS alone in an environment with flow. Moreover, the aerodynamic noise produced by a NASA EPPLER 862 STRUT airfoil model in airflow with a velocity of 80 m/s is localized using the proposed method, which further proves its effectiveness in a flow environment. Finally, the relationship between the source position of this airfoil model and its frequency, along with its generation mechanism, is determined and interpreted.

  1. Distributed single source coding with side information

    NASA Astrophysics Data System (ADS)

    Vila-Forcen, Jose E.; Koval, Oleksiy; Voloshynovskiy, Sviatoslav V.

    2004-01-01

    In this paper we advocate an image compression technique within the distributed source coding framework. The novelty of the proposed approach is twofold: classical image compression is considered from the standpoint of source coding with side information and, contrary to existing scenarios where side information is given explicitly, side information is created based on a deterministic approximation of local image features. We consider an image in the transform domain as a realization of a source with a bounded codebook of symbols, where each symbol represents a particular edge shape. The codebook is image-independent and plays the role of an auxiliary source. Due to the partial availability of side information at both encoder and decoder, we treat our problem as a modification of the Berger-Flynn-Gray problem and investigate the possible gain over solutions where side information is either unavailable or available only at the decoder. Finally, we present a practical compression algorithm for passport photo images based on our concept that demonstrates superior performance in the very-low-bit-rate regime.

  2. Investigations on the effect of frequency and noise in a localization technique based on microwave imaging for an in-body RF source

    NASA Astrophysics Data System (ADS)

    Chandra, Rohit; Balasingham, Ilangko

    2015-05-01

    Localization of a wireless capsule endoscope finds many clinical applications, from diagnostics to therapy. There are potentially two approaches to electromagnetic-wave-based localization: a) signal-propagation-model-based localization using a priori information about the person's dielectric channels, and b) recently developed microwave-imaging-based localization that does not use any a priori information about the person's dielectric channels. In this paper, we study the second approach in terms of a variety of frequencies and signal-to-noise ratios for localization accuracy. To this end, we select a 2-D anatomically realistic numerical phantom for microwave imaging at different frequencies. The selected frequencies are 13.56 MHz, 431.5 MHz, 920 MHz, and 2380 MHz, which are typically considered for medical applications. Microwave imaging of a phantom provides an electromagnetic model with electrical properties (relative permittivity and conductivity) of the internal parts of the body and can be useful as a foundation for localization of an in-body RF source. Low-frequency imaging at 13.56 MHz provides a low-resolution image with high contrast in the dielectric properties. However, at high frequencies, the imaging algorithm is able to image only the outer boundaries of the tissues due to low penetration depth, as higher frequency means higher attenuation. Furthermore, the recently developed localization method based on microwave imaging is used for estimating the localization accuracy at different frequencies and signal-to-noise ratios. Statistical evaluation of the localization error is performed using the cumulative distribution function (CDF). Based on our results, we conclude that the localization accuracy is minimally affected by the frequency or the noise. However, the choice of frequency becomes critical if the purpose of the method is to image the internal parts of the body for tumor and/or cancer detection.
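    The statistical evaluation via a CDF of localization error can be sketched with an empirical CDF; the error values below are hypothetical, not the study's data:

```python
import numpy as np

def empirical_cdf(errors):
    """Return sorted errors and P(error <= x) at each sorted value."""
    e = np.sort(np.asarray(errors, dtype=float))
    p = np.arange(1, e.size + 1) / e.size
    return e, p

def error_at_percentile(errors, q):
    """Localization error not exceeded with probability q (e.g. q=0.9)."""
    e, p = empirical_cdf(errors)
    return e[np.searchsorted(p, q)]

errs = [2.0, 5.0, 1.0, 4.0, 3.0, 8.0, 2.5, 6.0, 3.5, 1.5]  # hypothetical errors (mm)
p90 = error_at_percentile(errs, 0.9)
print(p90)
```

    Comparing such CDFs across frequencies and SNRs is exactly the kind of evaluation that supports the paper's conclusion that accuracy is minimally affected by either.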

  3. Distributed Transforms for Efficient Data Gathering in Sensor Networks

    NASA Technical Reports Server (NTRS)

    Ortega, Antonio (Inventor); Shen, Godwin (Inventor); Narang, Sunil K. (Inventor); Perez-Trufero, Javier (Inventor)

    2014-01-01

    Devices, systems, and techniques for data-collecting networks such as wireless sensor networks are disclosed. A described technique includes detecting one or more remote nodes included in the wireless sensor network using a local power level that controls the radio range of the local node. The technique includes transmitting a local outdegree. The local outdegree can be based on a quantity of the one or more remote nodes. The technique includes receiving one or more remote outdegrees from the one or more remote nodes. The technique includes determining a local node type of the local node based on detecting a node type of the one or more remote nodes, using the one or more remote outdegrees, and using the local outdegree. The technique includes adjusting characteristics, including an energy usage characteristic and a data compression characteristic, of the wireless sensor network by selectively modifying the local power level and selectively changing the local node type.
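    The patent describes the node-type decision only abstractly. A hypothetical decision rule, purely for illustration (the aggregator/raw distinction and the compare-against-neighbors test are assumptions, not the patent's claimed logic):

```python
def choose_node_type(local_outdegree, remote_outdegrees):
    """Hypothetical rule: a node that reaches more neighbors than any of
    its neighbors do acts as an aggregator (compressing data it relays);
    otherwise it stays a raw forwarding node."""
    if remote_outdegrees and local_outdegree > max(remote_outdegrees):
        return "aggregator"
    return "raw"

print(choose_node_type(5, [2, 3, 4]))   # well-connected node
print(choose_node_type(2, [3, 5]))      # less-connected node
```

    The point of such a rule is that each node decides its role from locally exchanged outdegrees alone, with no global coordination, which matches the adjustable energy/compression trade-off the abstract describes.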

  4. Alternative control techniques document: NOx emissions from industrial/commercial/institutional (ICI) boilers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1994-03-01

    Industrial, commercial, and institutional (ICI) boilers have been identified as a category that emits more than 25 tons of oxides of nitrogen (NOx) per year. This alternative control techniques (ACT) document provides technical information for use by State and local agencies to develop and implement regulatory programs to control NOx emissions from ICI boilers. Additional ACT documents are being developed for other stationary source categories. Chapter 2 summarizes the findings of this study. Chapter 3 presents information on the ICI boiler types, fuels, operation, and industry applications. Chapter 4 discusses NOx formation and uncontrolled NOx emission factors. Chapter 5 covers alternative control techniques and achievable controlled emission levels. Chapter 6 presents the cost and cost effectiveness of each control technique. Chapter 7 describes environmental and energy impacts associated with implementing the NOx control techniques. Finally, Appendices A through G provide the detailed data used in this study to evaluate uncontrolled and controlled emissions and the costs of controls for several retrofit scenarios.

  5. Information Processing Techniques Program. Volume 1. Packet Speech Systems Technology

    DTIC Science & Technology

    1980-03-31

    DMA transfer is enabled from the 2652 serial I/O device to the buffer memory. This enables automatic reception of an incoming packet without CPU...conference speaker. Producing multiple copies at the source wastes network bandwidth and is likely to cause local overload conditions for a large... wasted. If the setup fails because ST can find no route with sufficient capacity, the phone will have rung and possibly been answered, but the call will

  6. Coherent Optical Adaptive Techniques (COAT)

    DTIC Science & Technology

    1973-02-01

    quarter-wave plate and frequency shifter twice. The polarization-rotated wave is then partially reflected by the beamsplitters B1, B2, B3 to provide a...between the beamsplitters B1 and B2. This causes a change in the relative phase of the local oscillator to the detectors and, consequently, a change in...tracking. The basic method is illustrated in Figure T-1. There, an array of laser beams, derived from a single laser source, is shown with provision

  7. Spatially-resolved probing of biological phantoms by point-radiance spectroscopy

    NASA Astrophysics Data System (ADS)

    Grabtchak, Serge; Palmer, Tyler J.; Whelan, William M.

    2011-03-01

    Interstitial fiber-optic-based strategies for therapy monitoring and assessment rely on detecting treatment-induced changes in the light distribution in biological tissues. We present an optical technique to identify spectrally and spatially specific tissue chromophores in highly scattering turbid media. Typical optical sensors measure non-directional light intensity (i.e. fluence) and require fiber translation (i.e. 3-5 positions), which is difficult to implement clinically. Point radiance spectroscopy is based on directional light collection (i.e. radiance) at a single point with a side-firing fiber that can be rotated up to 360°. A side-firing fiber accepts light within a well-defined solid angle, thus potentially providing improved spatial resolution. Experimental measurements were performed using an 800-μm diameter isotropic spherical diffuser coupled to a halogen light source and a 600 μm, ~43° cleaved fiber (i.e. radiance detector). The background liquid scattering phantom was fabricated using 1% Intralipid (i.e. scattering medium). Light was collected at 1-5° increments through a 360° segment. Gold nanoparticles, placed into a 3.5 mm diameter capillary tube, were used as localized scatterers and absorbers introduced into the liquid phantom both on- and off-axis between source and detector. The localized optical inhomogeneity was detectable as an angular-resolved variation in the radiance polar plots. This technique is being investigated as a non-invasive optical modality for prostate cancer monitoring.

  8. Millisecond Microwave Spikes: Statistical Study and Application for Plasma Diagnostics

    NASA Astrophysics Data System (ADS)

    Rozhansky, I. V.; Fleishman, G. D.; Huang, G.-L.

    2008-07-01

    We analyze a dense cluster of solar radio spikes registered at 4.5-6 GHz by the Purple Mountain Observatory spectrometer (Nanjing, China), operating in the 4.5-7.5 GHz range with 5 ms temporal resolution. To handle the data from the spectrometer, we developed a new technique that uses a nonlinear multi-Gaussian spectral fit based on χ2 criteria to extract individual spikes from the originally recorded spectra. Applying this method to the experimental raw data, we eventually identified about 3000 spikes for this event, which allows us to make a detailed statistical analysis. Various statistical characteristics of the spikes have been evaluated, including the intensity distributions, the spectral bandwidth distributions, and the distribution of the spike mean frequencies. The most striking finding of this analysis is the distribution of the spike bandwidths, which is remarkably asymmetric. To reveal the underlying microphysics, we explore the local-trap model with the renormalized theory of spectral profiles of the electron cyclotron maser (ECM) emission peak in a source with random magnetic irregularities. The distribution of the solar spike relative bandwidths calculated within the local-trap model represents an excellent fit to the experimental data. Accordingly, the developed technique may offer a new tool with which to study very low levels of magnetic turbulence in the spike sources, when the ECM mechanism of the spike cluster is confirmed.
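    Extracting individual spikes by fitting a sum of Gaussians to a spectrum can be sketched with a standard least-squares fit; the band, spike parameters, and use of SciPy's curve_fit (rather than the authors' own χ2-based pipeline) are illustrative assumptions:

```python
import numpy as np
from scipy.optimize import curve_fit

def multi_gauss(f, *p):
    """Sum of Gaussians; p = (A1, f1, w1, A2, f2, w2, ...)."""
    y = np.zeros_like(f)
    for a, mu, w in zip(p[0::3], p[1::3], p[2::3]):
        y = y + a * np.exp(-0.5 * ((f - mu) / w) ** 2)
    return y

f = np.linspace(4.5, 6.0, 300)               # GHz, observed band
truth = (10.0, 4.9, 0.03, 6.0, 5.4, 0.05)    # two synthetic spikes
spec = multi_gauss(f, *truth)                # noise-free toy spectrum

guess = (8.0, 4.85, 0.05, 5.0, 5.45, 0.05)   # rough initial parameters
popt, _ = curve_fit(multi_gauss, f, spec, p0=guess)
print(popt[1], popt[4])                      # fitted spike center frequencies
```

    Each fitted triple (amplitude, center, width) corresponds to one extracted spike, from which bandwidth and mean-frequency distributions like those in the study could be accumulated.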

  9. Source attribution of black carbon and its direct radiative forcing in China

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Yang; Wang, Hailong; Smith, Steven J.

    The source attributions for mass concentration, haze formation, transport, and direct radiative forcing of black carbon (BC) in various regions of China are quantified in this study using the Community Earth System Model (CESM) with a source-tagging technique. Anthropogenic emissions are from the Community Emissions Data System, newly developed for the Coupled Model Intercomparison Project Phase 6 (CMIP6). Over north China, where air quality is often poor, about 90 % of near-surface BC concentration is contributed by local emissions. Overall, 35 % of BC concentration over south China in winter can be attributed to emissions from north China, and 19 % comes from sources outside China in spring. For other regions in China, BC is largely contributed from nonlocal sources. We further investigated potential factors that contribute to the poor air quality in China. During polluted days, a net inflow of BC transported from nonlocal source regions associated with anomalous winds plays an important role in increasing local BC concentrations. BC-containing particles emitted from East Asia can also be transported across the Pacific. Our model results show that emissions from inside and outside China are equally important for the BC outflow from East Asia, while emissions from China account for 8 % of BC concentration and 29 % of the column burden in the western United States in spring. Radiative forcing estimates show that 65 % of the annual mean BC direct radiative forcing (2.2 W m⁻²) in China results from local emissions, and the remaining 35 % is contributed by emissions outside of China. Efficiency analysis shows that a reduction in BC emissions over eastern China could have a greater benefit for the regional air quality in China, especially in the winter haze season.

  10. Source attribution of black carbon and its direct radiative forcing in China

    DOE PAGES

    Yang, Yang; Wang, Hailong; Smith, Steven J.; ...

    2017-03-30

    The source attributions for mass concentration, haze formation, transport and direct radiative forcing of black carbon (BC) in various regions of China are quantified in this study using the Community Earth System Model (CESM) with a source-tagging technique. Anthropogenic emissions are from the Community Emissions Data System, newly developed for the Coupled Model Intercomparison Project Phase 6 (CMIP6). Over north China, where air quality is often poor, about 90 % of the near-surface BC concentration is contributed by local emissions. Overall, 35 % of the BC concentration over south China in winter can be attributed to emissions from north China, and 19 % comes from sources outside China in spring. For other regions in China, BC is largely contributed from nonlocal sources. We further investigated potential factors that contribute to the poor air quality in China. During polluted days, a net inflow of BC transported from nonlocal source regions associated with anomalous winds plays an important role in increasing local BC concentrations. BC-containing particles emitted from East Asia can also be transported across the Pacific. Our model results show that emissions from inside and outside China are equally important for the BC outflow from East Asia, while emissions from China account for 8 % of the BC concentration and 29 % of the column burden in the western United States in spring. Radiative forcing estimates show that 65 % of the annual mean BC direct radiative forcing (2.2 W m-2) in China results from local emissions, and the remaining 35 % is contributed by emissions outside of China. Efficiency analysis shows that a reduction in BC emissions over eastern China could have a greater benefit for the regional air quality in China, especially in the winter haze season.

  11. Techniques for optimizing nanotips derived from frozen taylor cones

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hirsch, Gregory

    Optimization techniques are disclosed for producing sharp and stable tips/nanotips relying on liquid Taylor cones created from electrically conductive materials with high melting points. A wire substrate of such a material, with a preform end in the shape of a regular or concave cone, is first melted with a focused laser beam. Under the influence of a high positive potential, a Taylor cone in a liquid/molten state is formed at that end. The cone is then quenched upon cessation of the laser power, thus freezing the Taylor cone. The tip of the frozen Taylor cone is reheated by the laser to allow its precise localized melting and shaping. Tips thus obtained yield desirable end-forms suitable as electron field emission sources for a variety of applications. In-situ regeneration of the tip is readily accomplished. These tips can also be employed as regenerable bright ion sources using field ionization/desorption of introduced chemical species.

  12. A new hue capturing technique for the quantitative interpretation of liquid crystal images used in convective heat transfer studies

    NASA Technical Reports Server (NTRS)

    Camci, C.; Kim, K.; Hippensteele, S. A.

    1992-01-01

    A new image processing based color capturing technique for the quantitative interpretation of liquid crystal images used in convective heat transfer studies is presented. This method is highly applicable to the surfaces exposed to convective heating in gas turbine engines. It is shown that, in the single-crystal mode, many of the colors appearing on the heat transfer surface correlate strongly with the local temperature. A very accurate quantitative approach using an experimentally determined linear hue vs temperature relation is found to be possible. The new hue-capturing process is discussed in terms of the strength of the light source illuminating the heat transfer surface, the effect of the orientation of the illuminating source with respect to the surface, crystal layer uniformity, and the repeatability of the process. The present method is more advantageous than the multiple filter method because of its ability to generate many isotherms simultaneously from a single-crystal image at a high resolution in a very time-efficient manner.
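    The linear hue-to-temperature calibration described above can be sketched as follows; the calibration numbers and image are hypothetical placeholders, not values from the study.

```python
import numpy as np

def rgb_to_hue(rgb):
    """Vectorized RGB -> hue (degrees, 0-360) for an array shaped (..., 3)."""
    rgb = np.asarray(rgb, dtype=float)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    mx, mn = rgb.max(axis=-1), rgb.min(axis=-1)
    c = np.where(mx - mn == 0, 1.0, mx - mn)   # avoid division by zero
    h = np.select(
        [mx == r, mx == g],
        [((g - b) / c) % 6, (b - r) / c + 2],
        default=(r - g) / c + 4)
    return np.where(mx == mn, 0.0, 60.0 * h)

# Calibration: hue values observed at pixels held at known temperatures
# (the numbers below are hypothetical, for illustration only).
hue_cal = np.array([40.0, 80.0, 120.0, 160.0])        # degrees
temp_cal = np.array([30.0, 31.0, 32.0, 33.0])         # deg C

slope, intercept = np.polyfit(hue_cal, temp_cal, 1)   # linear hue -> T fit

def hue_to_temperature(hue):
    return slope * hue + intercept

# Map an RGB liquid-crystal image to a temperature field (all pixels at once)
image = np.array([[[0, 255, 0]]], dtype=float)        # one pure-green pixel, hue 120
temperature_map = hue_to_temperature(rgb_to_hue(image))
```

    Because every pixel is converted at once, a single crystal image yields a full temperature field, which is how the single-image method produces many isotherms simultaneously.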

  13. Single particle characterization, source apportionment, and aging effects of ambient aerosols in Southern California

    NASA Astrophysics Data System (ADS)

    Shields, Laura Grace

    Composed of a mixture of chemical species and phases and existing in a variety of shapes and sizes, atmospheric aerosols are complex and can have a serious influence on human health, the environment, and climate. In order to better understand the impact of aerosols on local to global scales, detailed measurements of the physical and chemical properties of ambient particles are essential. In addition, knowing the origin or source of the aerosols is important for policymakers to implement targeted regulations and effective control strategies to reduce air pollution in their region. One of the most groundbreaking techniques in aerosol instrumentation is single particle mass spectrometry (SPMS), which can provide online chemical composition and size information at the individual particle level. The primary focus of this work is to further improve the ability of one specific SPMS technique, aerosol time-of-flight mass spectrometry (ATOFMS), to identify the specific origin of ambient aerosols, which is known as source apportionment. The ATOFMS source apportionment method utilizes a library of distinct source mass spectral signatures to match the chemical information of the single ambient particles. The unique signatures are obtained in controlled source characterization studies, such as with the exhaust emissions of heavy-duty diesel vehicles (HDDV) operating on a dynamometer. The apportionment of ambient aerosols is complicated by the chemical and physical processes an individual particle can undergo as it spends time in the atmosphere, which is referred to as "aging" of the aerosol. Therefore, the performance of the source signature library technique was investigated on the ambient dataset of the highly aged environment of Riverside, California. Additionally, two specific subsets of the Riverside dataset (ultrafine particles and particles containing trace metals), which are known to cause adverse health effects, were probed in greater detail. Finally, the impact of large wildfires on the ambient levels of particulate matter in Southern California is discussed. The results of this work provide insight into single particles impacting the Southern California region, the relative source contributions to this region, and finally an examination of how atmospheric aging influences the ability to perform source apportionment.
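    The library-matching step described above can be sketched as a dot-product (cosine) similarity between unit-normalized spectra, in the spirit of the ART-2a-style matching commonly used with ATOFMS data; the signatures, channel count and threshold below are hypothetical.

```python
import numpy as np

def match_particle(spectrum, library, threshold=0.7):
    """Assign a particle mass spectrum to the most similar source signature.

    Spectra are unit-normalized so the dot product is a cosine similarity.
    Returns (source_name, score), or (None, score) when no signature
    exceeds the threshold (an unapportioned, possibly aged, particle).
    """
    s = np.asarray(spectrum, dtype=float)
    s = s / np.linalg.norm(s)
    best_name, best_score = None, -1.0
    for name, sig in library.items():
        v = np.asarray(sig, dtype=float)
        score = float(s @ (v / np.linalg.norm(v)))
        if score > best_score:
            best_name, best_score = name, score
    return (best_name if best_score >= threshold else None, best_score)

# Hypothetical 5-channel source signatures (e.g. peak areas at selected m/z)
library = {
    "diesel_exhaust": [0.1, 0.8, 0.1, 0.0, 0.0],
    "sea_salt":       [0.0, 0.0, 0.2, 0.7, 0.1],
}
particle = [0.12, 0.75, 0.13, 0.0, 0.0]
source, score = match_particle(particle, library)
```

    Atmospheric aging distorts a particle's spectrum away from its source signature, which lowers the similarity score; this is one concrete way the "aging" problem discussed above degrades apportionment.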

  14. Portable microcontroller-based instrument for near-infrared spectroscopy

    NASA Astrophysics Data System (ADS)

    Giardini, Mario E.; Corti, Mario; Lago, Paolo; Gelmetti, Andrea

    2000-05-01

    Near-IR spectroscopy (NIRS) can be employed to noninvasively and continuously measure in-vivo local changes in the haemodynamics and oxygenation of human tissues. In particular, the technique can be especially useful for muscular functional monitoring. We present a portable NIRS research-grade acquisition system prototype, strictly dedicated to low-noise measurements during muscular exercise. The prototype is able to control four LED sources and a detector. Such a number of sources allows for multipoint measurements or for multi-wavelength spectroscopy of tissue constituents other than oxygen, such as cytochrome aa3 oxidation. The LEDs and the detector are mounted on separate probes, which also carry the relevant drivers and preamplifiers. By employing surface-mount technologies, probe size and weight are kept to a minimum. A single-chip mixed-signal RISC microcontroller performs source-to-detector multiplexing with a digital correlation technique. The acquired data are stored in an on-board 64 K EEPROM bank and can subsequently be uploaded to a personal computer via a serial port for further analysis. The resulting instrument is compact and lightweight. Preliminary tests of the prototype on oxygen consumption during tourniquet-induced forearm ischaemia show adequate detectivity and time response.
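    The haemodynamic quantities NIRS recovers are conventionally obtained from two-wavelength optical-density changes via the modified Beer-Lambert law; a sketch follows, with placeholder extinction coefficients and geometry rather than the instrument's actual calibration.

```python
import numpy as np

# Modified Beer-Lambert law:
#   dOD(lambda) = (eps_HbO2 * dHbO2 + eps_HHb * dHHb) * L * DPF
# Rows: two wavelengths; columns: [HbO2, HHb] extinction coefficients.
# All numbers below are illustrative placeholders.
E = np.array([[1.5, 3.8],    # ~760 nm: deoxy-haemoglobin absorbs more
              [2.8, 1.8]])   # ~850 nm: oxy-haemoglobin absorbs more
L, DPF = 3.0, 6.0            # source-detector distance (cm), differential pathlength factor

def concentration_changes(d_od):
    """Invert the 2x2 system for (dHbO2, dHHb) from optical-density changes."""
    return np.linalg.solve(E * L * DPF, np.asarray(d_od, dtype=float))

# Forward-simulate an oxygenation change, then recover it from the ODs
true_dc = np.array([0.02, -0.01])          # (dHbO2, dHHb)
d_od = E @ true_dc * L * DPF
recovered = concentration_changes(d_od)
```

    With more than two wavelengths (as the four-LED prototype allows), the same system becomes overdetermined and can be solved by least squares for additional chromophores such as cytochrome aa3.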

  15. Time-resolved multicolor two-photon excitation fluorescence microscopy of cells and tissues

    NASA Astrophysics Data System (ADS)

    Zheng, Wei

    2014-11-01

    Multilabeling, which maps the distribution of different targets, is an indispensable technique in many biochemical and biophysical studies. Two-photon excitation fluorescence (TPEF) microscopy of endogenous fluorophores, combined with conventional fluorescence labeling techniques such as genetically encoded fluorescent proteins (FPs) and fluorescent dye staining, can be a powerful tool for imaging living cells. However, the challenge is that the excitation and emission wavelengths of these endogenous fluorophores and fluorescent labels are very different. A multi-color ultrafast source is required for the excitation of multiple fluorescent molecules. In this study, we developed a two-photon imaging system with excitations from the pump femtosecond laser and the selected supercontinuum generated from a photonic crystal fiber (PCF). Multiple endogenous fluorophores, fluorescent proteins and fluorescent dyes were excited at their optimal wavelengths simultaneously. A time- and spectral-resolved detection system was used to record the TPEF signals. This detection technique separated the TPEF signals from multiple sources in the time and wavelength domains. Cellular organelles, such as the nucleus, mitochondria, microtubules and endoplasmic reticulum, were clearly revealed in the TPEF images. The simultaneous imaging of multiple fluorophores in cells will greatly aid the study of sub-cellular compartments and protein localization.

  16. Fourier descriptor analysis and unification of voice range profile contours: method and applications.

    PubMed

    Pabon, Peter; Ternström, Sten; Lamarche, Anick

    2011-06-01

    To describe a method for unified description, statistical modeling, and comparison of voice range profile (VRP) contours, even from diverse sources. A morphologic modeling technique, which is based on Fourier descriptors (FDs), is applied to the VRP contour. The technique, which essentially involves resampling of the curve of the contour, is assessed and also is compared to density-based VRP averaging methods that use the overlap count. VRP contours can be usefully described and compared using FDs. The method also permits the visualization of the local covariation along the contour average. For example, the FD-based analysis shows that the population variance for ensembles of VRP contours is usually smallest at the upper left part of the VRP. To illustrate the method's advantages and possible further application, graphs are given that compare the averaged contours from different authors and recording devices--for normal, trained, and untrained male and female voices as well as for child voices. The proposed technique allows any VRP shape to be brought to the same uniform base. On this uniform base, VRP contours or contour elements coming from a variety of sources may be placed within the same graph for comparison and for statistical analysis.
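    The resampling-plus-Fourier-descriptor step can be sketched as follows; the normalization choices (dropping the DC term, scaling by the first harmonic) are common FD conventions assumed here, not necessarily the authors' exact recipe.

```python
import numpy as np

def fourier_descriptors(x, y, n_samples=64, n_keep=10):
    """Low-order Fourier descriptors of a closed contour.

    The contour is resampled to n_samples points, treated as a complex
    signal z = x + iy, and transformed with the FFT.  Dropping the DC term
    removes translation; dividing by |Z[1]| removes scale.
    """
    t = np.linspace(0.0, 1.0, len(x))
    tq = np.linspace(0.0, 1.0, n_samples, endpoint=False)
    z = np.interp(tq, t, x) + 1j * np.interp(tq, t, y)   # uniform resampling
    Z = np.fft.fft(z)
    Z[0] = 0.0                                           # translation invariance
    Z = Z / np.abs(Z[1])                                 # scale invariance
    return Z[1:n_keep + 1]

# A circle and a shifted, doubled circle yield identical descriptors,
# so contours from different recording setups land on the same basis.
theta = np.linspace(0.0, 2.0 * np.pi, 200)
fd1 = fourier_descriptors(np.cos(theta), np.sin(theta))
fd2 = fourier_descriptors(2 * np.cos(theta) + 5.0, 2 * np.sin(theta) - 3.0)
```

    Averaging such descriptor vectors over an ensemble of VRP contours, and examining their per-coefficient variance, is one way the "covariation along the contour average" mentioned above can be quantified.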

  17. Capillary plasma jet: A low volume plasma source for life science applications

    NASA Astrophysics Data System (ADS)

    Topala, I.; Nagatsu, M.

    2015-02-01

    In this letter, we present results from a multispectroscopic analysis of protein films after exposure to a particular plasma source, the capillary plasma jet. This plasma source is able to generate very small pulsed plasma volumes, in the kilohertz range, with characteristic dimensions smaller than 1 mm. This leads to specific microscale generation and transport of all plasma species. Plasma diagnostics were performed using general electrical and optical methods. Depending on the power level and exposure duration, this miniature plasma jet can induce controllable modifications to soft-matter targets. Detailed discussions of protein film oxidation and chemical etching are supported by results from absorption, X-ray photoelectron spectroscopy, and microscopy techniques. Further exploitation of the principles presented here may consolidate research interests involving plasmas in biotechnologies and plasma medicine, especially in patterning technologies, modified biomolecule arrays, and local chemical functionalization.

  18. Tutorial on the Psychophysics and Technology of Virtual Acoustic Displays

    NASA Technical Reports Server (NTRS)

    Wenzel, Elizabeth M.; Null, Cynthia (Technical Monitor)

    1998-01-01

    Virtual acoustics, also known as 3-D sound and auralization, is the simulation of the complex acoustic field experienced by a listener within an environment. Going beyond the simple intensity panning of normal stereo techniques, the goal is to process sounds so that they appear to come from particular locations in three-dimensional space. Although loudspeaker systems are being developed, most of the recent work focuses on using headphones for playback and is the outgrowth of earlier analog techniques. For example, in binaural recording, the sound of an orchestra playing classical music is recorded through small mics in the two "ear canals" of an anthropomorphic artificial or "dummy" head placed in the audience of a concert hall. When the recorded piece is played back over headphones, the listener passively experiences the illusion of hearing the violins on the left and the cellos on the right, along with all the associated echoes, resonances, and ambience of the original environment. Current techniques use digital signal processing to synthesize the acoustical properties that people use to localize a sound source in space. Thus, they provide the flexibility of a kind of digital dummy head, allowing a more active experience in which a listener can both design and move around or interact with a simulated acoustic environment in real time. Such simulations are being developed for a variety of application areas including architectural acoustics, advanced human-computer interfaces, telepresence and virtual reality, navigation aids for the visually-impaired, and as a test bed for psychoacoustical investigations of complex spatial cues. The tutorial will review the basic psychoacoustical cues that determine human sound localization and the techniques used to measure these cues as Head-Related Transfer Functions (HRTFs) for the purpose of synthesizing virtual acoustic environments. 
The only conclusive test of the adequacy of such simulations is an operational one in which the localization of real and synthesized stimuli are directly compared in psychophysical studies. To this end, the results of psychophysical experiments examining the perceptual validity of the synthesis technique will be reviewed and factors that can enhance perceptual accuracy and realism will be discussed. Of particular interest is the relationship between individual differences in HRTFs and in behavior, the role of reverberant cues in reducing the perceptual errors observed with virtual sound sources, and the importance of developing perceptually valid methods of simplifying the synthesis technique. Recent attempts to implement the synthesis technique in real time systems will also be discussed and an attempt made to interpret their quoted system specifications in terms of perceptual performance. Finally, some critical research and technology development issues for the future will be outlined.
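    The core synthesis operation described above, filtering a mono source with a left/right pair of head-related impulse responses, can be sketched with a toy HRIR pair that encodes only an interaural time and level difference (made-up values, not measured HRTFs).

```python
import numpy as np

fs = 44100
itd_samples = 30          # interaural time difference (~0.68 ms), toy value
ild_gain = 0.5            # interaural level difference, toy value

# Toy head-related impulse responses: right ear = delayed, attenuated left ear.
# A real system would use measured HRIRs, which also encode spectral cues.
hrir_left = np.zeros(64);  hrir_left[0] = 1.0
hrir_right = np.zeros(64); hrir_right[itd_samples] = ild_gain

rng = np.random.default_rng(0)
mono = rng.standard_normal(2048)          # any mono source signal

# Spatialization = per-ear convolution with the HRIR pair
left = np.convolve(mono, hrir_left)
right = np.convolve(mono, hrir_right)

# The interaural lag can be verified by cross-correlating the two channels
xc = np.correlate(right, left, mode="full")
lag = int(np.argmax(xc)) - (len(left) - 1)
```

    Real-time systems interpolate between HRIRs measured at many directions and update the filters as the listener or source moves, which is where the perceptual validity questions discussed above arise.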

  19. Water-sanitation-hygiene mapping: an improved approach for data collection at local level.

    PubMed

    Giné-Garriga, Ricard; de Palencia, Alejandro Jiménez-Fernández; Pérez-Foguet, Agustí

    2013-10-01

    Strategic planning and appropriate development and management of water and sanitation services are strongly supported by accurate and accessible data. If adequately exploited, these data might assist water managers with performance monitoring, benchmarking comparisons, policy progress evaluation, resource allocation, and decision making. A variety of tools and techniques are in place to collect such information. However, some methodological weaknesses arise when developing an instrument for routine data collection, particularly at the local level: i) comparability problems due to heterogeneity of indicators, ii) poor reliability of collected data, iii) inadequate combination of different information sources, and iv) statistical validity of produced estimates when disaggregated into small geographic subareas. This study proposes an improved approach for water, sanitation and hygiene (WASH) data collection at the decentralised level in low-income settings, as an attempt to overcome previous shortcomings. The ultimate aim is to provide local policymakers with strong evidence to inform their planning decisions. The survey design takes Water Point Mapping (WPM) as a starting point to record all available water sources at a particular location. This information is then linked to data produced by a household survey. Different survey instruments are implemented to collect reliable data by employing a variety of techniques, such as structured questionnaires, direct observation and water quality testing. The collected data are finally validated through simple statistical analysis, which in turn produces valuable outputs that might feed into the decision-making process. In order to demonstrate the applicability of the method, outcomes produced from three different case studies (Homa Bay District, Kenya; Kibondo District, Tanzania; and the Municipality of Manhiça, Mozambique) are presented. Copyright © 2013 Elsevier B.V. All rights reserved.

  20. Robust sampling-sourced numerical retrieval algorithm for optical energy loss function based on log-log mesh optimization and local monotonicity preserving Steffen spline

    NASA Astrophysics Data System (ADS)

    Maglevanny, I. I.; Smolar, V. A.

    2016-01-01

    We introduce a new technique for interpolation of the energy-loss function (ELF) in solids sampled by empirical optical spectra. Finding appropriate interpolation methods for ELFs poses several challenges. The sampled ELFs are usually very heterogeneous and can originate from various sources, so that so-called "data gaps" can appear, and significant discontinuities and multiple high outliers can be present. As a result, an interpolation based on those data may not perform well at predicting reasonable physical results. Reliable interpolation tools suitable for ELF applications should therefore satisfy several important demands: accuracy and predictive power, robustness and computational efficiency, and ease of use. We examined the effect of different interpolation schemes on the fitting quality, with emphasis on ELF mesh optimization procedures, and we argue that the optimal fitting should be based on a preliminary log-log scaling data transform, by which the non-uniformity of the sampled data distribution may be considerably reduced. The transformed data are then interpolated by a local monotonicity preserving Steffen spline. The result is a piece-wise smooth fitting curve with continuous first-order derivatives that passes through all data points without spurious oscillations. Local extrema can occur only at grid points, where they are given by the data, but not between two adjacent grid points. We find that the proposed technique gives the most accurate results and that its computational time is short. Thus, it is feasible to use this simple method to address practical problems associated with the interaction between a bulk material and a moving electron. A compact C++ implementation of our algorithm is also presented.
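    The log-log-plus-Steffen-spline pipeline can be sketched in Python as follows (the paper presents a C++ implementation); the endpoint-slope treatment here is a simplification of Steffen's original scheme, and the sample data are illustrative.

```python
import numpy as np

def steffen_slopes(x, y):
    """Node slopes from Steffen (1990): monotonicity-limited weighted secants."""
    h = np.diff(x)                 # interval widths
    s = np.diff(y) / h             # secant slopes
    m = np.empty_like(y)
    p = (s[:-1] * h[1:] + s[1:] * h[:-1]) / (h[:-1] + h[1:])
    m[1:-1] = (np.sign(s[:-1]) + np.sign(s[1:])) * np.minimum(
        np.minimum(np.abs(s[:-1]), np.abs(s[1:])), 0.5 * np.abs(p))
    m[0], m[-1] = s[0], s[-1]      # simplified one-sided endpoint slopes
    return m

def steffen_eval(x, y, xq):
    """Evaluate the piecewise cubic Hermite interpolant with Steffen slopes."""
    m = steffen_slopes(x, y)
    i = np.clip(np.searchsorted(x, xq) - 1, 0, len(x) - 2)
    h = x[i + 1] - x[i]
    t = (xq - x[i]) / h
    h00 = 2 * t**3 - 3 * t**2 + 1; h10 = t**3 - 2 * t**2 + t
    h01 = -2 * t**3 + 3 * t**2;    h11 = t**3 - t**2
    return h00 * y[i] + h * h10 * m[i] + h01 * y[i + 1] + h * h11 * m[i + 1]

def elf_interp(energy, elf, energy_q):
    """Log-log transform, interpolate, transform back (both axes positive)."""
    lx, ly = np.log(energy), np.log(elf)
    return np.exp(steffen_eval(lx, ly, np.log(energy_q)))

# Sample points spanning several decades, as optical ELF data typically do
e = np.array([1.0, 10.0, 100.0, 1000.0])
f = np.array([0.5, 2.0, 0.1, 0.01])
eq = np.array([1.0, 5.0, 100.0, 1000.0])
vals = elf_interp(e, f, eq)
```

    The log-log transform makes the decade-spaced samples nearly uniform before fitting, and the Steffen slope limiter guarantees the interpolant stays between neighbouring data values, i.e., no spurious oscillations.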

  1. An evaluation of talker localization based on direction of arrival estimation and statistical sound source identification

    NASA Astrophysics Data System (ADS)

    Nishiura, Takanobu; Nakamura, Satoshi

    2002-11-01

    It is very important to capture distant-talking speech for a hands-free speech interface with high quality. A microphone array is an ideal candidate for this purpose. However, this approach requires localizing the target talker. Conventional talker localization algorithms in multiple-sound-source environments not only have difficulty localizing multiple sound sources accurately, but also have difficulty localizing the target talker among known multiple sound source positions. To cope with these problems, we propose a new talker localization algorithm consisting of two algorithms. One is a DOA (direction of arrival) estimation algorithm for multiple sound source localization based on the CSP (cross-power spectrum phase) coefficient addition method. The other is a statistical sound source identification algorithm based on a GMM (Gaussian mixture model) for localizing the target talker position among the localized multiple sound sources. In this paper, we particularly focus on the talker localization performance of the combination of these two algorithms with a microphone array. We conducted evaluation experiments in real noisy reverberant environments. As a result, we confirmed that multiple sound signals can be accurately identified as ''speech'' or ''non-speech'' by the proposed algorithm. [Work supported by ATR and MEXT of Japan.]
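    The CSP coefficient for one microphone pair is the inverse FFT of the phase-normalized cross-power spectrum (equivalently, GCC-PHAT); its peak lag gives the time difference of arrival, which maps to a bearing. A minimal sketch with an idealized circular delay and illustrative geometry:

```python
import numpy as np

fs, c, d = 16000, 343.0, 0.2      # sample rate, speed of sound (m/s), mic spacing (m)
true_delay = 5                    # samples by which mic 2 lags mic 1

rng = np.random.default_rng(1)
x1 = rng.standard_normal(4096)
x2 = np.roll(x1, true_delay)      # circularly delayed copy (idealized propagation)

# CSP (cross-power spectrum phase) coefficients, a.k.a. GCC-PHAT:
# whiten the cross spectrum so only phase (i.e., delay) information remains.
X1, X2 = np.fft.rfft(x1), np.fft.rfft(x2)
cross = X2 * np.conj(X1)
csp = np.fft.irfft(cross / (np.abs(cross) + 1e-12))

lag = int(np.argmax(csp))
if lag > len(x1) // 2:            # wrap large indices to negative lags
    lag -= len(x1)

tdoa = lag / fs                                            # TDOA in seconds
theta = np.degrees(np.arcsin(np.clip(c * tdoa / d, -1.0, 1.0)))
```

    The "coefficient addition" in the abstract refers to summing such CSP functions over several microphone pairs, which sharpens the peaks when multiple sources are present.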

  2. Instantaneous phase estimation to measure weak velocity variations: application to noise correlation on seismic data at the exploration scale

    NASA Astrophysics Data System (ADS)

    Corciulo, M.; Roux, P.; Campillo, M.; Dubucq, D.

    2010-12-01

    Passive imaging from noise cross-correlation is a well-established analysis at continental and regional scales, whereas its use at the local scale for seismic exploration purposes is still uncertain. The development of passive imaging by cross-correlation analysis is based on the extraction of the Green's function from seismic noise data. In a field that is completely random in time and space, the cross-correlation permits retrieval of the complete Green's function whatever the complexity of the medium. At the exploration scale, and at frequencies above 2 Hz, the noise sources are not ideally distributed around the stations, which strongly affects the extraction of the direct arrivals from the noise cross-correlation process. In order to overcome this problem, the coda waves extracted from noise correlation can be useful. Coda waves describe long, scattered paths sampling the medium in different ways, such that they become sensitive to weak velocity variations without being dependent on the noise source distribution. Indeed, scatterers in the medium behave as a set of secondary noise sources that randomize the spatial distribution of noise sources contributing to the coda waves in the correlation process. We developed a new technique to measure weak velocity changes based on the computation of the local phase variations (instantaneous phase variation, or IPV) of the cross-correlated signals. This newly developed technique builds on the doublet and stretching techniques classically used to monitor weak velocity variations from coda waves. We apply IPV to data acquired in North America (Canada) on a 1-km-side square seismic network of 397 stations. The data used to study temporal variations are cross-correlated signals computed on 10 minutes of ambient noise in the frequency band 2-5 Hz. As the data set was acquired over five days, about 660 files are processed to perform a complete temporal analysis for each station pair. The IPV permits estimation of the phase shift over the entire signal length without any assumption on the medium velocity. The instantaneous phase is computed using the Hilbert transform of the signal. For each station pair, we measure the phase difference between successive correlation functions calculated for 10 minutes of ambient noise. We then fit the instantaneous phase shift with a first-order polynomial function. The measure of the velocity variation corresponds to the slope of this fit. Compared to other techniques, the advantage of IPV is that it is a very fast procedure that efficiently provides the measure of velocity variation on large data sets. Both experimental results and numerical tests on synthetic signals will be presented to assess the reliability of the IPV technique, in comparison with the doublet and stretching methods.
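    On a synthetic monochromatic signal, the IPV chain (Hilbert-transform instantaneous phase, unwrapped phase difference, first-order polynomial fit whose slope gives dv/v) can be sketched as follows; the real method applies this to successive noise-correlation functions, and all numbers here are illustrative.

```python
import numpy as np

def instantaneous_phase(x):
    """Phase of the analytic signal, built with an FFT-based Hilbert transform."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    h[1:n // 2] = 2.0          # double positive frequencies
    h[n // 2] = 1.0            # n assumed even
    return np.angle(np.fft.ifft(X * h))

fs, f0, dv_v = 100.0, 5.0, 0.01           # Hz, Hz, imposed relative velocity change
t = np.arange(0, 10.0, 1.0 / fs)

ref = np.sin(2 * np.pi * f0 * t)                      # reference correlation
cur = np.sin(2 * np.pi * f0 * (1.0 - dv_v) * t)       # perturbed: lapse times stretch by dv/v

dphi = np.unwrap(instantaneous_phase(cur) - instantaneous_phase(ref))

# Fit the phase shift with a first-order polynomial; its slope gives dv/v
edge = len(t) // 10                                    # trim Hilbert edge effects
slope = np.polyfit(t[edge:-edge], dphi[edge:-edge], 1)[0]
dv_v_est = -slope / (2 * np.pi * f0)
```

    Because only one Hilbert transform, one unwrap and one linear fit are needed per correlation pair, this is far cheaper than grid-searching stretch factors, which is the speed advantage claimed above.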

  3. Setting priorities for research on pollution reduction functions of agricultural buffers.

    PubMed

    Dosskey, Michael G

    2002-11-01

    The success of buffer installation initiatives and programs to reduce nonpoint source pollution of streams on agricultural lands will depend on the ability of local planners to locate and design buffers for specific circumstances with substantial and predictable results. Current predictive capabilities are inadequate, and major sources of uncertainty remain. An assessment of these uncertainties cautions that there is greater risk of overestimating buffer impact than underestimating it. Priorities for future research are proposed that will lead more quickly to major advances in predictive capabilities. Highest priority is given to work on the surface runoff filtration function, which is almost universally important to the amount of pollution reduction expected from buffer installation and for which major sources of uncertainty remain in predicting the level of impact. The foremost uncertainties surround the extent and consequences of runoff flow concentration and pollutant accumulation. Other buffer functions, including filtration of groundwater nitrate and stabilization of channel erosion sources of sediments, may be important in some regions. However, uncertainty surrounds our ability to identify and quantify the extent of site conditions where buffer installation can substantially reduce stream pollution in these ways. Deficiencies in predictive models reflect gaps in experimental information as well as in technology to account for the spatial heterogeneity of pollutant sources, pathways, and buffer capabilities across watersheds. Since completion of a comprehensive watershed-scale buffer model is probably far off, immediate needs call for simpler techniques to gauge the probable impacts of buffer installation at local scales.

  4. An application of the theory of planned behaviour to study the influencing factors of participation in source separation of food waste.

    PubMed

    Karim Ghani, Wan Azlina Wan Ab; Rusli, Iffah Farizan; Biak, Dayang Radiah Awang; Idris, Azni

    2013-05-01

    Tremendous increases in biodegradable (food) waste generation significantly impact the local authorities, who are responsible for managing, treating and disposing of this waste. The separation of food waste at its generation source is identified as an effective means of reducing the amount of food waste sent to landfill; the separated waste can be reused as feedstock for downstream treatment processes, namely composting or anaerobic digestion. However, these efforts will only succeed with positive public attitudes towards, and high participation rates in, the scheme. Thus, a social survey (using questionnaires) analysing the public's views and the factors influencing participation in household source separation of food waste, based on the theory of planned behaviour (TPB), was performed in June and July 2011 among selected staff at Universiti Putra Malaysia, Serdang, Selangor. The survey demonstrates that the public has a positive intention to participate provided that the opportunities, facilities and knowledge on waste separation at source are adequately prepared by the respective local authorities. Furthermore, good moral values and situational factors such as storage convenience and collection times also encourage public involvement and, consequently, the participation rate. The findings from this study may provide a useful indicator to the waste management authorities in Malaysia for identifying mechanisms for the future development and implementation of food waste source separation activities in household programmes, and for communication campaigns which advocate the use of these programmes. Copyright © 2012 Elsevier Ltd. All rights reserved.

  5. OpenNFT: An open-source Python/Matlab framework for real-time fMRI neurofeedback training based on activity, connectivity and multivariate pattern analysis.

    PubMed

    Koush, Yury; Ashburner, John; Prilepin, Evgeny; Sladky, Ronald; Zeidman, Peter; Bibikov, Sergei; Scharnowski, Frank; Nikonorov, Artem; Van De Ville, Dimitri

    2017-08-01

    Neurofeedback based on real-time functional magnetic resonance imaging (rt-fMRI) is a novel and rapidly developing research field. It allows for training of voluntary control over localized brain activity and connectivity and has demonstrated promising clinical applications. Because of the rapid technical developments of MRI techniques and the availability of high-performance computing, new methodological advances in rt-fMRI neurofeedback become possible. Here we outline the core components of a novel open-source neurofeedback framework, termed Open NeuroFeedback Training (OpenNFT), which efficiently integrates these new developments. This framework is implemented using Python and Matlab source code to allow for diverse functionality, high modularity, and rapid extendibility of the software depending on the user's needs. In addition, it provides an easy interface to the functionality of Statistical Parametric Mapping (SPM) that is also open-source and one of the most widely used fMRI data analysis software. We demonstrate the functionality of our new framework by describing case studies that include neurofeedback protocols based on brain activity levels, effective connectivity models, and pattern classification approaches. This open-source initiative provides a suitable framework to actively engage in the development of novel neurofeedback approaches, so that local methodological developments can be easily made accessible to a wider range of users. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.

  6. Spatio-temporal Reconstruction of Neural Sources Using Indirect Dominant Mode Rejection.

    PubMed

    Jafadideh, Alireza Talesh; Asl, Babak Mohammadzadeh

    2018-04-27

    Adaptive minimum variance based beamformers (MVB) have been successfully applied to magnetoencephalogram (MEG) and electroencephalogram (EEG) data to localize brain activities. However, the performance of these beamformers degrades in situations where correlated or interference sources exist. To overcome this problem, we propose applying the indirect dominant mode rejection (iDMR) beamformer to brain source localization. By modifying the measurement covariance matrix, this method makes the MVB applicable to source localization in the presence of correlated and interference sources. Numerical results on both EEG and MEG data demonstrate that the presented approach accurately reconstructs the time courses of active sources and localizes those sources with high spatial resolution. In addition, results on real AEF data show the good performance of iDMR in empirical situations. Hence, iDMR can be reliably used for brain source localization, especially when correlated and interference sources are present.
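    The standard minimum-variance (Capon) scan that iDMR builds on can be sketched on a simulated half-wavelength linear array (not the paper's MEG/EEG lead fields); the iDMR covariance modification itself is omitted, and all simulation parameters are illustrative.

```python
import numpy as np

def steering(theta_deg, m):
    """Steering vector of an m-element half-wavelength-spaced linear array."""
    k = np.pi * np.sin(np.radians(theta_deg))
    return np.exp(1j * k * np.arange(m))

M, T = 8, 400
rng = np.random.default_rng(2)

# One source at 20 degrees plus sensor noise
a_src = steering(20.0, M)
s = rng.standard_normal(T) + 1j * rng.standard_normal(T)
noise = 0.1 * (rng.standard_normal((M, T)) + 1j * rng.standard_normal((M, T)))
X = np.outer(a_src, s) + noise

R = X @ X.conj().T / T + 1e-3 * np.eye(M)     # sample covariance + diagonal loading
Rinv = np.linalg.inv(R)

# Minimum-variance (Capon) spatial spectrum: P(theta) = 1 / (a^H R^-1 a)
grid = np.arange(-90.0, 90.5, 0.5)
P = np.array([1.0 / np.real(steering(th, M).conj() @ Rinv @ steering(th, M))
              for th in grid])
theta_hat = grid[int(np.argmax(P))]
```

    When a second source is correlated with the first, the sample covariance no longer separates them and this scan degrades, which is the failure mode the iDMR covariance modification is designed to repair.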

  7. Wearable Sensor Localization Considering Mixed Distributed Sources in Health Monitoring Systems

    PubMed Central

    Wan, Liangtian; Han, Guangjie; Wang, Hao; Shu, Lei; Feng, Nanxing; Peng, Bao

    2016-01-01

    In health monitoring systems, the base station (BS) and the wearable sensors communicate with each other to construct a virtual multiple-input multiple-output (VMIMO) system. In real applications, the signal that the BS receives is a distributed source because of the scattering, reflection, diffraction and refraction in the propagation path. In this paper, a 2D direction-of-arrival (DOA) estimation algorithm for incoherently-distributed (ID) and coherently-distributed (CD) sources is proposed based on multiple VMIMO systems. ID and CD sources are separated through the second-order blind identification (SOBI) algorithm. The traditional estimation of signal parameters via rotational invariance techniques (ESPRIT)-based algorithm is valid only for one-dimensional (1D) DOA estimation for the ID source. By constructing the signal subspace, two rotational invariance relationships are constructed. Then, we extend ESPRIT to estimate 2D DOAs for ID sources. For DOA estimation of CD sources, two rotational invariance relationships are constructed based on the application of generalized steering vectors (GSVs). Then, the ESPRIT-based algorithm is used for estimating the eigenvalues of the two rotational invariance matrices, which contain the angular parameters. The expressions of azimuth and elevation for ID and CD sources have closed forms, which means that spectrum peak searching is avoided. Therefore, compared to traditional 2D DOA estimation algorithms, the proposed algorithm imposes significantly lower computational complexity. The intersection point of two rays, which come from two different directions measured by two uniform rectangular arrays (URAs), can be regarded as the location of the biosensor (wearable sensor). Three BSs adopting the smart antenna (SA) technique cooperate with each other to locate the wearable sensors using the angulation positioning method. Simulation results demonstrate the effectiveness of the proposed algorithm. PMID:26985896

  8. Wearable Sensor Localization Considering Mixed Distributed Sources in Health Monitoring Systems.

    PubMed

    Wan, Liangtian; Han, Guangjie; Wang, Hao; Shu, Lei; Feng, Nanxing; Peng, Bao

    2016-03-12

    In health monitoring systems, the base station (BS) and the wearable sensors communicate with each other to construct a virtual multiple-input multiple-output (VMIMO) system. In real applications, the signal received by the BS is a distributed source because of scattering, reflection, diffraction and refraction along the propagation path. In this paper, a 2D direction-of-arrival (DOA) estimation algorithm for incoherently-distributed (ID) and coherently-distributed (CD) sources is proposed based on multiple VMIMO systems. ID and CD sources are separated through the second-order blind identification (SOBI) algorithm. The traditional algorithm based on estimating signal parameters via rotational invariance techniques (ESPRIT) is valid only for one-dimensional (1D) DOA estimation of ID sources. By constructing the signal subspace, two rotational invariance relationships are obtained, and ESPRIT is extended to estimate 2D DOAs for ID sources. For DOA estimation of CD sources, two rotational invariance relationships are constructed based on generalized steering vectors (GSVs), and the ESPRIT-based algorithm is used to estimate the eigenvalues of the two rotational invariance matrices, which contain the angular parameters. The expressions of azimuth and elevation for ID and CD sources have closed forms, so spectral peak searching is avoided. Therefore, compared with traditional 2D DOA estimation algorithms, the proposed algorithm has significantly lower computational complexity. The intersection point of two rays, coming from the two directions measured by two uniform rectangular arrays (URAs), can be regarded as the location of the biosensor (wearable sensor). Three BSs adopting the smart antenna (SA) technique cooperate to locate the wearable sensors using the angulation positioning method. Simulation results demonstrate the effectiveness of the proposed algorithm.
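The angulation positioning step described in the abstract, intersecting two bearing rays measured at different base stations, can be sketched in a few lines. This is an illustrative reconstruction, not the authors' code; the function name and the 2D coordinate conventions are assumptions.

```python
import math

def angulation_2d(p1, az1, p2, az2):
    """Locate a source at the intersection of two bearing rays.

    p1, p2: (x, y) positions of the two arrays.
    az1, az2: bearings (radians, measured from the +x axis) of the
    source as estimated by each array's DOA algorithm.
    """
    # Each ray: p_i + t_i * (cos az_i, sin az_i); solve for t1, t2.
    d1 = (math.cos(az1), math.sin(az1))
    d2 = (math.cos(az2), math.sin(az2))
    det = d1[0] * (-d2[1]) - (-d2[0]) * d1[1]
    if abs(det) < 1e-12:
        raise ValueError("bearings are parallel; no unique intersection")
    rx, ry = p2[0] - p1[0], p2[1] - p1[1]
    t1 = (rx * (-d2[1]) - (-d2[0]) * ry) / det
    return (p1[0] + t1 * d1[0], p1[1] + t1 * d1[1])

# Two base stations at (0, 0) and (10, 0) both sight a sensor at (4, 3).
az1 = math.atan2(3, 4)    # bearing from BS1
az2 = math.atan2(3, -6)   # bearing from BS2
print(angulation_2d((0, 0), az1, (10, 0), az2))  # ≈ (4.0, 3.0)
```

A third station, as in the abstract, would over-determine the fix and allow a least-squares position estimate instead of a single intersection.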

  9. Quantum Theory of Superresolution for Incoherent Optical Imaging

    NASA Astrophysics Data System (ADS)

    Tsang, Mankei

    Rayleigh's criterion for resolving two incoherent point sources has been the most influential measure of optical imaging resolution for over a century. In the context of statistical image processing, violation of the criterion is especially detrimental to the estimation of the separation between the sources, and modern far-field superresolution techniques rely on suppressing the emission of close sources to enhance the localization precision. Using quantum optics, quantum metrology, and statistical analysis, here we show that, even if two close incoherent sources emit simultaneously, measurements with linear optics and photon counting can estimate their separation from the far field almost as precisely as conventional methods do for isolated sources, rendering Rayleigh's criterion irrelevant to the problem. Our results demonstrate that superresolution can be achieved not only for fluorophores but also for stars. Recent progress in generalizing our theory for multiple sources and spectroscopy will also be discussed. This work is supported by the Singapore National Research Foundation under NRF Grant No. NRF-NRFF2011-07 and the Singapore Ministry of Education Academic Research Fund Tier 1 Project R-263-000-C06-112.
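For context, the classical Rayleigh limit that this work argues is not fundamental is simple to compute: the minimum resolvable angular separation for a circular aperture is about 1.22 λ/D. The numbers below (550 nm light, a 2.4 m aperture) are illustrative and not taken from the paper.

```python
import math

def rayleigh_limit(wavelength_m, aperture_m):
    """Classical Rayleigh criterion: minimum resolvable angular
    separation (radians) of two incoherent point sources."""
    return 1.22 * wavelength_m / aperture_m

# 550 nm light through a 2.4 m telescope aperture:
theta = rayleigh_limit(550e-9, 2.4)
print(theta)  # ~2.8e-7 rad, i.e. roughly 0.06 arcsec
```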

  10. Taking potential probability function maps to the local scale and matching them with land use maps

    NASA Astrophysics Data System (ADS)

    Garg, Saryu; Sinha, Vinayak; Sinha, Baerbel

    2013-04-01

    Source-receptor models have been developed using different methods. Residence-time-weighted concentration back-trajectory analysis and the Potential Source Contribution Function (PSCF) are the two most popular techniques for identifying potential sources of a substance in a defined geographical area. Both techniques use back trajectories calculated with global models and assign probability/concentration values to locations in an area. These values represent the probability of threshold exceedances or the average concentration measured at the receptor in air masses with a certain residence time over a source area. Both techniques, however, have only been applied to regional and long-range transport phenomena due to inherent limitations in both the spatial accuracy and the temporal resolution of the back-trajectory calculations. Employing the above-mentioned concepts of residence-time-weighted concentration back-trajectory analysis and PSCF, we developed a source-receptor model capable of identifying local and regional sources of air pollutants such as particulate matter (PM), NOx, SO2 and VOCs. We use 1 to 30 minute averages of concentration values and of wind direction and speed from a single receptor site or from multiple receptor sites to trace the air mass back in time. The model code assumes all atmospheric transport to be Lagrangian and linearly extrapolates air masses reaching the receptor location backwards in time for a fixed number of steps. We restrict the model run to the lifetime of the chemical species under consideration. For long-lived species the model run is limited to < 4 hrs, as spatial uncertainty increases the longer an air mass is linearly extrapolated back in time. The final model output is a map, which can be compared with the local land use map to pinpoint sources of different chemical substances and estimate their source strength. 
Our model has flexible space-time grid extrapolation steps of 1-5 minutes and 1-5 km grid resolution. By making use of high temporal resolution data, our model can produce maps for different times of the day, thus accounting for temporal changes and activity profiles of different sources. The main advantage of our approach over geostatistical numerical methods that interpolate measured concentration values from multiple measurement sites to produce maps (gridding) is that the maps produced are more accurate in terms of spatial identification of sources. The model was applied to isoprene and meteorological data recorded during the clean post-monsoon season (1 October - 7 October 2012) between 11 am and 4 pm at a receptor site in the north-west Indo-Gangetic Plain (IISER Mohali, 30.665° N, 76.729° E, 300 m asl), near the foothills of the Himalayan range. Considering the lifetime of isoprene, the model was run only 2 hours backward in time. The map shows the highest residence-time-weighted concentrations of isoprene (up to 3.5 ppbv) over agricultural land with a high number of trees (>180 trees/gridsquare) and moderate concentrations (1.5-2.5 ppbv) over agricultural land with low tree density; concentrations above 250 μg/m3 are observed for traffic hotspots in Chandigarh City. Based on the validation against the land use maps, the model appears to do an excellent job of source apportionment and of identifying emission hotspots. Acknowledgement: We thank the IISER Mohali Atmospheric Chemistry Facility for data, and the Ministry of Human Resource Development (MHRD), India and IISER Mohali for funding the facility. Chinmoy Sarkar is acknowledged for technical support. Saryu Garg thanks the Max Planck-DST India Partner Group on Tropospheric OH reactivity and VOCs for funding the research.
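The core step of the model described above, linearly extrapolating the air mass backwards from the receptor under a Lagrangian assumption, can be sketched as follows. This is an illustrative reconstruction, not the authors' code; the function names and the meteorological "blowing-from" wind-direction convention are assumptions.

```python
import math

def back_trajectory(receptor_xy, wind_records, step_min=5, n_steps=24):
    """Linearly extrapolate an air mass backwards from the receptor.

    wind_records: list of (speed_m_s, direction_deg) per time step,
    newest first; direction is the meteorological 'blowing-from'
    convention (0 deg = wind from north). Coordinates: x east, y north.
    Returns the list of (x, y) positions (metres), newest first.
    """
    x, y = receptor_xy
    path = [(x, y)]
    dt = step_min * 60.0
    for speed, wdir in wind_records[:n_steps]:
        # Stepping back in time moves the parcel toward where the
        # air came FROM, i.e. the upwind direction.
        theta = math.radians(wdir)
        x += speed * dt * math.sin(theta)
        y += speed * dt * math.cos(theta)
        path.append((x, y))
    return path

# Steady 2 m/s northerly wind (blowing from 0 deg = north):
path = back_trajectory((0.0, 0.0), [(2.0, 0.0)] * 4, step_min=5)
print(path[-1])  # the air mass was 2400 m to the north 20 min earlier
```

Residence-time weighting then amounts to accumulating, per grid cell, the time each back-extrapolated parcel spends there, weighted by the concentration measured at the receptor.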

  11. 3D Inversion of Natural Source Electromagnetics

    NASA Astrophysics Data System (ADS)

    Holtham, E. M.; Oldenburg, D. W.

    2010-12-01

    The superior depth of investigation of natural source electromagnetic techniques makes these methods excellent candidates for crustal studies as well as for mining and hydrocarbon exploration. The traditional natural source method, the magnetotelluric (MT) technique, has practical limitations because the surveys are costly and time consuming due to the labor intensive nature of ground based surveys. In an effort to continue to use the penetration advantage of natural sources, it has long been recognized that tipper data, the ratio of the local vertical magnetic field to the horizontal magnetic field, provide information about 3D electrical conductivity structure. It was this understanding that prompted the development of AFMAG (Audio Frequency Magnetics) and recently the new airborne Z-Axis Tipper Electromagnetic Technique (ZTEM). In ZTEM, the vertical component of the magnetic field is recorded above the entire survey area, while the horizontal fields are recorded at a ground-based reference station. MT processing techniques yield frequency domain transfer functions typically between 30-720 Hz that relate the vertical fields over the survey area to the horizontal fields at the reference station. The result is a cost effective procedure for collecting natural source EM data and for finding large scale targets at moderate depths. It is well known however that 1D layered structures produce zero vertical magnetic fields and thus ZTEM data cannot recover such background conductivities. This is in sharp contrast to the MT technique where electric fields are measured and a 1D background conductivity can be recovered from the off diagonal elements of the impedance tensor. While 1D models produce no vertical fields, two and three dimensional structures will produce anomalous currents and a ZTEM response. For such models the background conductivity structure does affect the data. 
In general however, the ZTEM data have weak sensitivity to the background conductivity and while we show that it is possible to obtain the background structure by inverting the ZTEM data alone, it is desirable to obtain robust background conductivity information from other sources. This information could come from a priori geologic and petrophysical information or from additional geophysical data such as MT. To counter the costly nature of large MT surveys and the limited sensitivity of the ZTEM technique to the background conductivity we show that an effective method is to collect and invert both MT and ZTEM data. A sparse MT survey grid can gather information about the background conductivity and deep structures while keeping the survey costs affordable. Higher spatial resolution at moderate depths can be obtained by flying multiple lines of ZTEM data.
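The tipper data underlying ZTEM relate the vertical field over the survey area to the reference-station horizontal fields through a frequency-domain transfer function, Hz = Tzx·Hx + Tzy·Hy. A minimal sketch of estimating (Tzx, Tzy) at one frequency by complex least squares follows; this is illustrative only, not the processing code used in the paper, and all names are assumptions.

```python
# Least-squares estimate of the tipper (Tzx, Tzy) at one frequency
# from repeated complex field samples, solving Hz = Tzx*Hx + Tzy*Hy.
def estimate_tipper(hx, hy, hz):
    """hx, hy, hz: equal-length lists of complex field samples."""
    # Normal equations A^H A t = A^H b (2x2 complex system).
    a11 = sum(x.conjugate() * x for x in hx)
    a12 = sum(x.conjugate() * y for x, y in zip(hx, hy))
    a21 = a12.conjugate()
    a22 = sum(y.conjugate() * y for y in hy)
    b1 = sum(x.conjugate() * z for x, z in zip(hx, hz))
    b2 = sum(y.conjugate() * z for y, z in zip(hy, hz))
    det = a11 * a22 - a12 * a21
    # Cramer's rule for the 2x2 solve.
    tzx = (b1 * a22 - a12 * b2) / det
    tzy = (a11 * b2 - b1 * a21) / det
    return tzx, tzy

# Synthetic check: fields generated with a known tipper (0.1, -0.05j).
hx = [1 + 0j, 0.5 + 0.5j, -0.3 + 1j]
hy = [0 + 1j, 1 - 0.2j, 0.8 + 0j]
hz = [0.1 * x - 0.05j * y for x, y in zip(hx, hy)]
print(estimate_tipper(hx, hy, hz))  # recovers (0.1+0j), (-0-0.05j)
```

A 1D layered earth gives Hz = 0 for all samples, so both tipper components vanish, which is exactly the null-sensitivity to background conductivity discussed above.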

  12. Testing an advanced satellite technique for dust detection as a decision support system for the air quality assessment

    NASA Astrophysics Data System (ADS)

    Falconieri, Alfredo; Filizzola, Carolina; Femiano, Rossella; Marchese, Francesco; Sannazzaro, Filomena; Pergola, Nicola; Tramutoli, Valerio; Di Muro, Ersilia; Divietri, Mariella; Crisci, Anna Maria; Lovallo, Michele; Mangiamele, Lucia; Vaccaro, Maria Pia; Palma, Achille

    2014-05-01

    In order to correctly apply the European directive on air quality (2008/50/EC), local authorities are often requested to discriminate the possible origin (natural or anthropogenic) of anomalous concentrations of pollutants in the air (Art. 20, Directive 2008/50/EC). In this framework, attention has been focused on PM10 and PM2.5 concentrations and sources. In fact, depending on their origin, appropriate counter-measures can be taken, devoted either to preventing their production (e.g. by traffic restrictions) or simply to reducing their impact on citizens' health (e.g. information campaigns). In this context, suitable satellite techniques can be used to identify natural sources (particularly Saharan dust, but also volcanic ash or forest fire smoke) that can be responsible for over-threshold concentrations of PM10/PM2.5 in populated areas. In the framework of the NIBS (Networking and Internationalization of Basilicata Space Technologies) project, funded by the Basilicata Region within the ERDF 2007-2013 program, the School of Engineering of the University of Basilicata, the Institute of Methodologies for Environmental Analysis of the National Research Council (IMAA-CNR) and the Regional Agency for the Protection of the Environment of the Basilicata Region (ARPAB) have started a collaboration devoted to assessing the potential of advanced satellite techniques for identifying Saharan dust events, in support of ARPAB activities related to the application of the European directive on air quality (2008/50/EC) in the Basilicata region. In this joint activity, the Robust Satellite Technique (RST) approach has been assessed and tested as a decision support system for monitoring and evaluating air quality at local and regional level. 
In particular, RST-DUST products, derived by processing high-temporal-resolution data provided by the SEVIRI (Spinning Enhanced Visible and Infrared Imager) sensor on board the Meteosat Second Generation platforms, have been analysed together with PM10 measurements performed by the ground-based stations operated by ARPAB. This inter-comparison was devoted to investigating possible PM10 over-threshold occurrences and to better evaluating their possible causes (i.e. anthropogenic and/or natural sources). The analysis demonstrated the added value of an independent, automatic and unsupervised satellite-based system (capable of discriminating over-threshold PM10 data produced by natural sources from those caused by anthropogenic emissions) in supporting the decisions of the considered end-user (ARPAB) in a pre-operational context.

  13. Analysis and prediction of flow from local source in a river basin using a Neuro-fuzzy modeling tool.

    PubMed

    Aqil, Muhammad; Kita, Ichiro; Yano, Akira; Nishiyama, Soichi

    2007-10-01

    Traditionally, the multiple linear regression technique has been one of the most widely used models for simulating hydrological time series. However, when the nonlinear phenomenon is significant, multiple linear regression will fail to develop an appropriate predictive model. Recently, neuro-fuzzy systems have gained much popularity for calibrating nonlinear relationships. This study evaluated the potential of a neuro-fuzzy system as an alternative to the traditional statistical regression technique for predicting flow from a local source in a river basin. The effectiveness of the proposed identification technique was demonstrated through a simulation study of the river flow time series of the Citarum River in Indonesia. Furthermore, in order to quantify the uncertainty associated with the estimation of river flow, a Monte Carlo simulation was performed. As a comparison, the multiple linear regression analysis used by the Citarum River Authority was also examined using various statistical indices. The simulation results using 95% confidence intervals indicated that the neuro-fuzzy model consistently underestimated the magnitude of high flows, while low and medium flow magnitudes were estimated closer to the observed data. The comparison of prediction accuracy indicated that the neuro-fuzzy approach was more accurate in predicting river flow dynamics. The neuro-fuzzy model improved the root mean square error (RMSE) and mean absolute percentage error (MAPE) values of the multiple linear regression forecasts by about 13.52% and 10.73%, respectively. Considering its simplicity and efficiency, the neuro-fuzzy model is recommended as an alternative tool for modeling flow dynamics in the study area.
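The RMSE and MAPE scores used above to compare the two models have standard definitions. A small sketch with invented flow values (not data from the study) shows how two sets of predictions would be scored:

```python
import math

def rmse(obs, pred):
    """Root mean square error between observed and predicted series."""
    return math.sqrt(sum((o - p) ** 2 for o, p in zip(obs, pred)) / len(obs))

def mape(obs, pred):
    """Mean absolute percentage error (in percent)."""
    return 100.0 * sum(abs((o - p) / o) for o, p in zip(obs, pred)) / len(obs)

# Hypothetical daily flows (m3/s): observed vs. two model predictions.
obs = [120.0, 80.0, 200.0, 50.0]
pred_regression = [100.0, 90.0, 160.0, 60.0]
pred_neurofuzzy = [115.0, 82.0, 185.0, 52.0]
print(rmse(obs, pred_regression), mape(obs, pred_regression))
print(rmse(obs, pred_neurofuzzy), mape(obs, pred_neurofuzzy))
```

With these invented numbers the neuro-fuzzy predictions score lower on both metrics, mirroring the comparison reported in the abstract.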

  14. A precedence effect resolves phantom sound source illusions in the parasitoid fly Ormia ochracea

    PubMed Central

    Lee, Norman; Elias, Damian O.; Mason, Andrew C.

    2009-01-01

    Localizing individual sound sources under reverberant environmental conditions can be a challenge when the original source and its acoustic reflections arrive at the ears simultaneously from different paths that convey ambiguous directional information. The acoustic parasitoid fly Ormia ochracea (Diptera: Tachinidae) relies on a pair of ears exquisitely sensitive to sound direction to localize the 5-kHz tone pulsatile calling song of their host crickets. In nature, flies are expected to encounter a complex sound field with multiple sources and their reflections from acoustic clutter potentially masking temporal information relevant to source recognition and localization. In field experiments, O. ochracea were lured onto a test arena and subjected to small random acoustic asymmetries between 2 simultaneous sources. Most flies successfully localize a single source but some localize a ‘phantom’ source that is a summed effect of both source locations. Such misdirected phonotaxis can be elicited reliably in laboratory experiments that present symmetric acoustic stimulation. By varying onset delay between 2 sources, we test whether hyperacute directional hearing in O. ochracea can function to exploit small time differences to determine source location. Selective localization depends on both the relative timing and location of competing sources. Flies preferred phonotaxis to a forward source. With small onset disparities within a 10-ms temporal window of attention, flies selectively localize the leading source while the lagging source has minimal influence on orientation. These results demonstrate the precedence effect as a mechanism to overcome phantom source illusions that arise from acoustic reflections or competing sources. PMID:19332794

  15. Authentication techniques for smart cards

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nelson, R.A.

    1994-02-01

    Smart card systems are most cost efficient when implemented as a distributed system, which is a system without central host interaction or a local database of card numbers for verifying transaction approval. A distributed system, as such, presents special card and user authentication problems. Fortunately, smart cards offer processing capabilities that provide solutions to authentication problems, provided the system is designed with proper data integrity measures. Smart card systems maintain data integrity through a security design that controls data sources and limits data changes. A good security design is usually a result of a system analysis that provides a thorough understanding of the application needs. Once designers understand the application, they may specify authentication techniques that mitigate the risk of system compromise or failure. Current authentication techniques include cryptography, passwords, challenge/response protocols, and biometrics. The security design includes these techniques to help prevent counterfeit cards, unauthorized use, or information compromise. This paper discusses card authentication and user identity techniques that enhance security for microprocessor card systems. It also describes the analysis process used for determining proper authentication techniques for a system.
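Of the authentication techniques listed, challenge/response is the easiest to sketch. The following is a minimal illustration using an HMAC as the keyed function; real smart cards use card-resident symmetric or asymmetric cryptography, and all names and key sizes here are hypothetical, not taken from the paper.

```python
import hmac, hashlib, secrets

# Shared secret personalized into the card at issuance (hypothetical).
CARD_KEY = secrets.token_bytes(16)

def terminal_challenge():
    """Terminal side: issue a fresh random challenge (nonce)."""
    return secrets.token_bytes(8)

def card_response(key, challenge):
    """Card side: prove knowledge of the key without revealing it."""
    return hmac.new(key, challenge, hashlib.sha256).digest()

def terminal_verify(key, challenge, response):
    """Terminal side: recompute and compare in constant time."""
    expected = hmac.new(key, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

chal = terminal_challenge()
resp = card_response(CARD_KEY, chal)
print(terminal_verify(CARD_KEY, chal, resp))          # True
print(terminal_verify(CARD_KEY, chal, b"\x00" * 32))  # False
```

The fresh random challenge is what defeats replay of a recorded response, which is the property that makes this scheme workable in a distributed system without a central host.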

  16. Global Dynamic Exposure and the OpenBuildingMap

    NASA Astrophysics Data System (ADS)

    Schorlemmer, D.; Beutin, T.; Hirata, N.; Hao, K. X.; Wyss, M.; Cotton, F.; Prehn, K.

    2015-12-01

    Detailed understanding of local risk factors regarding natural catastrophes requires in-depth characterization of the local exposure. Current exposure-capture techniques have to strike a balance between resolution and coverage. We aim at bridging this gap by employing a crowd-sourced approach to exposure capturing, focusing on risk related to earthquake hazard. OpenStreetMap (OSM), the rich and constantly growing geographical database, is an ideal foundation for us. More than 2.5 billion geographical nodes, more than 150 million building footprints (growing by ~100'000 per day), and a plethora of information about school, hospital, and other critical-facility locations allow us to exploit this dataset for risk-related computations. We will harvest this dataset by collecting exposure and vulnerability indicators from explicitly provided data (e.g. hospital locations), implicitly provided data (e.g. building shapes and positions), and semantically derived data, i.e. interpretation applying expert knowledge. With this approach, we can increase the resolution of existing exposure models from fragility-class distributions via block-by-block specifications to building-by-building vulnerability. To increase coverage, we will provide a framework for collecting building data by any person or community. We will implement a double crowd-sourced approach to bring together the interest and enthusiasm of communities with the knowledge of earthquake and engineering experts. The first crowd-sourced approach aims at collecting building properties in a community by local people and activists. This will be supported by tailored building-capture tools for mobile devices for simple and fast capture of building properties. 
The second crowd-sourced approach involves local experts in estimating building vulnerability, who will provide building classification rules that translate building properties into vulnerability and exposure indicators as defined in the Building Taxonomy 2.0 developed by the Global Earthquake Model (GEM). These indicators will then be combined with a hazard model using the GEM OpenQuake engine to compute a risk model. The free/open framework we will provide can be used on commodity hardware for local to regional exposure capturing and for communities to understand their earthquake risk.

  17. Generic HRTFs May be Good Enough in Virtual Reality. Improving Source Localization through Cross-Modal Plasticity.

    PubMed

    Berger, Christopher C; Gonzalez-Franco, Mar; Tajadura-Jiménez, Ana; Florencio, Dinei; Zhang, Zhengyou

    2018-01-01

    Auditory spatial localization in humans is performed using a combination of interaural time differences, interaural level differences, as well as spectral cues provided by the geometry of the ear. To render spatialized sounds within a virtual reality (VR) headset, either individualized or generic Head Related Transfer Functions (HRTFs) are usually employed. The former require arduous calibrations, but enable accurate auditory source localization, which may lead to a heightened sense of presence within VR. The latter obviate the need for individualized calibrations, but result in less accurate auditory source localization. Previous research on auditory source localization in the real world suggests that our representation of acoustic space is highly plastic. In light of these findings, we investigated whether auditory source localization could be improved for users of generic HRTFs via cross-modal learning. The results show that pairing a dynamic auditory stimulus, with a spatio-temporally aligned visual counterpart, enabled users of generic HRTFs to improve subsequent auditory source localization. Exposure to the auditory stimulus alone or to asynchronous audiovisual stimuli did not improve auditory source localization. These findings have important implications for human perception as well as the development of VR systems as they indicate that generic HRTFs may be enough to enable good auditory source localization in VR.
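The interaural time difference (ITD) cue mentioned above is often approximated with Woodworth's spherical-head formula, ITD = (r/c)(θ + sin θ). A short sketch follows; the head radius is a common textbook default, not a value from this study.

```python
import math

def itd_woodworth(azimuth_deg, head_radius_m=0.0875, c=343.0):
    """Woodworth's spherical-head approximation of the interaural
    time difference (seconds) for a far-field source at the given
    azimuth (0 deg = straight ahead, 90 deg = fully lateral)."""
    theta = math.radians(azimuth_deg)
    return (head_radius_m / c) * (theta + math.sin(theta))

# ITD grows from 0 at the midline to a maximum at 90 deg azimuth:
print(round(itd_woodworth(0) * 1e6))   # 0 microseconds
print(round(itd_woodworth(90) * 1e6))  # 656 microseconds
```

Generic HRTFs reproduce ITD and ILD cues reasonably well for most listeners; it is the individual spectral (pinna) cues they miss, which is what the cross-modal training above helps compensate for.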

  18. Local sources of black walnut recommended for planting in Maryland

    Treesearch

    Silas Little; Calvin F. Bey; Daniel McConaughy

    1974-01-01

    After 5 years, local black walnut seedlings were taller than those of 12 out-of-state sources in a Maryland planting. Seedlings from south-of-local sources out grew trees from northern sources. Genetic influence on height was expressed early--with little change in ranking of sources after the third year.

  19. Transported vs. local contributions from secondary and biomass burning sources to PM2.5

    NASA Astrophysics Data System (ADS)

    Kim, Bong Mann; Seo, Jihoon; Kim, Jin Young; Lee, Ji Yi; Kim, Yumi

    2016-11-01

    The concentration of fine particulates in Seoul, Korea has declined over the past 10 years as a result of the city's efforts in implementing environmental control measures. Yet the particulate concentration level in Seoul remains high compared to other urban areas globally. In order to further improve fine-particulate air quality in the Korean region and design a more effective control strategy, an enhanced understanding of the sources and contributions of fine particulates, along with their chemical compositions, is necessary. In particular, the relative contributions of local and transported sources to Seoul need to be established, as this city is strongly influenced by sources in upwind geographic areas. In this study, PM2.5 monitoring was conducted in Seoul from October 2012 to September 2013. PM2.5 mass concentrations, ions, metals, organic carbon (OC), elemental carbon (EC), water-soluble OC (WSOC), humic-like substances of carbon (HULIS-C), and 85 organic compounds were chemically analyzed. The multivariate receptor model SMP was applied to the PM2.5 data; it identified nine sources and estimated their source compositions as well as their contributions. Prior studies have identified and quantified transported and local sources, but no prior study has split the contribution of an individual source into its transported and locally produced parts. We differentiated transported secondary and biomass burning sources from locally produced secondary and biomass burning sources, supported by potential source contribution function (PSCF) analysis. Of the total secondary source contribution, 32% was attributed to transported secondary sources, and 68% was attributed to locally formed secondary sources. 
Meanwhile, the contribution from the transported biomass burning source amounted to 59% of the total biomass burning contribution, 1.5 times higher than that of the local biomass burning source. Four-season average source contributions from the transported and the local sources were 28% and 72%, respectively.
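The PSCF analysis used above to support the transported/local attribution reduces to a grid-cell ratio: PSCF = m/n, where n counts all back-trajectory endpoints falling in a cell and m counts endpoints belonging to samples whose receptor concentration exceeded a threshold. A minimal sketch (illustrative, not the study's implementation; all names and the toy coordinates are assumptions):

```python
from collections import defaultdict

def pscf(endpoints, concentrations, threshold, cell_deg=1.0):
    """Potential Source Contribution Function on a lat/lon grid.

    endpoints: one list of (lat, lon) trajectory endpoints per sample.
    concentrations: receptor concentration for each sample.
    Returns {cell: m/n} where a cell is an integer (lat, lon) bin.
    """
    n = defaultdict(int)
    m = defaultdict(int)
    for pts, conc in zip(endpoints, concentrations):
        polluted = conc > threshold
        for lat, lon in pts:
            cell = (int(lat // cell_deg), int(lon // cell_deg))
            n[cell] += 1
            if polluted:
                m[cell] += 1
    return {cell: m[cell] / n[cell] for cell in n}

# Two samples: a polluted one passing over cell (37, 126), and a
# clean one passing over both cells.
traj = [[(37.5, 126.5)], [(37.4, 126.6), (38.2, 127.1)]]
conc = [80.0, 20.0]
print(pscf(traj, conc, threshold=50.0))
# {(37, 126): 0.5, (38, 127): 0.0}
```

In practice a weighting function is usually applied to down-weight cells with few endpoints, so that sparsely visited cells do not produce spuriously high ratios.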

  20. Detecting large-scale networks in the human brain using high-density electroencephalography.

    PubMed

    Liu, Quanying; Farahibozorg, Seyedehrezvan; Porcaro, Camillo; Wenderoth, Nicole; Mantini, Dante

    2017-09-01

    High-density electroencephalography (hdEEG) is an emerging brain imaging technique that can be used to investigate fast dynamics of electrical activity in the healthy and the diseased human brain. Its applications are currently limited, however, by a number of methodological issues, among which is the difficulty of obtaining accurate source localizations. In particular, these issues have so far prevented EEG studies from reporting brain networks similar to those previously detected by functional magnetic resonance imaging (fMRI). Here, we report for the first time a robust detection of brain networks from resting-state (256-channel) hdEEG recordings. Specifically, we obtained 14 networks previously described in fMRI studies by means of realistic 12-layer head models and exact low-resolution brain electromagnetic tomography (eLORETA) source localization, together with independent component analysis (ICA) for functional connectivity analysis. Our analyses revealed three important methodological aspects. First, brain network reconstruction can be improved by performing source localization using the gray matter as source space, instead of the whole brain. Second, conducting EEG connectivity analyses in individual space rather than on concatenated datasets may be preferable, as it permits incorporating realistic information on head modeling and electrode positioning. Third, the use of a wide frequency band leads to an unbiased and generally accurate reconstruction of several network maps, whereas filtering data in a narrow frequency band may enhance the detection of specific networks and penalize that of others. We hope that our methodological work will contribute to the rise of hdEEG as a powerful tool for brain research. Hum Brain Mapp 38:4631-4643, 2017. © 2017 Wiley Periodicals, Inc.

  1. The Use of Lead Isotope and Rare Earth Element Geochemistry for Forensic Geographic Provenancing

    NASA Astrophysics Data System (ADS)

    Carey, A.; Darrah, T.; Harrold, Z.; Prutsman-Pfeiffer, J.; Poreda, R.

    2008-12-01

    Lead isotope and rare earth element composition of modern human bones are analyzed to explore their utility for geographical provenancing. DNA analysis is the standard for identification of individuals. DNA analysis requires a DNA match for comparison. Thus, DNA analysis is of limited use in cases involving unknown remains. Trace elements are incorporated into bones and teeth during biomineralization, recording the characteristics of an individual's geochemical environment. Teeth form during adolescence, recording the geochemical environment of an individual's youth. Bones remodel throughout an individual's lifetime. Bones consist of two types of bone tissue (cortical and trabecular) that remodel at different rates, recording the geochemical environment at the time of biomineralization. Cortical bone tissue, forming the outer surface of bones, is dense, hard tissue that remodels in 25-30 yrs. Conversely, trabecular bone tissue, the inner cavity of bones, is low density, porous and remodels in 2-5 years. Thus, analyzing teeth and both bone tissues allows for the development of a geographical time line capable of tracking immigration patterns through time instead of only an individual's youth. Geochemical isotopic techniques (Sr, O, C, N) have been used for geographical provenancing in physical anthropology. The isotopic values of Sr, C, O, N are predominantly a function of soil compositions in areas where food is grown or water is consumed. Application of these provenancing techniques has become difficult as an individual's diet may reflect the isotopic composition of foods obtained at the local grocer as opposed to local soil compositions. Thus, we explore the use of REEs and Pb isotopes for geographical provenancing. 
Pb and REEs are likely more reliable indicators of modern geographical location, as their compositions are dominated by bio-available sources such as local soils, atmospheric aerosols, and dust, whereas Sr, C, O, and N are controlled by food and drinking water. Trabecular and cortical bone tissue from 60 femoral heads resected during hip replacement surgery at the University of Rochester Medical Center was analyzed for Pb isotopes and REEs by a combination of TIMS and ICP-MS. Results show that Pb compositions are consistent with local soil, with variable inputs from known environmental sources. Several samples demonstrate inputs from known environmental sources (e.g. Mississippi Valley ore) that were used in paint, solder, and US gasoline. Additionally, results suggest bioincorporation of Pb with an isotopic composition consistent with that observed for Canadian gasoline aerosols. Immigrants included in the study show Pb compositions distinctly different from those of local residents.

  2. Discrimination between long-range transport and local pollution sources and precise delineation of polluted soil layers using integrated geophysical-geochemical methods.

    NASA Astrophysics Data System (ADS)

    Magiera, Tadeusz; Szuszkiewisz, Marcin; Szuszkiewicz, Maria; Żogała, Bogdan

    2017-04-01

    The primary goal of this work was to distinguish between soil pollution from long-range and local transport of atmospheric pollutants, using soil magnetometry in combination with geochemical analyses, and to precisely delineate polluted soil layers using integrated magnetic (surface susceptibility, gradiometric measurement) and other geophysical techniques (conductivity and electrical resistivity tomography). The study area was located in the Izery region of Poland, within the "Black Triangle", the nickname for one of Europe's most polluted areas, where Germany, Poland and the Czech Republic meet. The study site was located in the Forest Glade, where a historical local pollution source (a glass factory) was active from the end of the 18th until the end of the 19th century. The magnetic signal here is a combination of long-range transport of magnetic particles, local deposition, and anthropogenic layers containing ashes and slags that partly comprise the subsoil of the modern soil. Application of the set of different geophysical techniques enabled the precise location of these layers. The effect of long-range pollution transport was observed on a neighboring hill (Granicznik), whose western, northwestern and southwestern slopes were exposed to the transport of atmospheric pollutants from the Czech Republic, Germany and Poland. Using soil magnetometry, it was possible to discriminate between long-range transport of atmospheric pollutants and anthropogenic pollution related to the former glassworks located in the Forest Glade. The magnetic susceptibility values (κ), as well as the number of "hot-spots" of volume magnetic susceptibility, are significantly larger in the Forest Glade than on Granicznik Hill, where κ is < 20 × 10-5 SI units. Generally, the western part of Granicznik Hill is characterized by about two times higher κ values than the southeastern part. 
This trend is attributed to the fact that the western part was subjected mostly to long-range pollution originating from lignite power plants along the Polish border, while the southeastern part of the hill was shielded by a crag-and-tail formation. Also, the set of chemical elements associated with magnetic particles from long-range transport observed on the western slope and the top of Granicznik Hill (As, Cd, Hg, In, Mo, Sb, Se and U) differs from that observed in the Forest Glade, which is connected with the local pollution source (Cu, Nb, Ni, Pb, Sn and Zn).

  3. Site specific probabilistic seismic hazard analysis at Dubai Creek on the west coast of UAE

    NASA Astrophysics Data System (ADS)

    Shama, Ayman A.

    2011-03-01

    A probabilistic seismic hazard analysis (PSHA) was conducted to establish the hazard spectra for a site located at Dubai Creek on the west coast of the United Arab Emirates (UAE). The PSHA considered all the seismogenic sources that affect the site, including plate boundaries such as the Makran subduction zone, the Zagros fold-thrust region and the transition fault system between them, as well as local crustal faults in the UAE. The PSHA indicated that local faults dominate the hazard. The peak ground acceleration (PGA) is 0.17 g for the 475-year return period spectrum and 0.33 g for the 2,475-year return period spectrum. The hazard spectra were then employed to establish rock ground motions using the spectral matching technique.
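
    The core PSHA computation described above can be sketched as a minimal Poisson hazard calculation. All rates and ground-motion parameters below are hypothetical placeholders, not values from the Dubai Creek study, and a simple lognormal ground-motion model stands in for the actual attenuation relations used:

```python
import numpy as np
from math import erf, log, sqrt

def p_exceed(a, median, beta):
    """P(PGA > a) under a lognormal ground-motion model (hypothetical)."""
    z = (log(a) - log(median)) / beta
    return 0.5 * (1.0 - erf(z / sqrt(2.0)))

# hypothetical seismogenic sources: (annual event rate, median PGA in g, log-std)
sources = [(0.05, 0.12, 0.6),   # nearby local crustal fault
           (0.20, 0.04, 0.6)]   # distant plate-boundary zone

accels = np.logspace(-2, 0, 200)     # 0.01 g .. 1 g
lam = np.array([sum(nu * p_exceed(a, med, beta) for nu, med, beta in sources)
                for a in accels])    # annual exceedance rate lambda(a)

# PGA at the 475-year return period: solve lambda(a) = 1/475 by interpolation
pga_475 = np.interp(1.0 / 475.0, lam[::-1], accels[::-1])
```

    Summing the per-source exceedance rates assumes independent Poisson sources; the return-period PGA is then read off the combined hazard curve.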

  4. MR-based source localization for MR-guided HDR brachytherapy

    NASA Astrophysics Data System (ADS)

    Beld, E.; Moerland, M. A.; Zijlstra, F.; Viergever, M. A.; Lagendijk, J. J. W.; Seevinck, P. R.

    2018-04-01

    For the purpose of MR-guided high-dose-rate (HDR) brachytherapy, a method for real-time localization of an HDR brachytherapy source was developed, which requires high spatial and temporal resolutions. MR-based localization of an HDR source serves two main aims. First, it enables real-time treatment verification by determination of the HDR source positions during treatment. Second, when using a dummy source, MR-based source localization provides an automatic detection of the source dwell positions after catheter insertion, allowing elimination of the catheter reconstruction procedure. Localization of the HDR source was conducted by simulation of the MR artifacts, followed by a phase correlation localization algorithm applied to the MR images and the simulated images, to determine the position of the HDR source in the MR images. To increase the temporal resolution of the MR acquisition, the spatial resolution was decreased, and a subpixel localization operation was introduced. Furthermore, parallel imaging (sensitivity encoding) was applied to further decrease the MR scan time. The localization method was validated by a comparison with CT, and the accuracy and precision were investigated. The results demonstrated that the described method could be used to determine the HDR source position with a high accuracy (0.4–0.6 mm) and a high precision (⩽0.1 mm), at high temporal resolutions (0.15–1.2 s per slice). This would enable real-time treatment verification as well as an automatic detection of the source dwell positions.
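
    The phase correlation step described above can be sketched with a small self-contained example. The Gaussian "artifact template" below is a stand-in for the simulated MR artifact images named in the abstract; the actual artifact simulation and subpixel refinement are not reproduced here:

```python
import numpy as np

def phase_correlation_shift(ref, img):
    # normalized cross-power spectrum; its inverse FFT peaks at the translation
    F1, F2 = np.fft.fft2(ref), np.fft.fft2(img)
    R = F1 * np.conj(F2)
    R /= np.abs(R) + 1e-12
    corr = np.abs(np.fft.ifft2(R))
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # map the peak index to a signed shift
    return tuple(p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape))

# synthetic "simulated artifact" template and a translated observed copy
y, x = np.mgrid[0:64, 0:64]
template = np.exp(-((x - 32)**2 + (y - 32)**2) / 20.0)
observed = np.roll(np.roll(template, 5, axis=0), -3, axis=1)
dy, dx = phase_correlation_shift(observed, template)
```

    The recovered (dy, dx) gives the source displacement between the simulated and measured images; the paper's method additionally refines this to subpixel precision.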

  5. Recording event-related activity under hostile magnetic resonance environment: Is multimodal EEG/ERP-MRI recording possible?

    PubMed

    Karakaş, H M; Karakaş, S; Ozkan Ceylan, A; Tali, E T

    2009-08-01

    Event-related potentials (ERPs) have high temporal resolution but insufficient spatial resolution; the converse is true for functional imaging techniques. The purpose of the study was to test the utility of a multimodal EEG/ERP-MRI technique, which combines electroencephalography (EEG) and magnetic resonance imaging (MRI), for simultaneously high temporal and spatial resolution. The sample consisted of 32 healthy young adults of both sexes. Auditory stimuli were delivered according to the active and passive oddball paradigms in the MRI environment (MRI-e) and in the standard conditions of the electrophysiology laboratory environment (Lab-e). Tasks were presented in a fixed order. Participants were exposed to the recording environments in a counterbalanced order. EEG data were preprocessed for MRI-related artifacts. Source localization was performed using a current density reconstruction technique. The ERP waveforms for the MRI-e were morphologically similar to those for the Lab-e. The effects of recording environment, experimental paradigm and electrode location were analyzed using a 2 × 2 × 3 analysis of variance for repeated measures. The ERP components in the two environments showed parametric variations and characteristic topographical distributions. The calculated sources were in line with the related literature. The findings indicated effortful cognitive processing in the MRI-e. The study provided preliminary data on the feasibility of the multimodal EEG/ERP-MRI technique. It also indicated lines of research to be pursued for decisive testing of this technique and its implementation in clinical practice.

  6. Carbon-13 natural abundance signatures of long-chain fatty acids to determinate sediment origin: A case study in northeast Austria

    NASA Astrophysics Data System (ADS)

    Mabit, Lionel; Gibbs, Max; Meusburger, Katrin; Toloza, Arsenio; Resch, Christian; Klik, Andreas; Swales, Andrew; Alewell, Christine

    2016-04-01

    Recently published research has highlighted that compound-specific stable isotope (CSSI) signatures of fatty acids (FAs), based on the measurement of carbon-13 natural abundance, show great promise for identifying sediment origin. The authors used this innovative isotopic approach to investigate the sources of sediment in a three-hectare Austrian sub-watershed (i.e. Mistelbach). In a previous study using the Cs-137 technique, Mabit et al. (Geoderma, 2009) reported a local maximum sedimentation rate reaching 20 to 50 t/ha/yr in the lowest part of this watershed; however, that study did not identify the sources. Subsequently, the deposited sediment at the outlet (i.e. the sediment mixture) and representative soil samples from the four main agricultural fields of the site - expected to be the source soils - were investigated. The bulk delta carbon-13 of the samples and two long-chain FAs (i.e. C22:0 and C24:0) allowed the best statistical discrimination. Using two different mixing models (i.e. IsoSource and CSSIAR v1.00) and the organic carbon content of the source soils and sediment mixture, the contribution of each source was established. Results suggested that the grassed waterway contributed at least 50% of the sediment deposited at the watershed outlet. This study, which will require further validation, highlights that the CSSI and Cs-137 techniques are complementary fingerprinting and tracing tools for establishing land sediment redistribution and could provide meaningful information for optimized decision-making by land managers.
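
    The mixing-model step can be illustrated with an IsoSource-style grid search over source proportions. All δ13C values below are hypothetical placeholders, not data from the Mistelbach study, and only two tracers and three sources are used for brevity:

```python
import numpy as np

# hypothetical delta-13C (per mil) of two long-chain FAs for three source soils
sources = {"grassed waterway": (-36.0, -35.2),
           "field A":          (-30.5, -31.0),
           "field B":          (-28.0, -27.5)}
mixture = (-32.5, -32.0)   # hypothetical sediment-mixture signature

names = list(sources)
sig = np.array([sources[n] for n in names])      # 3 sources x 2 tracers
target = np.array(mixture)

# IsoSource-style search: enumerate proportions summing to 1 and keep those
# that reproduce the mixture signature within a tolerance
step, tol = 0.01, 0.15
feasible = []
for a in np.arange(0.0, 1.0 + 1e-9, step):
    for b in np.arange(0.0, 1.0 - a + 1e-9, step):
        c = 1.0 - a - b
        pred = a * sig[0] + b * sig[1] + c * sig[2]
        if np.all(np.abs(pred - target) < tol):
            feasible.append((a, b, c))
feasible = np.array(feasible)
mean_contrib = feasible.mean(axis=0)   # mean feasible contribution per source
```

    With these illustrative numbers the first source (the "grassed waterway" stand-in) dominates the feasible mixtures, mirroring the kind of conclusion drawn in the abstract.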

  7. "Closing the Loop": Overcoming barriers to locally sourcing food in Fort Collins, Colorado

    NASA Astrophysics Data System (ADS)

    DeMets, C. M.

    2012-12-01

    Environmental sustainability has become a focal point for many communities in recent years, and restaurants are seeking creative ways to become more sustainable. As many chefs realize, sourcing food locally is an important step towards sustainability and towards building a healthy, resilient community. Review of literature on sustainability in restaurants and the local food movement revealed that chefs face many barriers to sourcing their food locally, but that there are also many solutions for overcoming these barriers that chefs are in the early stages of exploring. Therefore, the purpose of this research is to identify barriers to local sourcing and investigate how some restaurants are working to overcome those barriers in the city of Fort Collins, Colorado. To do this, interviews were conducted with four subjects who guide purchasing decisions for restaurants in Fort Collins. Two of these restaurants have created successful solutions and are able to source most of their food locally. The other two are interested in and working towards sourcing locally but have not yet been able to overcome barriers, and therefore only source a few local items. Findings show that there are four barriers and nine solutions commonly identified by each of the subjects. The research found differences between those who source most of their food locally and those who have not made as much progress in local sourcing. Based on these results, two solution flowcharts were created, one for primary barriers and one for secondary barriers, for restaurants to assess where they are in the local food chain and how they can more successfully source food locally. As there are few explicit connections between this research question and climate change, it is important to consider the implicit connections that motivate and justify this research. 
The question of whether or not greenhouse gas emissions are lower for locally sourced food is a topic of much debate, and while there are major developments for quantitatively determining a generalized answer, it is "currently impossible to state categorically whether or not local food systems emit fewer greenhouse gases than non-local food systems" (Edwards-Jones et al, 2008). Even so, numerous researchers have shown that "83 percent of emissions occur before food even leaves the farm gate" (Weber and Matthews, Garnett, cited in DeWeerdt, 2011); while this doesn't provide any information in terms of local vs. non-local, it is significant when viewed in light of the fact that local farmers tend to have much greater transparency and accountability in their agricultural practices. In other words, "a farmer who sells in the local food economy might be more likely to adopt or continue sustainable practices in order to meet…customer demand" (DeWeerdt, 2011), among other reasons such as environmental concern and desire to support the local economy (DeWeerdt, 2009). In identifying solutions to barriers to locally sourcing food, this research will enable restaurants to overcome these barriers and source their food locally, thereby supporting farmers and their ability to maintain sustainable practices.

  8. Beam localization in HIFU temperature measurements using thermocouples, with application to cooling by large blood vessels.

    PubMed

    Dasgupta, Subhashish; Banerjee, Rupak K; Hariharan, Prasanna; Myers, Matthew R

    2011-02-01

    Experimental studies of thermal effects in high-intensity focused ultrasound (HIFU) procedures are often performed with the aid of fine-wire thermocouples positioned within tissue phantoms. Thermocouple measurements are subject to several types of error that must be accounted for before reliable inferences can be made on the basis of the measurements. Thermocouple artifact due to viscous heating is one source of error. A second is the uncertainty in the position of the beam relative to the target location or the thermocouple junction, due to the error in positioning the beam at the junction. This paper presents a method for determining the location of the beam relative to a fixed pair of thermocouples. The localization technique reduces the uncertainty introduced by positioning errors associated with very narrow HIFU beams. The technique is presented in the context of an investigation into the effect of blood flow through large vessels on the efficacy of HIFU procedures targeted near the vessel. Application of the beam localization method allowed conclusions regarding the effects of blood flow to be drawn from previously inconclusive (because of localization uncertainties) data. Comparison of the position-adjusted transient temperature profiles for flow rates of 0 and 400 ml/min showed that blood flow can reduce temperature elevations by more than 10% when the HIFU focus is within 2 mm of the vessel wall. At acoustic power levels of 17.3 and 24.8 W there is a 20- to 70-fold decrease in thermal dose due to the convective cooling effect of blood flow, implying a shrinkage in lesion size. The beam-localization technique also revealed the level of thermocouple artifact as a function of sonication time, providing investigators with an indication of the quality of thermocouple data for a given exposure time. The maximum artifact was found to be double the measured temperature rise during the initial few seconds of sonication.

  9. 45 CFR 2551.92 - What are project funding requirements?

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... local funding sources during the first three years of operations; or (2) An economic downturn, the... sources of local funding support; or (3) The unexpected discontinuation of local support from one or more... local funding sources during the first three years of operations; (ii) An economic downturn, the...

  10. 45 CFR 2552.92 - What are project funding requirements?

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... local funding sources during the first three years of operations; or (2) An economic downturn, the... sources of local funding support; or (3) The unexpected discontinuation of local support from one or more... the development of local funding sources during the first three years of operations; or (ii) An...

  11. A modular Space Station/Base electrical power system - Requirements and design study.

    NASA Technical Reports Server (NTRS)

    Eliason, J. T.; Adkisson, W. B.

    1972-01-01

    The requirements and procedures necessary for definition and specification of an electrical power system (EPS) for the future space station are discussed herein. The considered space station EPS consists of a replaceable main power module with self-contained auxiliary power, guidance, control, and communication subsystems. This independent power source may 'plug into' a space station module which has its own electrical distribution, control, power conditioning, and auxiliary power subsystems. Integration problems are discussed, and a transmission system selected with local floor-by-floor power conditioning and distribution in the station module. This technique eliminates the need for an immediate long range decision on the ultimate space base power sources by providing capability for almost any currently considered option.

  12. Delayed plastic relaxation limit in SiGe islands grown by Ge diffusion from a local source

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vanacore, G. M.; Zani, M.; Tagliaferri, A., E-mail: alberto.tagliaferri@polimi.it

    2015-03-14

    The hetero-epitaxial strain relaxation in nano-scale systems plays a fundamental role in shaping their properties. Here, the elastic and plastic relaxation of self-assembled SiGe islands grown by surface-thermal-diffusion from a local Ge solid source on Si(100) are studied by atomic force and transmission electron microscopies, enabling the simultaneous investigation of the strain relaxation in different dynamical regimes. Islands grown by this technique remain dislocation-free and preserve a structural coherence with the substrate for a base width as large as 350 nm. The results indicate that a delay of the plastic relaxation is promoted by an enhanced Si-Ge intermixing, induced by the surface-thermal-diffusion, which takes place already in the SiGe overlayer before the formation of a critical nucleus. The local entropy of mixing dominates, leading the system toward a thermodynamic equilibrium, where non-dislocated, shallow islands with a low residual stress are energetically stable. These findings elucidate the role of the interface dynamics in modulating the lattice distortion at the nano-scale, and highlight the potential use of our growth strategy to create composition- and strain-controlled nano-structures for new-generation devices.

  13. Towards an Empirically Based Parametric Explosion Spectral Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ford, S R; Walter, W R; Ruppert, S

    2009-08-31

    Small underground nuclear explosions need to be confidently detected, identified, and characterized in regions of the world where they have never before been tested. The focus of our work is on the local and regional distances (< 2000 km) and phases (Pn, Pg, Sn, Lg) necessary to see small explosions. We are developing a parametric model of the nuclear explosion seismic source spectrum that is compatible with the earthquake-based geometrical spreading and attenuation models developed using the Magnitude Distance Amplitude Correction (MDAC) techniques (Walter and Taylor, 2002). The explosion parametric model will be particularly important in regions without any prior explosion data for calibration. The model is being developed using the available body of seismic data at local and regional distances for past nuclear explosions at foreign and domestic test sites. Parametric modeling is a simple and practical approach for widespread monitoring applications, prior to the capability to carry out fully deterministic modeling. The achievable goal of our parametric model development is to be able to predict observed local and regional distance seismic amplitudes for event identification and yield determination in regions with incomplete or no prior history of underground nuclear testing. The relationship between the parametric equations and the geologic and containment conditions will assist in our physical understanding of the nuclear explosion source.
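
    A generic parametric source spectrum of the kind discussed above can be sketched as follows. The omega-square form, corner frequency, and moment below are illustrative assumptions, not the MDAC-compatible explosion model itself:

```python
import numpy as np

def source_spectrum(f, moment, fc, n=2.0):
    # generic omega-square parametric source model (an assumption here,
    # not the MDAC-specific explosion form): flat below the corner
    # frequency fc, falling off as f**-n above it
    return moment / (1.0 + (f / fc)**n)

f = np.logspace(-1, 1.3, 100)                  # 0.1 - 20 Hz
spec = source_spectrum(f, moment=1e14, fc=2.0)  # hypothetical moment and fc
```

    Fitting the free parameters (moment, corner frequency, falloff rate) to MDAC-corrected amplitudes is the kind of calibration the abstract describes.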

  14. Using Network Theory to Understand Seismic Noise in Dense Arrays

    NASA Astrophysics Data System (ADS)

    Riahi, N.; Gerstoft, P.

    2015-12-01

    Dense seismic arrays offer an opportunity to study anthropogenic seismic noise sources with unprecedented detail. Man-made sources typically have high frequency and low intensity, and propagate as surface waves. As a result, attenuation restricts their measurable footprint to a small subset of sensors. Medium heterogeneities can further introduce wave-front perturbations that limit processing based on travel time. We demonstrate a non-parametric technique that can reliably identify very local events within the array as a function of frequency and time without using travel times. The approach estimates the non-zero support of the array covariance matrix and then uses network analysis tools to identify clusters of sensors that are sensing a common source. We verify the method on simulated data and then apply it to the Long Beach (CA) geophone array. The method exposes a helicopter traversing the array, oil production facilities with different characteristics, and the fact that noise sources near roads tend to be around 10-20 Hz.
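
    The covariance-support-plus-network-clustering idea can be sketched on synthetic data. The correlation threshold and toy array below are assumptions for illustration, not the statistical support estimate used by the authors:

```python
import numpy as np

rng = np.random.default_rng(0)
n_sensors, n_samples = 12, 2000
data = 0.1 * rng.standard_normal((n_sensors, n_samples))  # background noise
src = rng.standard_normal(n_samples)
data[2:6] += src          # a local source seen only by sensors 2-5

# estimate which covariance entries are significantly non-zero
# (a fixed correlation threshold stands in for a proper significance test)
C = np.corrcoef(data)
support = np.abs(C) > 0.5
np.fill_diagonal(support, False)

# cluster sensors sharing a common source: connected components of the
# graph whose adjacency matrix is the covariance support
def components(adj):
    n, seen, comps = len(adj), set(), []
    for s in range(n):
        if s in seen:
            continue
        stack, comp = [s], set()
        while stack:
            v = stack.pop()
            if v in comp:
                continue
            comp.add(v)
            stack.extend(np.flatnonzero(adj[v]))
        seen |= comp
        if len(comp) > 1:          # ignore isolated sensors
            comps.append(sorted(comp))
    return comps

clusters = components(support)
```

    Repeating this per frequency band and time window yields source detections as a function of frequency and time, as in the abstract.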

  15. A Review of the Anaerobic Digestion of Fruit and Vegetable Waste.

    PubMed

    Ji, Chao; Kong, Chui-Xue; Mei, Zi-Li; Li, Jiang

    2017-11-01

    Fruit and vegetable waste is an ever-growing global problem. Anaerobic digestion techniques have been developed that facilitate turning such waste into possible sources of energy and fertilizer, simultaneously helping to reduce environmental pollution. However, various problems are encountered in applying these techniques. The purpose of this study is to review local and overseas studies that focus on the use of anaerobic digestion to dispose of fruit and vegetable waste, discuss the acidification problems encountered in this application and their solutions, and investigate reactor design (comparing single-phase with two-phase digestion) and thermal pre-treatment of raw wastes. Furthermore, it analyses the dominant microorganisms involved at different stages of digestion and suggests a focus for future studies.

  16. A Comparison of High Frequency Angle of Arrival and Ionosonde Data During a Traveling Ionospheric Disturbance

    NASA Astrophysics Data System (ADS)

    Knippling, K.; Nava, O.; Emmons, D. J., II; Dao, E. V.

    2017-12-01

    Geolocation techniques are used to track the source of uncooperative high-frequency emitters. Traveling ionospheric disturbances (TIDs) make geolocation particularly difficult due to large perturbations in the local ionospheric electron density profiles. Angle of arrival (AoA) and ionosonde virtual height measurements collected at White Sands Missile Range, New Mexico in January 2014 are analyzed during a medium-scale TID (MSTID). MSTID characteristics are extracted from the measurements, and a comparison between the data sets is performed, providing a measure of the correlation as a function of distance between the ionosonde and AoA circuit midpoints. The results of this study may advance real-time geolocation techniques through the implementation of a time-varying mirror model height.

  17. Estimation of pressure-particle velocity impedance measurement uncertainty using the Monte Carlo method.

    PubMed

    Brandão, Eric; Flesch, Rodolfo C C; Lenzi, Arcanjo; Flesch, Carlos A

    2011-07-01

    The pressure-particle velocity (PU) impedance measurement technique is an experimental method used to measure the surface impedance and the absorption coefficient of acoustic samples in situ or under free-field conditions. In this paper, the measurement uncertainty of the absorption coefficient determined using the PU technique is explored by applying the Monte Carlo method. It is shown that, because of the uncertainty, it is particularly difficult to measure samples with low absorption, and that difficulties associated with the localization of the acoustic centers of the sound source and the PU sensor affect the quality of the measurement roughly to the same extent as errors in the transfer function between pressure and particle velocity do.
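
    The Monte Carlo propagation can be sketched for the final step, from a measured normalized surface impedance to the absorption coefficient via the plane-wave reflection coefficient. The nominal impedance and its uncertainties below are hypothetical, and the full measurement chain (transfer function, sensor positions) is collapsed into a single input uncertainty:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 100_000

# hypothetical measurement: normalized surface impedance Z/Z0 with an
# assumed Gaussian uncertainty standing in for the calibration errors
z_re = rng.normal(2.0, 0.1, N)    # Re(Z/Z0) samples
z_im = rng.normal(-1.0, 0.1, N)   # Im(Z/Z0) samples
Z = z_re + 1j * z_im

R = (Z - 1.0) / (Z + 1.0)         # plane-wave reflection coefficient
alpha = 1.0 - np.abs(R)**2        # absorption coefficient samples

alpha_mean, alpha_std = alpha.mean(), alpha.std()
```

    The spread of the alpha samples is the propagated measurement uncertainty; repeating the experiment with a low-absorption nominal impedance shows the relative uncertainty growing, as the abstract notes.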

  18. Development of Techniques to Investigate Sonoluminescence as a Source of Energy Harvesting

    NASA Technical Reports Server (NTRS)

    Wrbanek, John D.; Fralick, Gustave C.; Wrbanek, Susan Y.

    2007-01-01

    Instrumentation techniques are being developed at NASA Glenn Research Center to measure optical, radiation, and thermal properties of sonoluminescence, the light generated by acoustic cavitation. Initial efforts have been directed toward generating the effect and imaging the glow in water and solvents. Several images of the effect have been produced, showing its location within containers, without the addition of light enhancers to the liquid. Evidence of high-energy generation, in the form of modification of thin films by sonoluminescence in heavy water, was seen that was not observed in light water. Bright, localized sonoluminescence was generated using glycerin for possible application to energy harvesting. Issues to be resolved for an energy-harvesting concept will be addressed.

  19. Path-Specific Effects on Shear Motion Generation Using LargeN Array Waveform Data at the Source Physics Experiment (SPE) Site

    NASA Astrophysics Data System (ADS)

    Pitarka, A.; Mellors, R. J.; Walter, W. R.

    2016-12-01

    Depending on emplacement conditions and underground structure, and contrary to what is theoretically predicted for isotropic sources, recorded local, regional, and teleseismic waveforms from chemical explosions often contain shear waves with substantial energy. Consequently, the transportability of empirical techniques for yield estimation and source discrimination to regions with complex underground structure becomes problematic. Understanding the mechanisms of generation and conversion of shear waves caused by wave path effects during explosions can help improve techniques used in nuclear explosion monitoring. We used seismic data from LargeN, a dense array of three- and one-component geophones, to analyze far-field waveforms recorded during shot 5 of the Source Physics Experiment (SPE-5), an underground chemical explosion at the Nevada National Security Site. Combined 3D elastic wave propagation modeling and frequency-wavenumber beamforming on small sub-arrays of selected stations were used to detect and identify several wave phases, including primary and secondary S waves and Rg waves, and to determine their direction of propagation. We were able to attribute key features of the waveforms and wave phases to either source processes or propagation path effects, such as focusing and wave conversions. We also found that coda waves were more likely generated by path effects outside the source region, rather than by interaction of source-generated waves with the emplacement structure. Waveform correlation and statistical analysis were performed to estimate the average correlation length of small-scale heterogeneity in the upper sedimentary layers of the Yucca Flat basin in the area covered by the array. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344. LLNL-ABS-699180
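
    The frequency-wavenumber beamforming used to determine propagation direction can be sketched as a narrowband slowness-grid search. The array geometry, frequency, and plane-wave signal below are synthetic stand-ins, not LargeN data:

```python
import numpy as np

rng = np.random.default_rng(2)
xy = rng.uniform(-1.0, 1.0, (8, 2))        # sensor positions (km)
f0 = 5.0                                   # analysis frequency (Hz)
s_true = np.array([1.0, 0.0]) / 3.0        # slowness (s/km): 3 km/s along +x

t = np.arange(0.0, 2.0, 0.01)
# each sensor records the plane wave delayed by r . s
data = np.array([np.cos(2 * np.pi * f0 * (t - xy[i] @ s_true)) for i in range(8)])

# narrowband Fourier coefficient at f0 for each sensor
X = np.array([np.exp(-2j * np.pi * f0 * t) @ data[i] for i in range(8)])

# beam power over a grid of trial slowness vectors; the peak gives the
# direction of propagation and the apparent velocity
s_grid = np.linspace(-0.5, 0.5, 101)
P = np.zeros((101, 101))
for a, sx in enumerate(s_grid):
    for b, sy in enumerate(s_grid):
        w = np.exp(-2j * np.pi * f0 * (xy @ np.array([sx, sy])))  # steering vector
        P[a, b] = np.abs(np.vdot(w, X))**2
ia, ib = np.unravel_index(np.argmax(P), P.shape)
sx_hat, sy_hat = s_grid[ia], s_grid[ib]
```

    Applying this per phase window and frequency band is how distinct arrivals (direct S, converted phases, Rg) can be separated by their slowness, in the spirit of the analysis above.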

  20. An integrated approach to evaluate the Aji-Chai potash resources in Iran using potential field data

    NASA Astrophysics Data System (ADS)

    Abedi, Maysam

    2018-03-01

    This work presents an integrated application of potential field data to discover potash-bearing evaporite sources in the Aji-Chai salt deposit, located in East Azerbaijan province, northwest Iran. The low density and diamagnetic effect of salt minerals, i.e. potash, give rise to promising potential field anomalies that help localize the sought blind targets. The halokinetic-type potash-bearing salts in the prospect zone have flowed upward and intruded into the surrounding sedimentary sequences, dominated by marl, gypsum and alluvial terraces. Processed gravity and magnetic data delineated a main potash source with negative gravity and magnetic amplitude responses. To better visualize these evaporite deposits, 3D models of density contrast and magnetic susceptibility were constructed through constrained inversion of the potential field data. A mixed-norm regularization technique was used to generate sharp and compact geophysical models. Since tectonic pressure causes vertical movement of the potash in the studied region, a vertical cylinder is an appropriate geometry to simulate these geological targets. Therefore, the structural index (i.e. the decay rate of potential field amplitude with distance) of such an assumed source was embedded in the inversion program as a geometrical constraint to image these geologically plausible sources. In addition, the top depths of the main and adjacent sources were estimated at 39 m and 22 m, respectively, via a combination of the analytic signal and Euler deconvolution techniques. Drilling results also indicated that the main potash source starts at a depth of 38 m. The 3D models of density contrast and magnetic susceptibility (assuming a superficial sedimentary cover as a hard constraint in the inversion algorithm) demonstrated that the potash source extends to a depth of less than 150 m.
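
    The Euler deconvolution depth estimate mentioned above can be sketched in profile form. The synthetic anomaly below is a simple Poisson-kernel source with structural index N = 1, an illustrative stand-in for the cylindrical potash sources; the study's actual structural index and gridded data are not reproduced:

```python
import numpy as np

# synthetic anomaly over a compact source at x0 = 0, depth h = 10 m
# (Poisson kernel, homogeneous with structural index N = 1)
h_true, N = 10.0, 1.0
dx = 0.5
x = np.arange(-200.0, 200.0, dx)
T = h_true / (x**2 + h_true**2)

# horizontal derivative numerically; vertical derivative via the FFT
# relation dT/dz = F^-1(-|k| F(T)) for an upward-decaying harmonic field
Tx = np.gradient(T, x)
k = 2 * np.pi * np.fft.fftfreq(x.size, d=dx)
Tz = np.fft.ifft(-np.abs(k) * np.fft.fft(T)).real

# Euler's homogeneity equation rearranged with unknowns (x0, depth h, background B):
#   x0*Tx - h*Tz + N*B = x*Tx + N*T
win = np.abs(x) < 30.0
A = np.column_stack([Tx[win], -Tz[win], N * np.ones(win.sum())])
rhs = x[win] * Tx[win] + N * T[win]
x0_est, h_est, B_est = np.linalg.lstsq(A, rhs, rcond=None)[0]
```

    The least-squares solution over a window centered on the anomaly recovers the source location and depth, which is the role Euler deconvolution plays alongside the analytic signal in the abstract.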

  1. Link-prediction to tackle the boundary specification problem in social network surveys

    PubMed Central

    De Wilde, Philippe; Buarque de Lima-Neto, Fernando

    2017-01-01

    Diffusion processes in social networks often cause the emergence of global phenomena from individual behavior within a society. The study of those global phenomena and the simulation of those diffusion processes frequently require a good model of the global network. However, survey data and data from online sources are often restricted to single social groups or features, such as age groups, single schools, companies, or interest groups. Hence, a modeling approach is required that extrapolates the locally restricted data to a global network model. We tackle this Missing Data Problem using Link-Prediction techniques from social network research, network generation techniques from the area of Social Simulation, as well as a combination of both. We found that techniques employing less information may be more adequate to solve this problem, especially when data granularity is an issue. We validated the network models created with our techniques on a number of real-world networks, investigating degree distributions as well as the likelihood of links given the geographical distance between two nodes. PMID:28426826
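
    A minimal link-prediction example in the spirit described above: common-neighbour scores (counts of length-2 paths) on a toy network in which one tie is unobserved. This is an illustrative sketch, not the specific predictors evaluated in the paper:

```python
import numpy as np

# toy network: a 4-clique missing one tie (0-3), plus a separate surveyed pair
edges = [(0, 1), (0, 2), (1, 2), (1, 3), (2, 3), (4, 5)]
n = 6
A = np.zeros((n, n), dtype=int)
for i, j in edges:
    A[i, j] = A[j, i] = 1

# common-neighbour link-prediction scores: (A @ A)[i, j] counts the
# length-2 paths between i and j
S = A @ A
scores = {(i, j): int(S[i, j])
          for i in range(n) for j in range(i + 1, n) if not A[i, j]}
best_pair = max(scores, key=scores.get)   # most plausible missing link
```

    Scoring unobserved pairs and adding the highest-ranked ones is the basic mechanism by which link prediction extrapolates a locally restricted survey toward a global network model.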

  2. EBCO Technologies TR Cyclotrons, Dynamics, Equipment, and Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Johnson, R.R.; Univ British Columbia; Erdman, K. L.

    2003-08-26

    The Ebco Technologies TR cyclotrons have a common parent in the 500 MeV negative-ion cyclotron at TRIUMF in Vancouver. As such, the TR cyclotrons have features that can be adapted for specific applications. The cyclotron design is modularized into the ion source and injection system, the central region, and extraction. The cyclotron ion source is configured for beam currents ranging from 50 microamps to 2 milliamps. The injection line can be operated in either continuous (CW) or pulsed mode. The center region of the cyclotron is configured to match the ion source configuration. The extracted beams are directed either to a local target station or to beam lines and thence to target stations. There has been development in solid, liquid and gas targets, as well as in radioisotope handling techniques, target material recovery and radiochemical synthesis.

  3. Final Report on DTRA Basic Research Project #BRCALL08-Per3-C-2-0006 "High-Z Non-Equilibrium Physics and Bright X-ray Sources with New Laser Targets"

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Colvin, Jeffrey D.

    This project had two major goals. Final Goal: obtain spectrally resolved, absolutely calibrated x-ray emission data from uniquely uniform mm-scale near-critical-density high-Z plasmas not in local thermodynamic equilibrium (LTE) to benchmark modern detailed atomic physics models. Scientific significance: advance understanding of non-LTE atomic physics. Intermediate Goal: develop new nano-fabrication techniques to make suitable laser targets that form the required highly uniform non-LTE plasmas when illuminated by high-intensity laser light. Scientific significance: advance understanding of nano-science. The new knowledge will allow us to make x-ray sources that are bright at the photon energies of most interest for testing radiation hardening technologies, the spectral energy range where current x-ray sources are weak. All project goals were met.

  4. End-to-end system test for solid-state microdosemeters.

    PubMed

    Pisacane, V L; Dolecek, Q E; Malak, H; Dicello, J F

    2010-08-01

    The gold standard in microdosemeters has been the tissue equivalent proportional counter (TEPC) that utilises a gas cavity. An alternative is the solid-state microdosemeter that replaces the gas with a condensed phase (silicon) detector with microscopic sensitive volumes. Calibrations of gas and solid-state microdosemeters are generally carried out using radiation sources built into the detector that impose restrictions on their handling, transportation and licensing in accordance with the regulations from international, national and local nuclear regulatory bodies. Here a novel method is presented for carrying out a calibration and end-to-end system test of a microdosemeter using low-energy photons as the initiating energy source, thus obviating the need for a regulated ionising radiation source. This technique may be utilised to calibrate both a solid-state microdosemeter and, with modification, a TEPC with the higher average ionisation energy of a gas.

  5. Aggregate resource availability in the conterminous United States, including suggestions for addressing shortages, quality, and environmental concerns

    USGS Publications Warehouse

    Langer, William H.

    2011-01-01

    Although potential sources of aggregate are widespread throughout the United States, many sources may not meet certain physical property requirements, such as soundness, hardness, strength, porosity, and specific gravity, or they may contain contaminants or deleterious materials that render them unusable. Encroachment by conflicting land uses, permitting considerations, environmental issues, and societal pressures can prevent or limit development of otherwise suitable aggregate. The use of sustainable aggregate resource management can help ensure an economically viable supply of aggregate. Sustainable aggregate resource management techniques that have successfully been used include (1) protecting potential resources from encroachment; (2) using marginal-quality local aggregate for applications that do not demand a high-quality resource; (3) using substitute materials such as clinker, scoria, and recycled asphalt and concrete; and (4) using rail and water to transport aggregates from remote sources.

  6. Development of a Supersonic Atomic Oxygen Nozzle Beam Source for Crossed Beam Scattering Experiments

    DOE R&D Accomplishments Database

    Sibener, S. J.; Buss, R. J.; Lee, Y. T.

    1978-05-01

    A high pressure, supersonic, radio frequency discharge nozzle beam source was developed for the production of intense beams of ground state oxygen atoms. An efficient impedance matching scheme was devised for coupling the radio frequency power to the plasma as a function of both gas pressure and composition. Techniques for localizing the discharge directly behind the orifice of a water-cooled quartz nozzle were also developed. The above combine to yield an atomic oxygen beam source which produces high molecular dissociation in oxygen seeded rare gas mixtures at total pressures up to 200 torr: 80 to 90% dissociation for oxygen/argon mixtures and 60 to 70% for oxygen/helium mixtures. Atomic oxygen intensities are found to be greater than 10^17 atoms sr^-1 sec^-1. A brief discussion of the reaction dynamics of O + ICl → IO + Cl is also presented.

  7. An iterative method for the localization of a neutron source in a large box (container)

    NASA Astrophysics Data System (ADS)

    Dubinski, S.; Presler, O.; Alfassi, Z. B.

    2007-12-01

    The localization of an unknown neutron source in a bulky box was studied. This can be used for the inspection of cargo, to prevent the smuggling of neutron and α emitters. It is important to localize the source from the outside for safety reasons, and source localization is necessary in order to determine the source's activity. A previous study showed that, by using six detectors, three on each parallel face of the box (460×420×200 mm³), the location of the source can be found with an average distance of 4.73 cm between the real source position and the calculated one, and a maximal distance of about 9 cm. Accuracy was improved in this work by applying an iterative method based on four fixed detectors and successive repositioning of an external calibrating source. The initial position of the calibrating source is in the plane of detectors 1 and 2. This method finds the unknown source location with an average distance of 0.78 cm between the real source position and the calculated one, and a maximum distance of 3.66 cm for the same box. For larger boxes, localization without iterations requires an increase in the number of detectors, while localization with iterations requires only an increase in the number of iteration steps. In addition to source localization, two methods for determining the activity of the unknown source were also studied.
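The inverse-square geometry underlying such detector-based localization can be sketched with a toy model: a grid search for the position whose predicted count-rate pattern best matches the measured one. All detector positions and counts below are hypothetical; this is an illustration of the principle, not the paper's iterative calibration procedure.

```python
import numpy as np

def predicted_rates(src, detectors, activity=1.0):
    """Count rate at each detector from a point source (inverse-square law)."""
    d2 = np.sum((detectors - src) ** 2, axis=1)
    return activity / d2

def locate_source(counts, detectors, grid):
    """Grid search: position whose normalized rate pattern best matches counts."""
    best, best_err = None, np.inf
    for p in grid:
        pred = predicted_rates(p, detectors)
        # compare pattern shapes only, since the activity is unknown
        err = np.sum((pred / pred.sum() - counts / counts.sum()) ** 2)
        if err < best_err:
            best, best_err = p, err
    return best

# four detectors at the corners of a 46 cm x 42 cm face (hypothetical layout)
detectors = np.array([[0.0, 0.0], [46.0, 0.0], [0.0, 42.0], [46.0, 42.0]])
true_src = np.array([12.0, 30.0])
counts = predicted_rates(true_src, detectors)   # noiseless synthetic data

grid = np.array([[x, y] for x in np.arange(0.0, 46.0, 1.0)
                         for y in np.arange(0.0, 42.0, 1.0)])
est = locate_source(counts, detectors, grid)
print(est)  # close to the true position (12, 30)
```

With real, noisy counts the minimum would only approximate the true position, which is why the paper refines the estimate with an external calibrating source.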

  8. The Perspective of Riverbank Filtration in China

    NASA Astrophysics Data System (ADS)

    Li, J.; Teng, Y.; Zhai, Y.; Zuo, R.

    2014-12-01

    A sustainable drinking water supply affects both human health and the surrounding ecosystems. According to 2013 statistics from the monitoring program covering drinking water sources in 309 Chinese cities at or above the prefecture level, the major pollutant indices were total phosphorus, ammonia, and manganese in surface drinking water sources, and iron, ammonia, and manganese in groundwater drinking water sources. More than 150 drinking water emergency environmental accidents have occurred since 2006; 52 of these led to the disruption of water supply at waterworks, and a population of over ten million was affected. This indicates that using river water directly poses a potential health risk, and that alternative techniques such as riverbank filtration are needed to improve drinking water quality. Riverbank filtration is an inexpensive natural process that not only smooths out the normal pollutant concentrations found in surface water but also significantly reduces the risk from emergency events such as chemical spills into a river. The riverbank filtration technique has been used in many countries, including China, for more than 100 years. In China, the bank infiltration technique was first applied in the northeast in the 1950s. Extensive application followed in the 1980s, with more than 300 drinking water source utilities using bank infiltration, established mainly near the Songhua River, Yellow River, and Haihe River Basins. However, the comparative lack of application and research on riverbank filtration has created a critical scientific data gap in China, as the performance of the technique depends not only on design and setting (such as well type and pumping rate) but also on local hydrogeology and environmental properties. We recommend that more riverbank filtration projects and studies be conducted to collect the relevant environmental geology data in China. Additionally, experience has demonstrated a number of water quality improvements associated with riverbank filtration. It is important to stress that the fate and behavior of emerging organic contaminants during riverbank filtration should be given special consideration.

  9. Development and application of a reactive plume-in-grid model: evaluation over Greater Paris

    NASA Astrophysics Data System (ADS)

    Korsakissok, I.; Mallet, V.

    2010-02-01

    Emissions from major point sources are poorly represented by classical Eulerian models. Overestimation of horizontal plume dilution, poor representation of vertical diffusion, and incorrect estimates of chemical reaction rates are the main limitations of such models in the vicinity of major point sources. The plume-in-grid method is a multiscale modeling technique that couples a local-scale Gaussian puff model with an Eulerian model in order to better represent these emissions. We present the plume-in-grid model developed in the air quality modeling system Polyphemus, with full gaseous chemistry. The model is evaluated on the metropolitan Île-de-France region over six months (summer 2001). The subgrid-scale treatment is used for 89 major point sources, a selection based on the emission rates of NOx and SO2. Results with and without the subgrid treatment of point emissions are compared, and their performance against observations at measurement stations is assessed. A sensitivity study is also carried out on several local-scale parameters as well as on the vertical diffusion within the urban area. Primary pollutants are shown to be the most impacted by the plume-in-grid treatment, with a decrease in RMSE of up to about 17% for SO2 and 7% for NO at measurement stations. SO2 is the most impacted pollutant, since point sources account for an important part of total SO2 emissions, whereas NOx emissions are mostly due to traffic. The spatial impact of the subgrid treatment is localized in the vicinity of the sources, especially for reactive species (NOx and O3). Reactive species are mostly sensitive to the local-scale parameters, such as the time step between two puff emissions, which influences the in-plume chemical reactions, whereas the almost-passive species SO2 is more sensitive to the injection time, which determines the duration of the subgrid-scale treatment. 
Future developments include an extension to handle aerosol chemistry, and an application to the modeling of line sources in order to use the subgrid treatment with road emissions. The latter is expected to lead to more striking results, due to the importance of traffic emissions for the pollutants of interest.
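The local-scale Gaussian puff model coupled to the Eulerian host can be illustrated by the analytic concentration field of a single puff. This is a minimal sketch with hypothetical mass, position, and dispersion parameters; real puff models grow the sigmas with travel time and add ground reflections and chemistry.

```python
import numpy as np

def gaussian_puff(q, center, sigma, point):
    """Concentration at `point` from a puff of mass q centred at `center`,
    with dispersion parameters sigma = (sx, sy, sz)."""
    sx, sy, sz = sigma
    dx, dy, dz = np.asarray(point) - np.asarray(center)
    norm = q / ((2.0 * np.pi) ** 1.5 * sx * sy * sz)
    return norm * np.exp(-0.5 * (dx**2 / sx**2 + dy**2 / sy**2 + dz**2 / sz**2))

# hypothetical puff of 1 kg centred 50 m downwind at 10 m height,
# with 20 m dispersion in each direction
c_center = gaussian_puff(1.0, (50.0, 0.0, 10.0), (20.0, 20.0, 20.0), (50.0, 0.0, 10.0))
c_offset = gaussian_puff(1.0, (50.0, 0.0, 10.0), (20.0, 20.0, 20.0), (90.0, 0.0, 10.0))
print(c_center > c_offset)  # concentration peaks at the puff centre
```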

  10. Evidences on weaknesses and strengths from health financing after decentralization: lessons from Latin American countries.

    PubMed

    Arredondo, Armando; Orozco, Emanuel; De Icaza, Esteban

    2005-01-01

    The main objective was to identify trends and evidence on health financing after health care decentralization. The study was evaluative research with a before-after design integrating qualitative and quantitative analysis. Taking into account feasibility and political and technical criteria, three Latin American countries were selected as study populations: Mexico, Nicaragua, and Peru. The methodology had two main phases. In the first phase, the study drew on secondary sources of data and documents to obtain information about the following variables: type of decentralization implemented, source of finance, funds of financing, providers, final use of resources, and mechanisms for resource allocation. In the second phase, the study drew on primary data collected in a survey of key personnel from the health sectors of each country. The trends and evidence reported in all five financing indicators may identify major weaknesses and strengths in health financing. Weaknesses: a lack of human resources trained in health economics who can implement changes, a lack of financial resource independence between the local and central levels, the negative behavior of the main macro-economic variables, and the difficulty in developing new financing alternatives. Strengths: the sharing of responsibility for financing health services between the central and local levels, the implementation of new organizational structures for the follow-up of financial changes at the local level, the development and implementation of new financial allocation mechanisms based on efficiency and equity principles, a new per-capita adjustment factor corrected for local health needs, and increased financing contributions from households and local levels of government.

  11. Sensor fusion III: 3-D perception and recognition; Proceedings of the Meeting, Boston, MA, Nov. 5-8, 1990

    NASA Technical Reports Server (NTRS)

    Schenker, Paul S. (Editor)

    1991-01-01

    The volume on data fusion from multiple sources discusses fusing multiple views, temporal analysis and 3D motion interpretation, sensor fusion and eye-to-hand coordination, and integration in human shape perception. Attention is given to surface reconstruction, statistical methods in sensor fusion, fusing sensor data with environmental knowledge, computational models for sensor fusion, and evaluation and selection of sensor fusion techniques. Topics addressed include the structure of a scene from two and three projections, optical flow techniques for moving target detection, tactical sensor-based exploration in a robotic environment, and the fusion of human and machine skills for remote robotic operations. Also discussed are K-nearest-neighbor concepts for sensor fusion, surface reconstruction with discontinuities, a sensor-knowledge-command fusion paradigm for man-machine systems, coordinating sensing and local navigation, and terrain map matching using multisensing techniques for applications to autonomous vehicle navigation.

  12. New efficient optimizing techniques for Kalman filters and numerical weather prediction models

    NASA Astrophysics Data System (ADS)

    Famelis, Ioannis; Galanis, George; Liakatas, Aristotelis

    2016-06-01

    The need for accurate local environmental predictions and simulations beyond classical meteorological forecasts has been increasing in recent years due to the great number of applications that are directly or indirectly affected: renewable energy resource assessment, natural hazards early warning systems, global warming, and questions on climate change can be listed among them. Within this framework, the utilization of numerical weather and wave prediction systems in conjunction with advanced statistical techniques that support the elimination of the model bias and the reduction of the error variability may successfully address the above issues. In the present work, new optimization methods are studied and tested in selected areas of Greece where the use of renewable energy sources is of critical importance. The added value of the proposed work lies in the solid mathematical background adopted, making use of Information Geometry and statistical techniques, new versions of Kalman filters, and state-of-the-art numerical analysis tools.
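A Kalman filter used for forecast bias elimination can be sketched as a scalar filter whose state is the slowly varying forecast bias, observed through noisy forecast errors. The noise variances and the error series below are hypothetical, chosen only to show the mechanism.

```python
def kalman_bias_filter(errors, q=0.01, r=1.0):
    """Track the forecast bias x_t from observed forecast errors y_t = x_t + noise.
    q: process noise variance (how fast the bias drifts), r: observation noise."""
    x, p = 0.0, 1.0          # initial bias estimate and its variance
    estimates = []
    for y in errors:
        p = p + q            # predict: bias persists, uncertainty grows
        k = p / (p + r)      # Kalman gain
        x = x + k * (y - x)  # update with the newly observed error
        p = (1.0 - k) * p
        estimates.append(x)
    return estimates

# forecasts carrying a constant bias of about +2.0 plus noise (hypothetical)
errs = [2.3, 1.8, 2.1, 2.2, 1.9, 2.0, 2.05, 1.95]
est = kalman_bias_filter(errs)
print(est[-1])  # the estimate converges toward the true bias of about 2
```

Subtracting the current bias estimate from each new forecast is what "eliminates the model bias" in such post-processing schemes.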

  13. Nanoscale deformation analysis with high-resolution transmission electron microscopy and digital image correlation

    DOE PAGES

    Wang, Xueju; Pan, Zhipeng; Fan, Feifei; ...

    2015-09-10

    We present an application of the digital image correlation (DIC) method to high-resolution transmission electron microscopy (HRTEM) images for nanoscale deformation analysis. The combination of DIC and HRTEM offers both the ultrahigh spatial resolution and high displacement detection sensitivity that are not possible with other microscope-based DIC techniques. We demonstrate the accuracy and utility of the HRTEM-DIC technique through displacement and strain analysis on amorphous silicon. Two types of error sources resulting from the transmission electron microscopy (TEM) image noise and electromagnetic-lens distortions are quantitatively investigated via rigid-body translation experiments. The local and global DIC approaches are applied for the analysis of diffusion- and reaction-induced deformation fields in electrochemically lithiated amorphous silicon. As a result, the DIC technique coupled with HRTEM provides a new avenue for the deformation analysis of materials at the nanometer length scales.
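At its core, DIC matches a subset of the reference image against the deformed image; for a rigid-body translation, the integer-pixel displacement can be recovered with a simple sum-of-squared-differences search. This is a synthetic sketch of that matching step, not the paper's HRTEM pipeline; image size, subset size, and search range are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
ref = rng.random((64, 64))                 # synthetic reference "image"
shift = (3, 5)                             # imposed rigid-body displacement
cur = np.roll(ref, shift, axis=(0, 1))     # "deformed" image: pure translation

def dic_displacement(ref, cur, window=32, search=8):
    """Find the integer displacement of a central subset by SSD matching."""
    r0 = (ref.shape[0] - window) // 2
    c0 = (ref.shape[1] - window) // 2
    subset = ref[r0:r0 + window, c0:c0 + window]
    best, best_ssd = (0, 0), np.inf
    for dr in range(-search, search + 1):
        for dc in range(-search, search + 1):
            cand = cur[r0 + dr:r0 + dr + window, c0 + dc:c0 + dc + window]
            ssd = np.sum((subset - cand) ** 2)
            if ssd < best_ssd:
                best, best_ssd = (dr, dc), ssd
    return best

print(dic_displacement(ref, cur))  # recovers the imposed shift (3, 5)
```

Real DIC adds subpixel interpolation and strain computation from the displacement field; rigid-body translations like this one are exactly how the paper characterizes noise- and distortion-induced errors.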

  14. Sound source localization identification accuracy: Envelope dependencies.

    PubMed

    Yost, William A

    2017-07-01

    Sound source localization accuracy as measured in an identification procedure in a front azimuth sound field was studied for click trains, modulated noises, and a modulated tonal carrier. Sound source localization accuracy was determined as a function of the number of clicks in a 64 Hz click train and click rate for a 500 ms duration click train. The clicks were either broadband or high-pass filtered. Sound source localization accuracy was also measured for a single broadband filtered click and compared to a similar broadband filtered, short-duration noise. Sound source localization accuracy was determined as a function of sinusoidal amplitude modulation and the "transposed" process of modulation of filtered noises and a 4 kHz tone. Different rates (16 to 512 Hz) of modulation (including unmodulated conditions) were used. Providing modulation for filtered click stimuli, filtered noises, and the 4 kHz tone had, at most, a very small effect on sound source localization accuracy. These data suggest that amplitude modulation, while providing information about interaural time differences in headphone studies, does not have much influence on sound source localization accuracy in a sound field.

  15. Local-Scale Exposure Assessment of Air Pollutants in Source-Impacted Neighborhoods in Detroit, MI (Invited)

    NASA Astrophysics Data System (ADS)

    Vette, A. F.; Bereznicki, S.; Sobus, J.; Norris, G.; Williams, R.; Batterman, S.; Breen, M.; Isakov, V.; Perry, S.; Heist, D.; Community Action Against Asthma Steering Committee

    2010-12-01

    There has been growing interest in improving local-scale (< 1-km) exposure assessments to better understand the impact of local sources of air pollutants on adverse health outcomes. This paper describes two research studies aimed at understanding the impact of local sources contributing to spatial gradients at the neighborhood scale in Detroit, MI. The first study, the Detroit Exposure and Aerosol Research Study (DEARS), was designed to assess how air pollutant concentrations derived from local and regional sources affect community, neighborhood, and personal exposures to air pollutants. Homes were identified at random in six different neighborhoods throughout Wayne County, MI that varied in proximity to local industrial and mobile sources. Data were collected in summer (July-August) and winter (January-March) at a total of 135 homes over a three-year period (2004-2007). For five consecutive days at each home, in both summer and winter, concurrent samples were collected of personal exposures, of residential indoor and outdoor concentrations, and at a community monitoring site. The samples were analyzed for PM2.5 (mass and composition), air toxics, O3 and NO2. The second study is on-going and focuses on characterizing the impacts of mobile sources on near-road air quality and exposures among a cohort of asthmatic children. The Near-road EXposures and effects from Urban air pollutants Study (NEXUS) is designed to examine the relationship between near-road exposures to traffic-related air pollutants (BC, CO, NOx and PM components) and respiratory health of asthmatic children who live close to major roadways. The study will investigate the effects of traffic-associated exposures on exaggerated airway responses, biomolecular responses of inflammatory and oxidative stress, and how these exposures affect the frequency and severity of adverse respiratory outcomes. 
The study will also examine different near-road exposure assessment metrics, including monitoring and modeling techniques. Concentrations of traffic-related air pollutants will be measured and modeled indoors and outdoors of the children’s homes. Measurements will be made in a subset of homes each during fall 2010 and early spring 2011. High-time resolution measurements will be made of the chemical composition of traffic-related pollutants in the gas and particle phases adjacent to selected roadways. These data will be used to quantify the impact of traffic on the observed air quality data. Air pollutant dispersion and exposure models will be used in combination with measured data to estimate indoor/outdoor concentrations and personal exposures. Near-road spatial concentration patterns will be estimated at the children’s residences and schools across the study domain using dispersion modeling. These data will be used as input for an individual-level exposure model to estimate personal exposures from meteorology and questionnaire data on indoor sources, residential characteristics and operation, and time-location-activity patterns.

  16. Coding Strategies and Implementations of Compressive Sensing

    NASA Astrophysics Data System (ADS)

    Tsai, Tsung-Han

    This dissertation studies coding strategies for computational imaging that overcome the limitations of conventional sensing techniques. The information capacity of conventional sensing is limited by the physical properties of optics, such as aperture size, detector pixels, quantum efficiency, and sampling rate. These parameters determine the spatial, depth, spectral, temporal, and polarization sensitivity of each imager; increasing sensitivity in any one dimension can significantly compromise the others. This research applies various coding strategies to optical multidimensional imaging and acoustic sensing in order to extend their sensing abilities. The proposed coding strategies combine hardware modification and signal processing to exploit bandwidth and sensitivity beyond conventional sensors. We discuss the hardware architecture, compression strategies, sensing process modeling, and reconstruction algorithm of each sensing system. Optical multidimensional imaging measures three or more dimensions of the optical signal. Traditional multidimensional imagers acquire extra dimensional information at the cost of degraded temporal or spatial resolution. Compressive multidimensional imaging multiplexes the transverse spatial, spectral, temporal, and polarization information onto a two-dimensional (2D) detector. The corresponding spectral, temporal, and polarization coding strategies adapt optics, electronic devices, and designed modulation techniques for multiplexed measurement. This computational imaging technique provides multispectral, temporal super-resolution, and polarization imaging abilities with minimal loss in spatial resolution and noise level while maintaining or gaining temporal resolution. The experimental results show that appropriate coding strategies can improve sensing capacity by hundreds of times. 
The human auditory system has an astonishing ability to localize, track, and filter selected sound sources or information from a noisy environment. Accomplishing the same task with engineering tools usually requires multiple detectors, advanced computational algorithms, or artificial intelligence systems. Compressive acoustic sensing incorporates acoustic metamaterials into compressive sensing theory to emulate sound localization and selective attention. This research investigates and optimizes the sensing capacity and spatial sensitivity of the acoustic sensor. The well-modeled acoustic sensor allows multiple speakers to be localized in both stationary and dynamic auditory scenes, and mixed conversations from independent sources to be distinguished with a high audio recognition rate.
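The reconstruction side of compressive sensing can be illustrated with the iterative soft-thresholding algorithm (ISTA), which recovers a sparse signal from far fewer random measurements than its length. The dimensions, sparsity level, and regularization parameter below are hypothetical, and ISTA is one standard solver among several.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, k = 100, 40, 3                       # signal length, measurements, sparsity
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.normal(0.0, 1.0, k)

A = rng.normal(0.0, 1.0 / np.sqrt(m), (m, n))   # random Gaussian sensing matrix
y = A @ x_true                                   # compressive measurements

def ista(A, y, lam=0.01, n_iter=500):
    """Iterative soft-thresholding for min ||Ax - y||^2 / 2 + lam * ||x||_1."""
    L = np.linalg.norm(A, 2) ** 2                # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = x - (A.T @ (A @ x - y)) / L          # gradient step on the data term
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # shrinkage
    return x

x_hat = ista(A, y)
rel_err = np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)
print(rel_err)  # small: the sparse signal is recovered from 40 of 100 samples
```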

  17. Application of time-resolved shadowgraph imaging and computer analysis to study micrometer-scale response of superfluid helium

    NASA Astrophysics Data System (ADS)

    Sajjadi, Seyed; Buelna, Xavier; Eloranta, Jussi

    2018-01-01

    Application of inexpensive light emitting diodes as backlight sources for time-resolved shadowgraph imaging is demonstrated. The two light sources tested are able to produce light pulse sequences in the nanosecond and microsecond time regimes. After determining their time response characteristics, the diodes were applied to study the gas bubble formation around laser-heated copper nanoparticles in superfluid helium at 1.7 K and to determine the local cavitation bubble dynamics around fast moving metal micro-particles in the liquid. A convolutional neural network algorithm for analyzing the shadowgraph images by a computer is presented and the method is validated against the results from manual image analysis. The second application employed the red-green-blue light emitting diode source that produces light pulse sequences of the individual colors such that three separate shadowgraph frames can be recorded onto the color pixels of a charge-coupled device camera. Such an image sequence can be used to determine the moving object geometry, local velocity, and acceleration/deceleration. These data can be used to calculate, for example, the instantaneous Reynolds number for the liquid flow around the particle. Although specifically demonstrated for superfluid helium, the technique can be used to study the dynamic response of any medium that exhibits spatial variations in the index of refraction.
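The instantaneous Reynolds number mentioned above follows directly from the measured particle size and velocity. A minimal sketch with hypothetical fluid and particle values (for illustration only, not measured data from the experiment):

```python
def reynolds_number(density, velocity, length, viscosity):
    """Re = rho * v * d / mu for flow around a particle of diameter d."""
    return density * velocity * length / viscosity

# hypothetical values: a 10 um metal micro-particle moving at 1 m/s in a
# liquid with density 145 kg/m^3 and dynamic viscosity 2.5e-6 Pa s
re = reynolds_number(145.0, 1.0, 10e-6, 2.5e-6)
print(re)  # about 580
```

In the imaging method described, the velocity entering this formula comes from the particle positions in successive color-separated shadowgraph frames.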

  18. Influence of local parameters on the dispersion of traffic-related pollutants within street canyons

    NASA Astrophysics Data System (ADS)

    Karra, Styliani; Malki-Epshtein, Liora; Martin Hyde Collaboration

    2011-11-01

    Ventilation within urban cities and street canyons, and the associated air quality, has been a problem of increasing interest in recent decades. It is important to minimise the exposure of the population to traffic-related pollutants at street level. The residence time of pollutants within street canyons depends on meteorological conditions such as wind speed and direction, the geometry layout, and local parameters (the position of the traffic lane within the street). An experimental study was carried out to investigate the influence of traffic lane position on the dispersion of traffic-related pollutants within different street canyon geometries: symmetrical (equal building heights on both sides of the street), non-symmetrical (uniform building heights but lower on one side of the street), and heterogeneous (non-uniform building heights on both sides of the street) under constant meteorological conditions. Laboratory experiments were carried out in a water channel with simultaneous measurements of the velocity field and scalar concentration levels within and above the street canyons using PIV and PLIF techniques. Traffic-related emissions were simulated using a line emission source. Two positions were examined for all street geometries: with the line emission source placed in the centre of the street canyon, and with it placed off the centre of the street.

  19. Self-similar slip distributions on irregular shaped faults

    NASA Astrophysics Data System (ADS)

    Herrero, A.; Murphy, S.

    2018-06-01

    We propose a strategy to place a self-similar slip distribution on a complex fault surface represented by an unstructured mesh. This is achieved with a strategy based on the composite source model, in which a hierarchical set of asperities is placed on the fault, each with its own slip function that depends on the distance from the asperity centre. Central to this technique is the efficient, accurate computation of the distance between two points on the fault surface, known as the geodetic distance problem. We propose a method to compute the distance across complex non-planar surfaces based on a corollary of Huygens' principle. The difference between this method and the sample-based algorithms that precede it is the use of a curved front at a local level to calculate the distance. This technique produces a highly accurate computation of the distance, as the curvature of the front is linked to the distance from the source. Our local scheme is based on a sequence of two trilaterations, producing a robust and highly precise algorithm. We test the strategy on a planar surface in order to assess its ability to preserve the self-similarity properties of a slip distribution. We also present a synthetic self-similar slip distribution on a real slab topography for a M8.5 event. This method for computing distance may be extended to the estimation of first arrival times in both complex 3D surfaces and 3D volumes.
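The composite source model referred to above can be sketched by summing crack-like slip contributions from a set of circular asperities, each a function of distance from its centre. The asperity layout below is hypothetical, and the surface is a plane rather than an unstructured mesh, so plain Euclidean distance stands in for the geodetic distance computation.

```python
import numpy as np

def asperity_slip(centers, radii, points):
    """Composite-source slip: each circular asperity contributes a crack-like
    slip profile sqrt(R^2 - r^2) inside its radius; contributions are summed."""
    slip = np.zeros(len(points))
    for c, R in zip(centers, radii):
        r2 = np.sum((points - c) ** 2, axis=1)   # squared distance to centre
        slip += np.sqrt(np.maximum(R**2 - r2, 0.0))
    return slip

# hypothetical hierarchical set: one large asperity and two smaller ones
centers = np.array([[5.0, 5.0], [2.0, 7.0], [7.5, 3.0]])
radii = np.array([4.0, 1.5, 1.0])
pts = np.array([[5.0, 5.0], [9.9, 9.9]])
print(asperity_slip(centers, radii, pts))  # large slip at the centre, zero far away
```

On a real mesh, `r2` would be replaced by the squared geodesic (surface) distance computed by a front-propagation scheme such as the one proposed in the paper.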

  20. BSDWormer; an Open Source Implementation of a Poisson Wavelet Multiscale Analysis for Potential Fields

    NASA Astrophysics Data System (ADS)

    Horowitz, F. G.; Gaede, O.

    2014-12-01

    Wavelet multiscale edge analysis of potential fields (a.k.a. "worms") has been known since Moreau et al. (1997) and was independently derived by Hornby et al. (1999). The technique is useful for producing a scale-explicit overview of the structures beneath a gravity or magnetic survey, including establishing the location and estimating the attitude of surface features, as well as incorporating information about the geometric class (point, line, surface, volume, fractal) of the underlying sources — in a fashion much like traditional structural indices from Euler solutions albeit with better areal coverage. Hornby et al. (2002) show that worms form the locally highest concentration of horizontal edges of a given strike — which in conjunction with the results from Mallat and Zhong (1992) induces a (non-unique!) inversion where the worms are physically interpretable as lateral boundaries in a source distribution that produces a close approximation of the observed potential field. The technique has enjoyed widespread adoption and success in the Australian mineral exploration community — including "ground truth" via successfully drilling structures indicated by the worms. Unfortunately, to our knowledge, all implementations of the code to calculate the worms/multiscale edges (including Horowitz' original research code) are either part of commercial software packages, or have copyright restrictions that impede the use of the technique by the wider community. The technique is completely described mathematically in Hornby et al. (1999) along with some later publications. This enables us to re-implement from scratch the code required to calculate and visualize the worms. We are freely releasing the results under an (open source) BSD two-clause software license. A git repository is available at . 
We will give an overview of the technique, show code snippets using the codebase, and present visualization results for example datasets (including the Surat basin of Australia, and the Lake Ontario region of North America). We invite you to join us in creating and using the best worming software for potential fields in existence — as both gratis and libre software!
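The Poisson (upward-continuation) kernel at the heart of this multiscale analysis attenuates each wavenumber component by exp(-|k| h) at height h; worms are then tracked as horizontal-gradient maxima across a stack of such heights. A 1D numpy sketch on a synthetic profile (this is an illustration of the kernel, not the BSDWormer code itself):

```python
import numpy as np

def upward_continue(field, dx, h):
    """Continue a 1D potential-field profile from height 0 up to height h by
    multiplying its spectrum with exp(-|k| h), the Poisson kernel."""
    k = 2.0 * np.pi * np.fft.fftfreq(len(field), d=dx)   # angular wavenumbers
    return np.real(np.fft.ifft(np.fft.fft(field) * np.exp(-np.abs(k) * h)))

x = np.linspace(-50.0, 50.0, 512)
field = np.exp(-x**2 / 8.0)        # synthetic isolated anomaly

# the anomaly broadens and its peak decays as the continuation height grows
peaks = [upward_continue(field, x[1] - x[0], h).max() for h in (0.0, 2.0, 5.0)]
print([round(p, 3) for p in peaks])
```

In the 2D worming workflow, the horizontal-gradient maxima of each continued level are linked across heights to form the multiscale edges.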

  1. Source localization of rhythmic ictal EEG activity: a study of diagnostic accuracy following STARD criteria.

    PubMed

    Beniczky, Sándor; Lantz, Göran; Rosenzweig, Ivana; Åkeson, Per; Pedersen, Birthe; Pinborg, Lars H; Ziebell, Morten; Jespersen, Bo; Fuglsang-Frederiksen, Anders

    2013-10-01

    Although precise identification of the seizure-onset zone is an essential element of presurgical evaluation, source localization of ictal electroencephalography (EEG) signals has received little attention. The aim of our study was to estimate the accuracy of source localization of rhythmic ictal EEG activity using a distributed source model. Source localization of rhythmic ictal scalp EEG activity was performed in 42 consecutive cases fulfilling inclusion criteria. The study was designed according to recommendations for studies on diagnostic accuracy (STARD). The initial ictal EEG signals were selected using a standardized method, based on frequency analysis and voltage distribution of the ictal activity. A distributed source model, local autoregressive average (LAURA), was used for the source localization. Sensitivity, specificity, and measurement of agreement (kappa) were determined based on the reference standard: the consensus conclusion of the multidisciplinary epilepsy surgery team. Predictive values were calculated from the surgical outcome of the operated patients. To estimate the clinical value of the ictal source analysis, we compared the likelihood ratios of concordant and discordant results. Source localization was performed blinded to the clinical data, and before the surgical decision. Reference standard was available for 33 patients. The ictal source localization had a sensitivity of 70% and a specificity of 76%. The mean measurement of agreement (kappa) was 0.61, corresponding to substantial agreement (95% confidence interval (CI) 0.38-0.84). Twenty patients underwent resective surgery. The positive predictive value (PPV) for seizure freedom was 92% and the negative predictive value (NPV) was 43%. The likelihood ratio was nine times higher for the concordant results, as compared with the discordant ones. 
Source localization of rhythmic ictal activity using a distributed source model (LAURA) for the ictal EEG signals selected with a standardized method is feasible in clinical practice and has a good diagnostic accuracy. Our findings encourage clinical neurophysiologists assessing ictal EEGs to include this method in their armamentarium. Wiley Periodicals, Inc. © 2013 International League Against Epilepsy.
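The accuracy figures reported in a STARD-style study of this kind all derive from a 2x2 confusion table. A short sketch with hypothetical counts (not the study's raw data):

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, PPV, NPV, and positive likelihood ratio
    from a 2x2 confusion table (test result vs. reference standard)."""
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    ppv = tp / (tp + fp)
    npv = tn / (tn + fn)
    lr_pos = sens / (1.0 - spec)   # how much a positive result raises the odds
    return sens, spec, ppv, npv, lr_pos

# hypothetical counts for 33 patients with a reference standard
sens, spec, ppv, npv, lr = diagnostic_metrics(tp=14, fp=4, fn=6, tn=9)
print(round(sens, 2), round(spec, 2))  # 0.7 0.69
```

PPV and NPV, unlike sensitivity and specificity, depend on the prevalence of the condition in the studied cohort, which is why the paper computes them only for the operated patients.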

  2. Continuous, edge localized ion heating during non-solenoidal plasma startup and sustainment in a low aspect ratio tokamak

    NASA Astrophysics Data System (ADS)

    Burke, M. G.; Barr, J. L.; Bongard, M. W.; Fonck, R. J.; Hinson, E. T.; Perry, J. M.; Reusch, J. A.; Schlossberg, D. J.

    2017-07-01

    Plasmas in the Pegasus spherical tokamak are initiated and grown by the non-solenoidal local helicity injection (LHI) current drive technique. The LHI system consists of three adjacent electron current sources that inject multiple helical current filaments that can reconnect with each other. Anomalously high impurity ion temperatures are observed during LHI, with T i,OV ≤ 650 eV, in contrast to T i,OV ≤ 70 eV from Ohmic heating alone. Spatial profiles of T i,OV indicate an edge localized heating source, with T i,OV ~ 650 eV near the outboard major radius of the injectors, dropping to ~150 eV near the plasma magnetic axis. Experiments without a background tokamak plasma indicate that the ion heating results from magnetic reconnection between adjacent injected current filaments. In these experiments, the HeII T i perpendicular to the magnetic field is found to scale with the reconnecting field strength, local density, and guide field, while T i,∥ experiences little change, in agreement with two-fluid reconnection theory. This ion heating is not expected to significantly impact LHI plasma performance in Pegasus, as it does not contribute significantly to electron heating. However, estimates of the power transfer to the bulk ions are quite large, and thus LHI current drive provides an auxiliary ion heating mechanism to the tokamak plasma.

  3. Continuous, edge localized ion heating during non-solenoidal plasma startup and sustainment in a low aspect ratio tokamak

    DOE PAGES

    Burke, Marcus G.; Barr, Jayson L.; Bongard, Michael W.; ...

    2017-05-16

    Plasmas in the Pegasus spherical tokamak are initiated and grown by the non-solenoidal local helicity injection (LHI) current drive technique. The LHI system consists of three adjacent electron current sources that inject multiple helical current filaments that can reconnect with each other. Anomalously high impurity ion temperatures are observed during LHI, with T i,OV ≤ 650 eV, in contrast to T i,OV ≤ 70 eV from Ohmic heating alone. Spatial profiles of T i,OV indicate an edge localized heating source, with T i,OV ~ 650 eV near the outboard major radius of the injectors, dropping to ~150 eV near the plasma magnetic axis. Experiments without a background tokamak plasma indicate that the ion heating results from magnetic reconnection between adjacent injected current filaments. In these experiments, the HeII T i perpendicular to the magnetic field is found to scale with the reconnecting field strength, local density, and guide field, while T i,∥ experiences little change, in agreement with two-fluid reconnection theory. In conclusion, this ion heating is not expected to significantly impact LHI plasma performance in Pegasus, as it does not contribute significantly to electron heating. However, estimates of the power transfer to the bulk ions are quite large, and thus LHI current drive provides an auxiliary ion heating mechanism to the tokamak plasma.

  4. Hydrodynamic simulation and particle-tracking techniques for identification of source areas to public-water intakes on the St. Clair-Detroit river waterway in the Great Lakes Basin

    USGS Publications Warehouse

    Holtschlag, David J.; Koschik, John A.

    2004-01-01

    Source areas to public water intakes on the St. Clair-Detroit River Waterway were identified by use of hydrodynamic simulation and particle-tracking analyses to help protect public supplies from contaminant spills and discharges. This report describes techniques used to identify these areas and illustrates typical results using selected points on St. Clair River and Lake St. Clair. Parameterization of an existing two-dimensional hydrodynamic model (RMA2) of the St. Clair-Detroit River Waterway was enhanced to improve estimation of local flow velocities. Improvements in simulation accuracy were achieved by computing channel roughness coefficients as a function of flow depth, and determining eddy viscosity coefficients on the basis of velocity data. The enhanced parameterization was combined with refinements in the model mesh near 13 public water intakes on the St. Clair-Detroit River Waterway to improve the resolution of flow velocities while maintaining consistency with flow and water-level data. Scenarios representing a range of likely flow and wind conditions were developed for hydrodynamic simulation. Particle-tracking analyses combined advective movements described by hydrodynamic scenarios with random components associated with sub-grid-scale movement and turbulent mixing to identify source areas to public water intakes.
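    The particle-tracking step described above can be sketched in one dimension: each particle advances by an advective displacement taken from the simulated velocity field, plus a random kick representing sub-grid-scale movement and turbulent mixing. The velocity and diffusivity values below are illustrative, not taken from the report:

```python
import random

# Minimal 1-D particle-tracking sketch: advection by a (here constant)
# velocity u plus a Gaussian random-walk term with diffusivity D.

def track_particles(n_particles, n_steps, u=0.5, D=0.01, dt=1.0, seed=42):
    rng = random.Random(seed)
    positions = [0.0] * n_particles
    for _ in range(n_steps):
        for i in range(n_particles):
            # advective displacement + random diffusive kick ~ N(0, 2*D*dt)
            positions[i] += u * dt + rng.gauss(0.0, (2.0 * D * dt) ** 0.5)
    return positions

final = track_particles(n_particles=200, n_steps=100)
mean_x = sum(final) / len(final)   # close to u * n_steps * dt = 50
```

    Running the scheme backwards in time from an intake location, rather than forwards from a spill, is how source areas to an intake are delineated.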

  5. Mesh-based phase contrast Fourier transform imaging

    NASA Astrophysics Data System (ADS)

    Tahir, Sajjad; Bashir, Sajid; MacDonald, C. A.; Petruccelli, Jonathan C.

    2017-04-01

    Traditional x-ray radiography is limited by low attenuation contrast in materials of low electron density. Phase contrast imaging offers the potential to improve the contrast between such materials, but due to the requirements on the spatial coherence of the x-ray beam, practical implementation of such systems with tabletop (i.e. non-synchrotron) sources has been limited. One phase imaging technique employs multiple fine-pitched gratings. However, the strict manufacturing tolerances and precise alignment requirements have limited the widespread adoption of grating-based techniques. In this work, we have investigated a recently developed technique that utilizes a single grid of much coarser pitch. Our system consisted of a low power 100 μm spot Mo source, a CCD with 22 μm pixel pitch, and either a focused mammography linear grid or a stainless steel woven mesh. Phase is extracted from a single image by windowing and comparing data localized about harmonics of the mesh in the Fourier domain. The effects of varying grid type and period, and of varying the width of the window function used to separate the harmonics, on the diffraction phase contrast and scattering amplitude images were investigated. Using the wire mesh, derivatives of the phase along two orthogonal directions were obtained and combined to form improved phase contrast images.
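    The Fourier-domain step can be sketched as follows: the grid imposes a carrier on the image, and windowing one harmonic of the carrier in the 2-D spectrum, then inverse-transforming it, recovers the phase imparted to that harmonic. Image size, carrier frequency, and window width below are invented for the sketch:

```python
import numpy as np

# Single-shot phase extraction by windowing a mesh harmonic in Fourier space.
N, f0, win = 128, 16, 8           # image size, carrier (cycles/image), half-width
x = np.arange(N)
phi_true = 0.5                    # a uniform test phase shift (radians)
row = 1.0 + np.cos(2 * np.pi * f0 * x / N + phi_true)
img = np.tile(row, (N, 1))        # 2-D "mesh" image with the test phase

F = np.fft.fftshift(np.fft.fft2(img))
cy, cx = N // 2, N // 2
# window the +1 harmonic; cropping it re-centred removes the carrier
H = F[cy - win:cy + win, cx + f0 - win:cx + f0 + win]
field = np.fft.ifft2(np.fft.ifftshift(H))
phi_rec = np.angle(field)         # low-resolution phase map, ~phi_true everywhere
```

    In the mesh case the same operation is applied to two orthogonal harmonics, giving phase derivatives along both directions.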

  6. Infant phantom head circuit board for EEG head phantom and pediatric brain simulation

    NASA Astrophysics Data System (ADS)

    Almohsen, Safa

    The infant's skull differs from an adult skull because of the characteristic features of the human skull during early development. The fontanels and the conductivity of the infant skull influence surface currents, generated by neurons, which underlie electroencephalography (EEG) signals. An electric circuit was built to power a set of simulated neural sources for an infant brain activity simulator. Also, in the simulator, three phantom tissues were created using saline solution plus Agarose gel to mimic the conductivity of each layer in the head (scalp, skull, brain). The conductivity measurement was accomplished by two different techniques: the four-point measurement technique and a conductivity meter. Test results showed that the optimized phantom tissues had appropriate conductivities to simulate each tissue layer in a fabricated physical head phantom. The next step is to test the electrical neural circuit with the physical model to generate simulated EEG data, which can then be used to solve both the forward and the inverse problems for localizing the neural sources in the head phantom.
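    A hedged sketch of the four-point measurement mentioned above: current is driven through the outer electrode pair while voltage is sensed across the inner pair, so electrode contact resistance drops out of the reading. For a uniform gel column the conductivity then follows from simple geometry; all values below are illustrative:

```python
# Four-point (four-terminal) conductivity of a uniform bar of gel:
# sigma = L / (R * A), with R from the inner-pair voltage only.

def conductivity_four_point(current_a, voltage_v, spacing_m, area_m2):
    resistance = voltage_v / current_a            # ohms, contact-free
    return spacing_m / (resistance * area_m2)     # S/m

# e.g. 1 mA driving 25 mV across 1 cm of a 2 cm^2 gel column
sigma = conductivity_four_point(1e-3, 25e-3, 1e-2, 2e-4)   # 2.0 S/m
```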

  7. Telecommunication Platforms for Transmitting Sensor Data over Communication Networks-State of the Art and Challenges.

    PubMed

    Staniec, Kamil; Habrych, Marcin

    2016-07-19

    The importance of constructing wide-area sensor networks for holistic environmental state evaluation has been demonstrated. A general structure of such a network is presented, with a distinction between three segments: local (based on ZigBee, Ethernet and ModBus techniques), core (based on cellular technologies) and storage/application. The implementation of these techniques requires knowledge of their technical limitations and electromagnetic compatibility issues. The former refer to ZigBee performance degradation in multi-hop transmission, whereas the latter are associated with sharing the electromagnetic spectrum with other existing technologies, or with undesired radiated emissions generated by the radio modules of the sensor network. In many cases, it is also necessary to provide a measurement station with an autonomous energy source, such as solar panels. As follows from measurements of the energetic efficiency of these sources, one should apply them with care and perform a detailed power budget, since their real performance may turn out to be far from expected. This, in turn, may negatively affect, in particular, the operation of chemical sensors in the network, as they often require additional heating.

  8. Telecommunication Platforms for Transmitting Sensor Data over Communication Networks—State of the Art and Challenges

    PubMed Central

    Staniec, Kamil; Habrych, Marcin

    2016-01-01

    The importance of constructing wide-area sensor networks for holistic environmental state evaluation has been demonstrated. A general structure of such a network is presented, with a distinction between three segments: local (based on ZigBee, Ethernet and ModBus techniques), core (based on cellular technologies) and storage/application. The implementation of these techniques requires knowledge of their technical limitations and electromagnetic compatibility issues. The former refer to ZigBee performance degradation in multi-hop transmission, whereas the latter are associated with sharing the electromagnetic spectrum with other existing technologies, or with undesired radiated emissions generated by the radio modules of the sensor network. In many cases, it is also necessary to provide a measurement station with an autonomous energy source, such as solar panels. As follows from measurements of the energetic efficiency of these sources, one should apply them with care and perform a detailed power budget, since their real performance may turn out to be far from expected. This, in turn, may negatively affect, in particular, the operation of chemical sensors in the network, as they often require additional heating. PMID:27447633

  9. HPSLPred: An Ensemble Multi-Label Classifier for Human Protein Subcellular Location Prediction with Imbalanced Source.

    PubMed

    Wan, Shixiang; Duan, Yucong; Zou, Quan

    2017-09-01

    Predicting the subcellular localization of proteins is an important and challenging problem. Traditional experimental approaches are often expensive and time-consuming. Consequently, a growing number of research efforts employ a series of machine learning approaches to predict the subcellular location of proteins. There are two main challenges among the state-of-the-art prediction methods. First, most of the existing techniques are designed to deal with multi-class rather than multi-label classification, which ignores connections between multiple labels. In reality, multiple locations of particular proteins imply vital and unique biological significances that deserve special focus and cannot be ignored. Second, techniques for handling imbalanced data in multi-label classification problems are necessary, but have not previously been employed. To address these two issues, we developed an ensemble multi-label classifier called HPSLPred, which can be applied to multi-label classification with an imbalanced protein source. For convenience, a user-friendly webserver has been established at http://server.malab.cn/HPSLPred. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  10. Adaptive behaviors in multi-agent source localization using passive sensing.

    PubMed

    Shaukat, Mansoor; Chitre, Mandar

    2016-12-01

    In this paper, the role of adaptive group cohesion in a cooperative multi-agent source localization problem is investigated. A distributed source localization algorithm is presented for a homogeneous team of simple agents. An agent uses a single sensor to sense the gradient and two sensors to sense its neighbors. The algorithm is a set of individualistic and social behaviors, where the individualistic behavior is as simple as an agent keeping its previous heading and is not self-sufficient for localizing the source. Source localization is achieved as an emergent property through the agents' adaptive interactions with their neighbors and the environment. Given that a single agent is incapable of localizing the source, maintaining team connectivity at all times is crucial. Two simple temporal sampling behaviors, intensity-based adaptation and connectivity-based adaptation, ensure an efficient localization strategy with minimal agent breakaways. The agent behaviors are simultaneously optimized using a two-phase evolutionary optimization process. The optimized behaviors are estimated with analytical models, and the resulting collective behavior is validated against the agents' sensor and actuator noise, strong multi-path interference due to environmental variability, sensitivity to initialization distance, and loss of the source signal.

  11. Precipitation Recycling and the Vertical Distribution of Local and Remote Sources of Water for Precipitation

    NASA Technical Reports Server (NTRS)

    Bosilovich, Michael G.; Atlas, Robert (Technical Monitor)

    2002-01-01

    Precipitation recycling is defined as the amount of water that evaporates from a region that precipitates within the same region. This is also interpreted as the local source of water for precipitation. In this study, the local and remote sources of water for precipitation have been diagnosed through the use of passive constituent tracers that represent regional evaporative sources along with their transport and precipitation. We will discuss the differences between this method and the simpler bulk diagnostic approach to precipitation recycling. A summer seasonal simulation has been analyzed for the regional sources of the United States Great Plains precipitation. While the tropical Atlantic Ocean (including the Gulf of Mexico) and the local continental sources of precipitation are most dominant, the vertically integrated column of water contains substantial water content originating from the Northern Pacific Ocean, which is not precipitated. The vertical profiles of regional water sources indicate that the local Great Plains source of water dominates the lower troposphere, predominantly in the PBL. However, the Pacific Ocean source is dominant over a large portion of the middle to upper troposphere. The influence of the tropical Atlantic Ocean is reasonably uniform throughout the column. While the results are not unexpected given the formulation of the model's convective parameterization, the analysis provides a quantitative assessment of the impact of local evaporation on the occurrence of convective precipitation in the GCM. Further, these results suggest that the local source of water is not well mixed throughout the vertical column.

  12. Localization of transient gravitational wave sources: beyond triangulation

    NASA Astrophysics Data System (ADS)

    Fairhurst, Stephen

    2018-05-01

    Rapid, accurate localization of gravitational wave transient events has proved critical to successful electromagnetic followup. In previous papers we have shown that localization estimates can be obtained through triangulation based on timing information at the detector sites. In practice, detailed parameter estimation routines use additional information and provide better localization than is possible based on timing information alone. In this paper, we extend the timing-based localization approximation to incorporate consistency of observed signals with two gravitational wave polarizations, and an astrophysically motivated distribution of sources. Both of these provide significant improvements to source localization, allowing many sources to be restricted to a single sky region with an area 40% smaller than predicted by timing information alone. Furthermore, we show that the vast majority of sources will be reconstructed to be circularly polarized or, equivalently, indistinguishable from face-on.
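    The timing-only baseline that the paper extends can be illustrated with two sites: the arrival-time difference of a plane wave constrains the source direction to a cone about the baseline, i.e. a ring on the sky. The baseline length and angle below are illustrative:

```python
import math

C = 299792458.0  # speed of light, m/s

def time_delay(baseline_m, source_angle_rad):
    """Arrival-time difference between two sites for a plane wave arriving
    at the given angle from the baseline direction."""
    return baseline_m * math.cos(source_angle_rad) / C

def ring_angle(baseline_m, delay_s):
    """Invert the delay for the cone opening angle; all directions on this
    cone are indistinguishable from two-site timing alone."""
    return math.acos(delay_s * C / baseline_m)

dt = time_delay(3.0e6, math.pi / 3)   # ~3000 km baseline, delay of a few ms
theta = ring_angle(3.0e6, dt)         # recovers pi/3
```

    Adding a third site intersects two such rings, and the polarization and source-distribution priors discussed above shrink the remaining region further.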

  13. Cross-correlation, triangulation, and curved-wavefront focusing of coral reef sound using a bi-linear hydrophone array.

    PubMed

    Freeman, Simon E; Buckingham, Michael J; Freeman, Lauren A; Lammers, Marc O; D'Spain, Gerald L

    2015-01-01

    A seven-element, bi-linear hydrophone array was deployed over a coral reef in the Papahānaumokuākea Marine National Monument, Northwest Hawaiian Islands, in order to investigate the spatial, temporal, and spectral properties of biological sound in an environment free of anthropogenic influences. Local biological sound sources, including snapping shrimp and other organisms, produced curved-wavefront acoustic arrivals at the array, allowing source location via focusing to be performed over an area of 1600 m². Initially, however, a rough estimate of source location was obtained from triangulation of pair-wise cross-correlations of the sound. Refinements to these initial source locations, and source frequency information, were then obtained using two techniques, conventional and adaptive focusing. It was found that most of the sources were situated on or inside the reef structure itself, rather than over adjacent sandy areas. Snapping-shrimp-like sounds, all with similar spectral characteristics, originated from individual sources predominantly in one area to the east of the array. To the west, the spectral and spatial distributions of the sources were more varied, suggesting the presence of a multitude of heterogeneous biological processes. In addition to the biological sounds, some low-frequency noise due to distant breaking waves was received from end-fire north of the array.
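    The initial rough estimate rests on pair-wise cross-correlation: the lag of the correlation peak between two hydrophone channels gives the time-difference of arrival for that pair, which feeds the triangulation. A sketch with a synthetic broadband "snap" and an invented delay:

```python
import numpy as np

# Time-difference-of-arrival estimation by cross-correlation of two channels.
rng = np.random.default_rng(0)
sig = rng.standard_normal(500)                   # broadband snap-like signal
true_lag = 37                                    # delay in samples
h1 = np.concatenate([sig, np.zeros(true_lag)])   # near hydrophone
h2 = np.concatenate([np.zeros(true_lag), sig])   # far hydrophone: same snap, delayed

xcorr = np.correlate(h2, h1, mode="full")
est_lag = int(np.argmax(xcorr)) - (len(h1) - 1)  # peak offset -> TDOA in samples
```

    Dividing the lag by the sample rate and multiplying by the sound speed gives the range difference used in triangulation; curved-wavefront focusing then refines the result.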

  14. Development and application of a reactive plume-in-grid model: evaluation over Greater Paris

    NASA Astrophysics Data System (ADS)

    Korsakissok, I.; Mallet, V.

    2010-09-01

    Emissions from major point sources are badly represented by classical Eulerian models. An overestimation of the horizontal plume dilution, a bad representation of the vertical diffusion as well as an incorrect estimate of the chemical reaction rates are the main limitations of such models in the vicinity of major point sources. The plume-in-grid method is a multiscale modeling technique that couples a local-scale Gaussian puff model with an Eulerian model in order to better represent these emissions. We present the plume-in-grid model developed in the air quality modeling system Polyphemus, with full gaseous chemistry. The model is evaluated on the metropolitan Île-de-France region, during six months (summer 2001). The subgrid-scale treatment is used for 89 major point sources, a selection based on the emission rates of NOx and SO2. Results with and without the subgrid treatment of point emissions are compared, and their performance by comparison to the observations on measurement stations is assessed. A sensitivity study is also carried out, on several local-scale parameters as well as on the vertical diffusion within the urban area. Primary pollutants are shown to be the most impacted by the plume-in-grid treatment. SO2 is the most impacted pollutant, since the point sources account for an important part of the total SO2 emissions, whereas NOx emissions are mostly due to traffic. The spatial impact of the subgrid treatment is localized in the vicinity of the sources, especially for reactive species (NOx and O3). Ozone is mostly sensitive to the time step between two puff emissions which influences the in-plume chemical reactions, whereas the almost-passive species SO2 is more sensitive to the injection time, which determines the duration of the subgrid-scale treatment. Future developments include an extension to handle aerosol chemistry, and an application to the modeling of line sources in order to use the subgrid treatment with road emissions. 
The latter is expected to lead to more striking results, due to the importance of traffic emissions for the pollutants of interest.
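    The local-scale component of a plume-in-grid scheme is the Gaussian puff model named above. A minimal version of the puff concentration kernel (illustrative, without ground reflection, chemistry, or Polyphemus implementation details) is:

```python
import math

# Concentration at offset (dx, dy, dz) from the centre of a puff carrying
# mass q, with dispersion sigmas that grow as the puff is advected.

def puff_concentration(q, dx, dy, dz, sx, sy, sz):
    norm = q / ((2.0 * math.pi) ** 1.5 * sx * sy * sz)
    return norm * math.exp(-0.5 * ((dx / sx) ** 2 + (dy / sy) ** 2 + (dz / sz) ** 2))

# centre concentration falls as the puff grows; here sigmas grow tenfold
c_young = puff_concentration(1.0, 0, 0, 0, 10.0, 10.0, 5.0)
c_old = puff_concentration(1.0, 0, 0, 0, 100.0, 100.0, 50.0)
```

    In the plume-in-grid coupling, puffs are tracked at this local scale until the injection time, after which their mass is handed to the Eulerian grid.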

  15. An application of the theory of planned behaviour to study the influencing factors of participation in source separation of food waste

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Karim Ghani, Wan Azlina Wan Ab., E-mail: wanaz@eng.upm.edu.my; Rusli, Iffah Farizan, E-mail: iffahrusli@yahoo.com; Biak, Dayang Radiah Awang, E-mail: dayang@eng.upm.edu.my

    Highlights: ► The theory of planned behaviour (TPB) was applied to identify the factors influencing participation in source separation of food waste, using self-administered questionnaires. ► The findings suggest several implications for the development and implementation of a waste-separation-at-home programme. ► The analysis indicates that attitude towards waste separation is the main predictor, which in turn could be a significant predictor of the respondent's actual food waste separation behaviour. ► To date, no similar study has been reported elsewhere, and this finding will be beneficial to local authorities as an indicator for designing campaigns that promote the use of waste separation programmes and reinforce positive attitudes. - Abstract: Tremendous increases in biodegradable (food) waste generation significantly impact the local authorities, who are responsible for managing, treating and disposing of this waste. Separation of food waste at its generation source is identified as an effective means of reducing the amount of food waste sent to landfill; the separated waste can be reused as feedstock to downstream treatment processes, namely composting or anaerobic digestion. However, these efforts will only succeed with positive attitudes and high participation rates among the public. Thus, a social survey (using questionnaires) analysing the public's views and the factors influencing participation in source separation of food waste in households, based on the theory of planned behaviour (TPB), was performed in June and July 2011 among selected staff in Universiti Putra Malaysia, Serdang, Selangor. The survey demonstrates that the public has positive intentions to participate, provided that the opportunities, facilities and knowledge on waste separation at source are adequately prepared by the respective local authorities.
Furthermore, good moral values and situational factors such as storage convenience and collection times also encouraged the public's involvement and, consequently, the participation rate. The findings from this study may provide useful indicators to the waste management authorities in Malaysia in identifying mechanisms for the future development and implementation of food waste source separation activities in household programmes, and for communication campaigns advocating the use of these programmes.

  16. A method for establishing constraints on galactic magnetic field models using ultra high energy cosmic rays and results from the data of the Pierre Auger Observatory

    NASA Astrophysics Data System (ADS)

    Sutherland, Michael Stephen

    2010-12-01

    The Galactic magnetic field is poorly understood. Essentially the only reliable measurements of its properties are the local orientation and field strength. Its behavior at galactic scales is unknown. Historically, magnetic field measurements have been performed using radio astronomy techniques which are sensitive to certain regions of the Galaxy and rely upon models of the distribution of gas and dust within the disk. However, the deflection of trajectories of ultra high energy cosmic rays arriving from extragalactic sources depends only on the properties of the magnetic field. In this work, a method is developed for determining acceptable global models of the Galactic magnetic field by backtracking cosmic rays through the field model. This method constrains the parameter space of magnetic field models by comparing a test statistic between backtracked cosmic rays and isotropic expectations for assumed cosmic ray source and composition hypotheses. Constraints on Galactic magnetic field models are established using data from the southern site of the Pierre Auger Observatory under various source distribution and cosmic ray composition hypotheses. Field models possessing structure similar to the stellar spiral arms are found to be inconsistent with hypotheses of an iron cosmic ray composition and sources selected from catalogs tracing the local matter distribution in the universe. These field models are consistent with hypothesis combinations of proton composition and sources tracing the local matter distribution. In particular, strong constraints are found on the parameter space of bisymmetric magnetic field models scanned under hypotheses of proton composition and sources selected from the 2MRS-VS, Swift 39-month, and VCV catalogs. Assuming that the Galactic magnetic field is well-described by a bisymmetric model under these hypotheses, the magnetic field strength near the Sun is less than 3-4 μG and the magnetic pitch angle is less than -8°.
These results comprise the first measurements of the Galactic magnetic field using ultra-high energy cosmic rays and supplement existing radio astronomical measurements of the Galactic magnetic field.
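    Why composition matters to these constraints can be seen from the gyroradius of an ultra-relativistic nucleus, r_g ≈ E/(Z e B c): at fixed energy, an iron nucleus (Z = 26) is bent 26 times more tightly than a proton. A hedged numerical check with illustrative values:

```python
# Gyroradius of an ultra-relativistic cosmic ray in a uniform field,
# r_g = E / (Z * e * B * c), expressed in kiloparsecs.

E_J = 6.0e19 * 1.602e-19         # 60 EeV in joules (illustrative energy)
B_T = 3.0e-6 * 1.0e-4            # 3 microgauss in tesla (illustrative field)
E_CHARGE = 1.602e-19             # elementary charge, C
C = 299792458.0                  # speed of light, m/s
KPC = 3.086e19                   # metres per kiloparsec

def gyroradius_kpc(energy_j, z, b_t):
    return energy_j / (z * E_CHARGE * b_t * C) / KPC

r_proton = gyroradius_kpc(E_J, 1, B_T)    # ~20 kpc: comparable to the Galaxy
r_iron = gyroradius_kpc(E_J, 26, B_T)     # sub-kpc: strongly deflected
```

    Backtracking integrates the full Lorentz force through the non-uniform field model, but the scaling above already explains why proton and iron hypotheses constrain the models so differently.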

  17. Using an Explicit Emission Tagging Method in Global Modeling of Source-Receptor Relationships for Black Carbon in the Arctic: Variations, Sources and Transport Pathways

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Hailong; Rasch, Philip J.; Easter, Richard C.

    2014-11-27

    We introduce an explicit emission tagging technique in the Community Atmosphere Model to quantify source-region-resolved characteristics of black carbon (BC), focusing on the Arctic. Explicit tagging of BC source regions without perturbing the emissions makes it straightforward to establish source-receptor relationships and transport pathways, providing a physically consistent and computationally efficient approach to produce a detailed characterization of the destiny of regional BC emissions and the potential for mitigation actions. Our analysis shows that the contributions of major source regions to the global BC burden are not proportional to the respective emissions due to strong region-dependent removal rates and lifetimes, while the contributions to BC direct radiative forcing show a near-linear dependence on their respective contributions to the burden. Distant sources contribute to BC in remote regions mostly in the mid- and upper troposphere, having much less impact on lower-level concentrations (and deposition) than on burden. Arctic BC concentrations, deposition and source contributions all have strong seasonal variations. Eastern Asia contributes the most to the wintertime Arctic burden. Northern Europe emissions are more important to both surface concentration and deposition in winter than in summer. The largest contribution to Arctic BC in the summer is from Northern Asia. Although local emissions contribute less than 10% to the annual mean BC burden and deposition within the Arctic, the per-emission efficiency is much higher than for major non-Arctic sources. The interannual variability (1996-2005) due to meteorology is small in annual mean BC burden and radiative forcing but is significant in yearly seasonal means over the Arctic. When a slow aging treatment of BC is introduced, the increase of BC lifetime and burden is source-dependent.
Global BC forcing-per-burden efficiency also increases primarily due to changes in BC vertical distributions. The relative contribution from major non-Arctic sources to the Arctic BC burden increases only slightly, although the contribution of Arctic local sources is reduced by a factor of 2 due to the slow aging treatment.
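    The tagging logic can be caricatured with steady-state one-box budgets per source region: with dB_i/dt = E_i − B_i/τ_i, the steady burden is B_i = E_i·τ_i, so burden shares deviate from emission shares exactly when removal lifetimes are region-dependent. All region names here echo the abstract, but the numbers are invented for illustration:

```python
# Per-region tagged burdens in a one-box steady state: B_i = E_i * tau_i.
emissions = {"East Asia": 2.0, "Europe": 1.0, "Arctic local": 0.05}       # Tg/yr
lifetimes = {"East Asia": 0.015, "Europe": 0.010, "Arctic local": 0.030}  # yr

burden = {r: emissions[r] * lifetimes[r] for r in emissions}
total = sum(burden.values())
share = {r: burden[r] / total for r in burden}
# the per-emission efficiency of a region is just its lifetime here, so the
# small "Arctic local" source punches above its emission share
```

    The model's explicit tagging does the same accounting with full transport and removal physics, which is why contributions and efficiencies come out region-dependent.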

  18. Characterization of Ground Displacement Sources from Variational Bayesian Independent Component Analysis of Space Geodetic Time Series

    NASA Astrophysics Data System (ADS)

    Gualandi, Adriano; Serpelloni, Enrico; Elina Belardinelli, Maria; Bonafede, Maurizio; Pezzo, Giuseppe; Tolomei, Cristiano

    2015-04-01

    A critical point in the analysis of ground displacement time series, such as those measured by modern space geodetic techniques (primarily continuous GPS/GNSS and InSAR), is the development of data-driven methods that allow one to discern and characterize the different sources that generate the observed displacements. A widely used multivariate statistical technique is Principal Component Analysis (PCA), which reduces the dimensionality of the data space while retaining most of the explained variance of the dataset. It reproduces the original data using a limited number of Principal Components, but it also shows some deficiencies, since PCA does not perform well in solving the so-called Blind Source Separation (BSS) problem. Recovering and separating the different sources that generate the observed ground deformation is a fundamental task in order to assign a physical meaning to the different possible sources. PCA fails in the BSS problem because it looks for a new Euclidean space where the projected data are uncorrelated. Usually, the uncorrelation condition is not strong enough, and it has been proven that the BSS problem can be tackled by imposing independence on the components. Independent Component Analysis (ICA) is, in fact, another popular technique adopted to approach this problem, and it can be used in all those fields where PCA is also applied. An ICA approach enables us to explain the displacement time series while imposing fewer constraints on the model, and to reveal anomalies in the data such as transient deformation signals. However, the independence condition is not easy to impose, and it is often necessary to introduce some approximations. To work around this problem, we use a variational Bayesian ICA (vbICA) method, which models the probability density function (pdf) of each source signal using a mix of Gaussian distributions.
This technique allows for more flexibility in the description of the pdf of the sources, giving a more reliable estimate of them. Here we introduce the vbICA technique and present its application to synthetic data that simulate a GPS network recording ground deformation in a tectonically active region, with synthetic time series containing interseismic, coseismic, and postseismic deformation, plus seasonal deformation, and white and coloured noise. We study the ability of the algorithm to recover the original (known) sources of deformation, and then apply it to a real scenario: the Emilia seismic sequence (2012, northern Italy), an example of a seismic sequence that occurred in a slowly converging tectonic setting, characterized by several local to regional anthropogenic or natural sources of deformation, mainly subsidence due to fluid withdrawal and sediment compaction. We apply both PCA and vbICA to displacement time series recorded by continuous GPS and InSAR (Pezzo et al., EGU2015-8950).
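    The PCA baseline that vbICA improves upon can be sketched with synthetic station time series: stack the series into a matrix, remove the station means, and take an SVD; the squared singular values give the variance explained per component. (vbICA itself, with its Gaussian-mixture source pdfs, is beyond this sketch; the data below are invented.)

```python
import numpy as np

# PCA of a synthetic "GPS network": 20 stations mixing two deformation sources.
rng = np.random.default_rng(1)
t = np.linspace(0, 4, 400)                       # years
seasonal = np.sin(2 * np.pi * t)                 # seasonal source
trend = 0.5 * t                                  # interseismic-like source
mix = rng.standard_normal((20, 2))               # station responses to the sources
X = mix @ np.vstack([seasonal, trend])           # 20 stations x 400 epochs
X += 0.01 * rng.standard_normal(X.shape)         # measurement noise

Xc = X - X.mean(axis=1, keepdims=True)           # remove station means
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
explained = s**2 / np.sum(s**2)                  # variance fraction per component
```

    Two components dominate the variance here, but the rows of Vt are generally mixtures of the true sources, which is exactly the BSS shortcoming of PCA that motivates ICA-type methods.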

  19. Jurassic Diabase from Leesburg, VA: A Proposed Lunar Simulant

    NASA Technical Reports Server (NTRS)

    Taylor, Patrick T.; Lowman, P. D.; Nagihara, Seiichi; Milam, M. B.; Nakamura, Yosio

    2008-01-01

    A study of future lunar seismology and heat flow is being carried out as part of the NASA Lunar Sortie Science Program. This study will include new lunar drilling techniques, using a regolith simulant, for emplacement of instruments. Previous lunar simulants, such as JSC-1 and MLS-1, were not available when the study began, so a local simulant source was required. Diabase from a quarry at Leesburg, Virginia, was obtained from the Luck Stone Corporation. We report here initial results of a petrographic examination of this rock, GSC-1 henceforth.

  1. COMBINED DELAY AND GRAPH EMBEDDING OF EPILEPTIC DISCHARGES IN EEG REVEALS COMPLEX AND RECURRENT NONLINEAR DYNAMICS.

    PubMed

    Erem, B; Hyde, D E; Peters, J M; Duffy, F H; Brooks, D H; Warfield, S K

    2015-04-01

    The dynamical structure of the brain's electrical signals contains valuable information about its physiology. Here we combine techniques for nonlinear dynamical analysis and manifold identification to reveal complex and recurrent dynamics in interictal epileptiform discharges (IEDs). Our results suggest that recurrent IEDs exhibit some consistent dynamics, which may only last briefly, and so individual IED dynamics may need to be considered in order to understand their genesis. This could potentially serve to constrain the dynamics of the inverse source localization problem.
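    The delay-embedding step that underlies this kind of nonlinear dynamical analysis can be sketched in a few lines. This is the generic Takens construction only, not the paper's combined delay-and-graph embedding, and the test signal is synthetic rather than EEG:

```python
import numpy as np

def delay_embed(x, dim, tau):
    """Map a scalar series x[t] to vectors (x[t], x[t+tau], ..., x[t+(dim-1)*tau])."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])

# Synthetic quasi-periodic signal standing in for an IED waveform.
t = np.linspace(0, 20 * np.pi, 5000)
x = np.sin(t) + 0.5 * np.sin(2.1 * t)

E = delay_embed(x, dim=3, tau=25)
print(E.shape)  # (4950, 3)
```

    Each row of the embedding matrix is one reconstructed state vector; recurrence and manifold structure are then analyzed in this reconstructed state space.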

  2. Multiple-Star System Adaptive Vortex Coronagraphy Using a Liquid Crystal Light Valve

    NASA Astrophysics Data System (ADS)

    Aleksanyan, Artur; Kravets, Nina; Brasselet, Etienne

    2017-05-01

    We propose the development of a high-contrast imaging technique enabling the simultaneous and selective nulling of several light sources. This is done by realizing a reconfigurable multiple-vortex phase mask made of a liquid crystal thin film on which local topological features can be addressed electro-optically. The method is illustrated by a laboratory demonstration of triple-star optical vortex coronagraphy, which can be easily extended to higher multiplicity. These results open the way to the direct observation and analysis of worlds with multiple suns and of more complex extrasolar planetary systems.

  3. Using multiple isotopes to understand the source of ingredients used in golden beverages

    NASA Astrophysics Data System (ADS)

    Wynn, J. G.

    2011-12-01

    Traditionally, beer contains four simple ingredients: water, barley, hops, and yeast. Each ingredient used in the brewing process contributes some combination of a number of "traditional" stable isotopes (i.e., isotopes of H, C, O, N and S) to the final product. As an educational exercise in an "Analytical Techniques in Geology" course, a group of students analyzed the isotopic composition of the gas, liquid, and solid phases of a variety of beer samples (and some other beverages) collected from around the world. The hydrogen and oxygen isotopic composition of the water closely followed the isotopic composition of local meteoric water at the site of the brewery, although there is a systematic offset from the global meteoric water line that may be due to the effects of CO2-H2O equilibration. The carbon isotopic composition of the CO2 reflected that of the solid residue (the source of carbon used as a fermentation substrate), but may potentially be modified by the addition of gas-phase CO2 from an inorganic source. The carbon isotopic composition of the solid residue similarly tracks that of the fermentation substrate, and in some cases may indicate alcohol fermented from added sugars. The nitrogen isotopic composition of the solid residue was relatively constant, and may track the source of nitrogen in the barley, hops, and yeast. Each of the analytical methods used is a relatively standard technique in geological applications, making this a "fun" exercise for those involved and giving the students hands-on experience with a variety of analytes from a non-traditional sample material.
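    The isotope ratios behind such an exercise are conventionally reported in delta notation relative to an international standard. A minimal sketch; the sample ratio below is invented for illustration, and the VPDB reference ratio is approximate:

```python
# Standard delta notation for stable isotopes, reported in per mil:
#   delta = (R_sample / R_standard - 1) * 1000
def delta_per_mil(r_sample, r_standard):
    return (r_sample / r_standard - 1.0) * 1000.0

VPDB_13C = 0.011180  # approximate 13C/12C ratio of the VPDB standard

# A hypothetical barley-derived sample ratio (C3-plant-like signature).
print(round(delta_per_mil(0.010900, VPDB_13C), 1))
```

    Negative values indicate the sample is depleted in the heavy isotope relative to the standard, as is typical for plant-derived carbon.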

  4. Psychophysical Evaluation of Three-Dimensional Auditory Displays

    NASA Technical Reports Server (NTRS)

    Wightman, Frederic L. (Principal Investigator)

    1995-01-01

    This report describes the progress made during the first year of a three-year Cooperative Research Agreement (CRA NCC2-542). The CRA proposed a program of applied psychophysical research designed to determine the requirements and limitations of three-dimensional (3-D) auditory display systems. These displays present synthesized stimuli to a pilot or virtual workstation operator that evoke auditory images at predetermined positions in space. The images can be either stationary or moving. In previous years, we completed a number of studies that provided data on listeners' abilities to localize stationary sound sources with 3-D displays. The current focus is on the use of 3-D displays in 'natural' listening conditions, which include listeners' head movements, moving sources, multiple sources, and 'echoic' sources. The results of our research on two of these topics, the role of head movements and the role of echoes and reflections, were reported in the most recent Semi-Annual Progress Report (Appendix A). In the period since the last Progress Report we have been studying a third topic, the localizability of moving sources; the results of this research are described here. The fidelity of a virtual auditory display is critically dependent on precise measurement of the listener's Head-Related Transfer Functions (HRTFs), which are used to produce the virtual auditory images. We continue to explore methods for improving our HRTF measurement technique. During this reporting period we compared HRTFs measured using our standard open-canal probe tube technique with HRTFs measured with the closed-canal insert microphones from the Crystal River Engineering Snapshot system.

  5. The effect of brain lesions on sound localization in complex acoustic environments.

    PubMed

    Zündorf, Ida C; Karnath, Hans-Otto; Lewald, Jörg

    2014-05-01

    Localizing sound sources of interest in cluttered acoustic environments, as in the 'cocktail-party' situation, is one of the most demanding challenges the human auditory system faces in everyday life. In this study, stroke patients' ability to localize acoustic targets in a single-source and in a multi-source setup in the free sound field was directly compared. Subsequent voxel-based lesion-behaviour mapping analyses were computed to uncover the brain areas associated with a deficit in localization in the presence of multiple distracter sound sources, rather than in localization of individually presented sound sources. Analyses revealed a fundamental role of the right planum temporale in this task. The results from the left hemisphere were less straightforward, but suggested an involvement of inferior frontal and pre- and postcentral areas. These areas appear to be particularly involved in the spectrotemporal analyses crucial for effective segregation of multiple sound streams from various locations, beyond the currently known network for localization of isolated sound sources in otherwise silent surroundings.

  6. Satellite data based method for general survey of forest insect disturbance in British Columbia

    NASA Astrophysics Data System (ADS)

    Ranson, J.; Montesano, P.

    2008-12-01

    Regional forest disturbances caused by insects are important to monitor and quantify because of their influence on local ecosystems and the global carbon cycle. Local damage to forest trees disrupts food supplies and shelter for a variety of organisms. Changes in the global carbon budget, its sources and its sinks affect the way the earth functions as a whole and have an impact on global climate. Furthermore, the ability to detect nascent outbreaks and monitor the spread of regional infestations helps managers mitigate the damage done by catastrophic insect outbreaks. While detection is needed at a fine scale to support local mitigation efforts, detection at a broad regional scale is important for carbon flux modeling at the landscape scale and is needed to direct the local efforts. This paper presents a method for routinely detecting insect damage to coniferous forests using MODIS vegetation indices, thermal anomalies, and land cover. The technique is validated using insect outbreak maps and accounts for fire disturbance effects. The range of damage detected may be used to interpret and quantify possible forest damage by insects.
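    Vegetation-index screening of this kind typically rests on NDVI anomalies. A minimal sketch; the NDVI formula is the standard one, but the band reflectances, baseline values, and the 0.15 drop threshold are illustrative assumptions, not the paper's calibration:

```python
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index from NIR and red reflectance."""
    nir, red = np.asarray(nir, float), np.asarray(red, float)
    return (nir - red) / (nir + red)

# Multi-year baseline NDVI for three pixels (hypothetical values).
baseline = np.array([0.80, 0.75, 0.78])

# Current-season NDVI from hypothetical NIR and red reflectances.
current = ndvi([0.45, 0.30, 0.44], [0.05, 0.10, 0.06])

# Flag pixels whose NDVI drops well below baseline (threshold is illustrative).
damaged = (baseline - current) > 0.15
print(damaged.tolist())  # [False, True, False]
```

    Real workflows would add the thermal-anomaly and land-cover masks mentioned in the abstract before attributing a flagged drop to insects rather than fire or clearing.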

  7. Intraseptal anesthesia: a review of a relevant injection technique.

    PubMed

    Woodmansey, Karl

    2005-01-01

    Although overshadowed by intraosseous anesthesia and the periodontal ligament injection, intraseptal anesthesia remains a useful local anesthesia technique for general dentists. Intraseptal anesthesia can be employed with safety and efficacy as an alternative to conventional local infiltration or regional nerve block injections. It also can serve as an adjunctive technique when conventional techniques fail to achieve adequate local anesthesia. This article reviews the intraseptal anesthesia technique, including its indications and limitations.

  8. Fiber Optical Improvements for a Device Used in Laparoscopic Hysterectomy Surgery

    NASA Astrophysics Data System (ADS)

    Hernández Garcia, Ricardo; Vázquez Mercado, Liliana; García-Torales, G.; Flores, Jorge L.; Barcena-Soto, Maximiliano; Casillas Santana, Norberto; Casillas Santana, Juan Manuel

    2006-09-01

    Hysterectomy removes the uterus from patients suffering from different pathologies. One of the most common techniques for performing it is laparoscopically assisted vaginal hysterectomy (LAVH). In the final stage of the procedure, surgeons face the need to unambiguously identify the vaginal cuff before uterus removal. The aim of this research is to adapt a local source of illumination to a polymer cup-like device, attached to a stainless steel shaft, that surgeons currently use to manipulate the uterus in LAVH. Our proposal consists of implementing a set of optical fiber illuminators along the border of the cup-like device to illuminate exactly the vaginal cupola, using an external light source. We present experimental results concerning temperature increases under quasi-adiabatic conditions in cow meat under different illumination intensities.

  9. Fiber Bragg grating based arterial localization device

    NASA Astrophysics Data System (ADS)

    Ho, Siu Chun Michael; Li, Weijie; Razavi, Mehdi; Song, Gangbing

    2017-06-01

    A critical first step in many surgical procedures is locating and gaining access to a patient's vascular system. Vascular access allows the deployment of other surgical instruments and also the monitoring of many physiological parameters. Current methods to locate blood vessels are predominantly based on the landmark technique coupled with ultrasound, fluoroscopy, or Doppler. However, even with experience and technological assistance, locating the required blood vessel is not always an easy task, especially in patients who present atypical anatomy or who suffer from conditions, such as weak pulsation or obesity, that make vascular localization difficult. With recent advances in fiber optic sensors, there is an opportunity to develop a new tool that can make vascular localization safer and easier. In this work, the authors present a new fiber Bragg grating (FBG) based vascular access device that specializes in arterial localization. The device estimates the direction toward a local artery based on the bending of a needle inserted near the tissue surrounding the artery. Experimental results obtained from an artificial circulatory loop and a mock artery show that the device works best for lower angles of needle insertion and can provide an approximately 40° range of estimation toward the location of a pulsating source (e.g., an artery).

  10. Optical fiber characteristics and standards; Proceedings of the Meeting, Cannes, France, November 25-27, 1985

    NASA Technical Reports Server (NTRS)

    Bouillie, Remy (Editor)

    1986-01-01

    Papers are presented on outside vapor deposition, the plasma-activated CVD process for large-scale production of telecommunication fibers, axial lateral plasma deposition technology from plastic clad silica, coatings for optical fibers, primary coating characterization, and radiation-induced time-dependent attenuation in a fiber. Topics discussed include fibers with high tensile strength, the characteristics and specifications of airborne fiber optic components, the baseband frequency response of multimode fibers, and fibers for local and broadband networks. Consideration is given to industrial measurements for single-mode and multimode fibers, the characterization of source power distribution in a multimode fiber by a splice offset technique, the measurement of chromatic dispersion in a single-mode optical fiber, and the effect of temperature on the refracted near-field optical fiber profiling technique.

  11. Epileptogenic zone localization using magnetoencephalography predicts seizure freedom in epilepsy surgery

    PubMed Central

    Englot, Dario J.; Nagarajan, Srikantan S.; Imber, Brandon S.; Raygor, Kunal P.; Honma, Susanne M.; Mizuiri, Danielle; Mantle, Mary; Knowlton, Robert C.; Kirsch, Heidi E.; Chang, Edward F.

    2015-01-01

    Objective The efficacy of epilepsy surgery depends critically upon successful localization of the epileptogenic zone. Magnetoencephalography (MEG) enables non-invasive detection of interictal spike activity in epilepsy, which can then be localized in three dimensions using magnetic source imaging (MSI) techniques. However, the clinical value of MEG in the pre-surgical epilepsy evaluation is not fully understood, as studies to date are limited by either a lack of long-term seizure outcomes or small sample size. Methods We performed a retrospective cohort study of focal epilepsy patients who received MEG for interictal spike mapping followed by surgical resection at our institution. Results We studied 132 surgical patients, with mean post-operative follow-up of 3.6 years (minimum 1 year). Dipole source modeling was successful in 103 (78%) patients, while no interictal spikes were seen in the others. Among patients with successful dipole modeling, MEG findings were concordant with and specific to: i) the region of resection in 66% of patients, ii) invasive electrocorticography (ECoG) findings in 67% of individuals, and iii) the MRI abnormality in 74% of cases. MEG showed discordant lateralization in ~5% of cases. After surgery, 70% of all patients achieved seizure freedom (Engel class I outcome). Whereas 85% of patients with concordant and specific MEG findings became seizure-free, this outcome was achieved by only 37% of individuals with MEG findings that were non-specific or discordant with the region of resection (χ2 = 26.4, p < 0.001). MEG reliability was comparable in patients with or without localized scalp EEG, and overall, localizing MEG findings predicted seizure freedom with an odds ratio of 5.11 (95% CI 2.23–11.8). 
Significance MEG is a valuable tool for non-invasive interictal spike mapping in epilepsy surgery, including patients with non-localized findings on long-term EEG monitoring, and localization of the epileptogenic zone using MEG is associated with improved seizure outcomes. PMID:25921215

  12. Clinical Study of Orthogonal-View Phase-Matched Digital Tomosynthesis for Lung Tumor Localization.

    PubMed

    Zhang, You; Ren, Lei; Vergalasova, Irina; Yin, Fang-Fang

    2017-01-01

    Compared to cone-beam computed tomography, digital tomosynthesis imaging has the benefits of shorter scanning time, less imaging dose, and better mechanical clearance for tumor localization in radiation therapy. However, for lung tumors, the localization accuracy of the conventional digital tomosynthesis technique is affected by the lack of depth information and the existence of lung tumor motion. This study investigates the clinical feasibility of using an orthogonal-view phase-matched digital tomosynthesis technique to improve the accuracy of lung tumor localization. The proposed orthogonal-view phase-matched digital tomosynthesis technique benefits from two major features: (1) it acquires orthogonal-view projections to improve the depth information in reconstructed digital tomosynthesis images and (2) it applies respiratory phase-matching to incorporate patient motion information into the synthesized reference digital tomosynthesis sets, which helps to improve the localization accuracy of moving lung tumors. A retrospective study enrolling 14 patients was performed to evaluate the accuracy of the orthogonal-view phase-matched digital tomosynthesis technique. Phantom studies were also performed using an anthropomorphic phantom to investigate the feasibility of using intratreatment aggregated kV and beam's-eye-view cine MV projections for orthogonal-view phase-matched digital tomosynthesis imaging. The localization accuracy of the orthogonal-view phase-matched digital tomosynthesis technique was compared to that of the single-view digital tomosynthesis techniques and the digital tomosynthesis techniques without phase-matching. The orthogonal-view phase-matched digital tomosynthesis technique outperforms the other digital tomosynthesis techniques in tumor localization accuracy for both the patient study and the phantom study. 
For the patient study, the orthogonal-view phase-matched digital tomosynthesis technique localizes the tumor to an average (± standard deviation) error of 1.8 (0.7) mm for a 30° total scan angle. For the phantom study using aggregated kV-MV projections, the orthogonal-view phase-matched digital tomosynthesis localizes the tumor to an average error within 1 mm for varying magnitudes of scan angles. The pilot clinical study shows that the orthogonal-view phase-matched digital tomosynthesis technique enables fast and accurate localization of moving lung tumors.

  13. Large scale meteorological patterns and moisture sources during precipitation extremes over South Asia

    NASA Astrophysics Data System (ADS)

    Mehmood, S.; Ashfaq, M.; Evans, K. J.; Black, R. X.; Hsu, H. H.

    2017-12-01

    Extreme precipitation during the summer season has shown an increasing trend across South Asia in recent decades, causing an exponential increase in weather-related losses. Here we combine a cluster analysis technique (Agglomerative Hierarchical Clustering) with a Lagrangian moisture analysis technique to investigate potential commonalities in the characteristics of the large-scale meteorological patterns (LSMP) and moisture anomalies associated with observed extreme precipitation events, and their representation in the Department of Energy model ACME. Using precipitation observations from the Indian Meteorological Department (IMD) and the Asian Precipitation Highly Resolved Observational Data Integration Towards Evaluation (APHRODITE), and atmospheric variables from the ERA-Interim reanalysis, we first identify LSMP in both the upper and lower troposphere that are responsible for widespread extreme precipitation events during the 1980-2015 period. For each of the selected extreme events, we perform a moisture source analysis to identify the major evaporative sources that sustain anomalous moisture supply during the course of the event, with a particular focus on local terrestrial moisture recycling. Further, we perform similar analyses on two five-member ensembles of the ACME model (1-degree and ¼-degree) to investigate the model's ability to simulate precipitation extremes associated with each of the LSMP patterns and the associated anomalous moisture sourcing from each terrestrial and oceanic evaporative region. Comparison of the low- and high-resolution model configurations provides insight into the influence of horizontal grid spacing on the simulation of extreme precipitation and the governing mechanisms.
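    The agglomerative clustering step can be sketched with SciPy. The "pattern descriptors" below are synthetic stand-ins for whatever fields (e.g. flattened geopotential-height anomalies on extreme-event days) the study actually clusters:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Two synthetic families of pattern descriptors, well separated on purpose.
rng = np.random.default_rng(1)
pattern_a = rng.normal(0.0, 0.1, size=(10, 4)) + np.array([1, 1, 0, 0])
pattern_b = rng.normal(0.0, 0.1, size=(10, 4)) + np.array([0, 0, 1, 1])
X = np.vstack([pattern_a, pattern_b])

# Agglomerative (Ward-linkage) hierarchy, cut into two clusters.
labels = fcluster(linkage(X, method="ward"), t=2, criterion="maxclust")
print(labels)
```

    In practice the dendrogram cut level (or cluster count) is itself a choice to justify, e.g. by inspecting within-cluster composite maps of the LSMP.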

  14. Local gate control in carbon nanotube quantum devices

    NASA Astrophysics Data System (ADS)

    Biercuk, Michael Jordan

    This thesis presents transport measurements of carbon nanotube electronic devices operated in the quantum regime. Nanotubes are contacted by source and drain electrodes, and multiple lithographically-patterned electrostatic gates are aligned to each device. Transport measurements of device conductance or current as a function of local gate voltages reveal that local gates couple primarily to the proximal section of the nanotube, hence providing spatially localized control over carrier density along the nanotube length. Further, using several different techniques we are able to produce local depletion regions along the length of a tube. This phenomenon is explored in detail for different contact metals to the nanotube. We utilize local gating techniques to study multiple quantum dots in carbon nanotubes produced both by naturally occurring defects, and by the controlled application of voltages to depletion gates. We study double quantum dots in detail, where transport measurements reveal honeycomb charge stability diagrams. We extract values of energy-level spacings, capacitances, and interaction energies for this system, and demonstrate independent control over all relevant tunneling rates. We report rf-reflectometry measurements of gate-defined carbon nanotube quantum dots with integrated charge sensors. Aluminum rf-SETs are electrostatically coupled to carbon nanotube devices and detect single electron charging phenomena in the Coulomb blockade regime. Simultaneous correlated measurements of single electron charging are made using reflected rf power from the nanotube itself and from the rf-SET on microsecond time scales. We map charge stability diagrams for the nanotube quantum dot via charge sensing, observing Coulomb charging diamonds beyond the first order. 
Conductance measurements of carbon nanotubes containing gated local depletion regions exhibit plateaus as a function of gate voltage, spaced by approximately e²/h, the quantum of conductance for a single (non-degenerate) mode. Plateau structure is investigated as a function of bias voltage, temperature, and magnetic field. We speculate on the origin of this surprising quantization, which appears to lack band and spin degeneracy.

  15. Associating Fast Radio Bursts with Extragalactic Radio Sources: General Methodology and a Search for a Counterpart to FRB 170107

    NASA Astrophysics Data System (ADS)

    Eftekhari, T.; Berger, E.; Williams, P. K. G.; Blanchard, P. K.

    2018-06-01

    The discovery of a repeating fast radio burst (FRB) has led to the first precise localization, an association with a dwarf galaxy, and the identification of a coincident persistent radio source. However, further localizations are required to determine the nature of FRBs, the sources powering them, and the possibility of multiple populations. Here we investigate the use of associated persistent radio sources to establish FRB counterparts, taking into account the localization area and the source flux density. Due to the lower areal number density of radio sources compared to faint optical sources, robust associations can be achieved for less precise localizations as compared to direct optical host galaxy associations. For generally larger localizations that preclude robust associations, the number of candidate hosts can be reduced based on the ratio of radio-to-optical brightness. We find that confident associations with sources having a flux density of ∼0.01–1 mJy, comparable to the luminosity of the persistent source associated with FRB 121102 over the redshift range z ≈ 0.1–1, require FRB localizations of ≲20″. We demonstrate that even in the absence of a robust association, constraints can be placed on the luminosity of an associated radio source as a function of localization and dispersion measure (DM). For DM ≈ 1000 pc cm⁻³, an upper limit comparable to the luminosity of the FRB 121102 persistent source can be placed if the localization is ≲10″. We apply our analysis to the case of the ASKAP FRB 170107, using optical and radio observations of the localization region. We identify two candidate hosts based on a radio-to-optical brightness ratio of ≳100. We find that if one of these is indeed associated with FRB 170107, the resulting radio luminosity (10²⁹ to 4 × 10³⁰ erg s⁻¹ Hz⁻¹, as constrained from the DM value) is comparable to the luminosity of the FRB 121102 persistent source.
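    The dependence of association robustness on areal number density follows from Poisson statistics: the probability that at least one unrelated source falls inside a circular localization region of radius R is 1 − exp(−n·πR²). A sketch, with an invented source density standing in for real radio source counts above a flux cut:

```python
import math

def p_chance(density_per_sq_arcsec, radius_arcsec):
    """Probability that >= 1 unrelated source of the given areal number
    density falls inside a circular localization region (Poisson model)."""
    return 1.0 - math.exp(-density_per_sq_arcsec * math.pi * radius_arcsec ** 2)

# Hypothetical radio-source density of ~5 sources per square degree.
density = 5.0 / 3600.0 ** 2  # converted to sources per square arcsecond

for r in (1.0, 10.0, 20.0):  # localization radii in arcseconds
    print(r, p_chance(density, r))
```

    The sharp growth of this probability with R is why radio counterparts, with their low areal density, tolerate coarser localizations than faint optical host galaxies do.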

  16. QEEG and LORETA in Teenagers With Conduct Disorder and Psychopathic Traits.

    PubMed

    Calzada-Reyes, Ana; Alvarez-Amador, Alfredo; Galán-García, Lídice; Valdés-Sosa, Mitchell

    2017-05-01

    Few studies have investigated the impact of psychopathic traits on the EEG of teenagers with conduct disorder (CD). To date, there is no other research applying the low-resolution brain electromagnetic tomography (LORETA) technique together with quantitative EEG (QEEG) analysis in adolescents with CD and psychopathic traits. The aim of the present study was to find electrophysiological differences specifically related to psychopathic traits. The current investigation compares QEEG and current source density measures between adolescents with CD with and without psychopathic traits. The resting EEG activity and LORETA for the EEG fast spectral bands were evaluated in 42 teenagers with CD, 25 with and 17 without psychopathic traits according to the Antisocial Process Screening Device. All adolescents were assessed using the DSM-IV-TR criteria. The EEG visual inspection characteristics and the frequency-domain quantitative analysis techniques (narrow-band spectral parameters) are described. QEEG analysis showed a pattern of beta-activity excess over the bilateral frontal-temporal regions and decreases of alpha-band power over the left central-temporal and right frontal-central-temporal regions in the psychopathic-traits group. Current source density calculated at 17.18 Hz showed an increase within fronto-temporo-striatal regions in the psychopathic-traits group relative to the nonpsychopathic-traits group. These findings indicate that QEEG analysis and source localization techniques may reveal differences in brain electrical activity among teenagers with CD and psychopathic traits that are not obvious on visual inspection. Taken together, these results suggest that abnormalities in a fronto-temporo-striatal network play a relevant role in the neurobiological basis of psychopathic behavior.

  17. Imaging Analysis of the Hard X-Ray Telescope ProtoEXIST2 and New Techniques for High-Resolution Coded-Aperture Telescopes

    NASA Technical Reports Server (NTRS)

    Hong, Jaesub; Allen, Branden; Grindlay, Jonathan; Barthelmy, Scott D.

    2016-01-01

    Wide-field (greater than or approximately equal to 100 square degrees) hard X-ray coded-aperture telescopes with high angular resolution (approximately 2 arcminutes or better) will enable a wide range of time-domain astrophysics. For instance, transient sources such as gamma-ray bursts can be precisely localized without the assistance of secondary focusing X-ray telescopes, enabling rapid follow-up studies. On the other hand, high angular resolution in coded-aperture imaging introduces a new challenge in handling systematic uncertainty: the average photon count per pixel is often too small to establish a proper background pattern or to model the systematic uncertainty on a timescale over which the model remains invariant. We introduce two new techniques to improve detection sensitivity, designed for, but not limited to, a high-resolution coded-aperture system: a self-background modeling scheme that utilizes continuous scan or dithering operations, and a Poisson-statistics-based probabilistic approach that evaluates the significance of source detection without background subtraction. We illustrate these new imaging analysis techniques for a high-resolution coded-aperture telescope using data acquired by the wide-field hard X-ray telescope ProtoEXIST2 during a high-altitude balloon flight in fall 2012. We review the imaging sensitivity of ProtoEXIST2 during the flight and demonstrate the performance of the new techniques using our balloon flight data in comparison with a simulated ideal Poisson background.
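    In its simplest form, a Poisson-statistics approach to detection significance without background subtraction evaluates the Poisson survival function at the observed counts. A sketch; the count values are illustrative, not ProtoEXIST2 data:

```python
from scipy.stats import poisson

def detection_pvalue(n_obs, mu_bg):
    """P(N >= n_obs) under a Poisson background with mean mu_bg:
    the significance of the excess is evaluated directly, without
    subtracting the background from the observed counts."""
    return poisson.sf(n_obs - 1, mu_bg)

# Few-counts-per-pixel regime typical of a high-resolution coded mask:
print(detection_pvalue(5, 0.5))  # 5 counts on a 0.5-count background
print(detection_pvalue(5, 4.0))  # the same counts on a 4-count background
```

    The same 5 counts are highly significant on the faint background and entirely unremarkable on the brighter one, which is why a credible background model (here, the self-background modeling scheme) matters as much as the statistic itself.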

  18. Contrasts between estimates of baseflow help discern multiple sources of water contributing to rivers

    NASA Astrophysics Data System (ADS)

    Cartwright, I.; Gilfedder, B.; Hofmann, H.

    2014-01-01

    This study compares baseflow estimates using chemical mass balance, local minimum methods, and recursive digital filters in the upper reaches of the Barwon River, southeast Australia. During the early stages of high-discharge events, the chemical mass balance overestimates groundwater inflows, probably due to flushing of saline water from wetlands and marshes, soils, or the unsaturated zone. Overall, however, estimates of baseflow from the local minimum and recursive digital filters are higher than those based on chemical mass balance using Cl calculated from continuous electrical conductivity measurements. Between 2001 and 2011, the baseflow contribution to the upper Barwon River calculated using chemical mass balance is between 12 and 25% of the annual discharge with a net baseflow contribution of 16% of total discharge. Recursive digital filters predict higher baseflow contributions of 19 to 52% of discharge annually with a net baseflow contribution between 2001 and 2011 of 35% of total discharge. These estimates are similar to those from the local minimum method (16 to 45% of annual discharge and 26% of total discharge). These differences most probably reflect how the different techniques characterise baseflow. The local minimum and recursive digital filters probably aggregate much of the water from delayed sources as baseflow. However, as many delayed transient water stores (such as bank return flow, floodplain storage, or interflow) are likely to be geochemically similar to surface runoff, chemical mass balance calculations aggregate them with the surface runoff component. The difference between the estimates is greatest following periods of high discharge in winter, implying that these transient stores of water feed the river for several weeks to months at that time. Cl vs. 
discharge variations during individual flow events also demonstrate that inflows of high-salinity older water occur on the rising limbs of hydrographs, followed by inflows of low-salinity water from the transient stores as discharge falls. The joint use of complementary techniques allows a better understanding of the different components of water that contribute to river flow, which is important for the management and protection of water resources.
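    Both families of estimates compared here have simple cores: a two-component chloride (or EC-derived) mass balance, and the Lyne-Hollick recursive digital filter. A sketch with invented discharge and Cl series; one forward filter pass only, whereas operational implementations use multiple passes and calibrated end-member concentrations:

```python
import numpy as np

def cmb_baseflow(q, c_river, c_bf, c_runoff):
    """Two-component chemical mass balance: Qb = Q (C - C_ro) / (C_bf - C_ro)."""
    qb = q * (c_river - c_runoff) / (c_bf - c_runoff)
    return np.clip(qb, 0.0, q)

def lyne_hollick_baseflow(q, alpha=0.925):
    """One forward pass of the Lyne-Hollick recursive digital filter:
    quickflow qf[i] = alpha*qf[i-1] + 0.5*(1+alpha)*(q[i]-q[i-1])."""
    q = np.asarray(q, float)
    qf = np.zeros_like(q)
    for i in range(1, q.size):
        qf[i] = alpha * qf[i - 1] + 0.5 * (1 + alpha) * (q[i] - q[i - 1])
    return np.clip(q - np.maximum(qf, 0.0), 0.0, q)

# Synthetic event: discharge rises and recedes (m^3/s) while stream Cl falls
# as low-salinity quickflow dilutes saline baseflow (values illustrative).
q = np.array([2, 2, 8, 20, 14, 8, 5, 3, 2.5, 2.2])
cl = np.array([60, 60, 30, 15, 20, 30, 40, 50, 55, 58], float)

qb_cmb = cmb_baseflow(q, cl, c_bf=60.0, c_runoff=5.0)
qb_rdf = lyne_hollick_baseflow(q)
print(qb_cmb.round(2))
print(qb_rdf.round(2))
```

    Because the filter partitions on hydrograph shape alone while the mass balance partitions on chemistry, geochemically "runoff-like" delayed stores (bank return flow, interflow) end up in baseflow for the filter but in quickflow for the mass balance, which is exactly the discrepancy the abstract discusses.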

  19. Characterization of emission microscopy and liquid crystal thermography in IC fault localization

    NASA Astrophysics Data System (ADS)

    Lau, C. K.; Sim, K. S.

    2013-05-01

    This paper characterizes two fault localization techniques - Emission Microscopy (EMMI) and Liquid Crystal Thermography (LCT) by using integrated circuit (IC) leakage failures. The majority of today's semiconductor failures do not reveal a clear visual defect on the die surface and therefore require fault localization tools to identify the fault location. Among the various fault localization tools, liquid crystal thermography and frontside emission microscopy are commonly used in most semiconductor failure analysis laboratories. Many people misunderstand that both techniques are the same and both are detecting hot spot in chip failing with short or leakage. As a result, analysts tend to use only LCT since this technique involves very simple test setup compared to EMMI. The omission of EMMI as the alternative technique in fault localization always leads to incomplete analysis when LCT fails to localize any hot spot on a failing chip. Therefore, this research was established to characterize and compare both the techniques in terms of their sensitivity in detecting the fault location in common semiconductor failures. A new method was also proposed as an alternative technique i.e. the backside LCT technique. The research observed that both techniques have successfully detected the defect locations resulted from the leakage failures. LCT wass observed more sensitive than EMMI in the frontside analysis approach. On the other hand, EMMI performed better in the backside analysis approach. LCT was more sensitive in localizing ESD defect location and EMMI was more sensitive in detecting non ESD defect location. Backside LCT was proven to work as effectively as the frontside LCT and was ready to serve as an alternative technique to the backside EMMI. The research confirmed that LCT detects heat generation and EMMI detects photon emission (recombination radiation). 
    The analysis results also suggested that the two techniques complement each other in IC fault localization. It is necessary for a failure analyst to use both techniques when one of them produces no result.

  20. Near-Field Noise Source Localization in the Presence of Interference

    NASA Astrophysics Data System (ADS)

    Liang, Guolong; Han, Bo

    To suppress the influence of interference sources on noise source localization in the near field, near-field broadband source localization in the presence of interference is studied. An oblique projection is constructed from the array measurements and the steering manifold of the interference sources, and is used to filter the interference signals out. The 2D-MUSIC algorithm is applied to the data at each frequency, and the results across frequencies are averaged to locate the broadband noise sources. Simulations show that this method suppresses the interference sources effectively and is capable of locating a source that lies in the same direction as an interference source.
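
    A minimal single-frequency sketch of this idea: the known interference steering vector is nulled with a projection (an orthogonal projection is used here as a simpler stand-in for the paper's oblique projection), and a near-field MUSIC scan over a position grid is run on the filtered snapshots. The array geometry, frequency, and source positions below are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
c, f = 343.0, 500.0                      # sound speed (m/s), analysis frequency (Hz)
sensors = np.stack([np.linspace(-1.5, 1.5, 12), np.zeros(12)], axis=1)

def steer(x, y):
    """Near-field steering vector for a point source at (x, y)."""
    r = np.hypot(sensors[:, 0] - x, sensors[:, 1] - y)
    return np.exp(-2j * np.pi * f * r / c) / np.sqrt(len(sensors))

src, intf = (0.5, 2.0), (-0.8, 3.0)      # true source and interference positions (m)
a_s, a_i = steer(*src), steer(*intf)

T = 400                                   # snapshots
s = rng.standard_normal(T) + 1j * rng.standard_normal(T)
q = 5.0 * (rng.standard_normal(T) + 1j * rng.standard_normal(T))   # strong interferer
noise = 0.05 * (rng.standard_normal((12, T)) + 1j * rng.standard_normal((12, T)))
X = np.outer(a_s, s) + np.outer(a_i, q) + noise

# Null the known interference steering vector.
B = a_i[:, None]
P = np.eye(12) - B @ np.linalg.pinv(B)
Y = P @ X

# Narrowband MUSIC on the filtered snapshots (one source assumed).
R = Y @ Y.conj().T / T
_, V = np.linalg.eigh(R)
G = V[:, :-1] @ V[:, :-1].conj().T        # projector onto the noise subspace

xs, ys = np.linspace(-1.0, 1.0, 41), np.linspace(1.0, 4.0, 61)
spec = np.zeros((len(ys), len(xs)))
for iy, y in enumerate(ys):
    for ix, x in enumerate(xs):
        a = P @ steer(x, y)
        n = np.linalg.norm(a)
        if n < 0.3:                       # steering almost nulled: too close to interferer
            continue
        a /= n
        spec[iy, ix] = 1.0 / np.abs(a.conj() @ G @ a)
iy, ix = np.unravel_index(spec.argmax(), spec.shape)
est = (xs[ix], ys[iy])                    # estimated source position
```

    The grid peak should land at or near the true source position even though the interferer is much stronger than the source.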

  1. Do Local Contributions Affect the Efficacy of Public Primary Schools?

    ERIC Educational Resources Information Center

    Jimenez, Emmanuel; Paqueo, Vicente

    1996-01-01

    Uses cost, financial sources, and student achievement data from Philippine primary schools (financed primarily from central sources) to discover if financial decentralization leads to more efficient schools. Schools that rely more heavily on local sources (contributions from local school boards, municipal government, parent-teacher associations,…

  2. Quantifying air distribution, ventilation effectiveness and airborne pollutant transport in an aircraft cabin mockup

    NASA Astrophysics Data System (ADS)

    Wang, Aijun

    The health, safety and comfort of passengers during flight inspired this research into cabin air quality, which is closely related to airflow distribution, ventilation effectiveness and airborne pollutant transport. The experimental facility is a full-scale aircraft cabin mockup. A volumetric particle tracking velocimetry (VPTV) technique was enhanced by incorporating a self-developed streak recognition algorithm. Two stable recirculation regions, the reverse flows above the seats, and the main air jets from the air supply inlets formed the complicated airflow patterns inside the cabin mockup. The primary air flow was parallel to the passenger rows. The small velocity component in the direction of the cabin depth caused less net air exchange across the passenger rows than parallel to them. Different total air supply rates changed the developing behavior of the main air jets, leading to different local air distribution patterns. Two indices, local mean age of air and ventilation effectiveness factor (VEF), were measured at five levels of air supply rate and two levels of heating load. Local mean age of air decreased linearly with an increase in the air supply rate, while the VEF remained consistent when the air supply rate varied. The thermal buoyancy force from the thermal plume generated an upward plume flow, opposite to the main jet flow above the boundary seats, and thus lowered the local net air exchange. The airborne transport dynamics depended on the distance between the source and the receptors, the relative location of the pollutant source, and the air supply rate. Exposure risk was significantly reduced with increased distance between source and receptors. Another possible way to decrease the exposure risk was to position the release source close to the exhaust outlets. Increasing the air supply rate could be an effective solution in some emergency situations.
The large volume of data regarding the three-dimensional air velocities was visualized in the CAVE virtual environment. ShadowLight, a virtual reality application was used to import and navigate the velocity vectors through the virtual airspace. A real world demonstration and an active interaction with the three-dimensional air velocity data have been established.

  3. In vivo quantitative imaging of point-like bioluminescent and fluorescent sources: Validation studies in phantoms and small animals post mortem

    NASA Astrophysics Data System (ADS)

    Comsa, Daria Craita

    2008-10-01

    There is a real need for improved small animal imaging techniques to enhance the development of therapies in which animal models of disease are used. Optical methods for imaging have been extensively studied in recent years, due to their high sensitivity and specificity. Methods like bioluminescence and fluorescence tomography report promising results for 3D reconstructions of source distributions in vivo. However, no standard methodology exists for optical tomography, and various groups are pursuing different approaches. In a number of studies on small animals, the bioluminescent or fluorescent sources can be reasonably approximated as point or line sources. Examples include images of bone metastases confined to the bone marrow. Starting with this premise, we propose a simpler, faster, and inexpensive technique to quantify optical images of point-like sources. The technique avoids the computational burden of a tomographic method by using planar images and a mathematical model based on diffusion theory. The model employs in situ optical properties estimated from video reflectometry measurements. Modeled and measured images are compared iteratively using a Levenberg-Marquardt algorithm to improve estimates of the depth and strength of the bioluminescent or fluorescent inclusion. The performance of the technique to quantify bioluminescence images was first evaluated on Monte Carlo simulated data. Simulated data also facilitated a methodical investigation of the effect of errors in tissue optical properties on the retrieved source depth and strength. It was found that, for example, an error of 4 % in the effective attenuation coefficient led to 4 % error in the retrieved depth for source depths of up to 12mm, while the error in the retrieved source strength increased from 5.5 % at 2mm depth, to 18 % at 12mm depth. 
Experiments conducted on images from homogeneous tissue-simulating phantoms showed that depths up to 10mm could be estimated within 8 %, and the relative source strength within 20 %. For sources 14mm deep, the inaccuracy in determining the relative source strength increased to 30 %. Measurements on small animals post mortem showed that the use of measured in situ optical properties to characterize heterogeneous tissue resulted in a superior estimation of the source strength and depth compared to when literature optical properties for organs or tissues were used. Moreover, it was found that regardless of the heterogeneity of the implant location or depth, our algorithm consistently showed an advantage over the simple assessment of the source strength based on the signal strength in the emission image. Our bioluminescence algorithm was generally able to predict the source strength within a factor of 2 of the true strength, but the performance varied with the implant location and depth. In fluorescence imaging a more complex technique is required, including knowledge of tissue optical properties at both the excitation and emission wavelengths. A theoretical study using simulated fluorescence data showed that, for example, for a source 5 mm deep in tissue, errors of up to 15 % in the optical properties would give rise to errors of +/-0.7 mm in the retrieved depth and the source strength would be over- or under-estimated by a factor ranging from 1.25 to 2. Fluorescent sources implanted in rats post mortem at the same depth were localized with an error just slightly higher than predicted theoretically: a root-mean-square value of 0.8 mm was obtained for all implants 5 mm deep. However, for this source depth, the source strength was assessed within a factor ranging from 1.3 to 4.2 from the value estimated in a controlled medium. 
Nonetheless, similarly to the bioluminescence study, the fluorescence quantification algorithm consistently showed an advantage over the simple assessment of the source strength based on the signal strength in the fluorescence image. Few studies have been reported in the literature that reconstruct known sources of bioluminescence or fluorescence in vivo or in heterogeneous phantoms. The few reported results show that the 3D tomographic methods have not yet reached their full potential. In this context, the simplicity of our technique emerges as a strong advantage.
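
    The iterative comparison of modeled and measured images can be illustrated with a toy one-dimensional version: a simplified diffusion-theory kernel (an assumed exp(-mu_eff*r)/r^2 falloff, not the dissertation's full model) is fitted to a noisy synthetic radial profile with a Levenberg-Marquardt solver to recover source depth and strength. All parameter values are illustrative.

```python
import numpy as np
from scipy.optimize import least_squares

mu_eff = 0.2                              # effective attenuation coefficient (1/mm), assumed
rho = np.linspace(0.0, 20.0, 81)          # radial distance on the tissue surface (mm)

def surface_signal(depth, strength):
    # Simplified point-source kernel: the constant diffusion prefactor
    # is absorbed into the strength parameter.
    r = np.hypot(rho, depth)
    return strength * np.exp(-mu_eff * r) / r**2

true_depth, true_strength = 6.0, 2.0
rng = np.random.default_rng(1)
measured = surface_signal(true_depth, true_strength)
measured *= 1.0 + 0.02 * rng.standard_normal(rho.size)    # 2 % measurement noise

def residual(p):
    # Difference between the modeled and "measured" images,
    # driven to zero by the Levenberg-Marquardt iteration.
    return surface_signal(p[0], p[1]) - measured

fit = least_squares(residual, x0=[3.0, 1.0], method="lm")
est_depth, est_strength = fit.x
```

    In the dissertation the comparison is between full planar images with in situ optical properties; this radial toy keeps only the structure of the iteration.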

  4. How to Manual: How to Update and Enhance Your Local Source Water Protection Assessments

    EPA Pesticide Factsheets

    Describes opportunities for improving source water assessments performed under Section 1453 of the Safe Drinking Water Act. These include local delineations, potential contaminant source inventories, and susceptibility determinations.

  5. Thermal infrared near-field spectroscopy.

    PubMed

    Jones, Andrew C; Raschke, Markus B

    2012-03-14

    Despite the seminal contributions of Kirchhoff and Planck describing far-field thermal emission, fundamentally distinct spectral characteristics of the electromagnetic thermal near-field have been predicted. However, due to their evanescent nature their direct experimental characterization has remained elusive. Combining scattering scanning near-field optical microscopy with Fourier-transform spectroscopy using a heated atomic force microscope tip as both a local thermal source and scattering probe, we spectroscopically characterize the thermal near-field in the mid-infrared. We observe the spectrally distinct and orders of magnitude enhanced resonant spectral near-field energy density associated with vibrational, phonon, and phonon-polariton modes. We describe this behavior and the associated distinct on- and off-resonance nanoscale field localization with model calculations of the near-field electromagnetic local density of states. Our results provide a basis for intrinsic and extrinsic resonant manipulation of optical forces, control of nanoscale radiative heat transfer with optical antennas, and use of this new technique of thermal infrared near-field spectroscopy for broadband chemical nanospectroscopy. © 2012 American Chemical Society

  6. Localization of a small change in a multiple scattering environment without modeling of the actual medium.

    PubMed

    Rakotonarivo, S T; Walker, S C; Kuperman, W A; Roux, P

    2011-12-01

    A method to actively localize a small perturbation in a multiple scattering medium using a collection of remote acoustic sensors is presented. The approach requires only minimal modeling and no knowledge of the scatterer distribution and properties of the scattering medium and the perturbation. The medium is ensonified before and after a perturbation is introduced. The coherent difference between the measured signals then reveals all field components that have interacted with the perturbation. A simple single scatter filter (that ignores the presence of the medium scatterers) is matched to the earliest change of the coherent difference to localize the perturbation. Using a multi-source/receiver laboratory setup in air, the technique has been successfully tested with experimental data at frequencies varying from 30 to 60 kHz (wavelength ranging from 0.5 to 1 cm) for cm-scale scatterers in a scattering medium with a size two to five times bigger than its transport mean free path. © 2011 Acoustical Society of America
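
    The geometry of the localization step can be sketched numerically: for each source/receiver pair, the earliest change in the coherent difference arrives after a single-scatter path from source to perturbation to receiver, so a grid search for the point whose path delays best match those arrivals recovers the perturbation. The transducer layout, timing jitter, and scale below are illustrative assumptions.

```python
import numpy as np

c = 343.0                                       # sound speed in air (m/s)
rng = np.random.default_rng(2)
# Co-located source/receiver transducers on a circle around the medium.
ang = np.linspace(0, 2 * np.pi, 8, endpoint=False)
xdcr = 0.5 * np.stack([np.cos(ang), np.sin(ang)], axis=1)

pert = np.array([0.08, -0.05])                  # true position of the small change (m)

def times(p):
    # Single-scatter delay source -> perturbation -> receiver for every pair.
    d = np.linalg.norm(xdcr - p, axis=1)
    return (d[:, None] + d[None, :]) / c

# "Measured" earliest arrivals of the coherent difference, with timing jitter.
meas = times(pert) + 2e-6 * rng.standard_normal((8, 8))

# Grid search for the point whose delays best match the earliest changes.
xs = np.linspace(-0.3, 0.3, 61)
best, est = np.inf, None
for x in xs:
    for y in xs:
        cost = np.sum((times(np.array([x, y])) - meas) ** 2)
        if cost < best:
            best, est = cost, (x, y)
```

    Note the filter needs no model of the background scatterers at all: they cancel in the coherent difference, exactly as the abstract argues.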

  7. Localized conductive patterning via focused electron beam reduction of graphene oxide

    NASA Astrophysics Data System (ADS)

    Kim, Songkil; Kulkarni, Dhaval D.; Henry, Mathias; Zackowski, Paul; Jang, Seung Soon; Tsukruk, Vladimir V.; Fedorov, Andrei G.

    2015-03-01

    We report on a method for "direct-write" conductive patterning via reduction of graphene oxide (GO) sheets using focused electron beam induced deposition (FEBID) of carbon. FEBID treatment of the intrinsically dielectric graphene oxide between two metal terminals opens up the conduction channel, thus enabling a unique capability for nanoscale conductive domain patterning in GO. An increase in FEBID electron dose results in a significant increase of the domain electrical conductivity with improving linearity of drain-source current vs. voltage dependence, indicative of a change of graphene oxide electronic properties from insulating to semiconducting. Density functional theory calculations suggest a possible mechanism underlying this experimentally observed phenomenon, as localized reduction of graphene oxide layers via interactions with highly reactive intermediates of electron-beam-assisted dissociation of surface-adsorbed hydrocarbon molecules. These findings establish an unusual route for using FEBID as nanoscale lithography and patterning technique for engineering carbon-based nanomaterials and devices with locally tailored electronic properties.

  8. Equivalent charge source model based iterative maximum neighbor weight for sparse EEG source localization.

    PubMed

    Xu, Peng; Tian, Yin; Lei, Xu; Hu, Xiao; Yao, Dezhong

    2008-12-01

    How to localize neural electric activity within the brain effectively and precisely from scalp electroencephalogram (EEG) recordings is a critical issue in clinical neurology and cognitive neuroscience. In this paper, based on the charge source model and the iterative re-weighting strategy, we propose a new maximum-neighbor-weight-based iterative sparse source imaging method, termed CMOSS (Charge source model based Maximum neighbOr weight Sparse Solution). Unlike the weight used in the focal underdetermined system solver (FOCUSS), where the weight for each point in the discrete solution space is updated independently across iterations, the newly designed weight for each point in each iteration is determined by the previous iteration's source solution at both the point and its neighbors. With such a weight, the next iteration has a better chance of rectifying any local source location bias present in the previous iteration's solution. Simulation studies comparing CMOSS with FOCUSS and LORETA for various source configurations were conducted on a realistic 3-shell head model, and the results confirmed the validity of CMOSS for sparse EEG source localization. Finally, CMOSS was applied to localize sources elicited in a visual stimulus experiment, and the result was consistent with the source areas involved in visual processing reported in previous studies.
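
    The neighbor-informed re-weighting can be sketched on a toy underdetermined problem: each iteration solves a weighted minimum-norm problem, with each point's weight taken as the maximum of the previous solution's magnitude over the point and its two chain neighbors (a 1-D stand-in for CMOSS's spatial neighborhoods; the lead field is a random matrix, not a head model).

```python
import numpy as np

rng = np.random.default_rng(3)
n_sens, n_src = 16, 60
L = rng.standard_normal((n_sens, n_src))       # toy "lead field" matrix

x_true = np.zeros(n_src)
x_true[[14, 15, 40]] = [1.0, 0.8, -1.2]        # sparse source configuration
b = L @ x_true                                 # noiseless "EEG" measurements

x = np.linalg.pinv(L) @ b                      # minimum-norm initialization
for _ in range(30):
    a = np.abs(x)
    # Neighbor-max weight: a point stays active if it or a chain neighbor is.
    w = np.maximum(a, np.maximum(np.roll(a, 1), np.roll(a, -1)))
    W = np.diag(w)
    x = W @ np.linalg.pinv(L @ W) @ b          # re-weighted minimum-norm step
support = np.flatnonzero(np.abs(x) > 0.05 * np.abs(x).max())
```

    Because every weighted step must still fit all measurements exactly, the iteration drives energy onto a sparse support, while the neighbor-max weight keeps points adjacent to active sources alive, which is what gives a later iteration the chance to shift a slightly misplaced source.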

  9. Local public health agency funding: money begets money.

    PubMed

    Bernet, Patrick Michael

    2007-01-01

    Local public health agencies are funded by federal, state, and local revenue sources. There is a common belief that increases from one source will be offset by decreases in others, as when a local agency decides it must increase taxes in response to lowered federal or state funding. This study tests this belief through a cross-sectional study using data from Missouri local public health agencies and finds, instead, that money begets money. Local agencies that receive more from federal and state sources also raise more at the local level. Given the particular effectiveness of local funding in improving agency performance, the finding that nonlocal revenues are amplified at the local level helps make the case for higher public health funding from the federal and state levels.

  10. Local Modelling of Groundwater Flow Using Analytic Element Method Three-dimensional Transient Unconfined Groundwater Flow With Partially Penetrating Wells and Ellipsoidal Inhomogeneites

    NASA Astrophysics Data System (ADS)

    Jankovic, I.; Barnes, R. J.; Soule, R.

    2001-12-01

    The analytic element method is used to model local three-dimensional flow in the vicinity of partially penetrating wells. The flow domain is bounded by an impermeable horizontal base, a phreatic surface with recharge and a cylindrical lateral boundary. The analytic element solution for this problem contains (1) a fictitious source technique to satisfy the head and the discharge conditions along the phreatic surface, (2) a fictitious source technique to satisfy specified head conditions along the cylindrical boundary, (3) a method of imaging to satisfy the no-flow condition across the impermeable base, (4) the classical analytic solution for a well and (5) spheroidal harmonics to account for the influence of the inhomogeneities in hydraulic conductivity. Temporal variations of the flow system due to time-dependent recharge and pumping are represented by combining the analytic element method with a finite difference method: analytic element method is used to represent spatial changes in head and discharge, while the finite difference method represents temporal variations. The solution provides a very detailed description of local groundwater flow with an arbitrary number of wells of any orientation and an arbitrary number of ellipsoidal inhomogeneities of any size and conductivity. These inhomogeneities may be used to model local hydrogeologic features (such as gravel packs and clay lenses) that significantly influence the flow in the vicinity of partially penetrating wells. Several options for specifying head values along the lateral domain boundary are available. These options allow for inclusion of the model into steady and transient regional groundwater models. The head values along the lateral domain boundary may be specified directly (as time series). The head values along the lateral boundary may also be assigned by specifying the water-table gradient and a head value at a single point (as time series). 
A case study is included to demonstrate the application of the model in local modeling of the groundwater flow. Transient three-dimensional capture zones are delineated for a site on Prairie Island, MN. Prairie Island is located on the Mississippi River 40 miles south of the Twin Cities metropolitan area. The case study focuses on a well that has been known to contain viral DNA. The objective of the study was to assess the potential for pathogen migration toward the well.
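
    The method-of-imaging ingredient of the solution, satisfying the no-flow condition across the impermeable base, can be sketched for a single point sink: mirroring the sink across the base makes the vertical flux vanish there. The rate and elevation below are illustrative, and the potential is a bare point-sink potential rather than the full analytic element solution.

```python
import numpy as np

Q = 1.0    # well extraction rate (scaled)
zw = 5.0   # elevation of the point sink above the impermeable base z = 0

def phi(x, y, z):
    # Discharge potential of the sink plus its image mirrored below the base.
    r1 = np.sqrt(x**2 + y**2 + (z - zw)**2)
    r2 = np.sqrt(x**2 + y**2 + (z + zw)**2)
    return -Q / (4 * np.pi) * (1.0 / r1 + 1.0 / r2)

# The image enforces no flow across the base: d(phi)/dz = 0 at z = 0.
eps = 1e-5
x, y = 3.0, 2.0
dphidz = (phi(x, y, eps) - phi(x, y, -eps)) / (2 * eps)
```

    The same superposition principle lets the full model add fictitious sources for the phreatic surface and lateral boundary on top of this image system.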

  11. Constraining the Mass of the Local Group through Proper Motion Measurements of Local Group Galaxies

    NASA Astrophysics Data System (ADS)

    Sohn, S. Tony; van der Marel, R.; Anderson, J.

    2012-01-01

    The Local Group and its two dominant spiral galaxies have been the benchmark for testing many aspects of cosmological and galaxy formation theories. This includes, e.g., dark halo profiles and shapes, substructure and the "missing satellite" problem, and the minimum mass for galaxy formation. But despite the extensive work in all of these areas, our knowledge of the mass of the Milky Way and M31, and thus the total mass of the Local Group, remains one of the most poorly established astronomical parameters (uncertain by a factor of 4). One important reason for this problem is the lack of information on the tangential motions of galaxies, which can only be obtained through proper motion measurements. In this study, we introduce our projects for measuring absolute proper motions of (1) the dwarf spheroidal galaxy Leo I, (2) M31, and (3) the 4 dwarf galaxies near the edge of the Local Group (Cetus, Leo A, Tucana, and Sag DIG). Results from these three independent measurements will provide important clues to the mass of the Milky Way, M31, and the Local Group as a whole, respectively. We also present our proper motion measurement technique, which uses compact background galaxies as astrometric reference sources.

  12. Localization Versus Abstraction: A Comparison of Two Search Reduction Techniques

    NASA Technical Reports Server (NTRS)

    Lansky, Amy L.

    1992-01-01

    There has been much recent work on the use of abstraction to improve planning behavior and cost. Another technique for dealing with the inherently explosive cost of planning is localization. This paper compares the relative strengths of localization and abstraction in reducing planning search cost. In particular, localization is shown to subsume abstraction. Localization techniques can model the various methods of abstraction that have been used, but also provide a much more flexible framework, with a broader range of benefits.

  13. Bridging the Information Gap: Remote Sensing and Micro Hydropower Feasibility in Data-Scarce Regions

    NASA Astrophysics Data System (ADS)

    Muller, Marc Francois

    Access to electricity remains an impediment to development in many parts of the world, particularly in rural areas with low population densities and prohibitive grid extension costs. In that context, community-scale run-of-river hydropower---micro-hydropower---is an attractive local power generation option, particularly in mountainous regions, where appropriate slope and runoff conditions occur. Despite their promise, micro hydropower programs have generally failed to have a significant impact on rural electrification in developing nations. In Nepal, despite very favorable conditions and approximately 50 years of experience, the technology supplies only 4% of the 10 million households that do not have access to the central electricity grid. These poor results point towards a major information gap between technical experts, who may lack the incentives or local knowledge needed to design appropriate systems for rural villages, and local users, who have excellent knowledge of the community but lack technical expertise to design and manage infrastructure. Both groups suffer from a limited basis for evidence-based decision making due to sparse environmental data available to support the technical components of infrastructure design. This dissertation draws on recent advances in remote sensing data, stochastic modeling techniques and open source platforms to bridge that information gap. Streamflow is a key environmental driver of hydropower production that is particularly challenging to model due to its stochastic nature and the complexity of the underlying natural processes. The first part of the dissertation addresses the general challenge of Predicting streamflow in Ungauged Basins (PUB). It first develops an algorithm to optimize the use of rain gauge observations to improve the accuracy of remote sensing precipitation measures. 
    It then derives and validates a process-based model to estimate streamflow distribution in seasonally dry climates using the stochastic nature of rainfall, and proposes a novel geostatistical method to regionalize its parameters across the stream network. Although motivated by the needs of micro hydropower design in Nepal, these techniques represent contributions to the broader international challenge of PUB and can be applied worldwide. The economic drivers of rural electrification are then considered by presenting an econometric technique to estimate the cost function and demand curve of micro hydropower in Nepal. The empirical strategy uses topography-based instrumental variables to identify price elasticities. All developed methods are assembled in a computer tool, along with a search algorithm that uses a digital elevation model to optimize the placement of micro hydropower infrastructure. The tool, Micro Hydro Power, is an open source application that can be accessed and operated in a web browser (http://mfmul.shinyapps.io/mhpower). Its purpose is to assist local communities in the design and evaluation of micro hydropower alternatives in their locality, while using cost and demand information provided by local users to generate accurate feasibility maps at the national level, thus bridging the information gap.

  14. Local residue coupling strategies by neural network for InSAR phase unwrapping

    NASA Astrophysics Data System (ADS)

    Refice, Alberto; Satalino, Giuseppe; Chiaradia, Maria T.

    1997-12-01

    Phase unwrapping is one of the toughest problems in interferometric SAR processing. The main difficulties arise from the presence of point-like error sources, called residues, which occur mainly in close couples due to phase noise. We present an assessment of a local approach to the resolution of these problems by means of a neural network: a multi-layer perceptron, trained with the back-propagation scheme on a series of simulated phase images, learns the best pairing strategies for close residue couples. Results show that good efficiencies and accuracies can be obtained, provided a sufficient number of training examples is supplied. The technique is also tested on real SAR ERS-1/2 tandem interferometric images of the Matera test site, showing a good reduction of the residue density. The better results obtained by the neural network when local criteria are adopted appear justified given the probabilistic nature of the noise process in SAR interferometric phase fields, and suggest a specifically tailored implementation of the neural network approach as a very fast pre-processing step intended to decrease the residue density and yield sufficiently clean images for further processing by more conventional techniques.
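
    A toy version of the classifier can be written with a small multi-layer perceptron trained by plain back-propagation. The features (couple separation and polarity product) and the labeling rule are invented stand-ins for the paper's simulated-phase training data; the point is only the structure of the learning step.

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic training set: features of a candidate residue couple
# (normalized separation, polarity product); label 1 means "pair them".
n = 2000
sep = rng.uniform(0.0, 10.0, n)
pol = rng.choice([-1.0, 1.0], n)
X = np.stack([sep / 10.0, pol], axis=1)
y = ((sep < 5.0) & (pol < 0)).astype(float)   # close couples of opposite polarity

# Two-layer perceptron trained with full-batch back-propagation.
W1 = rng.standard_normal((2, 8)) * 0.5; b1 = np.zeros(8)
W2 = rng.standard_normal((8, 1)) * 0.5; b2 = np.zeros(1)
sig = lambda z: 1.0 / (1.0 + np.exp(-z))

lr = 2.0
for _ in range(4000):
    h = np.tanh(X @ W1 + b1)
    p = sig(h @ W2 + b2).ravel()
    g = (p - y) / n                           # gradient of mean cross-entropy
    gW2 = h.T @ g[:, None]; gb2 = g.sum()
    gh = (g[:, None] @ W2.T) * (1.0 - h**2)   # back-propagate through tanh
    gW1 = X.T @ gh; gb1 = gh.sum(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2; W1 -= lr * gW1; b1 -= lr * gb1

pred = sig(np.tanh(X @ W1 + b1) @ W2 + b2).ravel()
acc = float(np.mean((pred > 0.5) == (y > 0.5)))
```

    As the abstract stresses, performance hinges on supplying enough training examples; with too few, the learned pairing rule does not generalize.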

  15. Distributed Spectral Monitoring For Emitter Localization

    DTIC Science & Technology

    2018-02-12

    localization techniques in a DSA sensor network. The results of the research are presented through simulation of localization algorithms, emulation of a...network on a wireless RF environment emulator, and field tests. The results of the various tests in both the lab and field are obtained and analyzed to... are two main classes of localization techniques, and the technique to use will depend on the information available with the emitter. The first class

  16. Ambient Seismic Source Inversion in a Heterogeneous Earth: Theory and Application to the Earth's Hum

    NASA Astrophysics Data System (ADS)

    Ermert, Laura; Sager, Korbinian; Afanasiev, Michael; Boehm, Christian; Fichtner, Andreas

    2017-11-01

    The sources of ambient seismic noise are extensively studied both to better understand their influence on ambient noise tomography and related techniques, and to infer constraints on their excitation mechanisms. Here we develop a gradient-based inversion method to infer the space-dependent and time-varying source power spectral density of the Earth's hum from cross correlations of continuous seismic data. The precomputation of wavefields using spectral elements allows us to account for both finite-frequency sensitivity and three-dimensional Earth structure. Although similar methods have been proposed previously, to the best of our knowledge they have not yet been applied to data. We apply this method to image the seasonally varying sources of Earth's hum during Northern and Southern Hemisphere winter. The resulting models suggest that hum sources are localized, persistent features that occur at Pacific coasts or shelves and in the North Atlantic during Northern Hemisphere winter, as well as at South Pacific coasts and several distinct locations in the Southern Ocean during Southern Hemisphere winter. The contribution of pelagic sources from the central North Pacific cannot be constrained. Besides improving the accuracy of noise source locations through the incorporation of finite-frequency effects and 3-D Earth structure, this method may be used in future cross-correlation waveform inversion studies to provide initial source models and source model updates.

  17. Magnetotellurics with long distance remote reference to reject DC railway noise

    NASA Astrophysics Data System (ADS)

    Hanstein, T.; Jiang, J.; Strack, K.; Ritter, O.

    2014-12-01

    Some parts of the railway network in Europe are electrified with DC current. The return current in the ground varies in space, time, and power as trains move. Since train traffic is active 24 hours a day, there is no quiet time. The train signal dominates for periods longer than 1 s and is a near-field source. The transfer function of the magnetotelluric (MT) sounding is influenced by this near-field source: the phase goes to zero and the amplitude increases with slope 1 toward longer periods. Since this dominating noise is present all day, robust magnetotelluric processing techniques that identify and remove outliers are neither applicable nor sufficient. The remote reference technique has been applied successfully to magnetotelluric soundings: combining a disturbed local MT data set with the data of a remote station that simultaneously records the horizontal magnetic fields can improve the data quality. Finding a good remote station during a field survey is difficult and expensive. There is a permanent MT remote reference station in Germany, set up and maintained by the Helmholtz Centre Potsdam - GFZ German Research Centre for Geosciences. The station is located near Wittstock and has a good signal-to-noise ratio with low cultural noise; the ground is almost 1D, and recording has been ongoing since May 2010. The electric and magnetic fields are recorded continuously at 250 Hz sampling with induction coils; the magnetic field is also recorded with fluxgate magnetometers at 5 Hz sampling. The distance to the local MT site is about 600 km.
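
    The remote reference principle can be sketched with a scalar toy model: local magnetic noise biases the single-station transfer function estimate downward, whereas cross-correlating against a clean remote magnetic recording removes the bias, because the local noise is uncorrelated with the remote field. The signals, noise levels, and the "impedance" value below are synthetic, and the electric channel is kept noise-free for clarity.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 20000
z_true = 2.0 + 1.0j                 # scalar stand-in for the MT impedance

b = rng.standard_normal(n) + 1j * rng.standard_normal(n)     # natural source field
e = z_true * b                                               # local electric field
b_local = b + 1.0 * (rng.standard_normal(n) + 1j * rng.standard_normal(n))   # noisy local B
b_remote = b + 0.1 * (rng.standard_normal(n) + 1j * rng.standard_normal(n))  # clean remote B

# Single-station estimate: biased down by the local magnetic noise power.
z_ss = np.vdot(b_local, e) / np.vdot(b_local, b_local)
# Remote reference estimate: the local noise averages out of both
# cross-spectra because it is uncorrelated with the remote field.
z_rr = np.vdot(b_remote, e) / np.vdot(b_remote, b_local)
```

    With the chosen noise levels the single-station estimate is biased low by roughly a factor of two, while the remote-referenced estimate stays close to the true value.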

  18. Ellipsoidal head model for fetal magnetoencephalography: forward and inverse solutions

    NASA Astrophysics Data System (ADS)

    Gutiérrez, David; Nehorai, Arye; Preissl, Hubert

    2005-05-01

    Fetal magnetoencephalography (fMEG) is a non-invasive technique where measurements of the magnetic field outside the maternal abdomen are used to infer the source location and signals of the fetus' neural activity. There are a number of aspects related to fMEG modelling that must be addressed, such as the conductor volume, fetal position and orientation, gestation period, etc. We propose a solution to the forward problem of fMEG based on an ellipsoidal head geometry. This model has the advantage of highlighting special characteristics of the field that are inherent to the anisotropy of the human head, such as the spread and orientation of the field in relationship with the localization and position of the fetal head. Our forward solution is presented in the form of a kernel matrix that facilitates the solution of the inverse problem through decoupling of the dipole localization parameters from the source signals. Then, we use this model and the maximum likelihood technique to solve the inverse problem assuming the availability of measurements from multiple trials. The applicability and performance of our methods are illustrated through numerical examples based on a real 151-channel SQUID fMEG measurement system (SARA). SARA is an MEG system especially designed for fetal assessment and is currently used for heart and brain studies. Finally, since our model requires knowledge of the best-fitting ellipsoid's centre location and semiaxes lengths, we propose a method for estimating these parameters through a least-squares fit on anatomical information obtained from three-dimensional ultrasound images.
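
    The final step, estimating the best-fitting surface from three-dimensional ultrasound points by least squares, can be illustrated for the spherical special case, where the fit is linear in the unknowns (the ellipsoid case adds semiaxis parameters; this is a simplified stand-in with synthetic points).

```python
import numpy as np

rng = np.random.default_rng(6)
c_true, r_true = np.array([1.0, -2.0, 3.0]), 4.0
# Synthetic "ultrasound" surface points on the sphere, with measurement noise.
u = rng.standard_normal((500, 3))
u /= np.linalg.norm(u, axis=1, keepdims=True)
pts = c_true + r_true * u + 0.01 * rng.standard_normal((500, 3))

# Algebraic least-squares sphere fit: |p|^2 = 2 p.c + (r^2 - |c|^2)
# is linear in the centre c and the combined constant.
A = np.hstack([2.0 * pts, np.ones((500, 1))])
rhs = np.sum(pts**2, axis=1)
sol, *_ = np.linalg.lstsq(A, rhs, rcond=None)
c_est = sol[:3]
r_est = np.sqrt(sol[3] + c_est @ c_est)
```

    For the ellipsoid used in the paper, the analogous linear system carries separate quadratic coefficients per axis, but the least-squares structure is the same.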

  19. Capillary jets in normal gravity: Asymptotic stability analysis and excitation using Maxwell and ultrasonic radiation stresses

    NASA Astrophysics Data System (ADS)

    Lonzaga, Joel Barci

    Both modulated ultrasonic radiation pressure and oscillating Maxwell stress from a voltage-modulated ring electrode are employed to excite low-frequency capillary modes of a weakly tapered liquid jet issuing from a nozzle. The capillary modes are waves formed at the surface of the liquid jet. The ultrasound is internally applied to the liquid jet waveguide and is cut off at a location resulting in a significantly enhanced oscillating radiation stress near the cutoff location. Alternatively, the thin electrode can generate a highly localized oscillating Maxwell stress on the jet surface. Experimental evidence shows that a spatially unstable mode with positive group velocity (propagating downstream from the excitation source) and a neutral mode with negative group velocity are both excited. Reflection at the nozzle boundary converts the neutral mode into an unstable one that interferes with the original unstable mode. The interference effect is observed downstream from the source using a laser-based optical extinction technique that detects the surface waves while the modulation frequency is scanned. This technique is very sensitive to small-amplitude disturbances. Existing linear, convective stability analyses on liquid jets accounting for the gravitational effect (i.e. varying radius and velocity) appear to be not applicable to non-slender, slow liquid jets considered here where the gravitational effect is found substantial at low flow rates. The multiple-scales method, asymptotic expansion and WKB approximation are used to derive a dispersion relation for the capillary wave similar to one obtained by Rayleigh but accounting for the gravitational effect. These mathematical tools aided by Langer's transformation are also used to derive a uniformly valid approximation for the acoustic wave propagation in a tapered cylindrical waveguide. The acoustic analytical approximation is validated by finite-element calculations. 
The jet response is modeled using a hybrid of Fourier analysis and the WKB-type analysis as proposed by Lighthill. The former derives the mode response to a highly localized source while the latter governs the mode propagation in a weakly inhomogeneous jet away from the source.
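The dispersion relation referred to above is not reproduced in the abstract; for a uniform inviscid jet of radius a it presumably reduces to the classical Rayleigh result for axisymmetric capillary modes, which can be written as:

```latex
% Classical Rayleigh dispersion relation for axisymmetric capillary modes
% on a uniform inviscid jet of radius a (a sketch of the limiting case;
% the thesis generalizes this with WKB corrections for the gravity-induced
% taper of radius and velocity):
\omega^{2} \;=\; \frac{\sigma}{\rho\, a^{3}}\; k a \,\bigl(1 - k^{2}a^{2}\bigr)\,
\frac{I_{1}(k a)}{I_{0}(k a)}
```

Here σ is the surface tension, ρ the liquid density, k the axial wavenumber, and I0, I1 are modified Bessel functions of the first kind; ω² < 0 (instability) for ka < 1. In the gravity-corrected WKB analysis described in the abstract, a and the jet velocity become slowly varying functions of the axial coordinate.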

  20. Factors controlling groundwater quality in the Yeonjegu District of Busan City, Korea, using the hydrogeochemical processes and fuzzy GIS.

    PubMed

    Venkatramanan, Senapathi; Chung, Sang Yong; Selvam, Sekar; Lee, Seung Yeop; Elzain, Hussam Eldin

    2017-10-01

Hydrogeochemical processes and fuzzy GIS techniques were used to evaluate the groundwater quality in the Yeonjegu district of Busan Metropolitan City, Korea. The highest concentrations of major ions were mainly related to the local geology. Seawater intrusion into the river water and municipal contaminants were secondary sources of groundwater contamination in the study area. Factor analysis identified mineral dissolution from the host rocks and domestic influences as the contamination sources. The Gibbs plot showed that the major ions were derived from rock weathering. Piper's trilinear diagram classified the groundwater into five types, CaHCO3, NaHCO3, NaCl, CaCl2, and CaSO4, in that order. The ionic relationships and the saturation indices of the minerals indicated that evaporation, dissolution, and precipitation processes controlled the groundwater chemistry. The fuzzy GIS map showed that highly contaminated groundwater occurred in the northeastern and central parts and that groundwater of medium quality appeared in most of the study area. This suggests that the groundwater quality of the study area was influenced by local geology, seawater intrusion, and municipal contaminants. This research demonstrated that geochemical analyses and the fuzzy GIS method are very useful for identifying contaminant sources and locating areas of good groundwater quality.

  1. 47 CFR 11.18 - EAS Designations.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... Designations. (a) National Primary (NP) is a source of EAS Presidential messages. (b) Local Primary (LP) is a... as specified in its EAS Local Area Plan. If it is unable to carry out this function, other LP sources... broadcast stations in the Local Area. (c) State Primary (SP) is a source of EAS State messages. These...

  2. 47 CFR 11.18 - EAS Designations.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... Designations. (a) National Primary (NP) is a source of EAS Presidential messages. (b) Local Primary (LP) is a... as specified in its EAS Local Area Plan. If it is unable to carry out this function, other LP sources... broadcast stations in the Local Area. (c) State Primary (SP) is a source of EAS State messages. These...

  3. Characterising the Dense Molecular Gas in Exceptional Local Galaxies

    NASA Astrophysics Data System (ADS)

    Tunnard, Richard C. A.

    2016-08-01

The interferometric facilities now coming online (the Atacama Large Millimetre Array (ALMA) and the NOrthern Extended Millimeter Array (NOEMA)) and those planned for the coming decade (the Next Generation Very Large Array (ngVLA) and the Square Kilometre Array (SKA)) in the radio to sub-millimetre regimes are opening a window onto the molecular gas in high-redshift galaxies. However, our understanding of similar galaxies in the local universe is still far from complete, and the data analysis techniques and tools needed to interpret the observations in consistent and comparable ways are yet to be developed. I first describe the Markov chain Monte Carlo (MCMC) script developed to drive a public radiative transfer code. I characterise both the public code and the MCMC script, including an exploration of the effect of observing molecular lines at high redshift, where the Cosmic Microwave Background (CMB) can provide a significant background, as well as the effect this can have on well-known local correlations. I present two studies of ultraluminous infrared galaxies (ULIRGs) in the local universe making use of literature and collaborator data. In the first of these, NGC 6240, I use the wealth of available data and the geometry of the source to develop a multi-phase, multi-species model, finding evidence for a complex medium of hot diffuse and cold dense gas in pressure equilibrium. Next, I study the prototypical ULIRG Arp 220, an extraordinary galaxy rendered especially interesting by the controversy over the power source of the western of its two merger nuclei, and by its immense luminosity and dust obscuration. Using traditional grid-based methods I explore the molecular gas conditions within the nuclei and find evidence for chemical differentiation between the two nuclei, potentially related to the obscured power source.
Finally, I investigate the potential evolution of proto-clusters over cosmic time with sub-millimetre observations of 14 radio galaxies, unexpectedly finding little to no evidence for cluster evolution.

  4. An impact source localization technique for a nuclear power plant by using sensors of different types.

    PubMed

    Choi, Young-Chul; Park, Jin-Ho; Choi, Kyoung-Sik

    2011-01-01

In a nuclear power plant, a loose part monitoring system (LPMS) provides information on the location and the mass of a loosened or detached metal part impacting the inner surface of the primary pressure boundary. Typically, accelerometers are mounted on the surface of a reactor vessel to localize the impacts caused by metallic substances striking the reactor system. However, in some cases the number of accelerometers is not sufficient to estimate the impact location precisely. In such a case, one useful approach is to utilize other types of sensors that can measure the vibration of the reactor structure. For example, acoustic emission (AE) sensors are installed on the reactor structure to detect leakage or cracks in the primary pressure boundary. However, accelerometers and AE sensors operate in different frequency ranges: the frequency of interest for AE sensors is higher than that for accelerometers. In this paper, we propose a method of impact source localization that uses accelerometer signals and AE signals simultaneously. The main concept of impact location estimation is based on the arrival time difference of the impact stress wave between different sensor locations. It is difficult to find this arrival time difference directly, because the primary frequency ranges of accelerometers and AE sensors differ. To overcome this problem, we use phase delays of the envelope of the impact signals, because the impact signals from the accelerometer and the AE sensor are similar in overall shape (envelope). To verify the proposed method, we performed experiments on a reactor mock-up model and in a real nuclear power plant. The experimental results demonstrate that the reliability and precision of impact source localization can be enhanced. Therefore, if the proposed method is applied to a nuclear power plant, it provides the effect of additional installed sensors.
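The envelope-based delay estimation described in this record can be sketched as follows. This is a minimal illustration, not the authors' implementation: it computes the analytic-signal envelope of each channel via an FFT-based Hilbert transform and estimates the arrival-time difference from the peak of the envelope cross-correlation, which is insensitive to the differing carrier bands of accelerometers and AE sensors.

```python
import numpy as np

def envelope(x):
    """Analytic-signal envelope via an FFT-based Hilbert transform."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.abs(np.fft.ifft(X * h))

def envelope_delay(x, y, fs):
    """Delay of y relative to x, in seconds, from the peak of the
    cross-correlation of the (mean-removed) signal envelopes."""
    ex = envelope(x)
    ey = envelope(y)
    ex -= ex.mean()
    ey -= ey.mean()
    corr = np.correlate(ey, ex, mode="full")
    lag = np.argmax(corr) - (len(ex) - 1)
    return lag / fs
```

Because only the envelopes are correlated, the two channels may carry the same impact event on entirely different carrier frequencies, which is the situation the paper exploits.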

  5. Locating sources within a dense sensor array using graph clustering

    NASA Astrophysics Data System (ADS)

    Gerstoft, P.; Riahi, N.

    2017-12-01

We develop a model-free technique to identify weak sources within dense sensor arrays using graph clustering. No knowledge about the propagation medium is needed except that signal strengths decay to insignificant levels within a scale that is shorter than the aperture. We reinterpret the spatial coherence matrix of a wave field as a matrix whose support is the connectivity matrix of a graph with sensors as vertices. In a dense network, well-separated sources induce clusters in this graph. The geographic spread of these clusters can serve to localize the sources. The support of the covariance matrix is estimated from limited-time data using a hypothesis test with a robust phase-only coherence test statistic, combined with a physical distance criterion. The latter criterion ensures graph sparsity and thus prevents clusters from forming by chance. We verify the approach and quantify its reliability on a simulated dataset. The method is then applied to data from a dense 5200-element geophone array that blanketed the city of Long Beach, CA. The analysis exposes a helicopter traversing the array and oil production facilities.
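The core of the graph-clustering idea can be sketched in a few lines. This is a simplified stand-in (thresholding a given coherence matrix; the paper instead estimates the support with a phase-only coherence hypothesis test), and the parameter names are hypothetical:

```python
import numpy as np
from collections import deque

def cluster_sources(coh, pos, coh_thresh=0.5, dist_thresh=1.0, min_size=3):
    """Threshold |coherence| and inter-sensor distance into a graph
    adjacency matrix, find connected components (clusters) by BFS,
    and return cluster centroids as candidate source locations."""
    n = coh.shape[0]
    d = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)
    adj = (np.abs(coh) > coh_thresh) & (d < dist_thresh)  # edges of the graph
    np.fill_diagonal(adj, False)
    seen, centroids = set(), []
    for start in range(n):
        if start in seen:
            continue
        comp, queue = [], deque([start])
        seen.add(start)
        while queue:                      # breadth-first search
            i = queue.popleft()
            comp.append(i)
            for j in np.flatnonzero(adj[i]):
                if j not in seen:
                    seen.add(j)
                    queue.append(j)
        if len(comp) >= min_size:         # ignore chance pairings
            centroids.append(pos[comp].mean(axis=0))
    return centroids
```

The distance criterion (`dist_thresh`) plays the same sparsifying role as in the paper: distant sensors are never connected, so clusters cannot form by chance across the aperture.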

  6. On beam shaping of the field radiated by a line source coupled to finite or infinite photonic crystals.

    PubMed

    Ceccuzzi, Silvio; Jandieri, Vakhtang; Baccarelli, Paolo; Ponti, Cristina; Schettini, Giuseppe

    2016-04-01

The beam-shaping effect on the field radiated by a line source has been compared between an ideal infinite structure constituted by two photonic crystals and an actual finite one, by means of two different methods. The lattice sums technique combined with the generalized reflection matrix method is used to rigorously investigate radiation from the infinite photonic crystals, whereas radiation from crystals composed of a finite number of rods along the layers is analyzed using the cylindrical-wave approach. Directive radiation is observed with the line source embedded in the structure. With an increased separation distance between the crystals, significant edge diffraction appears and provides the main radiation mechanism in the finite layout. Suitable absorbers are implemented to reduce this diffraction and the reflections at the boundaries, yielding good agreement between the radiation patterns of a localized line source coupled to finite and infinite photonic crystals when the number of periods of the finite structure is properly chosen.

  7. Infrasound Observations from Lightning

    NASA Astrophysics Data System (ADS)

    Arechiga, R. O.; Johnson, J. B.; Edens, H. E.; Thomas, R. J.; Jones, K. R.

    2008-12-01

    To provide additional insight into the nature of lightning, we have investigated its infrasound manifestations. An array of three stations in a triangular configuration, with three sensors each, was deployed during the Summer of 2008 (July 24 to July 28) in the Magdalena mountains of New Mexico, to monitor infrasound (below 20 Hz) sources due to lightning. Hyperbolic formulations of time of arrival (TOA) measurements and interferometric techniques were used to locate lightning sources occurring over and outside the network. A comparative analysis of simultaneous Lightning Mapping Array (LMA) data and infrasound measurements operating in the same area was made. The LMA locates the sources of impulsive RF radiation produced by lightning flashes in three spatial dimensions and time, operating in the 60 - 66 MHz television band. The comparison showed strong evidence that lightning does produce infrasound. This work is a continuation of the study of the frequency spectrum of thunder conducted by Holmes et al., who reported measurements of infrasound frequencies. The integration of infrasound measurements with RF source localization by the LMA shows great potential for improved understanding of lightning processes.
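The hyperbolic time-of-arrival formulation mentioned here is, at bottom, a nonlinear least-squares problem. A generic Gauss-Newton sketch (not the authors' implementation) that recovers a 2-D source position from measured time differences of arrival relative to a reference station:

```python
import numpy as np

def tdoa_locate(stations, tdoas, c=343.0, x0=None, iters=50):
    """Least-squares source location from time differences of arrival.
    tdoas[i-1] = (|x - s_i| - |x - s_0|) / c for stations i = 1..m-1,
    with s_0 the reference station. Solved by Gauss-Newton iteration."""
    s = np.asarray(stations, float)
    tau = np.asarray(tdoas, float)
    x = s.mean(axis=0) if x0 is None else np.asarray(x0, float)
    for _ in range(iters):
        r = np.linalg.norm(s - x, axis=1)      # ranges to each station
        f = (r[1:] - r[0]) / c                 # predicted TDOAs
        g = (x - s) / r[:, None]               # unit vectors station -> source
        J = (g[1:] - g[0]) / c                 # Jacobian of f w.r.t. x
        dx, *_ = np.linalg.lstsq(J, tau - f, rcond=None)
        x = x + dx
        if np.linalg.norm(dx) < 1e-9:
            break
    return x
```

With four stations (three TDOAs) the 2-D position is determined; the sound speed `c` and the station layout here are illustrative, not those of the 2008 deployment.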

  8. A manual to identify sources of fluvial sediment

    USGS Publications Warehouse

    Gellis, Allen C.; Fitzpatrick, Faith A.; Schubauer-Berigan, Joseph

    2016-01-01

Sediment is an important pollutant of concern that can degrade and alter aquatic habitat. A sediment budget is an accounting of the sources, storage, and export of sediment over a defined spatial and temporal scale. This manual focuses on field approaches to estimate a sediment budget. We also highlight the sediment fingerprinting approach to attribute sediment to different watershed sources. Determining the sources and sinks of sediment is important in developing strategies to reduce sediment loads to water bodies impaired by sediment. Therefore, this manual can be used when developing a sediment TMDL requiring identification of sediment sources. The manual takes the user through the necessary steps to construct a sediment budget: (1) decision-making for the watershed scale and time period of interest; (2) familiarization with the watershed by conducting a literature review, compiling background information and maps relevant to the study questions, and conducting a reconnaissance of the watershed; (3) developing partnerships with landowners and jurisdictions; (4) characterization of the watershed geomorphic setting; (5) development of a sediment budget design; (6) data collection; (7) interpretation and construction of the sediment budget; and (8) generating products (maps, reports, and presentations) to communicate findings. Sediment budget construction begins with examining the question(s) being asked and whether a sediment budget is necessary to answer them. If undertaking a sediment budget analysis is a viable option, the next step is to define the spatial scale of the watershed and the time scale needed to answer the question(s). Of course, we understand that monetary constraints play a big role in any decision. Early in the sediment budget development process, we suggest getting to know your watershed by conducting a reconnaissance and meeting with local stakeholders. The reconnaissance aids in understanding the geomorphic setting of the watershed and potential sources of sediment.
Identifying the potential sediment sources early in the design of the sediment budget will help later in deciding which tools are necessary to monitor erosion and/or deposition at these sources. Tools range from rapid inventories that estimate the sediment budget to more rigorous field monitoring that quantifies sediment erosion, deposition, and export. In either approach, data are gathered, erosion and deposition are calculated and compared to the sediment export, and the error uncertainty is described. Findings are presented to local stakeholders and management officials. Sediment fingerprinting is a technique that apportions the sources of fine-grained sediment in a watershed using tracers or fingerprints. Due to different geologic and anthropogenic histories, the chemical and physical properties of sediment in a watershed may vary and often represent a unique signature (or fingerprint) for each source within the watershed. Fluvial sediment samples (the target sediment) are also collected and exhibit a composite of the source properties that can be apportioned through various statistical techniques. Using an unmixing model and error analysis, the final sediment apportionment is determined.
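The unmixing model at the heart of sediment fingerprinting can be illustrated with a toy solver. This sketch appends the sum-to-one constraint on source proportions as an extra least-squares row and crudely enforces non-negativity by clipping; operational studies use properly constrained optimizers and Monte Carlo error analysis, which are omitted here:

```python
import numpy as np

def unmix(source_sigs, target):
    """Estimate source proportions x (x >= 0, sum(x) = 1) so that
    source_sigs @ x best matches the target tracer vector.
    source_sigs: (n_tracers, n_sources) matrix of source fingerprints.
    target: (n_tracers,) tracer concentrations of the fluvial sample."""
    # Append sum-to-one as a soft constraint row, then solve least squares.
    A = np.vstack([source_sigs, np.ones(source_sigs.shape[1])])
    b = np.append(target, 1.0)
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    # Crude non-negativity: clip and renormalize (a stand-in for a
    # proper constrained solver such as NNLS).
    x = np.clip(x, 0.0, None)
    return x / x.sum()
```

When the target is an exact mixture of the source fingerprints, the recovered proportions match the true mixture.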

  9. Brain source localization: A new method based on MUltiple SIgnal Classification algorithm and spatial sparsity of the field signal for electroencephalogram measurements

    NASA Astrophysics Data System (ADS)

    Vergallo, P.; Lay-Ekuakille, A.

    2013-08-01

Brain activity can be recorded by means of EEG (electroencephalogram) electrodes placed on the scalp of the patient. The EEG reflects the activity of groups of neurons located in the head, and a fundamental problem in neurophysiology is the identification of the sources responsible for brain activity, which is especially important when a seizure occurs. Studies that formalize the relationship between the electromagnetic activity in the head and the recording of the generated external field make it possible to characterize patterns of brain activity. The inverse problem, in which the underlying sources must be determined from the field sampled at the electrodes, is more difficult: it may not have a unique solution, and the search for a solution is hampered by a low spatial resolution that may not allow one to distinguish between activities involving sources close to each other. Thus, sources of interest may be obscured or go undetected, and well-known source localization methods such as MUSIC (MUltiple SIgnal Classification) can fail. Many advanced source localization techniques achieve better resolution by exploiting sparsity: if the number of sources is small, the neural power as a function of location is sparse. In this work a solution based on the spatial sparsity of the field signal is presented and analyzed to improve the MUSIC method. For this purpose, a priori information on the sparsity of the signal must be set. The problem is formulated and solved using a regularization method such as Tikhonov regularization, which computes the solution as the best compromise between two cost functions to be minimized, one related to the fit of the data and the other to maintaining the sparsity of the signal. The method is first tested on simulated EEG signals obtained by solving the forward problem. For the model considered for the head and brain sources, the result obtained is a significant improvement over the classical MUSIC method, with a small margin of uncertainty about the exact location of the sources. In fact, the spatial sparsity constraints on the field signal concentrate power in the directions of active sources, so the positions of the sources within the considered volume conductor can be calculated. The method is then tested on real EEG data as well. The result is in accordance with the clinical report, although improvements are needed for more accurate estimates of the positions of the sources.
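The regularization step described above balances data fit against a penalty on the solution. A minimal sketch using the plain L2 (Tikhonov) penalty, which has a closed-form solution; note that the paper's actual functional promotes spatial sparsity, which an L2 norm alone does not:

```python
import numpy as np

def tikhonov_inverse(leadfield, y, alpha):
    """Tikhonov-regularized source estimate for a linear inverse problem:
    minimize ||leadfield @ s - y||^2 + alpha * ||s||^2,
    whose minimizer is s = (L^T L + alpha I)^{-1} L^T y."""
    n = leadfield.shape[1]
    lhs = leadfield.T @ leadfield + alpha * np.eye(n)
    return np.linalg.solve(lhs, leadfield.T @ y)
```

With a small `alpha` and noiseless data, the estimate approaches the true source vector; larger `alpha` trades fit for stability, which is the compromise the abstract describes.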

  10. Development and testing of real-time PCR assays for determining fecal loading and source identification (cattle, human, etc.) in surface water and groundwater

    NASA Astrophysics Data System (ADS)

    McKay, L. D.; Layton, A.; Gentry, R.

    2004-12-01

A multi-disciplinary group of researchers at the University of Tennessee is developing and testing a series of microbial assay methods based on real-time PCR to detect fecal bacterial concentrations and host sources in water samples. Real-time PCR is an enumeration technique based on the unique and conserved nucleic acid sequences present in all organisms. The first research task was development of an assay (AllBac) to detect the total amount of Bacteroides, which represents up to 30 percent of fecal mass. Subsequent assays were developed to detect Bacteroides from cattle (BoBac) and humans (HuBac) using 16S rRNA genes based on DNA sequences in GenBank, as well as sequences from local fecal samples. The assays potentially have significant advantages over conventional bacterial source tracking methods because: 1. unlike traditional enumeration methods, they do not require bacterial cultivation; 2. there are no known non-fecal sources of Bacteroides; 3. the assays are quantitative, with results for total concentration and for each species expressed in mg/l; and 4. they show little regional variation within host species, meaning that they do not require development of extensive local gene libraries. The AllBac and BoBac assays have been used in a study of fecal contamination in a small rural watershed (Stock Creek) near Knoxville, TN, and have proven useful in identification of areas where cattle represent a significant fecal input and in development of BMPs. It is expected that these types of assays (and future assays for birds, hogs, etc.) could have broad applications in monitoring fecal impacts from Animal Feeding Operations, as well as from wildlife and human sources.

  11. Methane baseline concentrations and sources in shallow aquifers from the shale gas-prone region of the St. Lawrence lowlands (Quebec, Canada).

    PubMed

    Moritz, Anja; Hélie, Jean-Francois; Pinti, Daniele L; Larocque, Marie; Barnetche, Diogo; Retailleau, Sophie; Lefebvre, René; Gélinas, Yves

    2015-04-07

Hydraulic fracturing is becoming an important technique worldwide to recover hydrocarbons from unconventional sources such as shale gas. In Quebec (Canada), the Utica Shale has been identified as having unconventional gas production potential. However, there has been a moratorium on shale gas exploration since 2010. The work reported here was aimed at defining baseline concentrations of methane in shallow aquifers of the St. Lawrence Lowlands and its sources using δ13C methane signatures. Since this study was performed prior to large-scale fracturing activities, it provides background data prior to the eventual exploitation of shale gas through hydraulic fracturing. Groundwater was sampled from private (n = 81), municipal (n = 34), and observation (n = 15) wells between August 2012 and May 2013. Methane was detected in 80% of the wells, with an average concentration of 3.8 ± 8.8 mg/L and a range of <0.0006 to 45.9 mg/L. Methane concentrations were linked to groundwater chemistry and distance to the major faults in the studied area. The methane δ13C signature of 19 samples was > -50‰, indicating a potential thermogenic source. Localized areas of high methane concentrations from predominantly biogenic sources were found throughout the study area. In several samples, mixing, migration, and oxidation processes likely affected the chemical and isotopic composition of the gases, making it difficult to pinpoint their origin. Energy companies should respect a safe distance from major natural faults in the bedrock when siting hydraulic fracturing activities, to minimize the risk of contaminating the surrounding groundwater, since natural faults are likely to be a preferential migration pathway for methane.

  12. Source apportionment and distribution of polycyclic aromatic hydrocarbons, risk considerations, and management implications for urban stormwater pond sediments in Minnesota, USA.

    PubMed

    Crane, Judy L

    2014-02-01

    High concentrations of polycyclic aromatic hydrocarbons (PAHs) are accumulating in many urban stormwater ponds in Minnesota, resulting in either expensive disposal of the excavated sediment or deferred maintenance by economically challenged municipalities. Fifteen stormwater ponds in the Minneapolis-St. Paul, MN, metropolitan area were studied to determine sources of PAHs to bed sediments through the application of several environmental forensic techniques, including a contaminant mass balance receptor model. The model results were quite robust and indicated that coal tar-based sealant (CT-sealant) particulate washoff and dust sources were the most important sources of PAHs (67.1%), followed by vehicle-related sources (29.5%), and pine wood combustion particles (3.4%). The distribution of 34 parent and alkylated PAHs was also evaluated regarding ancillary measurements of black carbon, total organic carbon, and particle size classes. None of these parameters were significantly different based on major land-use classifications (i.e., residential, commercial, and industrial) for pond watersheds. PAH contamination in three stormwater ponds was high enough to present a risk to benthic invertebrates, whereas nine ponds exceeded human health risk-based benchmarks that would prompt more expensive disposal of dredged sediment. The State of Minnesota has been addressing the broader issue of PAH-contaminated stormwater ponds by encouraging local municipalities to ban CT-sealants (29 in all) and to promote pollution prevention alternatives to businesses and homeowners, such as switching to asphalt-based sealants. A statewide CT-sealant ban was recently enacted. Other local and regional jurisdictions may benefit from using Minnesota's approach where CT-sealants are still used.

  13. Determining the torus covering factors for a sample of type 1 AGN in the local Universe

    NASA Astrophysics Data System (ADS)

    Ezhikode, Savithri H.; Gandhi, Poshak; Done, Chris; Ward, Martin; Dewangan, Gulab C.; Misra, Ranjeev; Philip, Ninan Sajeeth

    2017-12-01

    In the unified scheme of active galactic nuclei, a dusty torus absorbs and then reprocesses a fraction of the intrinsic luminosity which is emitted at longer wavelengths. Thus, subject to radiative transfer corrections, the fraction of the sky covered by the torus as seen from the central source (known as the covering factor fc) can be estimated from the ratio of the infrared to the bolometric luminosities of the source as fc = Ltorus/LBol. However, the uncertainty in determining LBol has made the estimation of covering factors by this technique difficult, especially for AGN in the local Universe where the peak of the observed spectral energy distributions lies in the UV (ultraviolet). Here, we determine the covering factors of an X-ray/optically selected sample of 51 type 1 AGN. The bolometric luminosities of these sources are derived using a self-consistent, energy-conserving model that estimates the contribution in the unobservable far-UV region, using multifrequency data obtained from SDSS, XMM-Newton, WISE, 2MASS and UKIDSS. We derive a mean value of fc ∼ 0.30 with a dispersion of 0.17. Sample correlations, combined with simulations, show that fc is more strongly anticorrelated with λEdd than with LBol. This points to large-scale torus geometry changes associated with the Eddington-dependent accretion flow, rather than a receding torus, with its inner sublimation radius determined solely by heating from the central source. Furthermore, we do not see any significant change in the distribution of fc for sub-samples of radio-loud sources or Narrow Line Seyfert 1 galaxies (NLS1s), though these sub-samples are small.

  14. Research to Support California Greenhouse Gas Reduction Programs

    NASA Astrophysics Data System (ADS)

    Croes, B. E.; Charrier-Klobas, J. G.; Chen, Y.; Duren, R. M.; Falk, M.; Franco, G.; Gallagher, G.; Huang, A.; Kuwayama, T.; Motallebi, N.; Vijayan, A.; Whetstone, J. R.

    2016-12-01

Since the passage of the California Global Warming Solutions Act in 2006, California state agencies have developed comprehensive programs to reduce both long-lived and short-lived climate pollutants. California is already close to achieving its goal of reducing greenhouse gas (GHG) emissions to 1990 levels by 2020, about a 30% reduction from business as usual. In addition, California has developed strategies to reduce GHG emissions another 40% by 2030, which will put the State on a path to meeting its 2050 goal of an 80% reduction. To support these emission reduction goals, the California Air Resources Board (CARB) and the California Energy Commission have partnered with NASA's Carbon Monitoring System (CMS) program on a comprehensive research program to identify and quantify the various GHG emission source sectors in the state. These include California-specific emission studies and inventories for carbon dioxide (CO2), methane (CH4), and nitrous oxide (N2O) emission sources; a Statewide GHG Monitoring Network for these pollutants integrated with the Los Angeles Megacities Carbon Project funded by several federal agencies; efforts to verify emission inventories using inversion modeling and other techniques; mobile measurement platforms and flux chambers to measure local and source-specific emissions; and a large-scale statewide methane survey using a tiered monitoring and measurement program, which will include satellite, airborne, and ground-level measurements of the various regions and source sectors in the State. In addition, there are parallel activities focused on black carbon (BC) and fluorinated gases (F-gases) by CARB. This presentation will provide an overview of results from inventory, monitoring, data analysis, and other research efforts on Statewide, regional, and local sources of GHG emissions in California.

  15. Improving the Nulling Beamformer Using Subspace Suppression.

    PubMed

    Rana, Kunjan D; Hämäläinen, Matti S; Vaina, Lucia M

    2018-01-01

    Magnetoencephalography (MEG) captures the magnetic fields generated by neuronal current sources with sensors outside the head. In MEG analysis these current sources are estimated from the measured data to identify the locations and time courses of neural activity. Since there is no unique solution to this so-called inverse problem, multiple source estimation techniques have been developed. The nulling beamformer (NB), a modified form of the linearly constrained minimum variance (LCMV) beamformer, is specifically used in the process of inferring interregional interactions and is designed to eliminate shared signal contributions, or cross-talk, between regions of interest (ROIs) that would otherwise interfere with the connectivity analyses. The nulling beamformer applies the truncated singular value decomposition (TSVD) to remove small signal contributions from a ROI to the sensor signals. However, ROIs with strong crosstalk will have high separating power in the weaker components, which may be removed by the TSVD operation. To address this issue we propose a new method, the nulling beamformer with subspace suppression (NBSS). This method, controlled by a tuning parameter, reweights the singular values of the gain matrix mapping from source to sensor space such that components with high overlap are reduced. By doing so, we are able to measure signals between nearby source locations with limited cross-talk interference, allowing for reliable cortical connectivity analysis between them. In two simulations, we demonstrated that NBSS reduces cross-talk while retaining ROIs' signal power, and has higher separating power than both the minimum norm estimate (MNE) and the nulling beamformer without subspace suppression. We also showed that NBSS successfully localized the auditory M100 event-related field in primary auditory cortex, measured from a subject undergoing an auditory localizer task, and suppressed cross-talk in a nearby region in the superior temporal sulcus.
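The TSVD operation and the idea of reweighting rather than hard-truncating singular values can be sketched as follows. The smooth weighting function used here is an illustrative choice, not necessarily the one used by NBSS:

```python
import numpy as np

def tsvd_truncate(G, k):
    """Truncated SVD: keep only the k strongest components of a gain
    matrix G, discarding the weaker ones (as in the nulling beamformer)."""
    U, s, Vt = np.linalg.svd(G, full_matrices=False)
    return U[:, :k] @ np.diag(s[:k]) @ Vt[:k]

def subspace_suppress(G, gamma):
    """Reweight singular values smoothly, s -> s^3 / (s^2 + gamma),
    so that weak but potentially discriminative components are
    attenuated rather than removed outright; gamma plays the role
    of the tuning parameter mentioned in the abstract."""
    U, s, Vt = np.linalg.svd(G, full_matrices=False)
    return U @ np.diag(s**3 / (s**2 + gamma)) @ Vt
```

With `gamma = 0` the reweighting is the identity; as `gamma` grows, weak components are suppressed progressively, in contrast to the all-or-nothing cut of the TSVD.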

  16. The Role of Functional Neuroimaging in Pre-Surgical Epilepsy Evaluation

    PubMed Central

    Pittau, Francesca; Grouiller, Frédéric; Spinelli, Laurent; Seeck, Margitta; Michel, Christoph M.; Vulliemoz, Serge

    2014-01-01

The prevalence of epilepsy is about 1%, and one-third of cases do not respond to medical treatment. In an eligible subset of patients with drug-resistant epilepsy, surgical resection of the epileptogenic zone is the only treatment that can possibly cure the disease. Non-invasive techniques provide information for the localization of the epileptic focus in the majority of cases, whereas in others invasive procedures are required. In recent years, non-invasive neuroimaging techniques, such as simultaneous recording of functional magnetic resonance imaging and electroencephalogram (EEG-fMRI), positron emission tomography (PET), single photon emission computed tomography (SPECT), electric and magnetic source imaging (ESI, MSI), and magnetic resonance spectroscopy (MRS), have proved their usefulness in defining the epileptic focus. The combination of these functional techniques can yield complementary information, and their concordance is crucial for guiding clinical decisions, namely the planning of invasive EEG recordings or resective surgery. The aim of this review is to present these non-invasive neuroimaging techniques, their potential combination, and their role in the pre-surgical evaluation of patients with pharmaco-resistant epilepsy. PMID:24715886

  17. Rain Volume Estimation over Areas Using Satellite and Radar Data

    NASA Technical Reports Server (NTRS)

    Doneaud, A. A.; Miller, J. R., Jr.; Johnson, L. R.; Vonderhaar, T. H.; Laybe, P.

    1984-01-01

The application of satellite data to a recently developed radar technique used to estimate convective rain volumes over areas in a dry environment (the northern Great Plains) is discussed. The area-time integral (ATI) technique provides a means of estimating total rain volumes over fixed and floating target areas of the order of 1,000 to 100,000 km2 for clusters lasting 40 min. The basis of the method is the existence of a strong correlation between the area coverage integrated over the lifetime of the storm (the ATI) and the rain volume. One key element of this technique is that it does not require consideration of the structure of the radar intensities inside the area coverage to generate rain volumes, but only considers the rain event per se. This fact might reduce or eliminate some sources of error in applying the technique to satellite data. The second key element is that the ATI, once determined, can be converted to total rain volume by using a constant factor (an average rain rate) for a given locale.
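The ATI-to-volume conversion described above amounts to one multiplication. A minimal sketch with hypothetical numbers (the climatological average rain rate is site-specific and is an assumption here, not a value from the paper):

```python
import numpy as np

def rain_volume_ati(areas_km2, dt_min, avg_rate_mm_per_h):
    """Rain volume from the area-time integral (ATI).
    areas_km2: echo-area coverage at each radar (or satellite) scan,
    dt_min: scan interval in minutes,
    avg_rate_mm_per_h: constant average rain rate for the locale.
    ATI = sum of area coverage over the storm lifetime (km^2 * h)."""
    ati_km2_h = np.sum(areas_km2) * (dt_min / 60.0)
    # mm/h * km^2 * h  ->  1e-3 m * 1e6 m^2  =  1e3 m^3
    return avg_rate_mm_per_h * ati_km2_h * 1e3  # cubic metres
```

The point of the technique is that only the area time series enters; the internal intensity structure of the echo is never needed.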

  18. Quantitative Aspects of Single Molecule Microscopy

    PubMed Central

    Ober, Raimund J.; Tahmasbi, Amir; Ram, Sripad; Lin, Zhiping; Ward, E. Sally

    2015-01-01

    Single molecule microscopy is a relatively new optical microscopy technique that allows the detection of individual molecules such as proteins in a cellular context. This technique has generated significant interest among biologists, biophysicists and biochemists, as it holds the promise to provide novel insights into subcellular processes and structures that otherwise cannot be gained through traditional experimental approaches. Single molecule experiments place stringent demands on experimental and algorithmic tools due to the low signal levels and the presence of significant extraneous noise sources. Consequently, this has necessitated the use of advanced statistical signal and image processing techniques for the design and analysis of single molecule experiments. In this tutorial paper, we provide an overview of single molecule microscopy from early works to current applications and challenges. Specific emphasis will be on the quantitative aspects of this imaging modality, in particular single molecule localization and resolvability, which will be discussed from an information theoretic perspective. We review the stochastic framework for image formation, different types of estimation techniques and expressions for the Fisher information matrix. We also discuss several open problems in the field that demand highly non-trivial signal processing algorithms. PMID:26167102
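
    The information-theoretic limit mentioned above can be illustrated with the simplest textbook case: for a Gaussian PSF and pure shot noise, the per-coordinate Fisher information is N/σ², giving a Cramér-Rao bound of σ/√N. This is a deliberately simplified sketch (no background, pixelation, or readout noise), not the full expressions reviewed in the paper:

```python
import numpy as np

def localization_limit(sigma_psf_nm, n_photons):
    """Cramer-Rao lower bound on single molecule localization accuracy for
    an idealized Gaussian PSF with pure shot noise (no background, no
    pixelation, no readout noise) -- a simplified textbook approximation."""
    fisher_info = n_photons / sigma_psf_nm ** 2   # per-coordinate Fisher information
    return 1.0 / np.sqrt(fisher_info)             # CRLB standard deviation (nm)

# e.g. a 100 nm PSF width and 1000 detected photons
print(round(localization_limit(100.0, 1000), 2))   # -> 3.16
```

    The √N scaling is why photon budget, not lens resolution alone, governs localization accuracy.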

  19. Source-Modeling Auditory Processes of EEG Data Using EEGLAB and Brainstorm.

    PubMed

    Stropahl, Maren; Bauer, Anna-Katharina R; Debener, Stefan; Bleichner, Martin G

    2018-01-01

    Electroencephalography (EEG) source localization approaches are often used to disentangle the spatial patterns mixed up in scalp EEG recordings. However, approaches differ substantially between experiments, may be strongly parameter-dependent, and results are not necessarily meaningful. In this paper we provide a pipeline for EEG source estimation, from raw EEG data pre-processing using EEGLAB functions up to source-level analysis as implemented in Brainstorm. The pipeline is tested using a data set of 10 individuals performing an auditory attention task. The analysis approach estimates sources of 64-channel EEG data without the prerequisite of individual anatomies or individually digitized sensor positions. First, we show advanced EEG pre-processing using EEGLAB, which includes artifact attenuation using independent component analysis (ICA). ICA is a linear decomposition technique that aims to reveal the underlying statistical sources of mixed signals and is further a powerful tool to attenuate stereotypical artifacts (e.g., eye movements or heartbeat). Data submitted to ICA are pre-processed to facilitate good-quality decompositions. Aiming toward an objective approach on component identification, the semi-automatic CORRMAP algorithm is applied for the identification of components representing prominent and stereotypic artifacts. Second, we present a step-wise approach to estimate active sources of auditory cortex event-related processing, on a single subject level. The presented approach assumes that no individual anatomy is available and therefore the default anatomy ICBM152, as implemented in Brainstorm, is used for all individuals. Individual noise modeling in this dataset is based on the pre-stimulus baseline period. For EEG source modeling we use the OpenMEEG algorithm as the underlying forward model based on the symmetric Boundary Element Method (BEM). 
We then apply the method of dynamical statistical parametric mapping (dSPM) to obtain physiologically plausible EEG source estimates. Finally, we show how to perform group level analysis in the time domain on anatomically defined regions of interest (auditory scout). The proposed pipeline needs to be tailored to the specific datasets and paradigms. However, the straightforward combination of EEGLAB and Brainstorm analysis tools may be of interest to others performing EEG source localization.

  20. Application of single-shot spiral scanning for volume localization.

    PubMed

    Ra, J B; Rim, C Y; Cho, Z H

    1991-02-01

    A new technique using a spiral-scan single-shot RF pulse for localized volume selection has been developed and its experimental results are presented. This technique employs an additional radial-gradient coil in conjunction with the oscillating gradients for the spiral scan to localize the 3D volume. The short selection time of this technique minimizes both signal contamination from unwanted regions and signal attenuation due to T2 decay. We provide both the theoretical background of the technique and the experimental results obtained from a phantom as well as a human volunteer. The proposed method appears simple and accurate in localizing a volume, and could be used for either fast imaging or localized spectroscopy.

  1. Time-domain reflectance diffuse optical tomography with Mellin-Laplace transform for experimental detection and depth localization of a single absorbing inclusion

    PubMed Central

    Puszka, Agathe; Hervé, Lionel; Planat-Chrétien, Anne; Koenig, Anne; Derouard, Jacques; Dinten, Jean-Marc

    2013-01-01

    We show how to apply the Mellin-Laplace transform to process time-resolved reflectance measurements for diffuse optical tomography. We illustrate this method on simulated signals incorporating the main sources of experimental noise and suggest how to fine-tune the method in order to detect the deepest absorbing inclusions and optimize their localization in depth, depending on the dynamic range of the measurement. Finally, we apply this method to measurements acquired with a setup including a femtosecond laser, photomultipliers, and a time-correlated single photon counting board. Simulations and experiments are illustrated for a probe with an interfiber distance of 1.5 cm and show the potential of time-resolved techniques for imaging absorption contrast in depth with this geometry. PMID:23577292
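
    A rough numerical sketch of the Mellin-Laplace moments (one common normalization, (p^(n+1)/n!) ∫ tⁿ e^(-pt) s(t) dt) shows why higher orders emphasize late, depth-sensitive photons. The two toy time-resolved curves below are assumed shapes, not measured TPSFs:

```python
import numpy as np
from math import factorial

def mellin_laplace(signal, t, p, n):
    """Order-n Mellin-Laplace moment of a time-resolved signal:
    (p**(n+1) / n!) * integral of t**n * exp(-p*t) * s(t) dt.
    One common normalization convention; treat as illustrative."""
    dt = t[1] - t[0]
    kernel = (p ** (n + 1) / factorial(n)) * t ** n * np.exp(-p * t)
    return np.sum(kernel * signal) * dt

# Toy time-resolved curves: photons that probed deeper tissue arrive
# later and more spread out (assumed shapes).
t = np.linspace(0, 10e-9, 2000)
shallow = np.exp(-(((t - 1.0e-9) / 0.5e-9) ** 2))
deep = np.exp(-(((t - 2.5e-9) / 0.8e-9) ** 2))

p = 1e9   # s^-1; larger p weights early photons more heavily
m_shallow = [mellin_laplace(shallow, t, p, n) for n in range(4)]
m_deep = [mellin_laplace(deep, t, p, n) for n in range(4)]
```

    The higher-order kernels tⁿe^(-pt) peak at t = n/p, so the deep (late) curve carries relatively more weight in the high-order moments, which is what makes the transform useful for depth localization.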

  2. Carbonyl compounds in the lower marine troposphere over the Caribbean Sea and Bahamas

    NASA Astrophysics Data System (ADS)

    Zhou, Xianliang; Mopper, Kenneth

    1993-02-01

    A highly sensitive carbonyl trapping technique based on special 2,4-dinitrophenylhydrazine reagent purification and cartridge preparation procedures was used on a cruise to the Orinoco estuary and the Caribbean Sea in order to determine the nature, concentration, and diurnal variation of low molecular weight carbonyl compounds in the lower marine boundary layer. The results suggest that the main source of formaldehyde and acetaldehyde in the lower marine boundary layer in the studied region is photooxidation of locally derived organic matter such as nonmethane hydrocarbons and long-chain lipids. Samples that were influenced by local land masses showed significantly higher concentrations of all carbonyl compounds. The main loss pathway appears to be dilution in the atmosphere as a result of vertical convective mixing, probably followed by photolysis in the upper marine boundary layer and free troposphere.

  3. Complexity Induced Anisotropic Bimodal Intermittent Turbulence in Space Plasmas

    NASA Technical Reports Server (NTRS)

    Chang, Tom; Tam, Sunny W. Y.; Wu, Cheng-Chin

    2004-01-01

    The "physics of complexity" in space plasmas is the central theme of this exposition. It is demonstrated that the sporadic and localized interactions of magnetic coherent structures arising from plasma resonances can be the source of the coexistence of nonpropagating spatiotemporal fluctuations and propagating modes. Non-Gaussian probability distribution functions of the intermittent fluctuations from direct numerical simulations are obtained and discussed. Power spectra and local intermittency measures using wavelet analyses are presented to display the spottiness of the small-scale turbulent fluctuations and the non-uniformity of coarse-grained dissipation that can lead to magnetic topological reconfigurations. The technique of the dynamic renormalization group is applied to the study of the scaling properties of such multiscale fluctuations. Charged particle interactions with both the propagating and nonpropagating portions of the intermittent turbulence are also described.

  4. Contrasts between chemical and physical estimates of baseflow help discern multiple sources of water contributing to rivers

    NASA Astrophysics Data System (ADS)

    Cartwright, I.; Gilfedder, B.; Hofmann, H.

    2013-05-01

    This study compares geochemical and physical methods of estimating baseflow in the upper reaches of the Barwon River, southeast Australia. Estimates of baseflow from physical techniques such as local minima and recursive digital filters are higher than those based on chemical mass balance using continuous electrical conductivity (EC). Between 2001 and 2011 the baseflow flux calculated using chemical mass balance is between 1.8 × 10³ and 1.5 × 10⁴ ML yr⁻¹ (15 to 25% of the total discharge in any one year) whereas recursive digital filters yield baseflow fluxes of 3.6 × 10³ to 3.8 × 10⁴ ML yr⁻¹ (19 to 52% of discharge) and the local minimum method yields baseflow fluxes of 3.2 × 10³ to 2.5 × 10⁴ ML yr⁻¹ (13 to 44% of discharge). These differences most probably reflect how the different techniques characterise baseflow. Physical methods probably aggregate much of the water from delayed sources as baseflow. However, as many delayed transient water stores (such as bank return flow or floodplain storage) are likely to be geochemically similar to surface runoff, chemical mass balance calculations aggregate them with the surface runoff component. The mismatch between geochemical and physical estimates is greatest following periods of high discharge in winter, implying that these transient stores of water feed the river for several weeks to months. Consistent with these interpretations, modelling of bank storage indicates that bank return flows provide water to the river for several weeks after flood events. EC vs. discharge variations during individual flow events also imply that an inflow of low EC water stored within the banks or on the floodplain occurs as discharge falls. The joint use of physical and geochemical techniques allows a better understanding of the different components of water that contribute to river flow, which is important for the management and protection of water resources.
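
    The chemical mass balance referred to above is a two-component mixing calculation; with assumed end-member EC values it can be sketched as:

```python
import numpy as np

# Synthetic daily series: discharge (ML/day) and stream EC (uS/cm).
q = np.array([100.0, 400.0, 900.0, 500.0, 200.0, 120.0])
ec = np.array([550.0, 320.0, 180.0, 260.0, 420.0, 510.0])

# End-member EC values (assumed): surface runoff and regional groundwater.
ec_runoff, ec_baseflow = 100.0, 600.0

# Two-component chemical mass balance: the fraction of flow that is
# baseflow follows from conservation of water and of solute.
f_base = np.clip((ec - ec_runoff) / (ec_baseflow - ec_runoff), 0.0, 1.0)
qb = f_base * q
print(round(qb.sum(), 1), round(q.sum(), 1))
```

    Because bank return flow has nearly the EC of runoff, this calculation lumps it with runoff, which is exactly the divergence from the physical filters that the study exploits.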

  5. Localized Energy-Based Normalization of Medical Images: Application to Chest Radiography.

    PubMed

    Philipsen, R H H M; Maduskar, P; Hogeweg, L; Melendez, J; Sánchez, C I; van Ginneken, B

    2015-09-01

    Automated quantitative analysis systems for medical images often lack the capability to successfully process images from multiple sources. Normalization of such images prior to further analysis is a possible solution to this limitation. This work presents a general method to normalize medical images and thoroughly investigates its effectiveness for chest radiography (CXR). The method starts with an energy decomposition of the image in different bands. Next, each band's localized energy is scaled to a reference value and the image is reconstructed. We investigate iterative and local application of this technique. The normalization is applied iteratively to the lung fields on six datasets from different sources, each comprising 50 normal CXRs and 50 abnormal CXRs. The method is evaluated in three supervised computer-aided detection tasks related to CXR analysis and compared to two reference normalization methods. In the first task, automatic lung segmentation, the average Jaccard overlap significantly increased from 0.72±0.30 and 0.87±0.11 for both reference methods to with normalization. The second experiment was aimed at segmentation of the clavicles. The reference methods had an average Jaccard index of 0.57±0.26 and 0.53±0.26; with normalization this significantly increased to . The third experiment was detection of tuberculosis related abnormalities in the lung fields. The average area under the Receiver Operating Curve increased significantly from 0.72±0.14 and 0.79±0.06 using the reference methods to with normalization. We conclude that the normalization can be successfully applied in chest radiography and makes supervised systems more generally applicable to data from different sources.
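
    A minimal sketch of the band-energy idea (decompose, rescale each band's energy to a reference, reconstruct) is given below. It is a simplified global variant: the paper applies the scaling locally and iteratively within the lung fields, and the sigmas and reference energy here are assumed:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def normalize_energy_bands(img, sigmas=(1, 2, 4, 8), ref_energy=1.0):
    """Decompose an image into difference-of-Gaussian bands, scale each
    band's (global) energy to a reference value, and reconstruct.
    A simplified global variant of the paper's localized, iterative scheme."""
    img = img.astype(float)
    bands, prev = [], img
    for s in sigmas:
        lowpass = gaussian_filter(img, s)
        bands.append(prev - lowpass)      # band between successive scales
        prev = lowpass
    residual = prev                       # coarsest low-pass remainder
    out = np.zeros_like(img)
    for band in bands:
        energy = band.std() + 1e-12       # guard against a flat band
        out += band * (ref_energy / energy)
    return out + residual
```

    Since the bands plus the residual sum back to the original image, scaling each band's energy to a shared reference is what makes images from different sources comparable downstream.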

  6. L1-norm locally linear representation regularization multi-source adaptation learning.

    PubMed

    Tao, Jianwen; Wen, Shiting; Hu, Wenjun

    2015-09-01

    In most supervised domain adaptation learning (DAL) tasks, one has access only to a small number of labeled examples from the target domain. Therefore the success of supervised DAL in this "small sample" regime needs the effective utilization of the large amounts of unlabeled data to extract information that is useful for generalization. Toward this end, we here use the geometric intuition of the manifold assumption to extend the established frameworks in existing model-based DAL methods for function learning by incorporating additional information about the target geometric structure of the marginal distribution. We would like to ensure that the solution is smooth with respect to both the ambient space and the target marginal distribution. In doing this, we propose a novel L1-norm locally linear representation regularization multi-source adaptation learning framework which exploits the geometry of the probability distribution and comprises two techniques. Firstly, an L1-norm locally linear representation method is presented for robust graph construction by replacing the L2-norm reconstruction measure in LLE with an L1-norm one, termed L1-LLR for short. Secondly, considering robust graph regularization, we replace the traditional graph Laplacian regularization with our new L1-LLR graph Laplacian regularization and thereby construct a new graph-based semi-supervised learning framework with a multi-source adaptation constraint, coined the L1-MSAL method. Moreover, to deal with the nonlinear learning problem, we also generalize the L1-MSAL method by mapping the input data points from the input space to a high-dimensional reproducing kernel Hilbert space (RKHS) via a nonlinear mapping. Promising experimental results have been obtained on several real-world datasets of faces, video, and objects. Copyright © 2015 Elsevier Ltd. All rights reserved.
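
    The first ingredient, L1-norm locally linear reconstruction, can be posed as a small linear program: minimize the L1 residual subject to LLE's sum-to-one weight constraint. The formulation below is our own illustrative sketch, not the authors' code:

```python
import numpy as np
from scipy.optimize import linprog

def l1_llr_weights(x, neighbors):
    """Reconstruction weights for one point under an L1 residual measure
    (the robust replacement for LLE's L2 measure), via linear programming:
    minimize ||x - N^T w||_1  subject to  sum(w) = 1.
    Illustrative sketch; the paper builds a graph from such weights."""
    k, d = neighbors.shape
    # Decision vector z = [w (k), t (d)]; t bounds |residual| per coordinate.
    c = np.r_[np.zeros(k), np.ones(d)]
    A_ub = np.block([[-neighbors.T, -np.eye(d)],   # x - N^T w <= t
                     [neighbors.T, -np.eye(d)]])   # N^T w - x <= t
    b_ub = np.r_[-x, x]
    A_eq = np.r_[np.ones(k), np.zeros(d)][None, :]  # sum(w) = 1
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=[(None, None)] * k + [(0, None)] * d,
                  method="highs")
    return res.x[:k]

# A point that is exactly the midpoint of two neighbors, plus one outlier.
N = np.array([[0.0, 0.0], [2.0, 2.0], [50.0, -9.0]])
w = l1_llr_weights(np.array([1.0, 1.0]), N)
```

    The L1 objective keeps the outlier neighbor's weight at zero; an L2 objective would spread small weights onto it. The resulting weights then populate the graph whose Laplacian regularizes the adaptation framework.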

  7. Characterization of seismicity at Volcan Baru, Panama: May 2013 through April 2014

    NASA Astrophysics Data System (ADS)

    Hopp, Chet J.

    Volcan Baru, in the western province of Chiriqui, is Panama's youngest and most active volcano. Although Baru has experienced no historic eruptions, there have been four eruptive episodes in the last 1600 years, the most recent occurring 400-500 years ago (Sherrod et al., 2007). In addition, there have been four reported earthquake swarms in the last 100 years. The most recent swarm occurred in May of 2006, prompting a USGS hazard assessment (Sherrod et al., 2007). In order to characterize local seismicity and provide a reference for future monitoring efforts, we established a seismic network that operated from May 2013 through April 2014. The network consisted of eight temporary single-component, short-period sensors loaned by OSOP Panama, and three permanent stations distributed over a 35 by 15 km area. During operation of the network, 91 local events were detected, located, and then used to calculate a minimum 1-D velocity model for Baru. Of particular interest was a cluster of events west of the town of Boquete. A template matching detection technique was used to identify another 47 smaller-magnitude events in the area of this cluster. Spectrograms for the largest events in the cluster show a broad band of frequencies up to ~20 Hz, suggesting a predominantly tectonic source, while eight calculated focal mechanisms suggest that strike-slip and reverse faulting may be the predominant source processes. Further study is encouraged to better constrain the source processes and investigate how volcanic processes might affect local tectonics. The material contained in this thesis is intended for submission to the Journal of Volcanology and Geothermal Research.
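
    Template matching detection of the kind used to recover the 47 smaller events boils down to sliding a normalized cross-correlation over continuous data. A single-channel sketch with synthetic waveforms (all values assumed):

```python
import numpy as np

def match_template(data, template, threshold=0.8):
    """Slide a waveform template over continuous data and return sample
    indices where the normalized cross-correlation exceeds the threshold.
    A minimal single-channel sketch of matched-filter event detection."""
    m = len(template)
    tpl = (template - template.mean()) / (template.std() * m)
    detections = []
    for i in range(len(data) - m + 1):
        win = data[i:i + m]
        s = win.std()
        if s == 0:
            continue
        cc = np.sum(tpl * (win - win.mean())) / s   # NCC in [-1, 1]
        if cc > threshold:
            detections.append(i)
    return detections

# Synthetic "seismogram": noise with two buried copies of the template.
rng = np.random.default_rng(1)
template = np.sin(np.linspace(0, 6 * np.pi, 100)) * np.exp(-np.linspace(0, 4, 100))
data = 0.1 * rng.standard_normal(2000)
for onset in (400, 1300):
    data[onset:onset + 100] += template
hits = match_template(data, template)
```

    Because the correlation is normalized, the same threshold picks up events well below the amplitude of the template, which is how template matching extends a catalog to smaller magnitudes.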

  8. VLBI tracking of GNSS satellites: recent achievements

    NASA Astrophysics Data System (ADS)

    Liu, Li; Heinkelmann, Robert; Tornatore, Vincenza; Li, Jinling; Mora-Diaz, Julian; Nilsson, Tobias; Karbon, Maria; Raposo-Pulido, Virginia; Soja, Benedikt; Xu, Minghui; Lu, Cuixian; Schuh, Harald

    2014-05-01

    While the ITRF (International Terrestrial Reference Frame) is realized by the combination of the various space geodetic techniques, VLBI (Very Long Baseline Interferometry) is the only technique for determining the ICRF (International Celestial Reference Frame) through its observations of extragalactic radio sources. Therefore, small inconsistencies between the two important frames do exist. According to recent comparisons of parameters derived by GNSS (Global Navigation Satellite Systems) and VLBI (e.g. troposphere delays, gradients, UT1-UTC), evidence of discrepancies, obtained from vast amounts of data, has become obvious. Terrestrial local ties can provide a way to interlink the otherwise independent technique-specific reference frames, but only to some degree. It is evident that errors in the determination of the terrestrial ties, e.g. due to errors when transforming the locally surveyed coordinates into global Cartesian three-dimensional coordinates, introduce significant errors into the combined analysis of space geodetic techniques. A new concept for linking the space geodetic techniques might be to introduce celestial ties, e.g. realized by technique co-location on board satellites. A small satellite carrying a variety of space geodetic techniques is under investigation at GFZ. Such a satellite would provide a new observing platform with its own additional unknowns, such as the orbit or atmospheric drag parameters. A link between the two techniques VLBI and GNSS might also be achieved in a more direct way: by VLBI tracking of GNSS satellites. Several tests of this type of observation have already been carried out successfully. This new kind of hybrid VLBI-GNSS observation would constitute a new direct inter-technique tie without the involvement of surveying methods and would enable improving the consistency of the two space geodetic techniques VLBI and GNSS, in particular of their celestial frames.
Recently the radio telescopes Wettzell and Onsala have successfully observed a GNSS satellite for the first time, also using new receiver developments made at Wettzell. In this contribution we present the motivation for this kind of innovative observation and show first results of the test observations.

  9. Numerical Study on Natural Vacuum Solar Desalination System with Varying Heat Source Temperature

    NASA Astrophysics Data System (ADS)

    Ambarita, H.

    2017-03-01

    A natural vacuum desalination unit with a varying low-grade heat source temperature is investigated numerically. The objective is to explore the effects of the variable temperature of the low-grade heat source on the performance and characteristics of the desalination unit. The unit is naturally vacuumed, with surface areas of the seawater in the evaporator and of the heating coil of 0.2 m² and 0.188 m², respectively. The temperature of the heating coil is simulated based on the solar radiation in the city of Medan. A program that solves the governing equations with a forward time-step marching technique is developed. The evaporator temperature, fresh water production rate, and thermal efficiency of the desalination unit are analysed. The simulation covers 9 hours, starting at 8:00 and finishing at 17:00 local time. The results show that the desalination unit, over an operation time of 9 hours, can produce 5.705 L of freshwater with a thermal efficiency of 81.8%. This reveals that a natural vacuum desalination unit with a varying heat source temperature performs better than one with a constant heat source temperature.
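
    The forward time-step marching approach can be sketched as an explicit Euler integration of a lumped energy balance. Every coefficient below (heat transfer coefficients, thermal mass, coil temperature profile) is an assumed illustrative value, not taken from the study:

```python
import numpy as np

# Forward time-marching of a lumped energy balance for the evaporator.
dt = 60.0                                  # time step [s]
hours = 9
n = int(hours * 3600 / dt)
t = np.arange(n) * dt

# Heating-coil temperature follows solar radiation: a half-sine over the day.
T_coil = 40.0 + 35.0 * np.sin(np.pi * t / (hours * 3600))   # [C]

UA_coil, UA_loss = 150.0, 20.0             # heat transfer coefficients [W/K]
mc = 8.0e5                                 # thermal mass of seawater [J/K]
T_amb = 30.0                               # ambient temperature [C]

T = np.empty(n)
T[0] = 30.0
for i in range(n - 1):
    q_in = UA_coil * (T_coil[i] - T[i])    # heat supplied by the coil
    q_loss = UA_loss * (T[i] - T_amb)      # losses to the surroundings
    T[i + 1] = T[i] + dt * (q_in - q_loss) / mc
```

    The evaporator temperature lags the coil temperature because of the thermal mass, which is the qualitative behavior a varying-source simulation has to capture before evaporation rates are computed.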

  10. Chemical Source Inversion using Assimilated Constituent Observations in an Idealized Two-dimensional System

    NASA Technical Reports Server (NTRS)

    Tangborn, Andrew; Cooper, Robert; Pawson, Steven; Sun, Zhibin

    2009-01-01

    We present a source inversion technique for chemical constituents that uses assimilated constituent observations rather than the observations themselves. The method is tested with a simple model problem: a two-dimensional Fourier-Galerkin transport model combined with a Kalman filter for data assimilation. Inversion is carried out using a Green's function method, and observations are simulated from a true state with added Gaussian noise. The forecast state uses the same spectral model, but differs by an unbiased Gaussian model error, and emissions models with constant errors. The numerical experiments employ both simulated in situ and satellite observation networks. Source inversion was carried out either by direct use of the synthetically generated observations with added noise, or by first assimilating the observations and extracting observations from the analyses. We have conducted 20 identical twin experiments for each set of source and observation configurations, and find that in the limiting cases of very few localized observations, or an extremely large observation network, there is little advantage to carrying out assimilation first. However, at intermediate observation densities, the source inversion error standard deviation decreases by 50% to 95% when the Kalman filter algorithm is followed by Green's function inversion.
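
    Stripped of the assimilation step, Green's function source inversion is a linear least-squares problem, y = Gs + ε. A minimal sketch with an invented Green's function matrix:

```python
import numpy as np

rng = np.random.default_rng(0)

# Green's function matrix: response at each of 40 observation sites to a
# unit emission from each of 3 candidate sources (values assumed).
G = rng.random((40, 3))
s_true = np.array([2.0, 0.5, 1.0])         # true source strengths

# Observations (or assimilated analyses sampled at the sites) with noise.
y = G @ s_true + 0.05 * rng.standard_normal(40)

# Green's function inversion then reduces to linear least squares.
s_est, *_ = np.linalg.lstsq(G, y, rcond=None)
```

    The paper's comparison amounts to feeding this inversion either the raw noisy y or the Kalman-filter analyses, which at intermediate observation densities are less noisy and so yield smaller inversion errors.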

  11. Local government funding and financing of roads : Virginia case studies and examples from other states.

    DOT National Transportation Integrated Search

    2014-10-01

    Several Virginia localities have used local funding and financing sources to build new roads or complete major street : improvement projects when state and/or federal funding was not available. Many others have combined local funding sources : with s...

  12. A new Method for the Estimation of Initial Condition Uncertainty Structures in Mesoscale Models

    NASA Astrophysics Data System (ADS)

    Keller, J. D.; Bach, L.; Hense, A.

    2012-12-01

    The estimation of fast growing error modes of a system is a key interest of ensemble data assimilation when assessing uncertainty in initial conditions. Over the last two decades three methods (and variations of these methods) have evolved for global numerical weather prediction models: ensemble Kalman filter, singular vectors and breeding of growing modes (or now ensemble transform). While the former incorporates a priori model error information and observation error estimates to determine ensemble initial conditions, the latter two techniques directly address the error structures associated with Lyapunov vectors. However, in global models these structures are mainly associated with transient global wave patterns. When assessing initial condition uncertainty in mesoscale limited area models, several problems regarding the aforementioned techniques arise: (a) additional sources of uncertainty on the smaller scales contribute to the error and (b) error structures from the global scale may quickly move through the model domain (depending on the size of the domain). To address the latter problem, perturbation structures from global models are often included in the mesoscale predictions as perturbed boundary conditions. However, the initial perturbations (when used) are often generated with a variant of an ensemble Kalman filter which does not necessarily focus on the large scale error patterns. In the framework of the European regional reanalysis project of the Hans-Ertel-Center for Weather Research we use a mesoscale model with an implemented nudging data assimilation scheme which does not support ensemble data assimilation at all. In preparation of an ensemble-based regional reanalysis and for the estimation of three-dimensional atmospheric covariance structures, we implemented a new method for the assessment of fast growing error modes for mesoscale limited area models. The so-called self-breeding is a development based on the breeding of growing modes technique.
Initial perturbations are integrated forward for a short time period and then rescaled and added to the initial state again. Iterating this rapid breeding cycle provides estimates for the initial uncertainty structure (or local Lyapunov vectors) given a specific norm. To avoid that all ensemble perturbations converge towards the leading local Lyapunov vector we apply an ensemble transform variant to orthogonalize the perturbations in the sub-space spanned by the ensemble. By choosing different kind of norms to measure perturbation growth, this technique allows for estimating uncertainty patterns targeted at specific sources of errors (e.g. convection, turbulence). With case study experiments we show applications of the self-breeding method for different sources of uncertainty and different horizontal scales.
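
    The self-breeding cycle (integrate control and perturbed states for a short window, rescale the difference to a fixed norm, re-add, repeat) can be sketched on a toy chaotic system. Lorenz-63 stands in for the mesoscale model, and the ensemble-transform orthogonalization step is omitted:

```python
import numpy as np

def lorenz_step(x, dt=0.01, s=10.0, r=28.0, b=8.0 / 3.0):
    # One forward-Euler step of the Lorenz-63 system (an illustrative
    # stand-in for the mesoscale model).
    dx = np.array([s * (x[1] - x[0]),
                   x[0] * (r - x[2]) - x[1],
                   x[0] * x[1] - b * x[2]])
    return x + dt * dx

def breed(x0, pert, n_cycles=50, steps=8, size=1e-3):
    """Self-breeding: integrate control and perturbed states over a short
    window, rescale the difference to a fixed norm, add it back, repeat.
    The converged perturbation approximates the leading local Lyapunov
    vector for the chosen norm."""
    x = x0.copy()
    p = size * pert / np.linalg.norm(pert)
    for _ in range(n_cycles):
        xa, xb = x.copy(), x + p
        for _ in range(steps):
            xa, xb = lorenz_step(xa), lorenz_step(xb)
        diff = xb - xa
        p = size * diff / np.linalg.norm(diff)   # rescale the bred vector
        x = xa                                   # control trajectory moves on
    return p

x0 = np.array([1.0, 1.0, 20.0])
bred1 = breed(x0, np.array([1.0, 0.0, 0.0]))
bred2 = breed(x0, np.array([0.0, 0.0, 1.0]))
```

    Different initial perturbations converge to (nearly) the same bred vector, which is why the full method applies an ensemble transform to keep the members spanning a subspace rather than collapsing onto the leading vector.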

  13. Detailed Aggregate Resources Study, Dry Lake Valley, Nevada.

    DTIC Science & Technology

    1981-05-29

    LEDGE-ROCK SOURCES SUPPLIED COARSE AGGREGATES; LOCAL SAND SOURCES (GENERALLY COLLECTED WITHIN A FEW MILES OF CORRESPONDING LEDGE-ROCK SOURCES) SUPPLIED FINE ... CYLINDERS. DRYING SHRINKAGE ... COMPRESSIVE AND TENSILE ST...

  14. The 2016 Kaikōura Earthquake Revealed by Kinematic Source Inversion and Seismic Wavefield Simulations: Slow Rupture Propagation on a Geometrically Complex Crustal Fault Network

    NASA Astrophysics Data System (ADS)

    Holden, C.; Kaneko, Y.; D'Anastasio, E.; Benites, R.; Fry, B.; Hamling, I. J.

    2017-11-01

    The 2016 Kaikōura (New Zealand) earthquake generated large ground motions and resulted in multiple onshore and offshore fault ruptures, a profusion of triggered landslides, and a regional tsunami. Here we examine the rupture evolution using two kinematic modeling techniques based on analysis of local strong-motion and high-rate GPS data. Our kinematic models capture a complex pattern of slowly (Vr < 2 km/s) propagating rupture from south to north, with over half of the moment release occurring in the northern source region, mostly on the Kekerengu fault, 60 s after the origin time. Both models indicate rupture reactivation on the Kekerengu fault, with a time separation of 11 s between the start of the original failure and the start of the subsequent one. We further conclude that most near-source waveforms can be explained by slip on the crustal faults, with little (<8%) or no contribution from the subduction interface.

  15. Applying traditional signal processing techniques to social media exploitation for situational understanding

    NASA Astrophysics Data System (ADS)

    Abdelzaher, Tarek; Roy, Heather; Wang, Shiguang; Giridhar, Prasanna; Al Amin, Md. Tanvir; Bowman, Elizabeth K.; Kolodny, Michael A.

    2016-05-01

    Signal processing techniques such as filtering, detection, estimation and frequency domain analysis have long been applied to extract information from noisy sensor data. This paper describes the exploitation of these signal processing techniques to extract information from social networks, such as Twitter and Instagram. Specifically, we view social networks as noisy sensors that report events in the physical world. We then present a data processing stack for detection, localization, tracking, and veracity analysis of reported events using social network data. We show using a controlled experiment that the behavior of social sources as information relays varies dramatically depending on context. In benign contexts, there is general agreement on events, whereas in conflict scenarios, a significant amount of collective filtering is introduced by conflicted groups, creating a large data distortion. We describe signal processing techniques that mitigate such distortion, resulting in meaningful approximations of actual ground truth, given noisy reported observations. Finally, we briefly present an implementation of the aforementioned social network data processing stack in a sensor network analysis toolkit, called Apollo. Experiences with Apollo show that our techniques are successful at identifying and tracking credible events in the physical world.

  16. Carotid lesion characterization by synthetic-aperture-imaging techniques with multioffset ultrasonic probes

    NASA Astrophysics Data System (ADS)

    Capineri, Lorenzo; Castellini, Guido; Masotti, Leonardo F.; Rocchi, Santina

    1992-06-01

    This paper explores the application of a high-resolution imaging technique to vascular ultrasound diagnosis, with emphasis on investigation of the carotid vessel. With present diagnostic systems, it is difficult to measure quantitatively the extent of lesions and to characterize the tissue; quantitative images require sufficient spatial resolution and dynamic range to reveal fine high-risk pathologies. A broadband synthetic aperture technique with multi-offset probes is developed to improve lesion characterization through the evaluation of local scattering parameters. This technique works with weak scatterers embedded in a constant-velocity medium, a large aperture, and isotropic sources and receivers. The features of this technique are: axial and lateral spatial resolution of the order of the wavelength, high dynamic range, quantitative measurement of the size and scattering intensity of inhomogeneities, and the capability to investigate inclined layers. Performance under realistic conditions is evaluated with a software simulator in which different experimental situations can be reproduced. Images of simulated anatomic test objects are presented. The images are obtained with an inversion process applied to the synthesized ultrasonic signals, collected on the linear aperture by a limited number of finite-size transducers.
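
    At its core, synthetic aperture reconstruction is delay-and-sum focusing: each pixel coherently sums every element's A-scan at the round-trip travel time. The monostatic sketch below uses idealized impulse echoes and assumed array parameters, a much-reduced version of the multi-offset inversion described above:

```python
import numpy as np

def saft(scans, elem_x, fs, c, xs, zs):
    """Delay-and-sum synthetic aperture focusing: for each image pixel,
    coherently sum each element's pulse-echo A-scan at the round-trip
    travel time. Monostatic sketch (each element transmits and receives)."""
    image = np.zeros((len(zs), len(xs)))
    for k, ex in enumerate(elem_x):
        for i, z in enumerate(zs):
            for j, x in enumerate(xs):
                r = np.hypot(x - ex, z)                # element-to-pixel range
                idx = int(round(2 * r / c * fs))       # round-trip sample index
                if idx < scans.shape[1]:
                    image[i, j] += scans[k, idx]
    return image

# Simulate a point scatterer at (0 mm, 20 mm) seen by a 16-element aperture.
c, fs = 1540.0, 40e6                       # sound speed [m/s], sampling rate [Hz]
elem_x = np.linspace(-8e-3, 8e-3, 16)
scans = np.zeros((16, 2400))
for k, ex in enumerate(elem_x):
    delay = 2 * np.hypot(0 - ex, 20e-3) / c
    scans[k, int(round(delay * fs))] = 1.0  # idealized echo impulse

xs = np.linspace(-5e-3, 5e-3, 21)
zs = np.linspace(15e-3, 25e-3, 21)
img = saft(scans, elem_x, fs, c, xs, zs)
```

    Only at the true scatterer position do all 16 delayed samples add in phase, which is what delivers wavelength-order resolution from a large aperture.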

  17. Source localization of narrow band signals in multipath environments, with application to marine mammals

    NASA Astrophysics Data System (ADS)

    Valtierra, Robert Daniel

    Passive acoustic localization has benefited from many major developments and has become an increasingly important focus point in marine mammal research. Several challenges still remain. This work seeks to address several of these challenges, such as tracking the calling depths of baleen whales. In this work, data from an array of widely spaced Marine Acoustic Recording Units (MARUs) was used to achieve three dimensional localization by combining the methods Time Difference of Arrival (TDOA) and Direct-Reflected Time Difference of Arrival (DRTD) along with a newly developed autocorrelation technique. TDOA was applied to data for two dimensional (latitude and longitude) localization and depth was resolved using DRTD. Previously, DRTD had been limited to pulsed broadband signals, such as sperm whale or dolphin echolocation, where individual direct and reflected signals are separated in time. Due to the length of typical baleen whale vocalizations, individual multipath signal arrivals can overlap, making time differences of arrival difficult to resolve. This problem can be solved using an autocorrelation, which can extract reflection information from overlapping signals. To establish this technique, a derivation was made to model the autocorrelation of a direct signal and its overlapping reflection. The model was exploited to derive performance limits allowing for prediction of the minimum resolvable direct-reflected time difference for a known signal type. The dependence on signal parameters (sweep rate, call duration) was also investigated. The model was then verified using both recorded and simulated data from two analysis cases for North Atlantic right whales (NARWs, Eubalaena glacialis) and humpback whales (Megaptera novaeangliae). The newly developed autocorrelation technique was then combined with DRTD and tested using data from playback transmissions to localize an acoustic transducer at a known depth and location.
The combined DRTD-autocorrelation method enabled calling-depth and range estimation of a vocalizing NARW and a humpback whale in two separate cases. It was then combined with TDOA to create a three-dimensional track of a NARW in the Stellwagen Bank National Marine Sanctuary. Results from these experiments illustrate the potential of the combined methods to resolve baleen whale calling depths in three dimensions.
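The core idea above — recovering the direct-reflected time difference from a call whose surface reflection overlaps the direct arrival — can be sketched with a simple autocorrelation. The chirp parameters, sample rate, reflection amplitude, and 80 ms delay below are illustrative assumptions, not values from the dissertation:

```python
import numpy as np

fs = 2000.0                      # sample rate, Hz (illustrative)
t = np.arange(0, 1.0, 1 / fs)
# Hypothetical 100-200 Hz upsweep standing in for a baleen whale call
direct = np.sin(2 * np.pi * (100 * t + 50 * t**2))

# Received signal: direct arrival plus a weaker, overlapping surface
# reflection delayed by 80 ms (the quantity DRTD needs to estimate)
delay_s = 0.080
delay_n = int(delay_s * fs)
received = direct.copy()
received[delay_n:] += 0.5 * direct[: len(t) - delay_n]

# Autocorrelation at non-negative lags; the direct-reflection cross term
# produces a peak at the reflection delay even though the arrivals overlap
ac = np.correlate(received, received, mode="full")[len(received) - 1 :]
ac /= ac[0]

# Skip the zero-lag main lobe (width ~ 1/bandwidth), then find the peak
search_start = int(0.02 * fs)
est_delay = (search_start + np.argmax(ac[search_start:])) / fs
# est_delay is ~0.080 s, matching the injected reflection delay
```

Because the chirp's autocorrelation main lobe is narrow (roughly the reciprocal of the sweep bandwidth), the reflection peak stands clear of it; this is also why the minimum resolvable time difference depends on sweep rate and call duration, as the abstract notes.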

  18. Proposed low-energy absolute calibration of nuclear recoils in a dual-phase noble element TPC using D-D neutron scattering kinematics

    NASA Astrophysics Data System (ADS)

    Verbus, J. R.; Rhyne, C. A.; Malling, D. C.; Genecov, M.; Ghosh, S.; Moskowitz, A. G.; Chan, S.; Chapman, J. J.; de Viveiros, L.; Faham, C. H.; Fiorucci, S.; Huang, D. Q.; Pangilinan, M.; Taylor, W. C.; Gaitskell, R. J.

    2017-04-01

    We propose a new technique for the calibration of nuclear recoils in large noble element dual-phase time projection chambers used to search for WIMP dark matter in the local galactic halo. This technique provides an in situ measurement of the low-energy nuclear recoil response of the target media using the measured scattering angle between multiple neutron interactions within the detector volume. The low-energy reach and reduced systematics of this calibration have particular significance for the low-mass WIMP sensitivity of several leading dark matter experiments. Multiple strategies for improving this calibration technique are discussed, including the creation of a new type of quasi-monoenergetic neutron source with a minimum possible peak energy of 272 keV. We report results from a time-of-flight-based measurement of the neutron energy spectrum produced by an Adelphi Technology, Inc. DD108 neutron generator, confirming its suitability for the proposed nuclear recoil calibration.
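The kinematic relation underlying this angle-based calibration can be sketched as follows: for non-relativistic elastic scattering, the recoil energy at center-of-mass scattering angle θ is E_r = E_n · 2·m_n·m_T/(m_n + m_T)² · (1 − cos θ), and for a heavy target the lab and center-of-mass angles nearly coincide. This is a generic kinematics sketch, not code from the paper; the xenon mass used is an approximate average:

```python
import math

def recoil_energy_kev(theta_cm_deg, e_n_kev=2450.0, m_n=1.00866, m_t=131.29):
    """Nuclear recoil energy (keV) for non-relativistic elastic neutron
    scattering at a given center-of-mass angle.

    e_n_kev: incident neutron energy (2.45 MeV for D-D fusion neutrons)
    m_n, m_t: neutron and target-nucleus masses in atomic mass units
              (m_t ~ 131.29 u for an average xenon nucleus; illustrative)
    """
    frac = 2.0 * m_n * m_t / (m_n + m_t) ** 2
    return e_n_kev * frac * (1.0 - math.cos(math.radians(theta_cm_deg)))

# Backscatter endpoint for 2.45 MeV neutrons on xenon, ~74 keV:
endpoint_kev = recoil_energy_kev(180.0)
```

Measuring the angle between two scattering vertices thus fixes the recoil energy deposited at the first vertex, independent of the detector's light or charge response — which is what makes the calibration "absolute."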

  19. Nanohole Array-directed Trapping of Mammalian Mitochondria Enabling Single Organelle Analysis

    PubMed Central

    Kumar, Shailabh; Wolken, Gregory G.; Wittenberg, Nathan J.; Arriaga, Edgar A.; Oh, Sang-Hyun

    2016-01-01

    We present periodic nanohole arrays fabricated in free-standing metal-coated nitride films as a platform for trapping and analyzing single organelles. When a microliter-scale droplet containing mitochondria is dispensed above the nanohole array, the combination of evaporation and capillary flow directs individual mitochondria to the nanoholes. Mammalian mitochondria arrays were rapidly formed on chip using this technique without any surface modification steps, microfluidic interconnects, or external power sources. The trapped mitochondria were depolarized on chip using an ionophore, showing that organelle viability and behavior were preserved during the on-chip assembly process. Fluorescence signal related to mitochondrial membrane potential was obtained from single mitochondria trapped in individual nanoholes, revealing statistical differences between the behavior of polarized vs. depolarized mammalian mitochondria. This technique provides a fast and stable route for droplet-based directed localization of organelles on a chip with minimal complexity, and it facilitates integration with other optical or electrochemical detection techniques. PMID:26593329

  20. Acoustic emission and nondestructive evaluation of biomaterials and tissues.

    PubMed

    Kohn, D H

    1995-01-01

    Acoustic emission (AE) is an acoustic wave generated by the release of energy from localized sources in a material subjected to an externally applied stimulus. This technique may be used nondestructively to analyze tissues, materials, and biomaterial/tissue interfaces. Applications of AE include use as an early warning tool for detecting tissue and material defects and incipient failure, monitoring damage progression, predicting failure, characterizing failure mechanisms, and serving as a tool to aid in understanding material properties and structure-function relations. All these applications may be performed in real time. This review discusses general principles of AE monitoring and the use of the technique in 3 areas of importance to biomedical engineering: (1) analysis of biomaterials, (2) analysis of tissues, and (3) analysis of tissue/biomaterial interfaces. Focus in these areas is on detection sensitivity, methods of signal analysis in both the time and frequency domains, the relationship between acoustic signals and microstructural phenomena, and the uses of the technique in establishing a relationship between signals and failure mechanisms.
