Cicmil, Nela; Bridge, Holly; Parker, Andrew J.; Woolrich, Mark W.; Krug, Kristine
2014-01-01
Magnetoencephalography (MEG) allows the physiological recording of human brain activity at high temporal resolution. However, spatial localization of the source of the MEG signal is an ill-posed problem as the signal alone cannot constrain a unique solution and additional prior assumptions must be enforced. An adequate source reconstruction method for investigating the human visual system should place the sources of early visual activity in known locations in the occipital cortex. We localized sources of retinotopic MEG signals from the human brain with contrasting reconstruction approaches (minimum norm, multiple sparse priors, and beamformer) and compared these to the visual retinotopic map obtained with fMRI in the same individuals. When reconstructing brain responses to visual stimuli that differed by angular position, we found reliable localization to the appropriate retinotopic visual field quadrant by a minimum norm approach and by beamforming. Retinotopic map eccentricity in accordance with the fMRI map could not consistently be localized using an annular stimulus with any reconstruction method, but confining eccentricity stimuli to one visual field quadrant resulted in significant improvement with the minimum norm. These results inform the application of source analysis approaches for future MEG studies of the visual system, and indicate some current limits on localization accuracy of MEG signals. PMID:24904268
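As a point of reference for the minimum-norm approach compared above, the following Python sketch shows a basic L2 (Tikhonov-regularized) minimum-norm estimate; the variable names (leadfield, data, lam) and the toy dimensions are illustrative assumptions, not the authors' reconstruction pipeline.

```python
import numpy as np

def minimum_norm_estimate(leadfield, data, lam=0.1):
    """L2 minimum-norm source estimate.

    leadfield : (n_sensors, n_sources) forward matrix
    data      : (n_sensors, n_times) MEG/EEG measurements
    lam       : Tikhonov regularization parameter (assumed value)
    """
    n_sensors = leadfield.shape[0]
    gram = leadfield @ leadfield.T + lam * np.eye(n_sensors)
    # x_hat = L^T (L L^T + lam I)^{-1} y
    return leadfield.T @ np.linalg.solve(gram, data)

# toy usage with random numbers
rng = np.random.default_rng(0)
L = rng.standard_normal((64, 500))   # 64 sensors, 500 candidate sources
y = rng.standard_normal((64, 10))    # 10 time samples
x_hat = minimum_norm_estimate(L, y)
print(x_hat.shape)                   # (500, 10)
```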
NASA Astrophysics Data System (ADS)
Im, Chang-Hwan; Jung, Hyun-Kyo; Fujimaki, Norio
2005-10-01
This paper proposes an alternative approach to enhance the localization accuracy of MEG and EEG focal sources. The proposed approach assumes anatomically constrained spatio-temporal dipoles, whose initial positions are estimated from local peak positions of distributed sources obtained from a pre-execution of distributed source reconstruction. The positions of the dipoles are then adjusted on the cortical surface using a novel updating scheme named cortical surface scanning. The proposed approach has several advantages over conventional ones: (1) because the cortical surface scanning algorithm uses spatio-temporal dipoles, it is robust with respect to noise; (2) it requires no a priori information on the number and initial locations of the activations; (3) because the dipole locations are restricted to a tessellated cortical surface, it is physiologically more plausible than the conventional ECD model. To verify the proposed approach, it was applied to several realistic MEG/EEG simulations and practical experiments. From these case studies, it is concluded that the anatomically constrained dipole adjustment (ANACONDA) approach is a promising technique for enhancing the accuracy of focal source localization, which is essential in many clinical and neurological applications of MEG and EEG.
Kurz, Jochen H
2015-12-01
The task of locating a source in space by measuring travel time differences of elastic or electromagnetic waves from the source to several sensors arises in a variety of fields. The new concepts of automatic acoustic emission localization presented in this article are based on developments from geodesy and seismology. A detailed description of source location determination in space is given, with the focus on acoustic emission data from concrete specimens. Direct and iterative solvers are compared. A concept based on direct solvers from geodesy, extended by a statistical approach, is described, which allows a stable source location determination even for partly erroneous onset times. The developed approach is validated with acoustic emission data from a large specimen, leading to travel paths of up to 1 m and therefore to noisy data with errors in the determined onsets. The adaptation of the algorithms from geodesy to the localization of sources of elastic waves offers new possibilities concerning the stability, automation and performance of localization results. Fracture processes can be assessed more accurately. Copyright © 2015 Elsevier B.V. All rights reserved.
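The abstract contrasts direct and iterative solvers for arrival-time localization. As an illustration of the iterative route only, here is a minimal nonlinear least-squares sketch that fits source coordinates and origin time to picked onset times; the constant wave speed, sensor layout, and noise level are assumptions for the toy example and do not reproduce the geodesy-based direct solver described above.

```python
import numpy as np
from scipy.optimize import least_squares

def locate_source(sensor_xyz, onset_times, wave_speed, x0):
    """Iterative least-squares source location from picked onset times.

    sensor_xyz  : (n, 3) sensor coordinates [m]
    onset_times : (n,) picked onset times [s]
    wave_speed  : assumed constant propagation speed [m/s]
    x0          : initial guess [x, y, z, t0]
    """
    def residuals(p):
        pos, t0 = p[:3], p[3]
        predicted = t0 + np.linalg.norm(sensor_xyz - pos, axis=1) / wave_speed
        return predicted - onset_times

    sol = least_squares(residuals, x0)  # trust-region nonlinear least squares
    return sol.x[:3], sol.x[3]

# synthetic example: 8 sensors on a 1 m concrete cube, c ~ 4000 m/s
rng = np.random.default_rng(1)
sensors = rng.uniform(0.0, 1.0, size=(8, 3))
true_src, c = np.array([0.3, 0.7, 0.2]), 4000.0
t = 1e-3 + np.linalg.norm(sensors - true_src, axis=1) / c
t += rng.normal(0, 2e-6, size=8)        # noisy onset picks
xyz, t0 = locate_source(sensors, t, c, x0=np.array([0.5, 0.5, 0.5, 0.0]))
print(np.round(xyz, 3))
```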
Probabilistic location estimation of acoustic emission sources in isotropic plates with one sensor
NASA Astrophysics Data System (ADS)
Ebrahimkhanlou, Arvin; Salamone, Salvatore
2017-04-01
This paper presents a probabilistic acoustic emission (AE) source localization algorithm for isotropic plate structures. The proposed algorithm requires only one sensor and uniformly monitors the entire area of such plates without any blind zones. In addition, it takes a probabilistic approach and quantifies localization uncertainties. The algorithm combines a modal acoustic emission (MAE) and a reflection-based technique to obtain information pertaining to the location of AE sources. To estimate confidence contours for the location of sources, uncertainties are quantified and propagated through the two techniques. The approach was validated using standard pencil lead break (PLB) tests on an aluminum plate. The results demonstrate that the proposed source localization algorithm successfully estimates confidence contours for the location of AE sources.
3D source localization of interictal spikes in epilepsy patients with MRI lesions
NASA Astrophysics Data System (ADS)
Ding, Lei; Worrell, Gregory A.; Lagerlund, Terrence D.; He, Bin
2006-08-01
The present study aims to accurately localize epileptogenic regions which are responsible for epileptic activities in epilepsy patients by means of a new subspace source localization approach, i.e. first principle vectors (FINE), using scalp EEG recordings. Computer simulations were first performed to assess the source localization accuracy of FINE in the clinical electrode set-up. The source localization results from FINE were compared with the results from a classic subspace source localization approach, i.e. MUSIC, and their differences were tested statistically using the paired t-test. Other factors influencing the source localization accuracy were assessed statistically by ANOVA. The interictal epileptiform spike data from three adult epilepsy patients with medically intractable partial epilepsy and well-defined symptomatic MRI lesions were then studied using both FINE and MUSIC. The comparison between the electrical sources estimated by the subspace source localization approaches and the MRI lesions was made through coregistration between the EEG recordings and MRI scans. The accuracy of the estimates made by FINE and MUSIC was also evaluated and compared using the R2 statistic, which indicates the goodness-of-fit of the estimated sources to the scalp EEG recordings. A three-concentric-spheres head volume conductor model was built for each patient, with the radii of the three spheres chosen to take the individual head size and skull thickness into consideration. The results from computer simulations indicate that the improvement in source spatial resolvability and localization accuracy of FINE as compared with MUSIC is significant when simulated sources are closely spaced, deep, or the signal-to-noise ratio is low in a clinical electrode set-up. The interictal electrical generators estimated by FINE and MUSIC are in concordance with the patients' structural abnormality, i.e. MRI lesions, in all three patients. The higher R2 values achieved by FINE than by MUSIC indicate that FINE provides a more satisfactory fit to the scalp potential measurements than MUSIC in all patients. The present results suggest that FINE provides a useful brain source imaging technique, from clinical EEG recordings, for identifying and localizing epileptogenic regions in epilepsy patients with focal partial seizures. The present study may lead to the establishment of a high-resolution source localization technique from scalp-recorded EEGs for aiding presurgical planning in epilepsy patients.
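For readers unfamiliar with the subspace scanning that FINE and MUSIC share, the sketch below shows a classic MUSIC scan over candidate dipole topographies; the fixed-orientation topographies and the assumed signal-subspace dimension are illustrative simplifications, not the FINE algorithm itself.

```python
import numpy as np

def music_spectrum(data, topographies, n_sources):
    """Classic MUSIC scan over candidate source topographies.

    data         : (n_channels, n_times) EEG epoch
    topographies : (n_candidates, n_channels) forward field of each
                   candidate dipole (fixed orientation assumed)
    n_sources    : assumed signal-subspace dimension
    """
    cov = data @ data.T / data.shape[1]
    eigvals, eigvecs = np.linalg.eigh(cov)          # ascending eigenvalues
    noise_space = eigvecs[:, :-n_sources]           # noise-only subspace
    spectrum = []
    for a in topographies:
        a = a / np.linalg.norm(a)
        # a small projection onto the noise subspace gives a large MUSIC value
        spectrum.append(1.0 / np.sum((noise_space.T @ a) ** 2))
    return np.asarray(spectrum)                     # peaks mark likely sources
```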
Directional Hearing and Sound Source Localization in Fishes.
Sisneros, Joseph A; Rogers, Peter H
2016-01-01
Evidence suggests that the capacity for sound source localization is common to mammals, birds, reptiles, and amphibians, but surprisingly it is not known whether fish locate sound sources in the same manner (e.g., combining binaural and monaural cues) or what computational strategies they use for successful source localization. Directional hearing and sound source localization in fishes continue to be important topics in neuroethology and in the hearing sciences, but the empirical and theoretical work on these topics has been contradictory and obscure for decades. This chapter reviews the previous behavioral work on directional hearing and sound source localization in fishes, including the most recent experiments on sound source localization by the plainfin midshipman fish (Porichthys notatus), which has proven to be an exceptional species for fish studies of sound localization. In addition, the theoretical models of directional hearing and sound source localization for fishes are reviewed, including a new model that uses a time-averaged intensity approach for source localization and has wide applicability with regard to source type, acoustic environment, and time waveform.
Bayesian multiple-source localization in an uncertain ocean environment.
Dosso, Stan E; Wilmut, Michael J
2011-06-01
This paper considers simultaneous localization of multiple acoustic sources when properties of the ocean environment (water column and seabed) are poorly known. A Bayesian formulation is developed in which the environmental parameters, noise statistics, and locations and complex strengths (amplitudes and phases) of multiple sources are considered to be unknown random variables constrained by acoustic data and prior information. Two approaches are considered for estimating source parameters. Focalization maximizes the posterior probability density (PPD) over all parameters using adaptive hybrid optimization. Marginalization integrates the PPD using efficient Markov-chain Monte Carlo methods to produce joint marginal probability distributions for source ranges and depths, from which source locations are obtained. This approach also provides quantitative uncertainty analysis for all parameters, which can aid in understanding of the inverse problem and may be of practical interest (e.g., source-strength probability distributions). In both approaches, closed-form maximum-likelihood expressions for source strengths and noise variance at each frequency allow these parameters to be sampled implicitly, substantially reducing the dimensionality and difficulty of the inversion. Examples are presented of both approaches applied to single- and multi-frequency localization of multiple sources in an uncertain shallow-water environment, and a Monte Carlo performance evaluation study is carried out. © 2011 Acoustical Society of America
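To make the marginalization idea concrete, here is a minimal random-walk Metropolis sampler over source range and depth given a user-supplied log-posterior; the proposal steps, bounds, and starting point are assumptions, and the authors' implementation (adaptive hybrid optimization and efficient Markov-chain sampling with implicit source strengths) is considerably more elaborate.

```python
import numpy as np

def metropolis_range_depth(log_post, n_samples=20000, step=(50.0, 5.0),
                           start=(5000.0, 50.0), bounds=((0, 10000), (0, 200))):
    """Random-walk Metropolis sampling of source (range, depth) in metres.

    log_post : callable(range_m, depth_m) -> log posterior density
               (would wrap the acoustic forward model, data misfit and priors)
    """
    rng = np.random.default_rng(0)
    x = np.array(start, float)
    lp = log_post(*x)
    samples = []
    for _ in range(n_samples):
        prop = x + rng.normal(0.0, step)            # symmetric proposal
        if all(lo <= v <= hi for v, (lo, hi) in zip(prop, bounds)):
            lp_prop = log_post(*prop)
            if np.log(rng.uniform()) < lp_prop - lp:
                x, lp = prop, lp_prop               # accept
        samples.append(x.copy())
    return np.array(samples)   # 2-D histogram -> joint marginal for range/depth
```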
An alternative subspace approach to EEG dipole source localization
NASA Astrophysics Data System (ADS)
Xu, Xiao-Liang; Xu, Bobby; He, Bin
2004-01-01
In the present study, we investigate a new approach to electroencephalography (EEG) three-dimensional (3D) dipole source localization by using a non-recursive subspace algorithm called FINES. In estimating source dipole locations, the present approach employs projections onto a subspace spanned by a small set of particular vectors (FINES vector set) in the estimated noise-only subspace instead of the entire estimated noise-only subspace in the case of classic MUSIC. The subspace spanned by this vector set is, in the sense of principal angle, closest to the subspace spanned by the array manifold associated with a particular brain region. By incorporating knowledge of the array manifold in identifying FINES vector sets in the estimated noise-only subspace for different brain regions, the present approach is able to estimate sources with enhanced accuracy and spatial resolution, thus enhancing the capability of resolving closely spaced sources and reducing estimation errors. The present computer simulations show, in EEG 3D dipole source localization, that compared to classic MUSIC, FINES has (1) better resolvability of two closely spaced dipolar sources and (2) better estimation accuracy of source locations. In comparison with RAP-MUSIC, FINES' performance is also better for the cases studied when the noise level is high and/or correlations among dipole sources exist.
Hernandez Bennetts, Victor; Lilienthal, Achim J.; Neumann, Patrick P.; Trincavelli, Marco
2011-01-01
Roboticists often take inspiration from animals for designing sensors, actuators, or algorithms that control the behavior of robots. Bio-inspiration is motivated with the uncanny ability of animals to solve complex tasks like recognizing and manipulating objects, walking on uneven terrains, or navigating to the source of an odor plume. In particular the task of tracking an odor plume up to its source has nearly exclusively been addressed using biologically inspired algorithms and robots have been developed, for example, to mimic the behavior of moths, dung beetles, or lobsters. In this paper we argue that biomimetic approaches to gas source localization are of limited use, primarily because animals differ fundamentally in their sensing and actuation capabilities from state-of-the-art gas-sensitive mobile robots. To support our claim, we compare actuation and chemical sensing available to mobile robots to the corresponding capabilities of moths. We further characterize airflow and chemosensor measurements obtained with three different robot platforms (two wheeled robots and one flying micro-drone) in four prototypical environments and show that the assumption of a constant and unidirectional airflow, which is the basis of many gas source localization approaches, is usually far from being valid. This analysis should help to identify how underlying principles, which govern the gas source tracking behavior of animals, can be usefully “translated” into gas source localization approaches that fully take into account the capabilities of mobile robots. We also describe the requirements for a reference application, monitoring of gas emissions at landfill sites with mobile robots, and discuss an engineered gas source localization approach based on statistics as an alternative to biologically inspired algorithms. PMID:22319493
Bayesian focalization: quantifying source localization with environmental uncertainty.
Dosso, Stan E; Wilmut, Michael J
2007-05-01
This paper applies a Bayesian formulation to study ocean acoustic source localization as a function of uncertainty in environmental properties (water column and seabed) and of data information content [signal-to-noise ratio (SNR) and number of frequencies]. The approach follows that of the optimum uncertain field processor [A. M. Richardson and L. W. Nolte, J. Acoust. Soc. Am. 89, 2280-2284 (1991)], in that localization uncertainty is quantified by joint marginal probability distributions for source range and depth integrated over uncertain environmental properties. The integration is carried out here using Metropolis Gibbs' sampling for environmental parameters and heat-bath Gibbs' sampling for source location to provide efficient sampling over complicated parameter spaces. The approach is applied to acoustic data from a shallow-water site in the Mediterranean Sea where previous geoacoustic studies have been carried out. It is found that reliable localization requires a sufficient combination of prior (environmental) information and data information. For example, sources can be localized reliably for single-frequency data at low SNR (-3 dB) only with small environmental uncertainties, whereas successful localization with large environmental uncertainties requires higher SNR and/or multifrequency data.
Beamspace fast fully adaptive brain source localization for limited data sequences
NASA Astrophysics Data System (ADS)
Ravan, Maryam
2017-05-01
In the electroencephalogram (EEG) or magnetoencephalogram (MEG) context, brain source localization methods that rely on estimating second order statistics often fail when the observations are taken over a short time interval, especially when the number of electrodes is large. To address this issue, in a previous study we developed a multistage adaptive processing scheme called the fast fully adaptive (FFA) approach, which can significantly reduce the required sample support while still processing all available degrees of freedom (DOFs). This approach processes the observed data in stages through a decimation procedure. In this study, we introduce a new form of the FFA approach called beamspace FFA. We first divide the brain into smaller regions and transform the measured data from the source space to the beamspace in each region. The FFA approach is then applied to the beamspaced data of each region. The goal of this modification is to reduce the sensitivity to correlation between sources in different brain regions. To demonstrate the performance of the beamspace FFA approach in the limited-data scenario, simulation results with multiple deep and cortical sources as well as experimental results are compared with the regular FFA and the widely used FINE approaches. Both simulation and experimental results demonstrate that the beamspace FFA method can localize different types of multiple correlated brain sources more accurately at low signal-to-noise ratios with limited data.
NASA Astrophysics Data System (ADS)
Gan, Shuwei; Wang, Shoudong; Chen, Yangkang; Chen, Xiaohong; Xiang, Kui
2016-01-01
Simultaneous-source shooting can help tremendously shorten the acquisition period and improve the quality of seismic data for better subsalt seismic imaging, but at the expense of introducing strong interference (blending noise) into the acquired seismic data. We propose to use a structure-oriented median filter to attenuate the blending noise along the structural direction of seismic profiles. The principle of the proposed approach is to first flatten the seismic record in local spatial windows and then to apply a traditional median filter (MF) along the third, flattened dimension. The key component of the proposed approach is the estimation of the local slope, which can be calculated by first scanning the NMO velocity and then transferring the velocity to the local slope. Both synthetic and field data examples show that the proposed approach can successfully separate the simultaneous-source data into individual sources. We provide an open-source toy example to better demonstrate the proposed methodology.
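A simplified sketch of the flatten-filter-restore idea is given below: each trace is shifted by a precomputed flattening shift, a median filter is applied across traces, and the shifts are undone. The per-trace shifts (which the paper derives from NMO-velocity-based local slopes) are assumed to be supplied by the caller, and the 2-D gather stands in for the paper's local 3-D windows.

```python
import numpy as np
from scipy.ndimage import shift as ndshift, median_filter

def structure_oriented_median(section, trace_shifts, length=7):
    """Median filtering along structure (a simplified sketch).

    section      : (n_time, n_traces) seismic gather
    trace_shifts : per-trace time shifts (samples) that flatten the local
                   structure, e.g. integrated from estimated local slopes
    length       : median-filter length along the flattened trace axis
    """
    flattened = np.empty_like(section)
    for i, s in enumerate(trace_shifts):
        flattened[:, i] = ndshift(section[:, i], -s, order=1, mode='nearest')
    # median filter across traces, i.e. along the flattened structural direction
    filtered = median_filter(flattened, size=(1, length))
    restored = np.empty_like(section)
    for i, s in enumerate(trace_shifts):
        restored[:, i] = ndshift(filtered[:, i], s, order=1, mode='nearest')
    return restored
```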
Towards a street-level pollen concentration and exposure forecast
NASA Astrophysics Data System (ADS)
van der Molen, Michiel; Krol, Maarten; van Vliet, Arnold; Heuvelink, Gerard
2015-04-01
Atmospheric pollen is an increasing source of nuisance for people in industrialised countries and is associated with significant costs of medication and sick leave. Citizen pollen warnings are often based on emission mapping derived from local temperature-sum approaches or on long-range atmospheric model approaches. In practice, locally observed pollen may originate both from local sources (plants in streets and gardens) and from long-range transport. We argue that making this distinction is relevant because, due to boundary layer processes, the diurnal and spatial variation in pollen concentrations is much larger for pollen from local sources than for pollen from long-range transport. This may have an important impact on the exposure of citizens to pollen and on mitigation strategies. However, little is known about the partitioning of pollen into local and long-range origin categories. Our objective is to study how the concentrations of pollen from different sources vary temporally and spatially, and how the source region influences exposure and mitigation strategies. We built a Hay Fever Forecast system (HFF) based on WRF-chem, Allergieradar.nl, and geo-statistical downscaling techniques. HFF distinguishes between local sources (individual trees) and regional sources (based on tree distribution maps). We show first results on how the diurnal variation of pollen concentrations depends on source proximity. Ultimately, we will compare the model with local pollen counts, patient nuisance scores and medicine use.
Spatio-temporal Reconstruction of Neural Sources Using Indirect Dominant Mode Rejection.
Jafadideh, Alireza Talesh; Asl, Babak Mohammadzadeh
2018-04-27
Adaptive minimum variance based beamformers (MVB) have been successfully applied to magnetoencephalogram (MEG) and electroencephalogram (EEG) data to localize brain activities. However, the performance of these beamformers degrades in situations where correlated or interference sources exist. To overcome this problem, we propose applying the indirect dominant mode rejection (iDMR) beamformer to brain source localization. By modifying the measurement covariance matrix, this method makes MVB applicable to source localization in the presence of correlated and interference sources. Numerical results on both EEG and MEG data demonstrate that the presented approach accurately reconstructs the time courses of active sources and localizes those sources with high spatial resolution. In addition, the results on real AEF data show the good performance of iDMR in empirical situations. Hence, iDMR can be reliably used for brain source localization, especially when there are correlated and interference sources.
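For context, the unit-gain minimum-variance (LCMV) weights that methods such as iDMR build on can be written in a few lines; the regularization fraction and the single-source lead field are assumptions of this sketch, and it shows the standard beamformer, not the iDMR modification of the covariance matrix.

```python
import numpy as np

def lcmv_weights(cov, leadfield, reg=0.05):
    """Unit-gain minimum-variance (LCMV) beamformer weights for one source.

    cov       : (n_channels, n_channels) measurement covariance
    leadfield : (n_channels, n_orient) forward field of the candidate source
    reg       : diagonal loading as a fraction of the mean sensor power
    """
    n = cov.shape[0]
    c = cov + reg * np.trace(cov) / n * np.eye(n)   # regularized covariance
    c_inv_l = np.linalg.solve(c, leadfield)
    # W = C^{-1} L (L^T C^{-1} L)^{-1}
    return c_inv_l @ np.linalg.inv(leadfield.T @ c_inv_l)

# source time-course estimate at this location: s(t) = W.T @ y(t)
```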
A real-time biomimetic acoustic localizing system using time-shared architecture
NASA Astrophysics Data System (ADS)
Nourzad Karl, Marianne; Karl, Christian; Hubbard, Allyn
2008-04-01
In this paper a real-time sound source localizing system is proposed, which is based on previously developed mammalian auditory models. Traditionally, following these models, which use interaural time delay (ITD) estimates, the amount of parallel computation needed by a system to achieve real-time sound source localization is a limiting factor and a design challenge for hardware implementations. Therefore a new approach using a time-shared architecture implementation is introduced. The proposed architecture is a purely sample-driven digital system, and it closely follows the continuous-time approach described in the models. Rather than having dedicated hardware on a per-frequency-channel basis, a specialized core channel, shared across all frequency bands, is used. Having an optimized execution time, which is much less than the system's sample period, the proposed time-shared solution allows the same number of virtual channels to be processed as dedicated channels in the traditional approach. Hence, the time-shared approach achieves a highly economical and flexible implementation using minimal silicon area. These aspects are particularly important for the efficient hardware implementation of a real-time biomimetic sound source localization system.
NASA Astrophysics Data System (ADS)
Tam, Kai-Chung; Lau, Siu-Kit; Tang, Shiu-Keung
2016-07-01
A microphone array signal processing method for locating a stationary point source over a locally reactive ground and for estimating the ground impedance is examined in detail in the present study. A non-linear least-squares approach using the Levenberg-Marquardt method is proposed to overcome the problem of unknown ground impedance. The multiple signal classification method (MUSIC) is used to give the initial estimate of the source location, while the technique of forward-backward spatial smoothing is adopted as a pre-processor for the source localization to minimize the effects of source coherence. The accuracy and robustness of the proposed signal processing method are examined. Results show that source localization in the horizontal direction by MUSIC is satisfactory. However, source coherence drastically reduces the accuracy in estimating the source height. The further application of the Levenberg-Marquardt method with the results from MUSIC as the initial inputs significantly improves the accuracy of the source height estimation. The proposed method provides effective and robust estimation of the ground surface impedance.
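As an illustration of the pre-processing step mentioned above, the sketch below applies forward-backward averaging and forward-only subarray smoothing to an array covariance matrix; it assumes a uniform linear array, whereas the paper's array geometry and exact smoothing variant may differ.

```python
import numpy as np

def forward_backward(R):
    """Forward-backward averaging of an array covariance matrix."""
    n = R.shape[0]
    J = np.fliplr(np.eye(n))                  # exchange (flip) matrix
    return 0.5 * (R + J @ R.conj() @ J)

def subarray_smoothing(R, sub_len):
    """Forward spatial smoothing: average the covariances of overlapping
    subarrays of length `sub_len` (assumes a uniform linear array)."""
    n_sub = R.shape[0] - sub_len + 1
    return sum(R[k:k + sub_len, k:k + sub_len] for k in range(n_sub)) / n_sub

# typical use before MUSIC to decorrelate coherent sources:
# R_smooth = subarray_smoothing(forward_backward(R), sub_len=6)
```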
DOE Office of Scientific and Technical Information (OSTI.GOV)
Miller, Erin A.; Robinson, Sean M.; Anderson, Kevin K.
2015-01-19
Here we present a novel technique for the localization of radiological sources in urban or rural environments from an aerial platform. The technique is based on a Bayesian approach to localization, in which measured count rates in a time series are compared with predicted count rates from a series of pre-calculated test sources to define likelihood. Furthermore, this technique is expanded by using a localized treatment with a limited field of view (FOV), coupled with a likelihood ratio reevaluation, allowing for real-time computation on commodity hardware for arbitrarily complex detector models and terrain. In particular, detectors with inherent asymmetry of response (such as those employing internal collimation or self-shielding for enhanced directional awareness) are leveraged by this approach to provide improved localization. Our results from the localization technique are shown for simulated flight data using monolithic as well as directionally-aware detector models, and the capability of the methodology to locate radioisotopes is estimated for several test cases. This localization technique is shown to facilitate urban search by allowing quick and adaptive estimates of source location, in many cases from a single flyover near a source. In particular, this method represents a significant advancement from earlier methods like full-field Bayesian likelihood, which is not generally fast enough to allow for broad-field search in real time, and highest-net-counts estimation, which has a localization error that depends strongly on flight path and cannot generally operate without exhaustive search.
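A stripped-down version of the likelihood comparison can be written as a Poisson log-likelihood evaluated over a grid of candidate source cells; the isotropic 1/r^2 response, the fixed source strength, and the flat background are simplifying assumptions of this sketch and ignore the directional detector response and terrain handling described above.

```python
import numpy as np

def poisson_log_likelihood(counts, dwell_times, detector_xy, grid_xy,
                           source_strength, background_rate):
    """Log-likelihood of observed counts for each candidate source cell.

    counts          : (n_meas,) gross counts per time bin along the flight line
    dwell_times     : (n_meas,) live time of each bin [s]
    detector_xy     : (n_meas, 2) detector position for each bin [m]
    grid_xy         : (n_cells, 2) candidate source positions [m]
    source_strength : assumed emission-rate scaling [counts*m^2/s]
    background_rate : assumed flat background count rate [counts/s]
    """
    ll = np.zeros(len(grid_xy))
    for j, src in enumerate(grid_xy):
        # add 1 m^2 as a crude standoff so the rate stays finite over the source
        r2 = np.sum((detector_xy - src) ** 2, axis=1) + 1.0
        expected = dwell_times * (background_rate + source_strength / r2)
        # Poisson log-likelihood, dropping the counts! term (constant in src)
        ll[j] = np.sum(counts * np.log(expected) - expected)
    return ll   # argmax over cells gives the most likely source location
```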
Elaina, Nor Safira; Malik, Aamir Saeed; Shams, Wafaa Khazaal; Badruddin, Nasreen; Abdullah, Jafri Malin; Reza, Mohammad Faruque
2018-06-01
This study aimed to localize sensorimotor cortical activation in 10 patients with frontoparietal tumors using quantitative magnetoencephalography (MEG) with noise-normalized approaches. Somatosensory evoked magnetic fields (SEFs) were elicited in the 10 patients and in 10 control participants using electrical stimulation of the median nerve at the right and left wrists. We localized the N20m component of the SEFs using dynamic statistical parametric mapping (dSPM) and standardized low-resolution brain electromagnetic tomography (sLORETA) combined with 3D magnetic resonance imaging (MRI). The obtained coordinates were compared between groups. Finally, we statistically evaluated the N20m parameters across hemispheres using non-parametric statistical tests. The N20m sources were accurately localized to Brodmann area 3b in all members of the control group and in seven of the patients; in the remaining three patients, however, the sources were shifted to locations outside the primary somatosensory cortex (SI). Compared with the affected (tumor) hemispheres in the patient group, N20m amplitudes and the strengths of the current sources were significantly lower in the unaffected hemispheres and in both hemispheres of the control group. These results were consistent for both the dSPM and sLORETA approaches. Tumors in the sensorimotor cortex lead to cortical functional reorganization and an increase in N20m amplitude and current-source strength. Noise-normalized approaches for MEG analysis that are integrated with MRI show accurate and reliable localization of sensorimotor function.
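The noise normalization underlying dSPM can be sketched as dividing a linear minimum-norm estimate by the noise standard deviation projected through the same inverse operator; the inverse kernel and noise covariance are assumed inputs, and this omits the depth weighting, orientation handling, and sLORETA variant used in the study.

```python
import numpy as np

def dspm(inverse_operator, data, noise_cov):
    """Noise-normalized minimum-norm estimate in the spirit of dSPM.

    inverse_operator : (n_sources, n_channels) linear inverse kernel M
    data             : (n_channels, n_times) sensor measurements
    noise_cov        : (n_channels, n_channels) sensor noise covariance C
    """
    mne = inverse_operator @ data
    # per-source noise variance: diag(M C M^T)
    noise_var = np.einsum('ij,jk,ik->i', inverse_operator, noise_cov,
                          inverse_operator)
    return mne / np.sqrt(noise_var)[:, None]   # unitless statistical map
```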
NASA Astrophysics Data System (ADS)
Squizzato, Stefania; Masiol, Mauro
2015-10-01
The air quality is influenced by the potential effects of meteorology at meso- and synoptic scales. While local weather and mixing layer dynamics mainly drive the dispersion of sources at small scales, long-range transport affects the movement of air masses over regional, transboundary and even continental scales. Long-range transport may advect polluted air masses from hot-spots, increasing the levels of pollution at nearby or remote locations, or may further raise air pollution levels where external air masses originate from other hot-spots. Therefore, knowledge of ground-wind circulation and potential long-range transport is fundamental not only to evaluate how local or external sources may affect the air quality at a receptor site but also to quantify their contributions. This review focuses on establishing the relationships among PM2.5 sources, meteorological conditions and air mass origin in the Po Valley, which is one of the most polluted areas in Europe. We have chosen the results from a recent study carried out in Venice (Eastern Po Valley) and have analysed them using different statistical approaches to understand the influence of external and local contributions of PM2.5 sources. External contributions were evaluated by applying Trajectory Statistical Methods (TSMs) based on back-trajectory analysis, including (i) back-trajectory cluster analysis, (ii) the potential source contribution function (PSCF) and (iii) concentration weighted trajectory (CWT). Furthermore, the relationships between the source contributions and ground-wind circulation patterns were investigated by using (iv) cluster analysis on wind data and (v) the conditional probability function (CPF). Finally, local source contributions have been estimated by applying the Lenschow approach. In summary, the integrated approach of different techniques has successfully identified both local and external sources of particulate matter pollution in a European hot-spot affected by the worst air quality.
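As an example of one of the trajectory statistical methods listed above, a bare-bones PSCF can be computed as the ratio of "polluted" trajectory endpoints to all endpoints in each grid cell; the grid resolution, the threshold choice, and the absence of the usual low-count weighting function are assumptions of this sketch.

```python
import numpy as np

def pscf(traj_lat, traj_lon, receptor_conc, threshold, grid_res=1.0):
    """Potential source contribution function on a lat/lon grid.

    traj_lat, traj_lon : lists of arrays, back-trajectory endpoints for each
                         sample arriving at the receptor
    receptor_conc      : (n_samples,) PM2.5 (or source contribution) at arrival
    threshold          : concentration above which a sample counts as polluted,
                         e.g. the 75th percentile of receptor_conc
    """
    n_ij = {}   # all endpoints falling in cell (i, j)
    m_ij = {}   # endpoints belonging to polluted samples in cell (i, j)
    for lat, lon, conc in zip(traj_lat, traj_lon, receptor_conc):
        cells = zip(np.floor(lat / grid_res).astype(int),
                    np.floor(lon / grid_res).astype(int))
        for c in cells:
            n_ij[c] = n_ij.get(c, 0) + 1
            if conc > threshold:
                m_ij[c] = m_ij.get(c, 0) + 1
    return {c: m_ij.get(c, 0) / n for c, n in n_ij.items()}   # PSCF per cell
```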
NASA Astrophysics Data System (ADS)
Nishiura, Takanobu; Nakamura, Satoshi
2002-11-01
It is very important to capture distant-talking speech with high quality for a hands-free speech interface. A microphone array is an ideal candidate for this purpose. However, this approach requires localizing the target talker. Conventional talker localization algorithms in multiple sound source environments not only have difficulty localizing the multiple sound sources accurately, but also have difficulty localizing the target talker among known multiple sound source positions. To cope with these problems, we propose a new talker localization algorithm consisting of two algorithms. One is a DOA (direction of arrival) estimation algorithm for multiple sound source localization based on the CSP (cross-power spectrum phase) coefficient addition method. The other is a statistical sound source identification algorithm based on a GMM (Gaussian mixture model) for localizing the target talker position among the localized multiple sound sources. In this paper, we particularly focus on the talker localization performance based on the combination of these two algorithms with a microphone array. We conducted evaluation experiments in real noisy reverberant environments. As a result, we confirmed that multiple sound signals can be accurately identified as "speech" or "non-speech" by the proposed algorithm. [Work supported by ATR and MEXT of Japan.]
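The CSP coefficients referred to above are essentially phase-transform cross-correlations. A minimal sketch for a single microphone pair is shown below, together with the far-field delay-to-angle conversion; the sampling rate, microphone spacing, and speed of sound are assumed, and the paper additionally sums CSP coefficients over many pairs before peak picking.

```python
import numpy as np

def csp_delay(x1, x2, fs):
    """CSP (phase-transform cross-correlation) delay estimate for one pair."""
    n = len(x1) + len(x2)
    X1, X2 = np.fft.rfft(x1, n), np.fft.rfft(x2, n)
    cross = X1 * np.conj(X2)
    csp = np.fft.irfft(cross / (np.abs(cross) + 1e-12), n)  # whitened correlation
    csp = np.roll(csp, n // 2)                               # center zero lag
    lag = np.argmax(csp) - n // 2
    return lag / fs, csp                                     # delay [s], CSP curve

def doa_from_delay(delay, mic_spacing, c=343.0):
    """Far-field conversion of a pairwise delay into a direction of arrival."""
    return np.degrees(np.arcsin(np.clip(c * delay / mic_spacing, -1.0, 1.0)))

# usage sketch: tau, _ = csp_delay(mic1, mic2, fs=16000); angle = doa_from_delay(tau, 0.2)
```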
Sound source localization method in an environment with flow based on Amiet-IMACS
NASA Astrophysics Data System (ADS)
Wei, Long; Li, Min; Qin, Sheng; Fu, Qiang; Yang, Debin
2017-05-01
A sound source localization method is proposed to localize and analyze sound sources in an environment with airflow. It combines the improved mapping of acoustic correlated sources (IMACS) method and Amiet's method, and is called Amiet-IMACS. It can localize uncorrelated and correlated sound sources with airflow. To implement this approach, Amiet's method is used to correct the sound propagation path in 3D, which improves the accuracy of the array manifold matrix and decreases the position error of the localized source. Then, the mapping of acoustic correlated sources (MACS) method, which is a high-resolution sound source localization algorithm, is improved by self-adjusting the constraint parameter at each iteration to increase the convergence speed. A sound source localization experiment using a pair of loudspeakers in an anechoic wind tunnel under different flow speeds is conducted. The experiment demonstrates the advantage of Amiet-IMACS in localizing the sound source position more accurately than IMACS alone in an environment with flow. Moreover, the aerodynamic noise produced by a NASA EPPLER 862 STRUT airfoil model in airflow with a velocity of 80 m/s is localized using the proposed method, which further proves its effectiveness in a flow environment. Finally, the relationship between the source position of this airfoil model and its frequency, along with its generation mechanism, is determined and interpreted.
Classification of event location using matched filters via on-floor accelerometers
NASA Astrophysics Data System (ADS)
Woolard, Americo G.; Malladi, V. V. N. Sriram; Alajlouni, Sa'ed; Tarazaga, Pablo A.
2017-04-01
Recent years have shown prolific advancements in smart infrastructure, allowing buildings of the modern world to interact with their occupants. One of the sought-after attributes of smart buildings is the ability to provide unobtrusive, indoor localization of occupants. The ability to locate occupants indoors can provide a broad range of benefits in areas such as security, emergency response, and resource management. Recent research has shown promising results in occupant building localization, although there is still significant room for improvement. This study presents a passive, small-scale localization system using accelerometers placed around the edges of a small area in an active building environment. The area is discretized into a grid of small squares, and vibration measurements are processed using a pattern matching approach that estimates the location of the source. Vibration measurements are produced with ball drops, hammer strikes, and footsteps as the sources of the floor excitation. The developed approach uses matched filters based on a reference data set, and the location is classified using a nearest-neighbor search. This approach detects the correct location of impact-like sources, i.e., the ball drops and hammer strikes, with 100% accuracy. However, the accuracy drops to 56% for footsteps, with the average localization results lying within 0.6 m (α = 0.05) of the true source location. While requiring a reference data set can make this method difficult to implement on a large scale, it may be used to provide accurate localization abilities in areas where training data is readily obtainable. This exploratory work seeks to examine the feasibility of the matched filter and nearest-neighbor search approach for footstep and event localization in a small, instrumented area within a multi-story building.
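A bare-bones version of the matched-filter plus nearest-neighbor classifier might look like the following, where each grid cell is represented by one reference recording and the event is assigned to the cell whose template gives the largest normalized correlation peak; the template dictionary and the scoring rule are illustrative assumptions, not the authors' exact implementation.

```python
import numpy as np

def classify_location(signal, templates):
    """Nearest-neighbor location classification with matched filters.

    signal    : (n_sensors, n_samples) measured floor-vibration event
    templates : dict mapping grid-cell label -> (n_sensors, n_samples)
                reference recording from that cell (the training set)
    """
    best_label, best_score = None, -np.inf
    for label, ref in templates.items():
        score = 0.0
        for s, r in zip(signal, ref):
            # normalized matched-filter (cross-correlation) peak per sensor
            corr = np.correlate(s, r, mode='full')
            norm = np.linalg.norm(s) * np.linalg.norm(r) + 1e-12
            score += np.max(np.abs(corr)) / norm
        if score > best_score:
            best_label, best_score = label, score
    return best_label
```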
Mideksa, K G; Singh, A; Hoogenboom, N; Hellriegel, H; Krause, H; Schnitzler, A; Deuschl, G; Raethjen, J; Schmidt, G; Muthuraman, M
2016-08-01
One of the most commonly used therapies to treat patients with Parkinson's disease (PD) is deep brain stimulation (DBS) of the subthalamic nucleus (STN). Identifying the most suitable target area for the placement of the DBS electrodes has become an intensive research area. In this study, the first aim is to investigate the capabilities of different source-analysis techniques in detecting deep sources located at the sub-cortical level and to validate them using a priori information about the location of the source, that is, the STN. Secondly, we aim to investigate whether EEG or MEG is best suited to mapping the DBS-induced brain activity. To do this, simultaneous EEG and MEG measurements were used to record the DBS-induced electromagnetic potentials and fields. The boundary-element method (BEM) has been used to solve the forward problem. The position of the DBS electrodes was then estimated using dipole approaches (moving, rotating, and fixed MUSIC) and current-density-reconstruction (CDR) approaches (minimum norm and sLORETA). The source-localization results from the dipole approaches demonstrated that the fixed MUSIC algorithm best localizes deep focal sources, whereas the moving dipole detects not only the region of interest but also neighboring regions that are affected by stimulating the STN. The results from the CDR approaches validated the capability of sLORETA in detecting the STN compared to the minimum norm. Moreover, the source-localization results using the EEG modality outperformed those of the MEG by locating the DBS-induced activity in the STN.
Wang, Bao-Zhen; Chen, Zhi
2013-01-01
This article presents a GIS-based multi-source and multi-box modeling approach (GMSMB) to predict the spatial concentration distributions of airborne pollutants on local and regional scales. In this method, an extended multi-box model combined with a multi-source and multi-grid Gaussian model is developed within the GIS framework to examine the contributions from both point- and area-source emissions. By using GIS, a large amount of data, including emission sources, air quality monitoring, meteorological data, and spatial location information required for air quality modeling, is brought into an integrated modeling environment. This allows more details of the spatial variation in source distribution and meteorological conditions to be quantitatively analyzed. The developed modeling approach has been applied to predict the spatial concentration distributions of four air pollutants (CO, NO2, SO2 and PM2.5) for the State of California. The modeling results are compared with the monitoring data. Good agreement is obtained, which demonstrates that the developed modeling approach can deliver an effective air pollution assessment on both regional and local scales to support air pollution control and management planning.
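The point-source part of such a model is typically a Gaussian plume. The sketch below gives the standard ground-reflected point-source formula; the dispersion parameters sigma_y and sigma_z are assumed to be supplied (e.g., from stability-class curves), and the multi-box and GIS integration described above are not represented.

```python
import numpy as np

def gaussian_plume(q, u, x, y, z, h, sigma_y, sigma_z):
    """Ground-reflected Gaussian plume concentration from a point source.

    q                : emission rate [g/s]
    u                : wind speed [m/s]
    x, y, z          : downwind, crosswind and vertical receptor coordinates [m]
    h                : effective stack height [m]
    sigma_y, sigma_z : dispersion parameters at downwind distance x [m]
    """
    lateral = np.exp(-y**2 / (2 * sigma_y**2))
    vertical = (np.exp(-(z - h)**2 / (2 * sigma_z**2)) +
                np.exp(-(z + h)**2 / (2 * sigma_z**2)))   # ground reflection
    return q / (2 * np.pi * u * sigma_y * sigma_z) * lateral * vertical

# e.g. ground-level centerline concentration 500 m downwind of a 20 m stack:
# c = gaussian_plume(q=10.0, u=3.0, x=500.0, y=0.0, z=0.0, h=20.0,
#                    sigma_y=36.0, sigma_z=18.0)
```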
Modeling Source Water TOC Using Hydroclimate Variables and Local Polynomial Regression.
Samson, Carleigh C; Rajagopalan, Balaji; Summers, R Scott
2016-04-19
To control disinfection byproduct (DBP) formation in drinking water, an understanding of the source water total organic carbon (TOC) concentration variability can be critical. Previously, TOC concentrations in water treatment plant source waters have been modeled using streamflow data. However, the lack of streamflow data or unimpaired flow scenarios makes it difficult to model TOC. In addition, TOC variability under climate change further exacerbates the problem. Here we proposed a modeling approach based on local polynomial regression that uses climate, e.g. temperature, and land surface, e.g., soil moisture, variables as predictors of TOC concentration, obviating the need for streamflow. The local polynomial approach has the ability to capture non-Gaussian and nonlinear features that might be present in the relationships. The utility of the methodology is demonstrated using source water quality and climate data in three case study locations with surface source waters including river and reservoir sources. The models show good predictive skill in general at these locations, with lower skills at locations with the most anthropogenic influences in their streams. Source water TOC predictive models can provide water treatment utilities important information for making treatment decisions for DBP regulation compliance under future climate scenarios.
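A minimal locally weighted (degree-1) polynomial regression in the spirit of the approach can be written as follows; the Gaussian kernel, the bandwidth, and the choice of predictors (e.g., temperature and soil moisture) are illustrative assumptions rather than the authors' fitted model.

```python
import numpy as np

def local_linear_predict(X, y, x_query, bandwidth):
    """Locally weighted linear (degree-1 polynomial) regression.

    X         : (n, p) predictors, e.g. temperature and soil moisture
    y         : (n,) observed TOC concentrations
    x_query   : (p,) predictor values at which to estimate TOC
    bandwidth : kernel bandwidth controlling the neighborhood size
    """
    d = np.linalg.norm(X - x_query, axis=1)
    w = np.exp(-0.5 * (d / bandwidth) ** 2)        # Gaussian kernel weights
    A = np.hstack([np.ones((len(X), 1)), X - x_query])
    sw = np.sqrt(w)
    # weighted least squares: minimize sum_i w_i (A_i beta - y_i)^2
    beta, *_ = np.linalg.lstsq(A * sw[:, None], y * sw, rcond=None)
    return beta[0]                                 # local intercept = fit at x_query
```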
Various approaches and tools exist to estimate local and regional PM2.5 impacts from a single emissions source, ranging from simple screening techniques to Gaussian based dispersion models and complex grid-based Eulerian photochemical transport models. These approache...
Acoustic source localization in mixed field using spherical microphone arrays
NASA Astrophysics Data System (ADS)
Huang, Qinghua; Wang, Tong
2014-12-01
Spherical microphone arrays have recently been used for source localization in three-dimensional space. In this paper, a two-stage algorithm is developed to localize mixed far-field and near-field acoustic sources in a free-field environment. In the first stage, an array signal model is constructed in the spherical harmonics domain. The recurrence relation of spherical harmonics is independent of the far-field and near-field mode strengths. Therefore, it is used to develop a spherical ESPRIT-like (estimation of signal parameters via rotational invariance techniques) approach to estimate the directions of arrival (DOAs) of both far-field and near-field sources. In the second stage, based on the estimated DOAs, a simple one-dimensional MUSIC spectrum is exploited to distinguish far-field from near-field sources and to estimate the ranges of the near-field sources. The proposed algorithm avoids multidimensional search and parameter pairing. Simulation results demonstrate good performance in localizing far-field, near-field, or mixed-field sources.
Locally Sourced Capital for Small Businesses in Rural Communities
ERIC Educational Resources Information Center
Tampien, Jordan
2016-01-01
Lack of adequate access to capital is a major barrier for rural entrepreneurs. Washington State University Extension and the Association of Washington Cities partnered to explore and test an innovative local investment approach that provides access to capital and engages the community in the success of individual businesses. The approach offers…
Explosion localization via infrasound.
Szuberla, Curt A L; Olson, John V; Arnoult, Kenneth M
2009-11-01
Two acoustic source localization techniques were applied to infrasonic data and their relative performance was assessed. The standard approach for low-frequency localization uses an ensemble of small arrays to separately estimate far-field source bearings, resulting in a solution from the various back azimuths. This method was compared to one developed by the authors that treats the smaller subarrays as a single, meta-array. In numerical simulation and a field experiment, the latter technique was found to provide improved localization precision everywhere in the vicinity of a 3-km-aperture meta-array, often by an order of magnitude.
Finke, Stefan; Gulrajani, Ramesh M; Gotman, Jean; Savard, Pierre
2013-01-01
The non-invasive localization of the primary sensory hand area can be achieved by solving the inverse problem of electroencephalography (EEG) for N20-P20 somatosensory evoked potentials (SEPs). This study compares two different mathematical approaches for the computation of transfer matrices used to solve the EEG inverse problem. Forward transfer matrices relating dipole sources to scalp potentials are determined via conventional and reciprocal approaches using individual, realistically shaped head models. The reciprocal approach entails calculating the electric field at the dipole position when scalp electrodes are reciprocally energized with unit current; scalp potentials are obtained from the scalar product of this electric field and the dipole moment. Median nerve stimulation is performed on three healthy subjects and single-dipole inverse solutions for the N20-P20 SEPs are then obtained by simplex minimization and validated against the primary sensory hand area identified on magnetic resonance images. Solutions are presented for different time points, filtering strategies, boundary-element method discretizations, and skull conductivity values. Both approaches produce similarly small position errors for the N20-P20 SEP. Position error for single-dipole inverse solutions is inherently robust to inaccuracies in forward transfer matrices but dependent on the overlapping activity of other neural sources. Significantly smaller time and storage requirements are the principal advantages of the reciprocal approach. Reduced computational requirements and similar dipole position accuracy support the use of reciprocal approaches over conventional approaches for N20-P20 SEP source localization.
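The single-dipole fit by simplex minimization can be sketched as follows: for each candidate position the best-fitting moment is obtained by linear least squares, and the relative residual is minimized over position with Nelder-Mead; the forward_field callable standing in for the conventional or reciprocal transfer matrix is a hypothetical placeholder.

```python
import numpy as np
from scipy.optimize import minimize

def fit_single_dipole(measured, forward_field, x0):
    """Single-dipole fit by simplex minimization of the residual variance.

    measured      : (n_electrodes,) N20-P20 scalp potentials at one latency
    forward_field : callable(pos) -> (n_electrodes, 3) transfer matrix for a
                    dipole at `pos` (hypothetical stand-in for the head model)
    x0            : initial dipole position guess (x, y, z)
    """
    def cost(pos):
        G = forward_field(pos)
        # best-fitting moment for this position (linear least squares)
        moment, *_ = np.linalg.lstsq(G, measured, rcond=None)
        resid = measured - G @ moment
        return np.sum(resid ** 2) / np.sum(measured ** 2)   # relative residual

    res = minimize(cost, x0, method='Nelder-Mead')           # simplex search
    return res.x, res.fun
```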
NASA Astrophysics Data System (ADS)
Alajlouni, Sa'ed; Albakri, Mohammad; Tarazaga, Pablo
2018-05-01
An algorithm is introduced to solve the general multilateration (source localization) problem in a dispersive waveguide. The algorithm is designed with the intention of localizing impact forces in a dispersive floor, and can potentially be used to localize and track occupants in a building using vibration sensors connected to the lower surface of the walking floor. The lower the wave frequencies generated by the impact force, the more accurate the localization is expected to be. An impact force acting on a floor, generates a seismic wave that gets distorted as it travels away from the source. This distortion is noticeable even over relatively short traveled distances, and is mainly caused by the dispersion phenomenon among other reasons, therefore using conventional localization/multilateration methods will produce localization error values that are highly variable and occasionally large. The proposed localization approach is based on the fact that the wave's energy, calculated over some time window, decays exponentially as the wave travels away from the source. Although localization methods that assume exponential decay exist in the literature (in the field of wireless communications), these methods have only been considered for wave propagation in non-dispersive media, in addition to the limiting assumption required by these methods that the source must not coincide with a sensor location. As a result, these methods cannot be applied to the indoor localization problem in their current form. We show how our proposed method is different from the other methods, and that it overcomes the source-sensor location coincidence limitation. Theoretical analysis and experimental data will be used to motivate and justify the pursuit of the proposed approach for localization in a dispersive medium. Additionally, hammer impacts on an instrumented floor section inside an operational building, as well as finite element model simulations, are used to evaluate the performance of the algorithm. It is shown that the algorithm produces promising results providing a foundation for further future development and optimization.
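A generic version of the exponential-decay idea is sketched below: windowed energies are modeled as E_i = A·exp(-k·d_i) and the impact position is found by nonlinear least squares on the log-energies; the known decay constant and the simple two-dimensional geometry are assumptions, and the paper's handling of the source-sensor coincidence case is not reproduced here.

```python
import numpy as np
from scipy.optimize import least_squares

def locate_from_energy(sensor_xy, window_energy, decay_rate, x0):
    """Locate an impact from windowed signal energies, assuming the energy
    decays exponentially with traveled distance: E_i = A * exp(-k * d_i).

    sensor_xy     : (n, 2) accelerometer positions [m]
    window_energy : (n,) energy of each sensor signal over the event window
    decay_rate    : assumed spatial decay constant k [1/m]
    x0            : initial guess [x, y, log_amplitude]
    """
    log_e = np.log(window_energy)

    def residuals(p):
        d = np.linalg.norm(sensor_xy - p[:2], axis=1)
        return (p[2] - decay_rate * d) - log_e   # model log-energy minus data

    sol = least_squares(residuals, x0)
    return sol.x[:2]                             # estimated impact position
```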
Localization of synchronous cortical neural sources.
Zerouali, Younes; Herry, Christophe L; Jemel, Boutheina; Lina, Jean-Marc
2013-03-01
Neural synchronization is a key mechanism underlying a wide variety of brain functions, such as cognition, perception, and memory. The high temporal resolution achieved by EEG recordings allows the study of the dynamical properties of synchronous patterns of activity at a very fine temporal scale, but with very low spatial resolution. Spatial resolution can be improved by retrieving the neural sources of the EEG signal, thus solving the so-called inverse problem. Although many methods have been proposed to solve the inverse problem and localize brain activity, few of them target synchronous brain regions. In this paper, we propose a novel algorithm aimed at localizing specifically the synchronous brain regions and reconstructing the time course of their activity. Using multivariate wavelet ridge analysis, we extract signals capturing the synchronous events buried in the EEG and then solve the inverse problem on these signals. Using simulated data, we compare the source reconstruction accuracy achieved by our method to that of a standard source reconstruction approach. We show that the proposed method performs better across a wide range of noise levels and source configurations. In addition, we applied our method to a real dataset and successfully identified cortical areas involved in the functional network underlying visual face perception. We conclude that the proposed approach allows an accurate localization of synchronous brain regions and a robust estimation of their activity.
A Robust Sound Source Localization Approach for Microphone Array with Model Errors
NASA Astrophysics Data System (ADS)
Xiao, Hua; Shao, Huai-Zong; Peng, Qi-Cong
In this paper, a robust sound source localization approach is proposed. The approach retains good performance even when model errors exist. Compared with previous work in this field, the contributions of this paper are as follows. First, an improved broad-band and near-field array model is proposed. It takes array gain and phase perturbations into account and is based on the actual positions of the elements. It can be used with arbitrary planar geometry arrays. Second, a subspace model-error estimation algorithm and a Weighted 2-Dimension Multiple Signal Classification (W2D-MUSIC) algorithm are proposed. The subspace model-error estimation algorithm estimates the unknown parameters of the array model, i.e., gain, phase perturbations, and positions of the elements, with high accuracy. The performance of this algorithm improves with increasing SNR or number of snapshots. The W2D-MUSIC algorithm, based on the improved array model, is implemented to locate sound sources. These two algorithms compose the robust sound source localization approach. More accurate steering vectors can thus be provided for further processing such as adaptive beamforming algorithms. Numerical examples confirm the effectiveness of the proposed approach.
Water Vapor Tracers as Diagnostics of the Regional Hydrologic Cycle
NASA Technical Reports Server (NTRS)
Bosilovich, Michael G.; Schubert, Siegfried; Einaudi, Franco (Technical Monitor)
2001-01-01
Numerous studies suggest that local feedback of evaporation on precipitation, or recycling, is a significant source of water for precipitation. Quantitative results on the exact amount of recycling have been difficult to obtain in view of the inherent limitations of diagnostic recycling calculations. The current study describes a calculation of the amount of local and remote sources of water for precipitation, based on the implementation of passive constituent tracers of water vapor (termed water vapor tracers, WVT) in a general circulation model. In this case, the major limitation on the accuracy of the recycling estimates is the veracity of the numerically simulated hydrological cycle, though we note that this approach can also be implemented within the context of a data assimilation system. In this approach, each WVT is associated with an evaporative source region, and tracks the water until it precipitates from the atmosphere. By assuming that the regional water is well mixed with water from other sources, the physical processes that act on the WVT are determined in proportion to those that act on the model's prognostic water vapor. In this way, the local and remote sources of water for precipitation can be computed within the model simulation, and can be validated against the model's prognostic water vapor. Furthermore, estimates of precipitation recycling can be compared with bulk diagnostic approaches. As a demonstration of the method, the regional hydrologic cycles for North America and India are evaluated for six summers (June, July and August) of model simulation. More than 50% of the precipitation in the Midwestern United States came from continental regional tracers, and the local source was the largest of the regional tracers (14%). The Gulf of Mexico and Atlantic regions contributed 18% of the water for Midwestern precipitation, but further analysis suggests that the greater region of the Tropical Atlantic Ocean may also contribute significantly. In general, most North American land regions showed a positive correlation between evaporation and recycling ratio (except the Southeast United States) and negative correlations of recycling ratio with precipitation and moisture transport (except the Southwestern United States). The Midwestern local source is positively correlated with local evaporation, but it is not correlated with water vapor transport. This is contrary to bulk diagnostic estimates of precipitation recycling. In India, the local source of precipitation is a small percentage of the precipitation owing to the dominance of the atmospheric transport of oceanic water. The southern Indian Ocean provides a key source of water for both the Indian continent and the Sahelian region.
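The proportional bookkeeping of the WVT method can be illustrated with a single-column update in which each regional tracer gains only the evaporation from its own source region and loses precipitation in proportion to its share of the total water vapor; transport terms, which the full model applies with the same schemes as the prognostic vapor, are omitted, and the array shapes are assumptions of this sketch.

```python
import numpy as np

def update_tracers(q_tracers, q_total, evap, precip, region_mask, dt):
    """One time-step update of regional water-vapor tracers (column view).

    q_tracers   : (n_regions, n_cells) tracer water vapor [kg/kg]
    q_total     : (n_cells,) model prognostic water vapor [kg/kg]
    evap        : (n_cells,) evaporation converted to a mixing-ratio
                  tendency [kg/kg/s]
    precip      : (n_cells,) condensation/precipitation sink [kg/kg/s]
    region_mask : (n_regions, n_cells) 1 where a cell belongs to a tracer's
                  evaporative source region, else 0
    dt          : time step [s]
    """
    frac = q_tracers / np.maximum(q_total, 1e-12)   # well-mixed assumption
    q_new = q_tracers + dt * (region_mask * evap    # each region keeps its own evaporation
                              - frac * precip)      # sinks act in proportion
    return np.clip(q_new, 0.0, None)
```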
NASA Technical Reports Server (NTRS)
Bosilovich, Michael G.; Atlas, Robert (Technical Monitor)
2002-01-01
Precipitation recycling is defined as the amount of water that evaporates from a region that precipitates within the same region. This is also interpreted as the local source of water for precipitation. In this study, the local and remote sources of water for precipitation have been diagnosed through the use of passive constituent tracers that represent regional evaporative sources along with their transport and precipitation. We will discuss the differences between this method and the simpler bulk diagnostic approach to precipitation recycling. A summer seasonal simulation has been analyzed for the regional sources of the United States Great Plains precipitation. While the tropical Atlantic Ocean (including the Gulf of Mexico) and the local continental sources of precipitation are most dominant, the vertically integrated column of water contains substantial water content originating from the Northern Pacific Ocean, which is not precipitated. The vertical profiles of regional water sources indicate that local Great Plains source of water dominates the lower troposphere, predominantly in the PBL. However, the Pacific Ocean source is dominant over a large portion of the middle to upper troposphere. The influence of the tropical Atlantic Ocean is reasonably uniform throughout the column. While the results are not unexpected given the formulation of the model's convective parameterization, the analysis provides a quantitative assessment of the impact of local evaporation on the occurrence of convective precipitation in the GCM. Further, these results suggest that local source of water is not well mixed throughout the vertical column.
Localization under Adversary Misdirection
2014-10-01
Water Vapor Tracers as Diagnostics of the Regional Hydrologic Cycle
NASA Technical Reports Server (NTRS)
Bosilovich, Michael G.; Schubert, Siegfried D.; Einaudi, Franco (Technical Monitor)
2001-01-01
Numerous studies suggest that local feedback of surface evaporation on precipitation, or recycling, is a significant source of water for precipitation. Quantitative results on the exact amount of recycling have been difficult to obtain in view of the inherent limitations of diagnostic recycling calculations. The current study describes a calculation of the amount of local and remote geographic sources of surface evaporation for precipitation, based on the implementation of three-dimensional constituent tracers of regional water vapor sources (termed water vapor tracers, WVT) in a general circulation model. The major limitation on the accuracy of the recycling estimates is the veracity of the numerically simulated hydrological cycle, though we note that this approach can also be implemented within the context of a data assimilation system. In the WVT approach, each tracer is associated with an evaporative source region for a prognostic three-dimensional variable that represents a partial amount of the total atmospheric water vapor. The physical processes that act on a WVT are determined in proportion to those that act on the model's prognostic water vapor. In this way, the local and remote sources of water for precipitation can be predicted within the model simulation, and can be validated against the model's prognostic water vapor. As a demonstration of the method, the regional hydrologic cycles for North America and India are evaluated for six summers (June, July and August) of model simulation. More than 50% of the precipitation in the Midwestern United States came from continental regional sources, and the local source was the largest of the regional tracers (14%). The Gulf of Mexico and Atlantic regions contributed 18% of the water for Midwestern precipitation, but further analysis suggests that the greater region of the Tropical Atlantic Ocean may also contribute significantly. In most North American continental regions, the local source of precipitation is correlated with total precipitation. There is a general positive correlation between local evaporation and local precipitation, but it can be weaker because large evaporation can occur when precipitation is inhibited. In India, the local source of precipitation is a small percentage of the precipitation owing to the dominance of the atmospheric transport of oceanic water. The southern Indian Ocean provides a key source of water for both the Indian continent and the Sahelian region.
Localization from near-source quasi-static electromagnetic fields
NASA Astrophysics Data System (ADS)
Mosher, J. C.
1993-09-01
A wide range of research has been published on the problem of estimating the parameters of electromagnetic and acoustical sources from signals measured at an array of sensors. In the quasi-static electromagnetic cases examined here, the signal variation from a point source is relatively slow with respect to the signal propagation and the spacing of the array of sensors. As such, the location of the point sources can only be determined from the spatial diversity of the received signal across the array. The inverse source localization problem is complicated by unknown model order and strong local minima. The nonlinear optimization problem is posed for solving for the parameters of the quasi-static source model. The transient nature of the sources can be exploited to allow subspace approaches to separate out the signal portion of the spatial correlation matrix. Decomposition techniques are examined for improved processing, and an adaptation of MUltiple SIgnal Classification (MUSIC) is presented for solving the source localization problem. Recent results on calculating the Cramer-Rao error lower bounds are extended to the multidimensional problem here. This thesis focuses on the problem of source localization in magnetoencephalography (MEG), with a secondary application to thunderstorm source localization. Comparisons are also made between MEG and its electrical equivalent, electroencephalography (EEG). The error lower bounds are examined in detail for several MEG and EEG configurations, as well as for localizing thunderstorm cells over Cape Canaveral and Kennedy Space Center. Time-eigenspectrum is introduced as a parsing technique for improving the performance of the optimization problem.
Localization from near-source quasi-static electromagnetic fields
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mosher, John Compton
1993-09-01
A wide range of research has been published on the problem of estimating the parameters of electromagnetic and acoustical sources from signals measured at an array of sensors. In the quasi-static electromagnetic cases examined here, the signal variation from a point source is relatively slow with respect to the signal propagation and the spacing of the array of sensors. As such, the location of the point sources can only be determined from the spatial diversity of the received signal across the array. The inverse source localization problem is complicated by unknown model order and strong local minima. The nonlinear optimization problem is posed for solving for the parameters of the quasi-static source model. The transient nature of the sources can be exploited to allow subspace approaches to separate out the signal portion of the spatial correlation matrix. Decomposition techniques are examined for improved processing, and an adaptation of MUltiple SIgnal Classification (MUSIC) is presented for solving the source localization problem. Recent results on calculating the Cramer-Rao error lower bounds are extended to the multidimensional problem here. This thesis focuses on the problem of source localization in magnetoencephalography (MEG), with a secondary application to thunderstorm source localization. Comparisons are also made between MEG and its electrical equivalent, electroencephalography (EEG). The error lower bounds are examined in detail for several MEG and EEG configurations, as well as for localizing thunderstorm cells over Cape Canaveral and Kennedy Space Center. Time-eigenspectrum is introduced as a parsing technique for improving the performance of the optimization problem.
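A minimal numerical sketch of the subspace idea behind MUSIC-style localization is given below: the signal subspace is taken from an SVD of simulated spatio-temporal data, and a normalized projection metric is scanned over a grid. The toy gain model is a stand-in, not a quasi-static MEG/EEG or thunderstorm forward model.
```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "forward model": gain vector of an M-sensor array for a source at position r.
# A real application would use a physical lead field instead.
M = 16
sensor_pos = rng.uniform(-1, 1, size=(M, 2))

def gain(r):
    d = np.linalg.norm(sensor_pos - r, axis=1)
    return 1.0 / (d + 0.1)

# Simulate spatio-temporal data from two fixed sources with varying strengths.
true_pos = [np.array([0.3, -0.2]), np.array([-0.4, 0.5])]
T = 200
A = np.column_stack([gain(r) for r in true_pos])          # M x 2 mixing matrix
S = rng.standard_normal((2, T))                           # source time courses
X = A @ S + 0.05 * rng.standard_normal((M, T))            # noisy measurements

# Signal subspace from the SVD of the data (rank = assumed number of sources).
U, sv, _ = np.linalg.svd(X, full_matrices=False)
Us = U[:, :2]

def music_metric(r):
    a = gain(r)
    a = a / np.linalg.norm(a)
    return np.linalg.norm(Us.T @ a)        # approaches 1 when a lies in the signal subspace

xs = np.linspace(-1, 1, 81)
grid = [(music_metric(np.array([x, y])), x, y) for x in xs for y in xs]
print("top grid point (metric, x, y):", max(grid))
```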
Inverse Source Data-Processing Strategies for Radio-Frequency Localization in Indoor Environments.
Gennarelli, Gianluca; Al Khatib, Obada; Soldovieri, Francesco
2017-10-27
Indoor positioning of mobile devices plays a key role in many aspects of our daily life. These include real-time people tracking and monitoring, activity recognition, emergency detection, navigation, and numerous location based services. Although many wireless technologies and data-processing algorithms have been developed in recent years, indoor positioning is still a problem under intensive research. This paper deals with active radio-frequency (RF) source localization in indoor scenarios. The localization task is carried out at the physical layer by means of receiving sensor arrays deployed on the border of the surveillance region to record the signal emitted by the source. The localization problem is formulated as an imaging one by taking advantage of the inverse source approach. Different measurement configurations and data-processing/fusion strategies are examined to investigate their effectiveness in terms of localization accuracy under both line-of-sight (LOS) and non-line-of-sight (NLOS) conditions. Numerical results based on full-wave synthetic data are reported to support the analysis.
Inverse Source Data-Processing Strategies for Radio-Frequency Localization in Indoor Environments
Gennarelli, Gianluca; Al Khatib, Obada; Soldovieri, Francesco
2017-01-01
Indoor positioning of mobile devices plays a key role in many aspects of our daily life. These include real-time people tracking and monitoring, activity recognition, emergency detection, navigation, and numerous location based services. Although many wireless technologies and data-processing algorithms have been developed in recent years, indoor positioning is still a problem under intensive research. This paper deals with active radio-frequency (RF) source localization in indoor scenarios. The localization task is carried out at the physical layer by means of receiving sensor arrays deployed on the border of the surveillance region to record the signal emitted by the source. The localization problem is formulated as an imaging one by taking advantage of the inverse source approach. Different measurement configurations and data-processing/fusion strategies are examined to investigate their effectiveness in terms of localization accuracy under both line-of-sight (LOS) and non-line-of-sight (NLOS) conditions. Numerical results based on full-wave synthetic data are reported to support the analysis. PMID:29077071
Henry, Ronald C; Vette, Alan; Norris, Gary; Vedantham, Ram; Kimbrough, Sue; Shores, Richard C
2011-12-15
Nonparametric Trajectory Analysis (NTA), a receptor-oriented model, was used to assess the impact of local sources of air pollution at monitoring sites located adjacent to highway I-15 in Las Vegas, NV. Measurements of black carbon, carbon monoxide, nitrogen oxides, and sulfur dioxide concentrations were collected from December 2008 to December 2009. The purpose of the study was to determine the impact of the highway at three downwind monitoring stations using an upwind station to measure background concentrations. NTA was used to precisely determine the contribution of the highway to the average concentrations measured at the monitoring stations while accounting for the spatially heterogeneous contributions of other local urban sources. NTA uses short-time-average concentrations, 5 min in this case, and local back-trajectories constructed from similarly short-time-average wind speed and direction to locate and quantify contributions from local source regions. Averaged over an entire year, the decrease of concentrations with distance from the highway was found to be consistent with previous studies. For this study, the NTA model is shown to be a reliable approach to quantify the impact of the highway on local air quality in an urban area with other local sources.
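A simplified sketch of constructing short-time local back-trajectories from 5-min wind records is shown below; it uses straight upwind segments and is only meant to illustrate the idea, not the actual NTA implementation.
```python
import numpy as np

def back_trajectory(wind_speed, wind_dir_deg, dt_s=300.0):
    """Simplified local back-trajectory from 5-min average winds.

    wind_speed   : array of speeds (m/s), most recent record last
    wind_dir_deg : meteorological wind direction (degrees FROM which the wind blows)
    Returns receptor-relative (x, y) positions in metres, stepping back in time.
    """
    # Unit vector pointing toward where the air came from (upwind direction).
    theta = np.deg2rad(wind_dir_deg)
    upwind = np.column_stack([np.sin(theta), np.cos(theta)])
    steps = (wind_speed * dt_s)[:, None] * upwind
    # Walk backwards from the receptor (0, 0), most recent interval first.
    return np.vstack([[0.0, 0.0], np.cumsum(steps[::-1], axis=0)])

speeds = np.array([2.0, 2.5, 3.0, 1.5])        # m/s over four 5-min intervals
dirs = np.array([225.0, 230.0, 240.0, 250.0])  # south-westerly flow
print(back_trajectory(speeds, dirs))
```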
Global Dynamic Exposure and the OpenBuildingMap
NASA Astrophysics Data System (ADS)
Schorlemmer, D.; Beutin, T.; Hirata, N.; Hao, K. X.; Wyss, M.; Cotton, F.; Prehn, K.
2015-12-01
Detailed understanding of local risk factors regarding natural catastrophes requires in-depth characterization of the local exposure. Current exposure capture techniques have to find the balance between resolution and coverage. We aim at bridging this gap by employing a crowd-sourced approach to exposure capturing focusing on risk related to earthquake hazard. OpenStreetMap (OSM), the rich and constantly growing geographical database, is an ideal foundation for us. More than 2.5 billion geographical nodes, more than 150 million building footprints (growing by ~100'000 per day), and a plethora of information about school, hospital, and other critical facility locations allow us to exploit this dataset for risk-related computations. We will harvest this dataset by collecting exposure and vulnerability indicators from explicitly provided data (e.g. hospital locations), implicitly provided data (e.g. building shapes and positions), and semantically derived data, i.e. interpretation applying expert knowledge. With this approach, we can increase the resolution of existing exposure models from fragility classes distribution via block-by-block specifications to building-by-building vulnerability. To increase coverage, we will provide a framework for collecting building data by any person or community. We will implement a double crowd-sourced approach to bring together the interest and enthusiasm of communities with the knowledge of earthquake and engineering experts. The first crowd-sourced approach aims at collecting building properties in a community by local people and activists. This will be supported by tailored building capture tools for mobile devices for simple and fast building property capturing. The second crowd-sourced approach involves local experts in estimating building vulnerability that will provide building classification rules that translate building properties into vulnerability and exposure indicators as defined in the Building Taxonomy 2.0 developed by the Global Earthquake Model (GEM). These indicators will then be combined with a hazard model using the GEM OpenQuake engine to compute a risk model. The free/open framework we will provide can be used on commodity hardware for local to regional exposure capturing and for communities to understand their earthquake risk.
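A toy sketch of rule-based classification translating crowd-sourced building tags into coarse vulnerability labels is shown below; the tag keys follow OSM conventions, but the class labels are placeholders and not actual GEM Building Taxonomy 2.0 strings.
```python
# Hypothetical classification rules translating crowd-sourced building properties
# (OSM-style tags) into coarse vulnerability indicators. The class labels below are
# placeholders, not actual GEM Building Taxonomy 2.0 strings.
RULES = [
    (lambda t: t.get("building:material") == "wood", "low-rise timber"),
    (lambda t: t.get("building:material") == "concrete" and int(t.get("building:levels", 1)) >= 8,
     "high-rise reinforced concrete"),
    (lambda t: t.get("building:material") == "concrete", "mid/low-rise reinforced concrete"),
    (lambda t: t.get("building:material") == "brick", "unreinforced masonry"),
]

def classify(tags):
    for rule, label in RULES:
        if rule(tags):
            return label
    return "unknown (needs expert review)"

print(classify({"building:material": "brick", "building:levels": "3"}))
print(classify({"building:material": "concrete", "building:levels": "12"}))
```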
How Do Teachers Coordinate Their Work? A Framing Approach
ERIC Educational Resources Information Center
Dumay, Xavier
2014-01-01
Since the 1970s, schools have been characterized as loosely coupled systems, meaning that the teachers' work is weakly coordinated at the local level. Nonetheless, few studies have focused on the local variations of coordination modes, their sources and their nature. In this article, the process of local coordination of the teachers' work is…
Identity Formation of American Indian Adolescents: Local, National, and Global Considerations
ERIC Educational Resources Information Center
Markstrom, Carol A.
2011-01-01
A conceptual model is presented that approaches identity formation of American Indian adolescents according to 3 levels of social contextual influence--local, national, and global--relative to types of identity, dynamics of identity, and sources of influence. Ethnic identity of American Indians is embedded within the local cultural milieu and…
NASA Astrophysics Data System (ADS)
Squizzato, Stefania; Cazzaro, Marta; Innocente, Elena; Visin, Flavia; Hopke, Philip K.; Rampazzo, Giancarlo
2017-04-01
Urban air quality represents a major public health burden and is a long-standing concern to European citizens. Combustion processes and traffic-related emissions represent the main primary particulate matter (PM) sources in urban areas. Other sources can also affect air quality (e.g., secondary aerosol, industrial) depending on the characteristics of the study area. Thus, the identification and the apportionment of all sources are of crucial importance to make effective corrective decisions within environmental policies. The aim of this study is to evaluate the impacts of different emission sources on PM2.5 concentrations and compositions in a mid-size city in the Po Valley (Treviso, Italy). Data have been analyzed to highlight compositional differences (elements and major inorganic ions), to determine PM2.5 sources and their contributions, and to evaluate the influence of air mass movements. Non-parametric tests, positive matrix factorization (PMF), conditional bivariate probability function (CBPF), and concentration weighted trajectory (CWT) have been used in a multi-chemometrics approach to understand the areal scale (proximate, local, long-range) at which different sources act on PM2.5 levels and composition. Results identified three levels of scale from which the pollution arose: (i) a proximate local scale (close to the sampling site) for traffic non-exhaust and resuspended dust sources; (ii) a local urban scale (including both the sampling site and areas close to it) for combustion and industrial sources; and (iii) a regional scale characterized by ammonium nitrate and ammonium sulfate. This approach and these results can help to develop and adopt better air quality policy actions.
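The conditional bivariate probability function (CBPF) step can be sketched compactly: for each wind-direction/wind-speed bin, compute the fraction of samples whose concentration exceeds a high percentile. The example below uses synthetic data with an artificial plume sector, not the Treviso measurements.
```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic hourly records: wind direction (deg), wind speed (m/s), PM2.5 (ug/m^3).
n = 5000
wd = rng.uniform(0, 360, n)
ws = rng.gamma(2.0, 1.5, n)
pm = rng.lognormal(2.5, 0.5, n) + 20 * ((wd > 80) & (wd < 120) & (ws > 3))  # fake plume

def cbpf(wd, ws, conc, threshold, dir_step=30, ws_step=2, ws_max=8):
    """Conditional bivariate probability function on a (direction, speed) grid:
    fraction of samples in each bin whose concentration exceeds `threshold`."""
    dir_edges = np.arange(0, 360 + dir_step, dir_step)
    ws_edges = np.arange(0, ws_max + ws_step, ws_step)
    table = np.full((len(ws_edges) - 1, len(dir_edges) - 1), np.nan)
    for i in range(len(ws_edges) - 1):
        for j in range(len(dir_edges) - 1):
            in_bin = ((ws >= ws_edges[i]) & (ws < ws_edges[i + 1]) &
                      (wd >= dir_edges[j]) & (wd < dir_edges[j + 1]))
            if in_bin.sum() > 0:
                table[i, j] = np.mean(conc[in_bin] >= threshold)
    return dir_edges, ws_edges, table

_, _, table = cbpf(wd, ws, pm, threshold=np.percentile(pm, 75))
print(np.round(table, 2))   # high probabilities point toward the synthetic source sector
```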
Ambient Sound-Based Collaborative Localization of Indeterministic Devices
Kamminga, Jacob; Le, Duc; Havinga, Paul
2016-01-01
Localization is essential in wireless sensor networks. To our knowledge, no prior work has utilized low-cost devices for collaborative localization based on only ambient sound, without the support of local infrastructure. The reason may be the fact that most low-cost devices are indeterministic and suffer from uncertain input latencies. This uncertainty makes accurate localization challenging. Therefore, we present a collaborative localization algorithm (Cooperative Localization on Android with ambient Sound Sources (CLASS)) that simultaneously localizes the position of indeterministic devices and ambient sound sources without local infrastructure. The CLASS algorithm deals with the uncertainty by splitting the devices into subsets so that outliers can be removed from the time difference of arrival values and localization results. Since Android is indeterministic, we select Android devices to evaluate our approach. The algorithm is evaluated with an outdoor experiment and achieves a mean Root Mean Square Error (RMSE) of 2.18 m with a standard deviation of 0.22 m. Estimated directions towards the sound sources have a mean RMSE of 17.5° and a standard deviation of 2.3°. These results show that it is feasible to simultaneously achieve a relative positioning of both devices and sound sources with sufficient accuracy, even when using non-deterministic devices and platforms, such as Android. PMID:27649176
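A stripped-down version of time-difference-of-arrival (TDOA) localization is sketched below as a nonlinear least-squares fit; it omits the CLASS algorithm's device subsets and outlier removal, and assumes known microphone positions and sound speed.
```python
import numpy as np
from scipy.optimize import least_squares

c = 343.0                                   # speed of sound (m/s)
mics = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
src_true = np.array([6.0, 3.0])

# Time differences of arrival relative to microphone 0 (plus a little noise).
d = np.linalg.norm(mics - src_true, axis=1)
tdoa = (d[1:] - d[0]) / c + 1e-5 * np.random.default_rng(2).standard_normal(3)

def residual(p):
    r = np.linalg.norm(mics - p, axis=1)
    return (r[1:] - r[0]) / c - tdoa

est = least_squares(residual, x0=np.array([5.0, 5.0])).x
print("estimated source position:", est)
```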
An Integrated Approach to Indoor and Outdoor Localization
2017-04-17
localization estimate, followed by particle filter based tracking. Initial localization is performed using WiFi and image observations. For tracking we... source. A two-step process is proposed that performs an initial localization estimate, followed by particle filter based tracking. Initial... mapped, it is possible to use them for localization [20, 21, 22]. Haverinen et al. show that these fields could be used with a particle filter to
Blast and Fragment Protective Sandwich Panel Concepts for Stainless Steel Monohull Designs
2008-10-21
to draw broader conclusions. Concluding remarks: The resistance of metallic sandwich panels to localized spherical impulsive sources has been... applications and ship hull blister attachments. Technical Approach: The approach used in this research program exploited progress made in metallic
Forward the Foundation: Local Education Foundations Offer an Alternative Source for School Funding
ERIC Educational Resources Information Center
Brooks-Young, Susan
2007-01-01
February's column "Going Corporate" discussed ideas for approaching private foundations for funding. Some districts take this idea several steps further by partnering with the community and local businesses to establish a not-for-profit foundation, or local education foundation (LEF). It probably comes as no surprise that the idea of forming a LEF…
Atmospheric inverse modeling via sparse reconstruction
NASA Astrophysics Data System (ADS)
Hase, Nils; Miller, Scot M.; Maaß, Peter; Notholt, Justus; Palm, Mathias; Warneke, Thorsten
2017-10-01
Many applications in atmospheric science involve ill-posed inverse problems. A crucial component of many inverse problems is the proper formulation of a priori knowledge about the unknown parameters. In most cases, this knowledge is expressed as a Gaussian prior. This formulation often performs well at capturing smoothed, large-scale processes but is ill equipped to capture localized structures like large point sources or localized hot spots. Over the last decade, scientists from a diverse array of applied mathematics and engineering fields have developed sparse reconstruction techniques to identify localized structures. In this study, we present a new regularization approach for ill-posed inverse problems in atmospheric science. It is based on Tikhonov regularization with a sparsity constraint and allows bounds on the parameters. We enforce sparsity using a dictionary representation system. We analyze its performance in an atmospheric inverse modeling scenario by estimating anthropogenic US methane (CH4) emissions from simulated atmospheric measurements. Different measures indicate that our sparse reconstruction approach is better able to capture large point sources or localized hot spots than other methods commonly used in atmospheric inversions. It captures the overall signal equally well but adds detail on the grid scale. This feature can be of value for any inverse problem with point or spatially discrete sources. We show an example for source estimation of synthetic methane emissions from the Barnett shale formation.
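A generic sparse-reconstruction sketch in the spirit of this description is given below: an iterative soft-thresholding (ISTA) solver for an l1-regularized least-squares problem with a lower bound. It is not the paper's dictionary-based Tikhonov formulation; the transport operator and emissions here are synthetic.
```python
import numpy as np

def ista(A, y, lam, n_iter=3000, lower=0.0):
    """Minimal ISTA sketch for min_x 0.5||Ax - y||^2 + lam*||x||_1, with x >= lower.
    A soft-threshold step enforces sparsity; the clip enforces the bound."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = A.T @ (A @ x - y)
        z = x - g / L
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # soft threshold
        x = np.maximum(x, lower)                                # bound constraint
    return x

rng = np.random.default_rng(3)
A = rng.standard_normal((80, 200))          # stand-in for an atmospheric transport operator
x_true = np.zeros(200)
x_true[[20, 77, 150]] = [3.0, 5.0, 2.0]     # a few localized "point sources"
y = A @ x_true + 0.05 * rng.standard_normal(80)

x_hat = ista(A, y, lam=10.0)
print("largest recovered components:", np.argsort(x_hat)[-3:])
```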
NASA Astrophysics Data System (ADS)
Chen, Xin; Wang, Shuhong; Liu, Zhen; Wei, Xizhang
2017-07-01
Localization of a source whose half-wavelength is smaller than the array aperture suffers from a serious phase ambiguity problem, which also appears in recently proposed phase-based algorithms. In this paper, by using the centro-symmetry of a fixed uniform circular array (UCA) with an even number of sensors, the source's angles and range can be decoupled, and a novel ambiguity-resolving approach is presented for phase-based algorithms of 3-D source localization (azimuth angle, elevation angle, and range). In the proposed method, by using the cosine property of unambiguous phase differences, ambiguity searching and actual-value matching are first employed to obtain the actual phase differences and the corresponding source angles. Then, the unambiguous angles are utilized to estimate the source's range based on a one-dimensional multiple signal classification (1-D MUSIC) estimator. Finally, simulation experiments investigate the influence of the search step size and the SNR on the performance of ambiguity resolution, and demonstrate the satisfactory estimation performance of the proposed method.
Local and regional factors affecting atmospheric mercury speciation at a remote location
Manolopoulos, H.; Schauer, J.J.; Purcell, M.D.; Rudolph, T.M.; Olson, M.L.; Rodger, B.; Krabbenhoft, D.P.
2007-01-01
Atmospheric concentrations of elemental (Hg0), reactive gaseous (RGM), and particulate (PHg) mercury were measured at two remote sites in the midwestern United States. Concurrent measurements of Hg0, PHg, and RGM obtained at Devil's Lake and Mt. Horeb, located approximately 65 km apart, showed that Hg0 and PHg concentrations were affected by regional as well as local sources, while RGM was mainly impacted by local sources. Plumes reaching the Devil's Lake site from a nearby coal-fired power plant significantly impacted SO2 and RGM concentrations at Devil's Lake, but had little impact on Hg0. Our findings suggest that traditional modeling approaches for assessing sources of deposited mercury that utilize source emissions and large-scale grids may not be sufficient to predict mercury deposition at sensitive locations, owing to the importance of small-scale sources and processes. We suggest the use of receptor-based monitoring to better understand mercury source-receptor relationships. © 2007 NRC Canada.
Source-Modeling Auditory Processes of EEG Data Using EEGLAB and Brainstorm.
Stropahl, Maren; Bauer, Anna-Katharina R; Debener, Stefan; Bleichner, Martin G
2018-01-01
Electroencephalography (EEG) source localization approaches are often used to disentangle the spatial patterns mixed up in scalp EEG recordings. However, approaches differ substantially between experiments, may be strongly parameter-dependent, and results are not necessarily meaningful. In this paper we provide a pipeline for EEG source estimation, from raw EEG data pre-processing using EEGLAB functions up to source-level analysis as implemented in Brainstorm. The pipeline is tested using a data set of 10 individuals performing an auditory attention task. The analysis approach estimates sources of 64-channel EEG data without the prerequisite of individual anatomies or individually digitized sensor positions. First, we show advanced EEG pre-processing using EEGLAB, which includes artifact attenuation using independent component analysis (ICA). ICA is a linear decomposition technique that aims to reveal the underlying statistical sources of mixed signals and is further a powerful tool to attenuate stereotypical artifacts (e.g., eye movements or heartbeat). Data submitted to ICA are pre-processed to facilitate good-quality decompositions. Aiming toward an objective approach on component identification, the semi-automatic CORRMAP algorithm is applied for the identification of components representing prominent and stereotypic artifacts. Second, we present a step-wise approach to estimate active sources of auditory cortex event-related processing, on a single subject level. The presented approach assumes that no individual anatomy is available and therefore the default anatomy ICBM152, as implemented in Brainstorm, is used for all individuals. Individual noise modeling in this dataset is based on the pre-stimulus baseline period. For EEG source modeling we use the OpenMEEG algorithm as the underlying forward model based on the symmetric Boundary Element Method (BEM). We then apply the method of dynamical statistical parametric mapping (dSPM) to obtain physiologically plausible EEG source estimates. Finally, we show how to perform group level analysis in the time domain on anatomically defined regions of interest (auditory scout). The proposed pipeline needs to be tailored to the specific datasets and paradigms. However, the straightforward combination of EEGLAB and Brainstorm analysis tools may be of interest to others performing EEG source localization.
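As a rough Python analogue of these steps (the study itself uses EEGLAB and Brainstorm in MATLAB), the sketch below strings together band-pass filtering, ICA-based artifact attenuation, baseline noise-covariance estimation, and a dSPM inverse solution with MNE-Python. The file names, ICA component choices, and the pre-computed template-anatomy forward model are hypothetical assumptions, so this is only a sketch of the workflow, not the authors' pipeline.
```python
# Rough Python analogue of the described steps using MNE-Python (the paper itself
# uses EEGLAB + Brainstorm). File names and parameters here are hypothetical, and a
# forward solution built on a template anatomy is assumed to exist already.
import mne
from mne.preprocessing import ICA
from mne.minimum_norm import make_inverse_operator, apply_inverse

raw = mne.io.read_raw_fif("sub01_auditory_raw.fif", preload=True)
raw.filter(1.0, 40.0)                                   # basic band-pass pre-processing

ica = ICA(n_components=20, random_state=97)             # artifact attenuation via ICA
ica.fit(raw)
ica.exclude = [0, 1]                                    # e.g. blink/heartbeat components
ica.apply(raw)

events = mne.find_events(raw)
epochs = mne.Epochs(raw, events, tmin=-0.2, tmax=0.5, baseline=(None, 0), preload=True)
noise_cov = mne.compute_covariance(epochs, tmax=0.0)    # noise model from the pre-stimulus baseline
evoked = epochs.average()

fwd = mne.read_forward_solution("sub01-template-fwd.fif")   # BEM-based forward model (assumed)
inv = make_inverse_operator(evoked.info, fwd, noise_cov)
stc = apply_inverse(evoked, inv, lambda2=1.0 / 9.0, method="dSPM")  # dSPM source estimate
print(stc)
```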
Interpretation of the MEG-MUSIC scan in biomagnetic source localization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mosher, J.C.; Lewis, P.S.; Leahy, R.M.
1993-09-01
MEG-MUSIC is a new approach to MEG source localization. MEG-MUSIC is based on a spatio-temporal source model in which the observed biomagnetic fields are generated by a small number of current dipole sources with fixed positions/orientations and varying strengths. From the spatial covariance matrix of the observed fields, a signal subspace can be identified. The rank of this subspace is equal to the number of elemental sources present. This signal subspace is used in a projection metric that scans the three-dimensional head volume. Given a perfect signal subspace estimate and a perfect forward model, the metric will peak at unity at each dipole location. In practice, the signal subspace estimate is contaminated by noise, which in turn yields MUSIC peaks which are less than unity. Previously we examined the lower bounds on localization error, independent of the choice of localization procedure. In this paper, we analyze the effects of noise and temporal coherence on the signal subspace estimate and the resulting effects on the MEG-MUSIC peaks.
Fusing the Career Education Concept into the Fiber of the State Educational System.
ERIC Educational Resources Information Center
Bottoms, Gene
The approach of Georgia's career education program is one in which the state leadership serves first as a catalyst in stimulating local educators to re-examine the educational needs of their students, and second, as a source of assistance to local educators as they think through, within the context of their local environments, the changes they…
Localization of diffusion sources in complex networks with sparse observations
NASA Astrophysics Data System (ADS)
Hu, Zhao-Long; Shen, Zhesi; Tang, Chang-Bing; Xie, Bin-Bin; Lu, Jian-Feng
2018-04-01
Locating sources in a large network is of paramount importance to reduce the spreading of disruptive behavior. Based on the backward diffusion-based method and integer programming, we propose an efficient approach to locate sources in complex networks with limited observers. The results on model networks and empirical networks demonstrate that, for a certain fraction of observers, the accuracy of our method for source localization improves as the network size increases. Moreover, compared with the previous method (the maximum-minimum method), the performance of our method is much better with a small fraction of observers, especially in heterogeneous networks. Furthermore, our method is more robust against noisy environments and against different strategies of choosing observers.
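A simplified observer-based sketch of the general idea, locating the node whose hop distances best explain noisy arrival times at a small set of observers, is shown below. It is not the backward-diffusion/integer-programming method of the paper, just a variance-minimization baseline on a synthetic network.
```python
import networkx as nx
import numpy as np

rng = np.random.default_rng(4)
G = nx.barabasi_albert_graph(300, 3, seed=4)

# Simulate a diffusion with unit spreading time per hop from a hidden source.
true_source = 17
dist_from_src = nx.single_source_shortest_path_length(G, true_source)

# A small fraction of nodes act as observers and report (noisy) arrival times.
observers = rng.choice(G.number_of_nodes(), size=15, replace=False)
arrival = {o: dist_from_src[o] + rng.normal(0, 0.3) for o in observers}

def score(candidate):
    """How well a candidate source explains the observed arrival times:
    the variance of (arrival time - hop distance) is small for the true source."""
    d = nx.single_source_shortest_path_length(G, candidate)
    offsets = [arrival[o] - d[o] for o in observers]
    return np.var(offsets)

best = min(G.nodes, key=score)
print("true source:", true_source, " estimated source:", best)
```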
Chowdhury, Rasheda Arman; Lina, Jean Marc; Kobayashi, Eliane; Grova, Christophe
2013-01-01
Localizing the generators of epileptic activity in the brain using Electro-EncephaloGraphy (EEG) or Magneto-EncephaloGraphy (MEG) signals is of particular interest during the pre-surgical investigation of epilepsy. Epileptic discharges can be detectable from background brain activity, provided they are associated with spatially extended generators. Using realistic simulations of epileptic activity, this study evaluates the ability of distributed source localization methods to accurately estimate the location of the generators and their sensitivity to the spatial extent of such generators when using MEG data. Source localization methods based on two types of realistic models have been investigated: (i) brain activity may be modeled using cortical parcels and (ii) brain activity is assumed to be locally smooth within each parcel. A Data Driven Parcellization (DDP) method was used to segment the cortical surface into non-overlapping parcels and diffusion-based spatial priors were used to model local spatial smoothness within parcels. These models were implemented within the Maximum Entropy on the Mean (MEM) and the Hierarchical Bayesian (HB) source localization frameworks. We proposed new methods in this context and compared them with other standard ones using Monte Carlo simulations of realistic MEG data involving sources of several spatial extents and depths. Detection accuracy of each method was quantified using Receiver Operating Characteristic (ROC) analysis and localization error metrics. Our results showed that methods implemented within the MEM framework were sensitive to all spatial extents of the sources, ranging from 3 cm² to 30 cm², whatever the number and size of the parcels defining the model. To reach a similar level of accuracy within the HB framework, a model using parcels larger than the size of the sources should be considered.
Chowdhury, Rasheda Arman; Lina, Jean Marc; Kobayashi, Eliane; Grova, Christophe
2013-01-01
Localizing the generators of epileptic activity in the brain using Electro-EncephaloGraphy (EEG) or Magneto-EncephaloGraphy (MEG) signals is of particular interest during the pre-surgical investigation of epilepsy. Epileptic discharges can be detectable from background brain activity, provided they are associated with spatially extended generators. Using realistic simulations of epileptic activity, this study evaluates the ability of distributed source localization methods to accurately estimate the location of the generators and their sensitivity to the spatial extent of such generators when using MEG data. Source localization methods based on two types of realistic models have been investigated: (i) brain activity may be modeled using cortical parcels and (ii) brain activity is assumed to be locally smooth within each parcel. A Data Driven Parcellization (DDP) method was used to segment the cortical surface into non-overlapping parcels and diffusion-based spatial priors were used to model local spatial smoothness within parcels. These models were implemented within the Maximum Entropy on the Mean (MEM) and the Hierarchical Bayesian (HB) source localization frameworks. We proposed new methods in this context and compared them with other standard ones using Monte Carlo simulations of realistic MEG data involving sources of several spatial extents and depths. Detection accuracy of each method was quantified using Receiver Operating Characteristic (ROC) analysis and localization error metrics. Our results showed that methods implemented within the MEM framework were sensitive to all spatial extents of the sources, ranging from 3 cm² to 30 cm², whatever the number and size of the parcels defining the model. To reach a similar level of accuracy within the HB framework, a model using parcels larger than the size of the sources should be considered. PMID:23418485
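The ROC-based assessment of detection accuracy can be sketched as below: a simulated extended generator serves as ground truth and the area under the ROC curve is computed for a noisy reconstruction map. The data and the simple Mann-Whitney AUC are illustrative only, not the paper's full evaluation.
```python
import numpy as np

rng = np.random.default_rng(5)

# Ground truth: 1 for cortical vertices inside the simulated patch, 0 elsewhere.
n_vertices = 5000
truth = np.zeros(n_vertices, dtype=int)
truth[1000:1150] = 1                       # a spatially extended generator

# Estimated source map: higher amplitudes inside the patch plus background noise.
estimate = rng.normal(0, 1, n_vertices) + 3.0 * truth

def roc_auc(truth, score):
    """Area under the ROC curve via the Mann-Whitney formulation:
    probability that a random active vertex outscores a random inactive one."""
    pos = score[truth == 1]
    neg = score[truth == 0]
    greater = (pos[:, None] > neg[None, :]).mean()
    ties = (pos[:, None] == neg[None, :]).mean()
    return greater + 0.5 * ties

print("AUC of the simulated reconstruction:", round(roc_auc(truth, np.abs(estimate)), 3))
```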
Writing in a History of Mathematics Capstone Course
ERIC Educational Resources Information Center
Carter, John
2014-01-01
This article presents two approaches to using original sources in a capstone writing project for a History of Mathematics course. One approach involves searching local libraries and is best suited to schools in metropolitan areas. A second approach involves online resources available anywhere. Both projects were used in a course intended for…
Chowdhury, Rasheda Arman; Zerouali, Younes; Hedrich, Tanguy; Heers, Marcel; Kobayashi, Eliane; Lina, Jean-Marc; Grova, Christophe
2015-11-01
The purpose of this study is to develop and quantitatively assess whether fusion of EEG and MEG (MEEG) data within the maximum entropy on the mean (MEM) framework increases the spatial accuracy of source localization, by yielding better recovery of the spatial extent and propagation pathway of the underlying generators of inter-ictal epileptic discharges (IEDs). The key element in this study is the integration of the complementary information from EEG and MEG data within the MEM framework. MEEG was compared with EEG and MEG when localizing single transient IEDs. The fusion approach was evaluated using realistic simulation models involving one or two spatially extended sources mimicking propagation patterns of IEDs. We also assessed the impact of the number of EEG electrodes required for an efficient EEG-MEG fusion. MEM was compared with minimum norm estimate, dynamic statistical parametric mapping, and standardized low-resolution electromagnetic tomography. The fusion approach was finally assessed on real epileptic data recorded from two patients showing IEDs simultaneously in EEG and MEG. Overall the localization of MEEG data using MEM provided better recovery of the source spatial extent, more sensitivity to the source depth and more accurate detection of the onset and propagation of IEDs than EEG or MEG alone. MEM was more accurate than the other methods. MEEG proved more robust than EEG and MEG for single IED localization in low signal-to-noise ratio conditions. We also showed that only few EEG electrodes are required to bring additional relevant information to MEG during MEM fusion.
Muhammadi; Afzal, Muhammad
2014-01-01
Optimum culture conditions and carbon and nitrogen sources for production of a water-absorbing exopolysaccharide (EPS) by Bacillus strain CMG1403 on cheap local substrates were determined using a one-variable-at-a-time approach. The carbon source was found to be the sole substrate for EPS biosynthesis in the presence of yeast extract, which supported growth only and hence indirectly enhanced the EPS yield. Urea, only when coupled with a carbon source, could enhance EPS production but had no effect on growth. The maximum yield of EPS was obtained when Bacillus strain CMG1403 was grown statically in neutral minimal medium with 25% volumetric aeration at 30°C for 10 days. Under these optimum conditions, maximum yields of 2.71±0.024, 3.82±0.005, 4.33±0.021, 4.73±0.021, 4.85±0.024, and 5.52±0.016 g/L culture medium were obtained with 20 g (sugar) of sweet whey, glucose, fructose, sucrose, cane molasses and sugar beet, respectively, as carbon sources, with sugar beet being the most efficient. Thus, the present study showed that under optimum culture conditions, cheap local substrates could be superior and efficient alternatives to synthetic carbon sources, providing a way for the economical production of water-absorbing EPS by the indigenous soil bacterium Bacillus strain CMG1403.
Modelling and approaching pragmatic interoperability of distributed geoscience data
NASA Astrophysics Data System (ADS)
Ma, Xiaogang
2010-05-01
Interoperability of geodata, which is essential for sharing information and discovering insights within a cyberinfrastructure, is receiving increasing attention. A key requirement of interoperability in the context of geodata sharing is that data provided by local sources can be accessed, decoded, understood and appropriately used by external users. Various researchers have discussed that there are four levels of data interoperability issues: system, syntax, schematics and semantics, which respectively relate to the platform, encoding, structure and meaning of geodata. Ontology-driven approaches addressing schematic and semantic interoperability issues of geodata have been studied extensively in the last decade. Ontologies come in different types (e.g. top-level ontologies, domain ontologies and application ontologies) and display forms (e.g. glossaries, thesauri, conceptual schemas and logical theories). Many geodata providers maintain their own local application ontologies in order to drive standardization in local databases. However, semantic heterogeneities often exist between these local ontologies, even though they are derived from equivalent disciplines. In contrast, common ontologies are being studied in different geoscience disciplines (e.g., NAMD, SWEET, etc.) as a standardization procedure to coordinate diverse local ontologies. Semantic mediation, e.g. mapping between local ontologies, or mapping local ontologies to common ontologies, has been studied as an effective way of achieving semantic interoperability between local ontologies and thus reconciling semantic heterogeneities in multi-source geodata. Nevertheless, confusion still exists in the research field of semantic interoperability. One problem is caused by eliminating elements of local pragmatic contexts in semantic mediation. In contrast to the context-independent character of a common domain ontology, local application ontologies are closely related to elements (e.g., people, time, location, intention, procedure, consequence, etc.) of local pragmatic contexts and are thus context-dependent. Eliminating these elements will inevitably lead to information loss in semantic mediation between local ontologies. Correspondingly, the understanding and effect of exchanged data in a new context may differ from that in its original context. Another problem is the dilemma of how to find a balance between flexibility and standardization of local ontologies, because ontologies are not fixed but continuously evolving. It is commonly realized that we cannot use a unified ontology to replace all local ontologies, because they are context-dependent and need flexibility. However, without coordination of standards, freely developed local ontologies and databases will require enormous mediation work between them. Finding a balance between standardization and flexibility for evolving ontologies, in a practical sense, requires negotiations (i.e. conversations, agreements and collaborations) between different local pragmatic contexts. The purpose of this work is to set up a computer-friendly model representing local pragmatic contexts (i.e. geodata sources), and to propose a practical semantic negotiation procedure for approaching pragmatic interoperability between local pragmatic contexts. Information agents, objective facts and subjective dimensions are reviewed as elements of a conceptual model for representing pragmatic contexts.
The author uses them to develop a practical semantic negotiation procedure for approaching pragmatic interoperability of distributed geodata. The proposed conceptual model and semantic negotiation procedure were encoded with Description Logic, and then applied to analyze and manipulate semantic negotiations between different local ontologies within the National Mineral Resources Assessment (NMRA) project of China, which involves multi-source and multi-subject geodata sharing.
Simultaneous EEG and MEG source reconstruction in sparse electromagnetic source imaging.
Ding, Lei; Yuan, Han
2013-04-01
Electroencephalography (EEG) and magnetoencephalography (MEG) have different sensitivities to differently configured brain activations, making them complementary in providing independent information for better detection and inverse reconstruction of brain sources. In the present study, we developed an integrative approach, which integrates a novel sparse electromagnetic source imaging method, i.e., variation-based cortical current density (VB-SCCD), together with the combined use of EEG and MEG data in reconstructing complex brain activity. To perform simultaneous analysis of multimodal data, we proposed to normalize EEG and MEG signals according to their individual noise levels to create unit-free measures. Our Monte Carlo simulations demonstrated that this integrative approach is capable of reconstructing complex cortical brain activations (up to 10 simultaneously activated and randomly located sources). Results from experimental data showed that complex brain activations evoked in a face recognition task were successfully reconstructed using the integrative approach, which were consistent with other research findings and validated by independent data from functional magnetic resonance imaging using the same stimulus protocol. Reconstructed cortical brain activations from both simulations and experimental data provided precise source localizations as well as accurate spatial extents of localized sources. In comparison with studies using EEG or MEG alone, the performance of cortical source reconstructions using combined EEG and MEG was significantly improved. We demonstrated that this new sparse ESI methodology with integrated analysis of EEG and MEG data could accurately probe spatiotemporal processes of complex human brain activations. This is promising for noninvasively studying large-scale brain networks of high clinical and scientific significance. Copyright © 2011 Wiley Periodicals, Inc.
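The noise-normalization idea for combining modalities can be sketched in a few lines: divide each modality's lead field and data by its own noise level, stack them, and invert the fused system. The sketch below uses random stand-in lead fields and a plain regularized minimum-norm solve rather than the VB-SCCD method.
```python
import numpy as np

rng = np.random.default_rng(6)

# Toy lead fields and measurements (stand-ins, not a real head model):
n_src = 500
L_eeg = rng.standard_normal((60, n_src)) * 1e-6      # EEG: volts
L_meg = rng.standard_normal((270, n_src)) * 1e-13    # MEG: teslas
x = np.zeros(n_src)
x[[40, 41, 42]] = 1.0                                # a small active patch

sigma_eeg, sigma_meg = 2e-6, 5e-14                   # per-modality noise levels
y_eeg = L_eeg @ x + sigma_eeg * rng.standard_normal(60)
y_meg = L_meg @ x + sigma_meg * rng.standard_normal(270)

# Unit-free fusion: divide each modality by its own noise level, then stack.
A = np.vstack([L_eeg / sigma_eeg, L_meg / sigma_meg])
b = np.concatenate([y_eeg / sigma_eeg, y_meg / sigma_meg])

# A simple regularized minimum-norm solve on the fused system (the sparse ESI
# method itself is more involved).
lam = 1.0
x_hat = A.T @ np.linalg.solve(A @ A.T + lam * np.eye(A.shape[0]), b)
print("strongest reconstructed sources:", np.argsort(np.abs(x_hat))[-3:])
```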
NASA Astrophysics Data System (ADS)
Ebrahimkhanlou, Arvin; Salamone, Salvatore
2017-09-01
Tracking edge-reflected acoustic emission (AE) waves can allow the localization of their sources. Specifically, in bounded isotropic plate structures, only one sensor may be used to perform these source localizations. The primary goal of this paper is to develop a three-step probabilistic framework to quantify the uncertainties associated with such single-sensor localizations. According to this framework, a probabilistic approach is first used to estimate the direct distances between AE sources and the sensor. Then, an analytical model is used to reconstruct the envelope of edge-reflected AE signals based on the source-to-sensor distance estimations and their first arrivals. Finally, the correlation between the probabilistically reconstructed envelopes and the recorded AE signals is used to estimate confidence contours for the location of AE sources. To validate the proposed framework, Hsu-Nielsen pencil lead break (PLB) tests were performed on the surface as well as the edges of an aluminum plate. The localization results show that the estimated confidence contours surround the actual source locations. In addition, the performance of the framework was tested in a noisy environment simulated by two dummy transducers and an arbitrary wave generator. The results show that in low-noise environments, the shape and size of the confidence contours depend on the sources and their locations. However, in highly noisy environments, the size of the confidence contours increases monotonically with the noise floor. These results suggest that the proposed probabilistic framework can provide more comprehensive information regarding the location of AE sources.
Free-flying experiment to measure the Schawlow-Townes linewidth limit of a 300 THz laser oscillator
NASA Technical Reports Server (NTRS)
Byer, R. L.; Byvik, C. E.
1988-01-01
Recent advances in laser diode-pumped solid state laser sources permit the design and testing of laser sources with linewidths that approach the Schawlow-Townes limit of 1 Hz/mW of output power. Laser diode pumped solid state ring oscillators have been operated with CW output power levels of 25 mW at electrical efficiencies that exceed 6 percent. These oscillators are expected to operate for lifetimes that approach those of the laser diode sources, which are now approaching 20,000 hours. The efficiency and lifetime of these narrow-linewidth laser sources will enable space measurements of gravity waves, remote sensing applications (including local range rate measurements), and laser sources for frequency and time standards. A free-flight experiment, 'SUNLITE', is being designed to measure the linewidth of this all-solid-state laser system.
Stable isotopes to trace food web stressors: Is space the final frontier?
To support community decision-making, we need to evaluate sources of stress and impact at a variety of spatial scales, whether local or watershed-based. Increasingly, we are using stable isotope-based approaches to determine those scales of impact, and using these approaches in v...
Locating hydrothermal acoustic sources at Old Faithful Geyser using Matched Field Processing
NASA Astrophysics Data System (ADS)
Cros, E.; Roux, P.; Vandemeulebrouck, J.; Kedar, S.
2011-10-01
In 1992, a large and dense array of geophones was placed around the geyser vent of Old Faithful, in Yellowstone National Park, to determine the origin of the seismic hydrothermal noise recorded at the surface of the geyser and to understand its dynamics. Old Faithful Geyser (OFG) is a small-scale hydrothermal system where a two-phase flow mixture erupts every 40 to 100 min in a high, continuous vertical jet. Using Matched Field Processing (MFP) techniques on 10-min-long signals, we localize the source of the seismic pulses recorded at the surface of the geyser. Several MFP approaches are compared in this study, the frequency-incoherent and frequency-coherent approaches, as well as linear Bartlett processing and non-linear Minimum Variance Distortionless Response (MVDR) processing. The different MFP techniques used give the same source position, with better focalization in the case of MVDR processing. The retrieved source position corresponds to the geyser conduit at a depth of 12 m, and the localization is in good agreement with in situ measurements made at Old Faithful in past studies.
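A toy narrowband matched field processing sketch is shown below, with Bartlett and MVDR processors evaluated on replica vectors from a simplistic free-space model; the geometry, frequency, and wavespeed are invented and do not represent the geyser medium or the actual propagation model used in the study.
```python
import numpy as np

rng = np.random.default_rng(7)

# Toy replica model for a small surface array above a buried source.
sensors = np.array([[x, y, 0.0] for x in (-20, 0, 20) for y in (-20, 0, 20)], float)
f = 20.0            # assumed frequency (Hz)
c = 1500.0          # assumed wavespeed (m/s)
k = 2 * np.pi * f / c

def replica(r):
    d = np.linalg.norm(sensors - r, axis=1)
    w = np.exp(-1j * k * d) / d                      # simplistic propagation model
    return w / np.linalg.norm(w)

# Simulate narrowband snapshots from a source 12 m below the array centre.
true_r = np.array([0.0, 0.0, -12.0])
a = replica(true_r)
snapshots = np.array([a * rng.standard_normal() +
                      0.05 * (rng.standard_normal(9) + 1j * rng.standard_normal(9))
                      for _ in range(100)]).T
K = snapshots @ snapshots.conj().T / snapshots.shape[1]   # cross-spectral density matrix
Kinv = np.linalg.inv(K + 1e-6 * np.eye(9))

def bartlett(r):
    w = replica(r)
    return np.real(w.conj() @ K @ w)                 # linear (Bartlett) processor

def mvdr(r):
    w = replica(r)
    return 1.0 / np.real(w.conj() @ Kinv @ w)        # minimum-variance processor

depths = np.arange(-30.0, -1.0, 1.0)
best = max(depths, key=lambda z: mvdr(np.array([0.0, 0.0, z])))
print("MVDR peak depth:", best, " Bartlett at that depth:", bartlett(np.array([0.0, 0.0, best])))
```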
Localization of short-range acoustic and seismic wideband sources: Algorithms and experiments
NASA Astrophysics Data System (ADS)
Stafsudd, J. Z.; Asgari, S.; Hudson, R.; Yao, K.; Taciroglu, E.
2008-04-01
We consider the determination of the location (source localization) of a disturbance source which emits acoustic and/or seismic signals. We devise an enhanced approximate maximum-likelihood (AML) algorithm to process data collected at acoustic sensors (microphones) belonging to an array of non-collocated but otherwise identical sensors. The approximate maximum-likelihood algorithm exploits the time-delay-of-arrival of acoustic signals at different sensors, and yields the source location. For processing the seismic signals, we investigate two distinct algorithms, both of which process data collected at a single measurement station comprising a triaxial accelerometer, to determine the direction-of-arrival. The directions-of-arrival determined at each sensor station are then combined using a weighted least-squares approach for source localization. The first of the direction-of-arrival estimation algorithms is based on the spectral decomposition of the covariance matrix, while the second is based on surface wave analysis. Both of the seismic source localization algorithms have their roots in seismology; covariance matrix analysis had been successfully employed in applications where the source and the sensors (array) are typically separated by planetary distances (i.e., hundreds to thousands of kilometers). Here, we focus on very short distances (e.g., less than one hundred meters) instead, with an outlook to applications in multi-modal surveillance, including target detection, tracking, and zone intrusion. We demonstrate the utility of the aforementioned algorithms through a series of open-field tests wherein we successfully localize wideband acoustic and/or seismic sources. We also investigate a basic strategy for fusion of results yielded by acoustic and seismic arrays.
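The weighted least-squares fusion of single-station directions-of-arrival can be sketched as finding the point that minimizes the weighted squared distance to the bearing lines; the station layout and weights below are invented for illustration.
```python
import numpy as np

# Station positions (m), measured directions-of-arrival (deg, counter-clockwise
# from +x), and weights reflecting confidence in each estimate.
stations = np.array([[0.0, 0.0], [60.0, 0.0], [30.0, 50.0]])
doa_deg = np.array([45.0, 135.0, 270.0])         # all roughly pointing at (30, 30)
weights = np.array([1.0, 1.0, 0.5])

# Weighted least squares: minimize sum_i w_i * ||(I - u_i u_i^T)(s - p_i)||^2,
# i.e. the weighted squared distance from s to each bearing line.
A = np.zeros((2, 2))
b = np.zeros(2)
for p, th, w in zip(stations, np.deg2rad(doa_deg), weights):
    u = np.array([np.cos(th), np.sin(th)])
    P = np.eye(2) - np.outer(u, u)               # projector onto the line's normal space
    A += w * P
    b += w * P @ p
print("fused source estimate:", np.linalg.solve(A, b))
```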
Rupture Dynamics and Seismic Radiation on Rough Faults for Simulation-Based PSHA
NASA Astrophysics Data System (ADS)
Mai, P. M.; Galis, M.; Thingbaijam, K. K. S.; Vyas, J. C.; Dunham, E. M.
2017-12-01
Simulation-based ground-motion predictions may augment PSHA studies in data-poor regions or provide additional shaking estimates, incl. seismic waveforms, for critical facilities. Validation and calibration of such simulation approaches, based on observations and GMPEs, is important for engineering applications, while seismologists push to include the precise physics of the earthquake rupture process and seismic wave propagation in the 3D heterogeneous Earth. Geological faults comprise both large-scale segmentation and small-scale roughness that determine the dynamics of the earthquake rupture process and its radiated seismic wavefield. We investigate how different parameterizations of fractal fault roughness affect the rupture evolution and the resulting near-fault ground motions. Rupture incoherence induced by fault roughness generates a realistic ω^-2 decay for high-frequency displacement amplitude spectra. Waveform characteristics and GMPE-based comparisons corroborate that these rough-fault rupture simulations generate realistic synthetic seismograms for subsequent engineering application. Since dynamic rupture simulations are computationally expensive, we develop kinematic approximations that emulate the observed dynamics. Simplifying the rough-fault geometry, we find that perturbations in local moment tensor orientation are important, while perturbations in local source location are not. Thus, a planar fault can be assumed if the local strike, dip, and rake are maintained. The dynamic rake angle variations are anti-correlated with the local dip angles. Based on a dynamically consistent Yoffe source-time function, we show that the seismic wavefield of the approximated kinematic rupture well reproduces the seismic radiation of the full dynamic source process. Our findings provide an innovative pseudo-dynamic source characterization that captures fault roughness effects on rupture dynamics. Including the correlations between kinematic source parameters, we present a new pseudo-dynamic rupture modeling approach for computing broadband ground-motion time histories for simulation-based PSHA.
Partial differential equation-based localization of a monopole source from a circular array.
Ando, Shigeru; Nara, Takaaki; Levy, Tsukassa
2013-10-01
Wave source localization from a sensor array has long been one of the most active research topics in both theory and application. In this paper, an explicit, time-domain inversion method for the direction and distance of a monopole source from a circular array is proposed. The approach is based on a mathematical technique, the weighted integral method, for signal/source parameter estimation. It begins with an exact form of the source-constraint partial differential equation that describes the unilateral propagation of wide-band waves from a single source, and leads to exact algebraic equations that include circular Fourier coefficients (phase mode measurements) as their coefficients. From them, nearly closed-form, single-shot and multishot algorithms are obtained that are suitable for use with band-pass/differential filter banks. Numerical evaluation and several experimental results obtained using a 16-element circular microphone array are presented to verify the validity of the proposed method.
Wideband RELAX and wideband CLEAN for aeroacoustic imaging
NASA Astrophysics Data System (ADS)
Wang, Yanwei; Li, Jian; Stoica, Petre; Sheplak, Mark; Nishida, Toshikazu
2004-02-01
Microphone arrays can be used for acoustic source localization and characterization in wind tunnel testing. In this paper, the wideband RELAX (WB-RELAX) and the wideband CLEAN (WB-CLEAN) algorithms are presented for aeroacoustic imaging using an acoustic array. WB-RELAX is a parametric approach that can be used efficiently for point source imaging without the sidelobe problems suffered by the delay-and-sum beamforming approaches. WB-CLEAN does not have sidelobe problems either, but it behaves more like a nonparametric approach and can be used for both point source and distributed source imaging. Moreover, neither of the algorithms suffers from the severe performance degradations encountered by the adaptive beamforming methods when the number of snapshots is small and/or the sources are highly correlated or coherent with each other. A two-step optimization procedure is used to implement the WB-RELAX and WB-CLEAN algorithms efficiently. The performance of WB-RELAX and WB-CLEAN is demonstrated by applying them to measured data obtained at the NASA Langley Quiet Flow Facility using a small aperture directional array (SADA). Somewhat surprisingly, using these approaches, not only were the parameters of the dominant source accurately determined, but a highly correlated multipath of the dominant source was also discovered.
Wideband RELAX and wideband CLEAN for aeroacoustic imaging.
Wang, Yanwei; Li, Jian; Stoica, Petre; Sheplak, Mark; Nishida, Toshikazu
2004-02-01
Microphone arrays can be used for acoustic source localization and characterization in wind tunnel testing. In this paper, the wideband RELAX (WB-RELAX) and the wideband CLEAN (WB-CLEAN) algorithms are presented for aeroacoustic imaging using an acoustic array. WB-RELAX is a parametric approach that can be used efficiently for point source imaging without the sidelobe problems suffered by the delay-and-sum beamforming approaches. WB-CLEAN does not have sidelobe problems either, but it behaves more like a nonparametric approach and can be used for both point source and distributed source imaging. Moreover, neither of the algorithms suffers from the severe performance degradations encountered by the adaptive beamforming methods when the number of snapshots is small and/or the sources are highly correlated or coherent with each other. A two-step optimization procedure is used to implement the WB-RELAX and WB-CLEAN algorithms efficiently. The performance of WB-RELAX and WB-CLEAN is demonstrated by applying them to measured data obtained at the NASA Langley Quiet Flow Facility using a small aperture directional array (SADA). Somewhat surprisingly, using these approaches, not only were the parameters of the dominant source accurately determined, but a highly correlated multipath of the dominant source was also discovered.
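A generic CLEAN-style deconvolution loop, repeatedly subtracting a scaled point-spread function at the map peak, is sketched below on a synthetic one-dimensional dirty map. This illustrates the CLEAN idea only and is not the WB-CLEAN (or WB-RELAX) algorithm of the paper.
```python
import numpy as np

rng = np.random.default_rng(8)

# Synthetic 1-D "dirty map": two point sources convolved with a sinc-like array PSF.
x = np.linspace(-1, 1, 401)
psf = np.sinc(20 * x) ** 2                           # stand-in point-spread function

def shifted_psf(pos):
    return np.interp(x - pos, x, psf, left=0.0, right=0.0)

dirty = 1.0 * shifted_psf(-0.3) + 0.6 * shifted_psf(0.25) + 0.01 * rng.standard_normal(x.size)

def clean(dirty_map, gain=0.3, n_iter=200, floor=0.05):
    """CLEAN loop: find the map peak, remove a scaled PSF there, and record the
    removed amount as a point-source component; repeat until the peak is small."""
    residual = dirty_map.copy()
    components = np.zeros_like(dirty_map)
    for _ in range(n_iter):
        i = np.argmax(residual)
        peak = residual[i]
        if peak < floor:
            break
        components[i] += gain * peak
        residual -= gain * peak * shifted_psf(x[i])
    return components, residual

comp, resid = clean(dirty)
print("recovered source positions:", x[comp > 0.1])
```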
Wang, Qin; Zhou, Xing-Yu; Guo, Guang-Can
2016-01-01
In this paper, we put forward a new approach towards realizing measurement-device-independent quantum key distribution with passive heralded single-photon sources. In this approach, both Alice and Bob prepare the parametric down-conversion source, where the heralding photons are labeled according to different types of clicks from the local detectors, and the heralded ones can correspondingly be marked with different tags at the receiver’s side. Then one can obtain four sets of data through using only one-intensity of pump light by observing different kinds of clicks of local detectors. By employing the newest formulae to do parameter estimation, we could achieve very precise prediction for the two-single-photon pulse contribution. Furthermore, by carrying out corresponding numerical simulations, we compare the new method with other practical schemes of measurement-device-independent quantum key distribution. We demonstrate that our new proposed passive scheme can exhibit remarkable improvement over the conventional three-intensity decoy-state measurement-device-independent quantum key distribution with either heralded single-photon sources or weak coherent sources. Besides, it does not need intensity modulation and can thus diminish source-error defects existing in several other active decoy-state methods. Therefore, if taking intensity modulating errors into account, our new method will show even more brilliant performance. PMID:27759085
Unsupervised Segmentation of Head Tissues from Multi-modal MR Images for EEG Source Localization.
Mahmood, Qaiser; Chodorowski, Artur; Mehnert, Andrew; Gellermann, Johanna; Persson, Mikael
2015-08-01
In this paper, we present and evaluate an automatic unsupervised segmentation method, the hierarchical segmentation approach (HSA) with Bayesian-based adaptive mean shift (BAMS), for use in the construction of a patient-specific head conductivity model for electroencephalography (EEG) source localization. It is based on a HSA and BAMS for segmenting the tissues from multi-modal magnetic resonance (MR) head images. The evaluation of the proposed method was done both directly in terms of segmentation accuracy and indirectly in terms of source localization accuracy. The direct evaluation was performed relative to a commonly used reference method, the brain extraction tool (BET) combined with FMRIB's automated segmentation tool (FAST), and four variants of the HSA, using both synthetic data and real data from ten subjects. The synthetic data include multiple realizations of four different noise levels and several realizations of typical noise with a 20% bias field level. The Dice index and Hausdorff distance were used to measure the segmentation accuracy. The indirect evaluation was performed relative to the reference method BET-FAST using synthetic two-dimensional (2D) multimodal magnetic resonance (MR) data with 3% noise and synthetic EEG (generated for a prescribed source). The source localization accuracy was determined in terms of localization error and relative error of potential. The experimental results demonstrate the efficacy of HSA-BAMS, its robustness to noise and the bias field, and that it provides better segmentation accuracy than the reference method and the variants of the HSA. They also show that it leads to more accurate source localization than the commonly used reference method and suggest that it has potential as a surrogate for expert manual segmentation for the EEG source localization problem.
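The two segmentation accuracy measures mentioned, the Dice index and the Hausdorff distance, can be computed as sketched below on a small synthetic pair of masks.
```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def dice(seg_a, seg_b):
    """Dice overlap between two binary segmentations."""
    a, b = seg_a.astype(bool), seg_b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def hausdorff(seg_a, seg_b):
    """Symmetric Hausdorff distance between the voxel coordinates of two masks."""
    pa = np.argwhere(seg_a)
    pb = np.argwhere(seg_b)
    return max(directed_hausdorff(pa, pb)[0], directed_hausdorff(pb, pa)[0])

# Tiny synthetic example: a reference mask and a slightly shifted automatic result.
ref = np.zeros((40, 40), dtype=int)
ref[10:25, 10:25] = 1
auto = np.zeros((40, 40), dtype=int)
auto[12:27, 11:26] = 1
print("Dice:", round(dice(ref, auto), 3), " Hausdorff:", round(hausdorff(ref, auto), 2))
```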
Meals Plus Program--An Innovative Approach for Providing Local Produce to Meals on Wheels Clients
ERIC Educational Resources Information Center
Wagner, Katie
2016-01-01
Extension programming often strives to achieve maximum impact on a minimum budget, but when it comes to sourcing local produce, asking farmers to donate sellable commodity can result in a dip into their profits. By connecting community partners, Extension can facilitate collaborations that work in the favor of all participating parties and…
Time-Domain Filtering for Spatial Large-Eddy Simulation
NASA Technical Reports Server (NTRS)
Pruett, C. David
1997-01-01
An approach to large-eddy simulation (LES) is developed whose subgrid-scale model incorporates filtering in the time domain, in contrast to conventional approaches, which exploit spatial filtering. The method is demonstrated in the simulation of a heated, compressible, axisymmetric jet, and results are compared with those obtained from fully resolved direct numerical simulation. The present approach was, in fact, motivated by the jet-flow problem and the desire to manipulate the flow by localized (point) sources for the purposes of noise suppression. Time-domain filtering appears to be more consistent with the modeling of point sources; moreover, time-domain filtering may resolve some fundamental inconsistencies associated with conventional space-filtered LES approaches.
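A minimal example of a causal time-domain filter, a one-pole exponential filter separating a resolved low-frequency signal from high-frequency fluctuations, is sketched below; it only illustrates time filtering and is not the paper's subgrid-scale model.
```python
import numpy as np

def causal_exponential_filter(u, dt, tau):
    """Discrete causal exponential time filter, ubar_n = a*u_n + (1 - a)*ubar_{n-1},
    with a = dt/(tau + dt); `tau` sets the temporal filter width."""
    a = dt / (tau + dt)
    ubar = np.empty_like(u)
    ubar[0] = u[0]
    for n in range(1, len(u)):
        ubar[n] = a * u[n] + (1.0 - a) * ubar[n - 1]
    return ubar

# A resolved low-frequency signal plus high-frequency "subgrid" fluctuations.
dt = 1e-3
t = np.arange(0, 1, dt)
u = np.sin(2 * np.pi * 5 * t) + 0.3 * np.sin(2 * np.pi * 200 * t)
ubar = causal_exponential_filter(u, dt, tau=0.01)
print("residual (sub-filter) rms:", np.round(np.std(u - ubar), 3))
```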
Harvesting implementation for the GI-cat distributed catalog
NASA Astrophysics Data System (ADS)
Boldrini, Enrico; Papeschi, Fabrizio; Bigagli, Lorenzo; Mazzetti, Paolo
2010-05-01
The GI-cat framework implements a distributed catalog service supporting different international standards and interoperability arrangements in use by the geoscientific community. The distribution functionality, in conjunction with the mediation functionality, makes it possible to seamlessly query remote heterogeneous data sources, including OGC Web Services - e.g. OGC CSW, WCS, WFS and WMS - community standards such as UNIDATA THREDDS/OPeNDAP, SeaDataNet CDI (Common Data Index), GBIF (Global Biodiversity Information Facility) services and OpenSearch engines. In the GI-cat modular architecture, a distributor component carries out the distribution functionality by delegating queries to the mediator components (one for each different data source). Each of these mediator components is able to query a specific data source and convert the results back by mapping the foreign data model to the GI-cat internal one, based on ISO 19139. In order to cope with deployment scenarios in which local data is expected, a harvesting approach has been tested. The new strategy complements the established distributed approach, allowing the user to switch between a remote and a local search at will for each federated resource; this extends GI-cat configuration possibilities. The harvesting strategy is implemented in GI-cat by using, at its core, a local cache component, realized as a native XML database based on eXist. The different heterogeneous sources are queried for the bulk of available data; this data is then injected into the cache component after being converted to the GI-cat data model. The query and conversion steps are performed by the mediator components that are already part of the GI-cat framework. Afterward, each new query can be run against the local data stored in the cache component. Considering the advantages and shortcomings of the harvesting and query distribution approaches, user-driven tuning is required to get the best of both. This is often related to the specific user scenarios to be implemented. GI-cat proved to be a flexible framework to address user needs. The GI-cat configurator tool was updated to make such tuning possible: each data source can be configured to enable either the harvesting or the query distribution approach; in the former case an appropriate harvesting interval can be set.
Information-Driven Active Audio-Visual Source Localization
Schult, Niclas; Reineking, Thomas; Kluss, Thorsten; Zetzsche, Christoph
2015-01-01
We present a system for sensorimotor audio-visual source localization on a mobile robot. We utilize a particle filter for the combination of audio-visual information and for the temporal integration of consecutive measurements. Although the system only measures the current direction of the source, the position of the source can be estimated because the robot is able to move and can therefore obtain measurements from different directions. These actions by the robot successively reduce uncertainty about the source’s position. An information gain mechanism is used for selecting the most informative actions in order to minimize the number of actions required to achieve accurate and precise position estimates in azimuth and distance. We show that this mechanism is an efficient solution to the action selection problem for source localization, and that it is able to produce precise position estimates despite simplified unisensory preprocessing. Because of the robot’s mobility, this approach is suitable for use in complex and cluttered environments. We present qualitative and quantitative results of the system’s performance and discuss possible areas of application. PMID:26327619
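A minimal sketch of the particle-filter component of such a system is given below, for a static source observed through noisy bearing (direction-only) measurements taken from different robot positions; the geometry, noise level, and resampling scheme are assumptions for illustration, and the information-gain action selection described in the abstract is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)
true_src = np.array([3.0, 4.0])                     # unknown source (used only to simulate data)
robot_positions = [np.array([0.0, 0.0]), np.array([2.0, 0.0]),
                   np.array([4.0, 1.0]), np.array([1.0, 3.0])]
sigma = np.deg2rad(5.0)                             # assumed bearing noise

# Particles are hypotheses about the static source position.
N = 5000
particles = rng.uniform(-10.0, 10.0, size=(N, 2))
weights = np.full(N, 1.0 / N)

for pos in robot_positions:
    d = true_src - pos
    z = np.arctan2(d[1], d[0]) + rng.normal(0.0, sigma)       # noisy bearing measurement
    pred = np.arctan2(particles[:, 1] - pos[1], particles[:, 0] - pos[0])
    err = np.angle(np.exp(1j * (z - pred)))                   # wrap angle error to [-pi, pi]
    weights *= np.exp(-0.5 * (err / sigma) ** 2)              # Gaussian likelihood
    weights /= weights.sum()
    # multinomial resampling plus a little jitter to avoid degeneracy
    idx = rng.choice(N, size=N, p=weights)
    particles = particles[idx] + rng.normal(0.0, 0.05, size=(N, 2))
    weights = np.full(N, 1.0 / N)

print("posterior mean estimate of source position:", particles.mean(axis=0).round(2))
```

Consecutive bearings taken from different viewpoints progressively collapse the range ambiguity that a single direction measurement leaves open, which is the mechanism the abstract describes.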
The Art and Science of Rain Barrels: A Service Learning Approach to Youth Watershed Action
ERIC Educational Resources Information Center
Rector, Patricia; Lyons, Rachel; Yost, Theresa
2013-01-01
Using an interdisciplinary approach to water resource education, 4-H Youth Development and Environmental Extension agents enlisted 4-H teens to connect local watershed education with social action. Teens participated in a dynamic service learning project that included learning about nonpoint source pollution; constructing, decorating, and teaching…
Sen, Novonil; Kundu, Tribikram
2018-07-01
Estimating the location of an acoustic source in a structure is an important step towards passive structural health monitoring. Techniques for localizing an acoustic source in isotropic structures are well developed in the literature. Development of similar techniques for anisotropic structures, however, has gained attention only in recent years and has scope for further improvement. Most of the existing techniques for anisotropic structures either assume a straight-line wave propagation path between the source and an ultrasonic sensor or require the material properties to be known. This study considers different shapes of the wave front generated during an acoustic event and develops a methodology to localize the acoustic source in an anisotropic plate from those wave front shapes. An elliptical wave front shape-based technique was developed first, followed by the development of a parametric curve-based technique for non-elliptical wave front shapes. The source coordinates are obtained by minimizing an objective function. The proposed methodology does not assume a straight-line wave propagation path and can predict the source location without any knowledge of the elastic properties of the material. A numerical study presented here illustrates how the proposed methodology can accurately estimate the source coordinates. Copyright © 2018 Elsevier B.V. All rights reserved.
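A hedged sketch of the elliptical-wave-front idea in its simplest setting follows: the source position, onset time, and the two principal group speeds are estimated jointly by nonlinear least squares from sensor arrival times. The sensor layout, speeds, and noise level are invented for the example, and the paper's more general parametric-curve formulation is not reproduced.

```python
import numpy as np
from scipy.optimize import least_squares

sensors = np.array([[0.0, 0.0], [0.6, 0.0], [0.6, 0.4],
                    [0.0, 0.4], [0.3, 0.0], [0.3, 0.4]])    # sensor coordinates, m

def ellip_speed(theta, v1, v2):
    # group speed of an elliptical wave front with principal speeds v1, v2
    # along the (assumed known) material axes
    return (v1 * v2) / np.sqrt((v2 * np.cos(theta)) ** 2 + (v1 * np.sin(theta)) ** 2)

def arrival_times(p):
    xs, ys, t0, v1, v2 = p
    dx, dy = sensors[:, 0] - xs, sensors[:, 1] - ys
    return t0 + np.hypot(dx, dy) / ellip_speed(np.arctan2(dy, dx), v1, v2)

# Synthetic "measured" arrivals from an unknown source, with small timing noise.
truth = np.array([0.42, 0.11, 0.0, 5200.0, 3100.0])
t_obs = arrival_times(truth) + 1e-7 * np.random.default_rng(1).normal(size=len(sensors))

sol = least_squares(lambda p: arrival_times(p) - t_obs,
                    x0=[0.3, 0.2, 0.0, 5000.0, 3000.0])
print("estimated source location (m):", sol.x[:2].round(3))
```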
Blind source separation and localization using microphone arrays
NASA Astrophysics Data System (ADS)
Sun, Longji
The blind source separation and localization problem for audio signals is studied using microphone arrays. Pure delay mixtures of source signals typically encountered in outdoor environments are considered. Our proposed approach utilizes the subspace methods, including multiple signal classification (MUSIC) and estimation of signal parameters via rotational invariance techniques (ESPRIT) algorithms, to estimate the directions of arrival (DOAs) of the sources from the collected mixtures. Since audio signals are generally considered broadband, the DOA estimates at frequencies with the largest sums of squared amplitude values are combined to obtain the final DOA estimates. Using the estimated DOAs, the corresponding mixing and demixing matrices are computed, and the source signals are recovered using the inverse short time Fourier transform. Subspace methods take advantage of the spatial covariance matrix of the collected mixtures to achieve robustness to noise. While the subspace methods have been studied for localizing radio frequency signals, audio signals have their own special properties. For instance, they are nonstationary, naturally broadband and analog. All of these make separation and localization more challenging for audio signals. Moreover, our algorithm is essentially equivalent to the beamforming technique, which suppresses the signals in unwanted directions and only recovers the signals in the estimated DOAs. Several crucial issues related to our algorithm and their solutions have been discussed, including source number estimation, spatial aliasing, artifact filtering, different ways of mixture generation, and source coordinate estimation using multiple arrays. Additionally, comprehensive simulations and experiments have been conducted to examine various aspects of the algorithm. Unlike the existing blind source separation and localization methods, which are generally time consuming, our algorithm needs signal mixtures of only a short duration and therefore supports real-time implementation.
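As an illustration of the subspace methods named above, a compact narrowband MUSIC sketch for a uniform linear array is given below; the array geometry, source angles, and noise level are assumed for the example, and the broadband frequency weighting, the ESPRIT variant, and the demixing steps of the dissertation are not reproduced.

```python
import numpy as np
from scipy.signal import find_peaks

rng = np.random.default_rng(2)
M, spacing = 8, 0.5                        # 8-element ULA, half-wavelength spacing (assumed)
doas = np.deg2rad([-20.0, 35.0])           # two narrowband sources (assumed)
T = 400                                    # snapshots

def steering(theta):
    return np.exp(2j * np.pi * spacing * np.arange(M)[:, None] * np.sin(theta))

A = steering(doas)
S = rng.normal(size=(2, T)) + 1j * rng.normal(size=(2, T))
X = A @ S + 0.1 * (rng.normal(size=(M, T)) + 1j * rng.normal(size=(M, T)))

R = X @ X.conj().T / T                     # spatial covariance of the mixtures
_, eigvecs = np.linalg.eigh(R)             # eigenvalues in ascending order
En = eigvecs[:, : M - 2]                   # noise subspace (2 sources assumed known)

grid = np.deg2rad(np.linspace(-90.0, 90.0, 721))
P = 1.0 / np.linalg.norm(En.conj().T @ steering(grid), axis=0) ** 2   # MUSIC pseudospectrum
peaks, _ = find_peaks(P)
best = peaks[np.argsort(P[peaks])[-2:]]
print("estimated DOAs (deg):", np.sort(np.rad2deg(grid[best])).round(1))
```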
Accounting for Fault Roughness in Pseudo-Dynamic Ground-Motion Simulations
NASA Astrophysics Data System (ADS)
Mai, P. Martin; Galis, Martin; Thingbaijam, Kiran K. S.; Vyas, Jagdish C.; Dunham, Eric M.
2017-09-01
Geological faults comprise large-scale segmentation and small-scale roughness. These multi-scale geometrical complexities determine the dynamics of the earthquake rupture process, and therefore affect the radiated seismic wavefield. In this study, we examine how different parameterizations of fault roughness lead to variability in the rupture evolution and the resulting near-fault ground motions. Rupture incoherence naturally induced by fault roughness generates high-frequency radiation that follows an ω⁻² decay in displacement amplitude spectra. Because dynamic rupture simulations are computationally expensive, we test several kinematic source approximations designed to emulate the observed dynamic behavior. When simplifying the rough-fault geometry, we find that perturbations in local moment tensor orientation are important, while perturbations in local source location are not. Thus, a planar fault can be assumed if the local strike, dip, and rake are maintained. We observe that dynamic rake angle variations are anti-correlated with the local dip angles. Testing two parameterizations of dynamically consistent Yoffe-type source-time function, we show that the seismic wavefield of the approximated kinematic ruptures well reproduces the radiated seismic waves of the complete dynamic source process. This finding opens a new avenue for an improved pseudo-dynamic source characterization that captures the effects of fault roughness on earthquake rupture evolution. By including also the correlations between kinematic source parameters, we outline a new pseudo-dynamic rupture modeling approach for broadband ground-motion simulation.
Farm to Institution: Creating Access to Healthy Local and Regional Foods
Harris, Diane; Lott, Megan; Lakins, Velma; Bowden, Brian; Kimmons, Joel
2012-01-01
Farm to Institution (FTI) programs are one approach to align food service operations with health and sustainability guidelines, such as those recently developed by the U.S. Department of Health and Human Services and General Services Administration. Programs and policies that support sourcing local and regional foods for schools, hospitals, faith-based organizations, and worksites may benefit institutional customers and their families, farmers, the local community, and the economy. Different models of FTI programs exist. On-site farmer’s markets at institutions have been promoted on federal government property, healthcare facilities, and private institutions nationwide. Farm to School programs focus on connecting schools with local agricultural production with the goal of improving school meals and increasing intake of fruits and vegetables in children. Sourcing food from local farms presents a number of challenges including cost and availability of local products, food safety, and liability considerations and lack of skilled labor for food preparation. Institutions utilize multiple strategies to address these barriers, and local, state, and federal polices can help facilitate FTI approaches. FTI enables the purchasing power of institutions to contribute to regional and local food systems, thus potentially affecting social, economic, and ecological systems. Local and state food policy councils can assist in bringing stakeholders together to inform this process. Rigorous research and evaluation is needed to determine and document best practices and substantiate links between FTI and multiple outcomes. Nutritionists, public health practitioners, and researchers can help communities work with institutions to develop, implement, and evaluate programs and policies supporting FTI. PMID:22585910
The Green's functions for peridynamic non-local diffusion.
Wang, L J; Xu, J F; Wang, J X
2016-09-01
In this work, we develop the Green's function method for the solution of the peridynamic non-local diffusion model in which the spatial gradient of the generalized potential in the classical theory is replaced by an integral of a generalized response function in a horizon. We first show that the general solutions of the peridynamic non-local diffusion model can be expressed as functionals of the corresponding Green's functions for point sources, along with volume constraints for non-local diffusion. Then, we obtain the Green's functions by the Fourier transform method for unsteady and steady diffusions in infinite domains. We also demonstrate that the peridynamic non-local solutions converge to the classical differential solutions when the non-local length approaches zero. Finally, the peridynamic analytical solutions are applied to an infinite plate heated by a Gauss source, and the predicted variations of temperature are compared with the classical local solutions. The peridynamic non-local diffusion model predicts a lower rate of variation of the field quantities than that of the classical theory, which is consistent with experimental observations. The developed method is applicable to general diffusion-type problems.
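In hedged, illustrative notation (symbols assumed here, not taken verbatim from the paper), the structure described above can be written as a non-local diffusion equation whose solution is expressed as a functional of the point-source Green's function:

```latex
% Peridynamic non-local diffusion: the spatial gradient is replaced by an
% integral of a response (kernel) function K over a horizon H_x
\frac{\partial u(\mathbf{x},t)}{\partial t}
  = \int_{H_{\mathbf{x}}} K(\mathbf{x}'-\mathbf{x})
    \bigl[u(\mathbf{x}',t)-u(\mathbf{x},t)\bigr]\,\mathrm{d}V_{\mathbf{x}'}
  + s(\mathbf{x},t),
\qquad
% steady solution in an infinite domain as a functional of the Green's function G
u(\mathbf{x}) = \int G(\mathbf{x},\mathbf{x}')\,s(\mathbf{x}')\,\mathrm{d}V_{\mathbf{x}'} .
```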
Biocultural approaches to well-being and sustainability indicators across scales.
Sterling, Eleanor J; Filardi, Christopher; Toomey, Anne; Sigouin, Amanda; Betley, Erin; Gazit, Nadav; Newell, Jennifer; Albert, Simon; Alvira, Diana; Bergamini, Nadia; Blair, Mary; Boseto, David; Burrows, Kate; Bynum, Nora; Caillon, Sophie; Caselle, Jennifer E; Claudet, Joachim; Cullman, Georgina; Dacks, Rachel; Eyzaguirre, Pablo B; Gray, Steven; Herrera, James; Kenilorea, Peter; Kinney, Kealohanuiopuna; Kurashima, Natalie; Macey, Suzanne; Malone, Cynthia; Mauli, Senoveva; McCarter, Joe; McMillen, Heather; Pascua, Pua'ala; Pikacha, Patrick; Porzecanski, Ana L; de Robert, Pascale; Salpeteur, Matthieu; Sirikolo, Myknee; Stege, Mark H; Stege, Kristina; Ticktin, Tamara; Vave, Ron; Wali, Alaka; West, Paige; Winter, Kawika B; Jupiter, Stacy D
2017-12-01
Monitoring and evaluation are central to ensuring that innovative, multi-scale, and interdisciplinary approaches to sustainability are effective. The development of relevant indicators for local sustainable management outcomes, and the ability to link these to broader national and international policy targets, are key challenges for resource managers, policymakers, and scientists. Sets of indicators that capture both ecological and social-cultural factors, and the feedbacks between them, can underpin cross-scale linkages that help bridge local and global scale initiatives to increase resilience of both humans and ecosystems. Here we argue that biocultural approaches, in combination with methods for synthesizing across evidence from multiple sources, are critical to developing metrics that facilitate linkages across scales and dimensions. Biocultural approaches explicitly start with and build on local cultural perspectives - encompassing values, knowledges, and needs - and recognize feedbacks between ecosystems and human well-being. Adoption of these approaches can encourage exchange between local and global actors, and facilitate identification of crucial problems and solutions that are missing from many regional and international framings of sustainability. Resource managers, scientists, and policymakers need to be thoughtful about not only what kinds of indicators are measured, but also how indicators are designed, implemented, measured, and ultimately combined to evaluate resource use and well-being. We conclude by providing suggestions for translating between local and global indicator efforts.
Santika, Otte; Fahmida, Umi; Ferguson, Elaine L
2009-01-01
Effective population-specific, food-based complementary feeding recommendations (CFR) are required to combat micronutrient deficiencies. To facilitate their formulation, a modeling approach was recently developed. However, it has not yet been used in practice. This study therefore aimed to use this approach to develop CFR for 9- to 11-mo-old Indonesian infants and to identify nutrients that will likely remain low in their diets. The CFR were developed using a 4-phase approach based on linear and goal programming. Model parameters were defined using dietary data collected in a cross-sectional survey of 9- to 11-mo-old infants (n = 100) living in the Bogor District, West-Java, Indonesia and a market survey of 3 local markets. Results showed theoretical iron requirements could not be achieved using local food sources (highest level achievable, 63% of recommendations) and adequate levels of iron, niacin, zinc, and calcium were difficult to achieve. Fortified foods, meatballs, chicken liver, eggs, tempe-tofu, banana, and spinach were the best local food sources to improve dietary quality. The final CFR were: breast-feed on demand; provide 3 meals/d, of which 1 is a fortified infant cereal; ≥5 servings/wk of tempe/tofu; ≥3 servings/wk of animal-source foods, of which 2 servings/wk are chicken liver; vegetables, daily; snacks, 2 times/d, including ≥2 servings/wk of banana; and ≥4 servings/wk of fortified biscuits. Results showed that the approach can be used to objectively formulate population-specific CFR and identify key problem nutrients to strengthen nutrition program planning and policy decisions. Before recommending these CFR, their long-term acceptability, affordability, and effectiveness should be assessed.
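A toy sketch of the linear-programming step described above is given below, maximizing dietary iron subject to an energy ceiling and per-food serving limits; the food list, nutrient contents, and constraint values are invented placeholders, not the study's data, and the full 4-phase goal-programming model is not reproduced.

```python
import numpy as np
from scipy.optimize import linprog

foods = ["fortified cereal", "chicken liver", "tempe", "banana"]
iron_mg_per_serv     = np.array([4.0, 3.5, 1.2, 0.3])   # invented values
energy_kcal_per_serv = np.array([80.0, 60.0, 70.0, 90.0])
max_serv_per_day     = np.array([1.0, 0.5, 2.0, 1.0])   # acceptability limits (assumed)

# Maximize iron  <=>  minimize -iron, subject to an energy ceiling for
# complementary foods and per-food serving bounds.
res = linprog(c=-iron_mg_per_serv,
              A_ub=[energy_kcal_per_serv], b_ub=[250.0],
              bounds=list(zip(np.zeros(4), max_serv_per_day)))
print("servings/day:", dict(zip(foods, res.x.round(2))))
print("max achievable iron (mg/day):", round(-res.fun, 2))
```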
Quantifying sources of methane and light alkanes in the Los Angeles Basin, California
NASA Astrophysics Data System (ADS)
Peischl, Jeff; Ryerson, Thomas; Atlas, Elliot; Blake, Donald; Brioude, Jerome; Daube, Bruce; de Gouw, Joost; Frost, Gregory; Gentner, Drew; Gilman, Jessica; Goldstein, Allen; Harley, Robert; Holloway, John; Kuster, William; Santoni, Gregory; Trainer, Michael; Wofsy, Steven; Parrish, David
2013-04-01
We use ambient measurements to apportion the relative contributions of different source sectors to the methane (CH4) emissions budget of a U.S. megacity. This approach uses ambient measurements of methane and C2-C5 alkanes (ethane through pentanes) and includes source composition information to distinguish between methane emitted from landfills and feedlots, wastewater treatment plants, tailpipe emissions, leaks of dry natural gas in pipelines and/or local seeps, and leaks of locally produced (unprocessed) natural gas. Source composition information can be taken from existing tabulations or developed by direct sampling of emissions using a mobile platform. By including C2-C5 alkane information, a linear combination of these source signatures can be found to match the observed atmospheric enhancement ratios to determine relative emissions strengths. We apply this technique to apportion CH4 emissions in Los Angeles, CA (L.A.) using data from the CalNex field project in 2010. Our analysis of L.A. atmospheric data shows the two largest CH4 sources in the city are emissions of gas from pipelines and/or from geologic seeps (47%), and emissions from landfills (40%). Local oil and gas production is a relatively minor source of CH4, contributing 8% of total CH4 emissions in L.A. Absolute CH4 emissions rates are derived by multiplying the observed CH4/CO enhancement ratio by State of California inventory values for carbon monoxide (CO) emissions in Los Angeles. Apportioning this total suggests that emissions from the combined natural and anthropogenic gas sources account for the differences between top-down and bottom-up CH4 estimates previously published for Los Angeles. Further, total CH4 emission attributed in our analysis to local gas extraction represents 17% of local production. While a derived leak rate of 17% of local production may seem unrealistically high, it is qualitatively consistent with the 12% reported in a recent state inventory survey of the L.A. oil and gas industry.
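A minimal sketch of the enhancement-ratio apportionment idea follows: observed alkane-to-CH4 ratios are matched by a non-negative linear combination of source signatures. The signature and observation values below are invented placeholders, not the CalNex data, and the study's uncertainty treatment is omitted.

```python
import numpy as np
from scipy.optimize import nnls

# Columns: illustrative source signatures (alkane/CH4 enhancement ratios).
species = ["ethane", "propane", "n-butane", "i-pentane"]
signatures = np.array([
    # pipeline/seep  landfill  local oil & gas production
    [0.030,          0.0010,   0.12],
    [0.020,          0.0005,   0.10],
    [0.010,          0.0002,   0.06],
    [0.005,          0.0001,   0.03],
])
observed = np.array([0.021, 0.013, 0.007, 0.004])   # ambient ratios to CH4 (placeholder)

# Non-negative least squares gives the relative CH4 contribution of each source.
weights, _ = nnls(signatures, observed)
fractions = weights / weights.sum()
print(dict(zip(["pipeline/seeps", "landfills", "local production"], fractions.round(2))))
```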
ERIC Educational Resources Information Center
Hasanov, Elnur L.
2016-01-01
In this scientific paper, an innovative approach to the local literary heritage of Ganja was investigated for the first time on the basis of various historical sources such as manuscripts and archive materials. Comparative materials from such poems as "Treasury of mysteries" and "Iskandername" were researched. Also were…
VoroTop: Voronoi cell topology visualization and analysis toolkit
NASA Astrophysics Data System (ADS)
Lazar, Emanuel A.
2018-01-01
This paper introduces a new open-source software program called VoroTop, which uses Voronoi topology to analyze local structure in atomic systems. Strengths of this approach include its abilities to analyze high-temperature systems and to characterize complex structure such as grain boundaries. This approach enables the automated analysis of systems and mechanisms previously not possible.
ERIC Educational Resources Information Center
De Chiara, Alessandra
2017-01-01
Environmental pollution occurring in industrial districts represents a serious issue not only for local communities but also for those industrial productions that draw from the territory the source of their competitiveness. Due to its ability to take into account the needs of different stakeholders, the collective impact approach has the potential…
NASA Astrophysics Data System (ADS)
Nguyen, Thinh; Potter, Thomas; Grossman, Robert; Zhang, Yingchun
2018-06-01
Objective. Neuroimaging has been employed as a promising approach to advance our understanding of brain networks in both basic and clinical neuroscience. Electroencephalography (EEG) and functional magnetic resonance imaging (fMRI) represent two neuroimaging modalities with complementary features; EEG has high temporal resolution and low spatial resolution while fMRI has high spatial resolution and low temporal resolution. Multimodal EEG inverse methods have attempted to capitalize on these properties but have been subjected to localization error. The dynamic brain transition network (DBTN) approach, a spatiotemporal fMRI constrained EEG source imaging method, has recently been developed to address these issues by solving the EEG inverse problem in a Bayesian framework, utilizing fMRI priors in a spatial and temporal variant manner. This paper presents a computer simulation study to provide a detailed characterization of the spatial and temporal accuracy of the DBTN method. Approach. Synthetic EEG data were generated in a series of computer simulations, designed to represent realistic and complex brain activity at superficial and deep sources with highly dynamical activity time-courses. The source reconstruction performance of the DBTN method was tested against the fMRI-constrained minimum norm estimates algorithm (fMRIMNE). The performances of the two inverse methods were evaluated both in terms of spatial and temporal accuracy. Main results. In comparison with the commonly used fMRIMNE method, results showed that the DBTN method produces results with increased spatial and temporal accuracy. The DBTN method also demonstrated the capability to reduce crosstalk in the reconstructed cortical time-course(s) induced by neighboring regions, mitigate depth bias and improve overall localization accuracy. Significance. The improved spatiotemporal accuracy of the reconstruction allows for an improved characterization of complex neural activity. This improvement can be extended to any subsequent brain connectivity analyses used to construct the associated dynamic brain networks.
Strobbe, Gregor; Carrette, Evelien; López, José David; Montes Restrepo, Victoria; Van Roost, Dirk; Meurs, Alfred; Vonck, Kristl; Boon, Paul; Vandenberghe, Stefaan; van Mierlo, Pieter
2016-01-01
Electrical source imaging of interictal spikes observed in EEG recordings of patients with refractory epilepsy provides useful information to localize the epileptogenic focus during the presurgical evaluation. However, the selection of the time points or time epochs of the spikes in order to estimate the origin of the activity remains a challenge. In this study, we consider a Bayesian EEG source imaging technique for distributed sources, i.e. the multiple volumetric sparse priors (MSVP) approach. The approach makes it possible to estimate the time courses of the intensity of the sources corresponding to a specific time epoch of the spike. Based on presurgical averaged interictal spikes in six patients who were successfully treated with surgery, we estimated the time courses of the source intensities for three different time epochs: (i) an epoch starting 50 ms before the spike peak and ending at 50% of the spike peak during the rising phase of the spike, (ii) an epoch starting 50 ms before the spike peak and ending at the spike peak and (iii) an epoch containing the full spike time period starting 50 ms before the spike peak and ending 230 ms after the spike peak. To identify the primary source of the spike activity, the source with the maximum energy from 50 ms before the spike peak till 50% of the spike peak was subsequently selected for each of the time windows. For comparison, the activity at the spike peaks and at 50% of the peaks was localized using the LORETA inversion technique and an ECD approach. Both patient-specific spherical forward models and patient-specific 5-layered finite difference models were considered to evaluate the influence of the forward model. Based on the resected zones in each of the patients, extracted from post-operative MR images, we compared the distances to the resection border of the estimated activity. Using the spherical models, the distances to the resection border for the MSVP approach and each of the different time epochs were in the same range as for the LORETA and ECD techniques. We found distances smaller than 23 mm, with robust results for all the patients. For the finite difference models, we found that the distances to the resection border for the MSVP inversions of the full spike time epochs were generally smaller compared to the MSVP inversions of the time epochs before the spike peak. The results also suggest that the inversions using the finite difference models resulted in slightly smaller distances to the resection border compared to the spherical models. The results we obtained are promising because the MSVP approach makes it possible to study the network of the estimated source intensities and to characterize the spatial extent of the underlying sources. PMID:26958464
Smith, Rosanna C G; Price, Stephen R
2014-01-01
Sound source localization is critical to animal survival and for identification of auditory objects. We investigated the acuity with which humans localize low frequency, pure tone sounds using timing differences between the ears. These small differences in time, known as interaural time differences or ITDs, are identified in a manner that allows localization acuity of around 1° at the midline. Acuity, a relative measure of localization ability, displays a non-linear variation as sound sources are positioned more laterally. All species studied localize sounds best at the midline and progressively worse as the sound is located out towards the side. To understand why sound localization displays this variation with azimuthal angle, we took a first-principles, systemic, analytical approach to model localization acuity. We calculated how ITDs vary with sound frequency, head size and sound source location for humans. This allowed us to model ITD variation for previously published experimental acuity data and determine the distribution of just-noticeable differences in ITD. Our results suggest that the best-fit model is one whereby just-noticeable differences in ITDs are identified with uniform or close to uniform sensitivity across the physiological range. We discuss how our results have several implications for neural ITD processing in different species as well as development of the auditory system.
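For concreteness, one commonly used spherical-head approximation of the ITD (the Woodworth formula) is sketched below; the head radius and the formula itself are standard textbook assumptions used for illustration, not necessarily the model fitted in this study.

```python
import numpy as np

def itd_woodworth(azimuth_deg, head_radius_m=0.0875, c=343.0):
    """Woodworth spherical-head approximation of the interaural time difference
    (seconds) for a lateral source angle between 0 and 90 degrees.
    Head radius and sound speed are assumed typical values."""
    theta = np.deg2rad(azimuth_deg)
    return (head_radius_m / c) * (theta + np.sin(theta))

for az in (0, 15, 45, 90):
    print(f"azimuth {az:3d} deg -> ITD ~ {itd_woodworth(az) * 1e6:6.1f} microseconds")
```

Because dITD/dθ = (r/c)(1 + cos θ) shrinks toward the side, a uniform just-noticeable difference in ITD maps onto progressively coarser angular acuity at lateral positions, consistent with the best-fit model described above.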
Crowd-sourced pictures geo-localization method based on street view images and 3D reconstruction
NASA Astrophysics Data System (ADS)
Cheng, Liang; Yuan, Yi; Xia, Nan; Chen, Song; Chen, Yanming; Yang, Kang; Ma, Lei; Li, Manchun
2018-07-01
People are increasingly becoming accustomed to taking photos of everyday life in modern cities and uploading them to major photo-sharing social media sites. These sites contain numerous pictures, but some have incomplete or blurred location information. The geo-localization of crowd-sourced pictures enriches the information contained therein, and is applicable to activities such as urban construction, urban landscape analysis, and crime tracking. However, geo-localization faces huge technical challenges. This paper proposes a method for large-scale geo-localization of crowd-sourced pictures. Our approach uses structured, organized Street View images as a reference dataset and employs a three-step strategy of coarse geo-localization by image retrieval, selecting reliable matches by image registration, and fine geo-localization by 3D reconstruction to attach geographic tags to pictures from unidentified sources. In the study area, 3D reconstruction based on close-range photogrammetry is used to restore the 3D geographical information of the crowd-sourced pictures, resulting in the proposed method improving the median error from 256.7 m to 69.0 m, and the percentage of the geo-localized query pictures under a 50 m error from 17.2% to 43.2% compared with the previous method. Another discovery using the proposed method is that, with respect to the causes of reconstruction error, closer distances from the cameras to the main objects in query pictures tend to produce lower errors and the component of error parallel to the road makes a more significant contribution to the total error. The proposed method is not limited to small areas, and could be expanded to cities and larger areas owing to its flexible parameters.
NASA Astrophysics Data System (ADS)
Ars, Sébastien; Broquet, Grégoire; Yver Kwok, Camille; Roustan, Yelva; Wu, Lin; Arzoumanian, Emmanuel; Bousquet, Philippe
2017-12-01
This study presents a new concept for estimating the pollutant emission rates of a site and its main facilities using a series of atmospheric measurements across the pollutant plumes. This concept combines the tracer release method, local-scale atmospheric transport modelling and a statistical atmospheric inversion approach. The conversion between the controlled emission and the measured atmospheric concentrations of the released tracer across the plume places valuable constraints on the atmospheric transport. This is used to optimise the configuration of the transport model parameters and the model uncertainty statistics in the inversion system. The emission rates of all sources are then inverted to optimise the match between the concentrations simulated with the transport model and the pollutants' measured atmospheric concentrations, accounting for the transport model uncertainty. In principle, by using atmospheric transport modelling, this concept does not strongly rely on the good colocation between the tracer and pollutant sources and can be used to monitor multiple sources within a single site, unlike the classical tracer release technique. The statistical inversion framework and the use of the tracer data for the configuration of the transport and inversion modelling systems should ensure that the transport modelling errors are correctly handled in the source estimation. The potential of this new concept is evaluated with a relatively simple practical implementation based on a Gaussian plume model and a series of inversions of controlled methane point sources using acetylene as a tracer gas. The experimental conditions are chosen so that they are suitable for the use of a Gaussian plume model to simulate the atmospheric transport. In these experiments, different configurations of methane and acetylene point source locations are tested to assess the efficiency of the method in comparison to the classic tracer release technique in coping with the distances between the different methane and acetylene sources. The results from these controlled experiments demonstrate that, when the targeted and tracer gases are not well collocated, this new approach provides a better estimate of the emission rates than the tracer release technique. As an example, the relative error between the estimated and actual emission rates is reduced from 32 % with the tracer release technique to 16 % with the combined approach in the case of a tracer located 60 m upwind of a single methane source. Further studies and more complex implementations with more advanced transport models and more advanced optimisations of their configuration will be required to generalise the applicability of the approach and strengthen its robustness.
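A minimal sketch of the Gaussian plume forward model on which the inversions above rely is given below; the dispersion widths are passed in directly here (in practice they grow with downwind distance according to atmospheric stability), and the values in the example are assumptions for illustration rather than the study's configuration.

```python
import numpy as np

def gaussian_plume(y, z, Q, u, sigma_y, sigma_z, H=0.0):
    """Concentration (kg m^-3) at crosswind offset y and height z for a point
    source of strength Q (kg s^-1), mean wind speed u (m s^-1), dispersion
    widths sigma_y, sigma_z (m) and effective release height H (m).
    A ground-reflection term is included."""
    lateral = np.exp(-y ** 2 / (2.0 * sigma_y ** 2))
    vertical = (np.exp(-(z - H) ** 2 / (2.0 * sigma_z ** 2))
                + np.exp(-(z + H) ** 2 / (2.0 * sigma_z ** 2)))
    return Q / (2.0 * np.pi * u * sigma_y * sigma_z) * lateral * vertical

# Example: a 1 g s^-1 methane release sampled on the plume axis at 1.5 m height,
# with dispersion widths typical of roughly 100 m downwind (assumed values).
c = gaussian_plume(y=0.0, z=1.5, Q=1e-3, u=3.0, sigma_y=8.0, sigma_z=4.0, H=1.0)
print(f"simulated concentration: {c:.2e} kg m^-3")
```

In the statistical inversion, source strengths Q of the monitored facilities are then adjusted so that modelled concentrations best match the measured pollutant and tracer transects, with the transport-model uncertainty statistics constrained by the controlled tracer release.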
Developing a system for blind acoustic source localization and separation
NASA Astrophysics Data System (ADS)
Kulkarni, Raghavendra
This dissertation presents innovative methodologies for locating, extracting, and separating multiple incoherent sound sources in three-dimensional (3D) space; and applications of the time reversal (TR) algorithm to pinpoint the hyperactive neural activities inside the brain auditory structure that are correlated to the tinnitus pathology. Specifically, an acoustic modeling based method is developed for locating arbitrary and incoherent sound sources in 3D space in real time by using a minimal number of microphones, and the Point Source Separation (PSS) method is developed for extracting target signals from directly measured mixed signals. Combining these two approaches leads to a novel technology known as Blind Sources Localization and Separation (BSLS) that enables one to locate multiple incoherent sound signals in 3D space and separate original individual sources simultaneously, based on the directly measured mixed signals. These technologies have been validated through numerical simulations and experiments conducted in various non-ideal environments where there are non-negligible, unspecified sound reflections and reverberation as well as interferences from random background noise. Another innovation presented in this dissertation is concerned with applications of the TR algorithm to pinpoint the exact locations of hyperactive neurons in the brain auditory structure that are directly correlated to the tinnitus perception. Benchmark tests conducted on normal rats have confirmed the localization results provided by the TR algorithm. Results demonstrate that the spatial resolution of this source localization can be as high as the micrometer level. This high-precision localization may lead to a paradigm shift in tinnitus diagnosis, which may in turn produce a more cost-effective treatment for tinnitus than any of the existing ones.
Krall, J. R.; Hackstadt, A. J.; Peng, R. D.
2017-01-01
Exposure to particulate matter (PM) air pollution has been associated with a range of adverse health outcomes, including cardiovascular disease (CVD) hospitalizations and other clinical parameters. Determining which sources of PM, such as traffic or industry, are most associated with adverse health outcomes could help guide future recommendations aimed at reducing harmful pollution exposure for susceptible individuals. Information obtained from multisite studies, which is generally more precise than information from a single location, is critical to understanding how PM impacts health and to informing local strategies for reducing individual-level PM exposure. However, few methods exist to perform multisite studies of PM sources, which are not generally directly observed, and adverse health outcomes. We developed SHARE, a hierarchical modeling approach that facilitates reproducible, multisite epidemiologic studies of PM sources. SHARE is a two-stage approach that first summarizes information about PM sources across multiple sites. Then, this information is used to determine how community-level (i.e. county- or city-level) health effects of PM sources should be pooled to estimate regional-level health effects. SHARE is a type of population value decomposition that aims to separate out regional-level features from site-level data. Unlike previous approaches for multisite epidemiologic studies of PM sources, the SHARE approach allows the specific PM sources identified to vary by site. Using data from 2000–2010 for 63 northeastern US counties, we estimated regional-level health effects associated with short-term exposure to major types of PM sources. We found PM from secondary sulfate, traffic, and metals sources was most associated with CVD hospitalizations. PMID:28098412
NASA Astrophysics Data System (ADS)
Sharifian, Mohammad Kazem; Kesserwani, Georges; Hassanzadeh, Yousef
2018-05-01
This work extends a robust second-order Runge-Kutta Discontinuous Galerkin (RKDG2) method to solve the fully nonlinear and weakly dispersive flows, within a scope to simultaneously address accuracy, conservativeness, cost-efficiency and practical needs. The mathematical model governing such flows is based on a variant form of the Green-Naghdi (GN) equations decomposed as a hyperbolic shallow water system with an elliptic source term. Practical features of relevance (i.e. conservative modeling over irregular terrain with wetting and drying and local slope limiting) have been restored from an RKDG2 solver to the Nonlinear Shallow Water (NSW) equations, alongside new considerations to integrate elliptic source terms (i.e. via a fourth-order local discretization of the topography) and to enable local capturing of breaking waves (i.e. via adding a detector for switching off the dispersive terms). Numerical results are presented, demonstrating the overall capability of the proposed approach in achieving realistic prediction of nearshore wave processes involving both nonlinearity and dispersion effects within a single model.
A System Dynamics Approach for Information Technology Implementation and Sustainment
2003-03-01
mass media in nature, or (2) originating from either local or cosmopolite sources (Rogers, 1995). Mass media channels are a means of transmitting...interpersonal channels are more important at the persuasion stage in the innovation-decision process (Rogers, 1995). Cosmopolite communication channels...are those from outside the social system of study" (Rogers, 1995:196). Interpersonal channels can be either local or cosmopolite, whereas mass media
Shen, Hui-min; Lee, Kok-Meng; Hu, Liang; Foong, Shaohui; Fu, Xin
2016-01-01
Localization of an active neural source (ANS) from measurements on the head surface is vital in magnetoencephalography. As neuron-generated magnetic fields are extremely weak, significant uncertainties caused by stochastic measurement interference complicate its localization. This paper presents a novel computational method based on the magnetic field reconstructed from sparse noisy measurements for enhanced ANS localization by suppressing effects of unrelated noise. In this approach, the magnetic flux density (MFD) in the nearby current-free space outside the head is reconstructed from measurements by formulating the infinite series solution of Laplace's equation, where boundary condition (BC) integrals over the entire set of measurements provide a "smooth" reconstructed MFD with reduced unrelated noise. Using a gradient-based method, reconstructed MFDs with good fidelity are selected for enhanced ANS localization. The reconstruction model, the spatial interpolation of the BC, the parametric equivalent current dipole-based inverse estimation algorithm using the reconstruction, and the gradient-based selection are detailed and validated. The influences of various source depths and measurement signal-to-noise ratio levels on the estimated ANS location are analyzed numerically and compared with a traditional method (where measurements are directly used), and it is demonstrated that gradient-selected high-fidelity reconstructed data can effectively improve the accuracy of ANS localization.
Aarabi, A; Grebe, R; Berquin, P; Bourel Ponchel, E; Jalin, C; Fohlen, M; Bulteau, C; Delalande, O; Gondry, C; Héberlé, C; Moullart, V; Wallois, F
2012-06-01
This case study aims to demonstrate that spatiotemporal spike discrimination and source analysis are effective for monitoring the development of sources of epileptic activity in time and space. Therefore, they can provide clinically useful information allowing a better understanding of the pathophysiology of individual seizures with time- and space-resolved characteristics of successive epileptic states, including interictal, preictal, postictal, and ictal states. High spatial resolution scalp EEGs (HR-EEG) were acquired from a 2-year-old girl with refractory central epilepsy and single-focus seizures as confirmed by intracerebral EEG recordings and ictal single-photon emission computed tomography (SPECT). Evaluation of HR-EEG consists of the following three global steps: (1) creation of the initial head model, (2) automatic spike and seizure detection, and finally (3) source localization. During the source localization phase, epileptic states are determined to allow state-based spike detection and localization of underlying sources for each spike. In a final cluster analysis, localization results are integrated to determine the possible sources of epileptic activity. The results were compared with the cerebral locations identified by intracerebral EEG recordings and SPECT. The results obtained with this approach were concordant with those of MRI, SPECT and distribution of intracerebral potentials. Dipole cluster centres found for spikes in interictal, preictal, ictal and postictal states were situated an average of 6.3 mm from the intracerebral contacts with the highest voltage. Both amplitude and shape of spikes change between states. Dispersion of the dipoles was higher in the preictal state than in the postictal state. Two clusters of spikes were identified. The centres of these clusters changed position periodically during the various epileptic states. High-resolution surface EEG evaluated by an advanced algorithmic approach can be used to investigate the spatiotemporal characteristics of sources located in the epileptic focus. The results were validated by standard methods, ensuring good spatial resolution by MRI and SPECT and optimal temporal resolution by intracerebral EEG. Surface EEG can be used to identify different spike clusters and sources of the successive epileptic states. The method that was used in this study will provide physicians with a better understanding of the pathophysiological characteristics of epileptic activities. In particular, this method may be useful for more effective positioning of implantable intracerebral electrodes. Copyright © 2011 Elsevier Masson SAS. All rights reserved.
Ewald, Arne; Avarvand, Forooz Shahbazi; Nolte, Guido
2014-11-01
We introduce a novel method to estimate bivariate synchronization, i.e. interacting brain sources at a specific frequency or band, from MEG or EEG data robust to artifacts of volume conduction. The data driven calculation is solely based on the imaginary part of the cross-spectrum as opposed to the imaginary part of coherency. In principle, the method quantifies how strong a synchronization between a distinct pair of brain sources is present in the data. As an input of the method all pairs of pre-defined locations inside the brain can be used which is computationally exhaustive. In contrast to that, reference sources can be used that have been identified by any source reconstruction technique in a prior analysis step. We introduce different variants of the method and evaluate the performance in simulations. As a particular advantage of the proposed methodology, we demonstrate that the novel approach is capable of investigating differences in brain source interactions between experimental conditions or with respect to a certain baseline. For measured data, we first show the application on resting state MEG data where we find locally synchronized sources in the motor-cortex based on the sensorimotor idle rhythms. Finally, we show an example on EEG motor imagery data where we contrast hand and foot movements. Here, we also find local interactions in the expected brain areas. Copyright © 2014. Published by Elsevier Inc.
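A minimal sketch of the quantity at the core of this method, the imaginary part of the cross-spectrum between two channels, is given below; the simulated signals, sampling rate, and Welch parameters are assumptions for illustration, and the source-pair scanning and normalization of the proposed method are not reproduced.

```python
import numpy as np
from scipy.signal import csd

rng = np.random.default_rng(3)
fs, n = 250.0, 60 * 250                    # 60 s of data at 250 Hz (assumed)
t = np.arange(n) / fs

# Two simulated channels sharing a 10 Hz rhythm with a 90-degree phase lag,
# plus independent noise: a time-lagged (genuinely interacting) pair.
x = np.sin(2 * np.pi * 10 * t) + rng.normal(scale=1.0, size=n)
y = np.sin(2 * np.pi * 10 * t - np.pi / 2) + rng.normal(scale=1.0, size=n)

f, Sxy = csd(x, y, fs=fs, nperseg=512)     # Welch cross-spectral density
imag_cs = np.imag(Sxy)                     # imaginary part of the cross-spectrum
print("frequency of max |Im(cross-spectrum)|:", f[np.argmax(np.abs(imag_cs))], "Hz")
```

Instantaneous (zero-lag) mixing, as produced by volume conduction, contributes only to the real part, which is why a time-lagged interaction such as the simulated 90° phase difference survives this measure.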
An integrated approach using high time-resolved tools to study the origin of aerosols.
Di Gilio, A; de Gennaro, G; Dambruoso, P; Ventrella, G
2015-10-15
Long-range transport of natural and/or anthropogenic particles can contribute significantly to PM10 and PM2.5 concentrations, and some European cities often fail to comply with PM daily limit values due to the additional impact of particles from remote sources. For this reason, reliable methodologies to identify long-range transport (LRT) events would be useful to better understand air pollution phenomena and support proper decision-making. This study explores the potential of an integrated and high time-resolved monitoring approach for the identification and characterization of local, regional and long-range transport events of high PM. In particular, a further goal of this work was the identification of time-limited events. For this purpose, a high time-resolved monitoring campaign was carried out at an urban background site in Bari (southern Italy) for about 20 days (1st-20th October 2011). The integration of the collected data, such as the hourly measurements of inorganic ions in PM2.5 and their gas precursors and of the natural radioactivity, in addition to the analyses of aerosol maps and hourly back trajectories (BT), provided useful information for the identification and chemical characterization of local sources and trans-boundary intrusions. Non-sea salt (nss) sulfate levels were found to increase when air masses came from northeastern Europe and higher dispersive conditions of the atmosphere were detected. Instead, higher nitrate and lower nss-sulfate concentrations were registered in correspondence with air mass stagnation and attributed to the local traffic source. In some cases, combinations of local and trans-boundary sources were observed. Finally, statistical investigations such as the principal component analysis (PCA) applied to hourly ion concentrations, together with the cluster analyses, the Potential Source Contribution Function (PSCF) and the Concentration Weighted Trajectory (CWT) models computed on hourly back-trajectories, made it possible to complete the cognitive framework and confirm the influence of aerosol transported from heavily polluted areas on the receptor site. Copyright © 2015 Elsevier B.V. All rights reserved.
Grova, Christophe; Aiguabella, Maria; Zelmann, Rina; Lina, Jean-Marc; Hall, Jeffery A; Kobayashi, Eliane
2016-05-01
Detection of epileptic spikes in MagnetoEncephaloGraphy (MEG) requires synchronized neuronal activity over a minimum of 4 cm². We previously validated the Maximum Entropy on the Mean (MEM) as a source localization method able to recover the spatial extent of the epileptic spike generators. The purpose of this study was to evaluate quantitatively, using intracranial EEG (iEEG), the spatial extent recovered from MEG sources by estimating iEEG potentials generated by these MEG sources. We evaluated five patients with focal epilepsy who had a pre-operative MEG acquisition and iEEG with MRI-compatible electrodes. Individual MEG epileptic spikes were localized along the cortical surface segmented from a pre-operative MRI, which was co-registered with the MRI obtained with iEEG electrodes in place for identification of iEEG contacts. An iEEG forward model estimated the influence of every dipolar source of the cortical surface on each iEEG contact. This iEEG forward model was applied to MEG sources to estimate iEEG potentials that would have been generated by these sources. MEG-estimated iEEG potentials were compared with measured iEEG potentials using four source localization methods: two variants of MEM and two standard methods equivalent to minimum norm and LORETA estimates. Our results demonstrated an excellent MEG/iEEG correspondence in the presumed focus for four out of five patients. In one patient, the deep generator identified in iEEG could not be localized in MEG. MEG-estimated iEEG potentials provide a promising way to evaluate which MEG sources could be retrieved and validated with iEEG data, providing accurate results especially when applied to MEM localizations. Hum Brain Mapp 37:1661-1683, 2016. © 2016 Wiley Periodicals, Inc.
[A landscape ecological approach for urban non-point source pollution control].
Guo, Qinghai; Ma, Keming; Zhao, Jingzhu; Yang, Liu; Yin, Chengqing
2005-05-01
Urban non-point source pollution is a new problem that has appeared with the rapid development of urbanization. The particularity of urban land use and the increase of impervious surface area make urban non-point source pollution different from agricultural non-point source pollution, and more difficult to control. Best Management Practices (BMPs) are the effective practices commonly applied in controlling urban non-point source pollution, mainly adopting local repair practices to control the pollutants in surface runoff. Because of the close relationship between urban land use patterns and non-point source pollution, it would be rational to combine landscape ecological planning with local BMPs to control urban non-point source pollution. This requires, first, analyzing and evaluating the influence of landscape structure on water bodies, pollution sources and pollutant removal processes, in order to define the relationships between landscape spatial pattern and non-point source pollution and to identify the key polluted areas; and second, adjusting the inherent landscape structure and/or adding new landscape elements to form a new landscape pattern, and combining landscape planning and management by applying BMPs within the planning to improve urban landscape heterogeneity and to control urban non-point source pollution.
Image authentication using distributed source coding.
Lin, Yao-Chung; Varodayan, David; Girod, Bernd
2012-01-01
We present a novel approach using distributed source coding for image authentication. The key idea is to provide a Slepian-Wolf encoded quantized image projection as authentication data. This version can be correctly decoded with the help of an authentic image as side information. Distributed source coding provides the desired robustness against legitimate variations while detecting illegitimate modification. The decoder incorporating expectation maximization algorithms can authenticate images which have undergone contrast, brightness, and affine warping adjustments. Our authentication system also offers tampering localization by using the sum-product algorithm.
Francová, Anna; Chrastný, Vladislav; Šillerová, Hana; Vítková, Martina; Kocourková, Jana; Komárek, Michael
2017-01-01
Samples of lichens, snow and particulate matter (PM10, 24 h) are used for the source identification of air pollution in the heavily industrialized region of Ostrava, Upper Silesia, Czech Republic. An integrated approach that uses different environmental samples for metal concentration and Pb isotope analyses was applied. The broad range of isotope ratios in the samples indicates a combination of different pollution sources, the strongest among them being the metallurgical industry, bituminous coal combustion and traffic. Snow samples proved to be the most relevant indicator for tracing metal(loid)s and recent local contamination in the atmosphere. Lichens can be successfully used as tracers of the long-term activity of local and remote sources of contamination. The combination of PM10 with snow can provide very useful information for the evaluation of current pollution sources. Copyright © 2016 Elsevier Ltd. All rights reserved.
EEG and MEG source localization using recursively applied (RAP) MUSIC
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mosher, J.C.; Leahy, R.M.
1996-12-31
The multiple signal classification (MUSIC) algorithm locates multiple asynchronous dipolar sources from electroencephalography (EEG) and magnetoencephalography (MEG) data. A signal subspace is estimated from the data, then the algorithm scans a single dipole model through a three-dimensional head volume and computes projections onto this subspace. To locate the sources, the user must search the head volume for local peaks in the projection metric. Here we describe a novel extension of this approach which we refer to as RAP (Recursively APplied) MUSIC. This new procedure automatically extracts the locations of the sources through a recursive use of subspace projections, which uses the metric of principal correlations as a multidimensional form of correlation analysis between the model subspace and the data subspace. The dipolar orientations, a form of 'diverse polarization', are easily extracted using the associated principal vectors.
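A hedged sketch of the principal-correlation metric mentioned above, i.e. the cosines of the principal angles between a model (gain) subspace and the estimated signal subspace, is given below; the random matrices stand in for a real forward model and measured data, and the RAP-MUSIC recursion itself is not reproduced.

```python
import numpy as np

def subspace_correlation(A, B):
    """Principal correlations (cosines of principal angles) between the column
    spaces of A and B, computed via SVD of orthonormalized bases."""
    Qa, _ = np.linalg.qr(A)
    Qb, _ = np.linalg.qr(B)
    return np.linalg.svd(Qa.T @ Qb, compute_uv=False)   # descending order

# Illustration with random placeholders for the two subspaces.
rng = np.random.default_rng(4)
signal_subspace = rng.normal(size=(64, 3))   # e.g. 64 sensors, rank-3 signal subspace
dipole_gain = rng.normal(size=(64, 3))       # forward field of one candidate dipole
print("max principal correlation:", subspace_correlation(dipole_gain, signal_subspace)[0])
```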
NASA Astrophysics Data System (ADS)
Itter, M.; Finley, A. O.; Hooten, M.; Higuera, P. E.; Marlon, J. R.; McLachlan, J. S.; Kelly, R.
2016-12-01
Sediment charcoal records are used in paleoecological analyses to identify individual local fire events and to estimate fire frequency and regional biomass burned at centennial to millennial time scales. Methods to identify local fire events based on sediment charcoal records have been well developed over the past 30 years; however, an integrated statistical framework for fire identification is still lacking. We build upon existing paleoecological methods to develop a hierarchical Bayesian point process model for local fire identification and estimation of fire return intervals. The model is unique in that it combines sediment charcoal records from multiple lakes across a region in a spatially-explicit fashion leading to estimation of a joint, regional fire return interval in addition to lake-specific local fire frequencies. Further, the model estimates a joint regional charcoal deposition rate free from the effects of local fires that can be used as a measure of regional biomass burned over time. Finally, the hierarchical Bayesian approach allows for tractable error propagation such that estimates of fire return intervals reflect the full range of uncertainty in sediment charcoal records. Specific sources of uncertainty addressed include sediment age models, the separation of local versus regional charcoal sources, and generation of a composite charcoal record. The model is applied to sediment charcoal records from a dense network of lakes in the Yukon Flats region of Alaska. The multivariate joint modeling approach results in improved estimates of regional charcoal deposition with reduced uncertainty in the identification of individual fire events and local fire return intervals compared to individual lake approaches. Modeled individual-lake fire return intervals range from 100 to 500 years with a regional interval of roughly 200 years. Regional charcoal deposition to the network of lakes is correlated up to 50 kilometers. Finally, the joint regional charcoal deposition rate exhibits changes over time coincident with major climatic and vegetation shifts over the past 10,000 years. Ongoing work will use the regional charcoal deposition rate to estimate changes in biomass burned as a function of climate variability and regional vegetation pattern.
A Direct Position-Determination Approach for Multiple Sources Based on Neural Network Computation.
Chen, Xin; Wang, Ding; Yin, Jiexin; Wu, Ying
2018-06-13
The most widely used localization technology is the two-step method that localizes transmitters by measuring one or more specified positioning parameters. Direct position determination (DPD) is a promising technique that directly localizes transmitters from sensor outputs and can offer superior localization performance. However, existing DPD algorithms such as maximum likelihood (ML)-based and multiple signal classification (MUSIC)-based estimations are computationally expensive, making it difficult to satisfy real-time demands. To solve this problem, we propose the use of a modular neural network for multiple-source DPD. In this method, the area of interest is divided into multiple sub-areas. Multilayer perceptron (MLP) neural networks are employed to detect the presence of a source in a sub-area and filter sources in other sub-areas, and radial basis function (RBF) neural networks are utilized for position estimation. Simulation results show that a number of appropriately trained neural networks can be successfully used for DPD. The performance of the proposed MLP-MLP-RBF method is comparable to the performance of the conventional MUSIC-based DPD algorithm for various signal-to-noise ratios and signal power ratios. Furthermore, the MLP-MLP-RBF network is less computationally intensive than the classical DPD algorithm and is therefore an attractive choice for real-time applications.
NASA Astrophysics Data System (ADS)
Nugroho, P.
2018-02-01
The existence of creative industries is inseparable from the underlying social construct that provides sources for creativity and innovation. The working of social capital in a society facilitates information exchange, knowledge transfer and technology acquisition within the industry through social networks. As a result, a socio-spatial divide exists in directing the growth of the creative industries. This paper aims to examine how such a socio-spatial divide contributes to local creative industry development in the Semarang and Kudus batik clusters. An explanatory sequential mixed-methods approach, a quantitative phase followed by a qualitative phase, is chosen to better understand the interplay between tangible and intangible variables in the local batik clusters. Surveys of secondary data taken from government statistics and reports, previous studies, and media coverage are completed in the former phase to identify the clustering pattern of the local batik industry and the local embeddedness factors that have shaped the existing business environment. In-depth interviews, content analysis, and field observations are used in the latter phase to explore reciprocal relationships between the elements of social capital and local batik cluster development. The results demonstrate that particular social ties have determined the forms of spatial proximity manifested in forward and backward business linkages. Trust, shared norms, and inherited traditions are the key social capital attributes that lead to such a socio-spatial divide. Therefore, the intermediating roles of bridging actors are necessary to encourage cooperation among the participating stakeholders for better cluster development.
A Space-Time-Frequency Dictionary for Sparse Cortical Source Localization.
Korats, Gundars; Le Cam, Steven; Ranta, Radu; Louis-Dorr, Valerie
2016-09-01
Cortical source imaging aims at identifying activated cortical areas on the surface of the cortex from raw electroencephalogram (EEG) data. This problem is ill posed, the number of channels being very low compared to the number of possible source positions. In some realistic physiological situations, the active areas are sparse in space and of short duration, and the amount of spatio-temporal data to carry the inversion is then limited. In this study, we propose an original data-driven space-time-frequency (STF) dictionary which takes into account simultaneously both spatial and time-frequency sparseness while preserving smoothness in time-frequency (i.e., nonstationary smooth time courses in sparse locations). Based on these assumptions, we take advantage of the matching pursuit (MP) framework for selecting the most relevant atoms in this highly redundant dictionary. We apply two recent MP algorithms, single best replacement (SBR) and source deflated matching pursuit, and we compare the results using a spatial dictionary and the proposed STF dictionary to demonstrate the improvements of our multidimensional approach. We also provide comparisons using well-established inversion methods, FOCUSS and RAP-MUSIC, analyzing performances under different degrees of nonstationarity and signal-to-noise ratio. Our STF dictionary combined with the SBR approach provides robust performances on realistic simulations. From a computational point of view, the algorithm is embedded in the wavelet domain, ensuring high efficiency in terms of computation time. The proposed approach ensures fast and accurate sparse cortical localizations on highly nonstationary and noisy data.
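For readers unfamiliar with the greedy atom-selection step, the sketch below shows a plain matching pursuit loop over a generic unit-norm dictionary; it is illustrative only and does not implement the SBR or source-deflated variants, nor the STF dictionary construction.

```python
import numpy as np

# Generic matching pursuit: greedily pick the dictionary atom most correlated
# with the residual, subtract its contribution, and repeat. D is a hypothetical
# (n_samples x n_atoms) dictionary with unit-norm columns, y the measured signal.

def matching_pursuit(y, D, n_iter=10):
    residual = y.astype(float).copy()
    coeffs = np.zeros(D.shape[1])
    for _ in range(n_iter):
        corr = D.T @ residual            # correlation of each atom with the residual
        k = np.argmax(np.abs(corr))      # most relevant atom
        coeffs[k] += corr[k]             # update its coefficient
        residual -= corr[k] * D[:, k]    # remove its contribution
    return coeffs, residual

rng = np.random.default_rng(2)
D = rng.normal(size=(128, 500))
D /= np.linalg.norm(D, axis=0)           # unit-norm atoms
y = 2.0 * D[:, 7] - 1.5 * D[:, 42] + 0.05 * rng.normal(size=128)
coeffs, res = matching_pursuit(y, D, n_iter=5)
print(np.nonzero(coeffs)[0], np.round(coeffs[np.nonzero(coeffs)], 2))
```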
Tadić, Jovan M; Michalak, Anna M; Iraci, Laura; Ilić, Velibor; Biraud, Sébastien C; Feldman, Daniel R; Bui, Thaopaul; Johnson, Matthew S; Loewenstein, Max; Jeong, Seongeun; Fischer, Marc L; Yates, Emma L; Ryoo, Ju-Mee
2017-09-05
In this study, we explore observational, experimental, methodological, and practical aspects of the flux quantification of greenhouse gases from local point sources by using in situ airborne observations, and suggest a series of conceptual changes to improve flux estimates. We address the major sources of uncertainty reported in previous studies by modifying (1) the shape of the typical flight path, (2) the modeling of covariance and anisotropy, and (3) the type of interpolation tools used. We show that a cylindrical flight profile offers considerable advantages compared to traditional profiles collected as curtains, although this new approach brings with it the need for a more comprehensive subsequent analysis. The proposed flight pattern design does not require prior knowledge of wind direction and allows for the derivation of an ad hoc empirical correction factor to partially alleviate errors resulting from interpolation and measurement inaccuracies. The modified approach is applied to a use-case for quantifying CH4 emission from an oil field south of San Ardo, CA, and compared to a bottom-up CH4 emission estimate.
The Green’s functions for peridynamic non-local diffusion
Wang, L. J.; Xu, J. F.
2016-01-01
In this work, we develop the Green’s function method for the solution of the peridynamic non-local diffusion model in which the spatial gradient of the generalized potential in the classical theory is replaced by an integral of a generalized response function in a horizon. We first show that the general solutions of the peridynamic non-local diffusion model can be expressed as functionals of the corresponding Green’s functions for point sources, along with volume constraints for non-local diffusion. Then, we obtain the Green’s functions by the Fourier transform method for unsteady and steady diffusions in infinite domains. We also demonstrate that the peridynamic non-local solutions converge to the classical differential solutions when the non-local length approaches zero. Finally, the peridynamic analytical solutions are applied to an infinite plate heated by a Gaussian source, and the predicted variations of temperature are compared with the classical local solutions. The peridynamic non-local diffusion model predicts a lower rate of variation of the field quantities than that of the classical theory, which is consistent with experimental observations. The developed method is applicable to general diffusion-type problems. PMID:27713658
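As a schematic reminder of the structure involved (generic notation, not necessarily the authors' symbols), a non-local diffusion law and its Green's function representation for a point source can be written as:

```latex
% Schematic non-local diffusion law and Green's function representation
% (generic symbols: K is the kernel over the horizon H_x, s a source term).
\frac{\partial u(\mathbf{x},t)}{\partial t}
  = \int_{H_{\mathbf{x}}} K(\mathbf{x}'-\mathbf{x})
    \left[ u(\mathbf{x}',t) - u(\mathbf{x},t) \right] dV_{\mathbf{x}'}
    + s(\mathbf{x},t),
\qquad
u(\mathbf{x},t)
  = \int G(\mathbf{x}-\mathbf{x}',t)\, u_0(\mathbf{x}')\, d\mathbf{x}'
  + \int_0^{t}\!\!\int G(\mathbf{x}-\mathbf{x}',t-t')\, s(\mathbf{x}',t')\, d\mathbf{x}'\, dt'.
```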
A physiologically motivated sparse, compact, and smooth (SCS) approach to EEG source localization.
Cao, Cheng; Akalin Acar, Zeynep; Kreutz-Delgado, Kenneth; Makeig, Scott
2012-01-01
Here, we introduce a novel approach to the EEG inverse problem based on the assumption that the principal cortical sources of multi-channel EEG recordings are spatially sparse, compact, and smooth (SCS). To enforce these characteristics of solutions to the EEG inverse problem, we propose a correlation-variance model which factors a cortical source space covariance matrix into the product of a pre-given correlation coefficient matrix and the square root of the diagonal variance matrix learned from the data under a Bayesian learning framework. We tested the SCS method using simulated EEG data at various SNRs and applied it to a real ECoG data set. We compare the results of SCS to those of an established sparse Bayesian learning (SBL) algorithm.
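One plausible reading of that factorization (symbols here are illustrative, not taken from the paper) is:

```latex
% Illustrative form of the correlation-variance factorization: a fixed
% correlation matrix C carries the compactness/smoothness prior, while the
% diagonal variances gamma_i are learned from the data.
\Sigma_s = \Gamma^{1/2}\, C\, \Gamma^{1/2},
\qquad
\Gamma = \operatorname{diag}(\gamma_1,\dots,\gamma_N), \quad \gamma_i \ge 0.
```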
NASA Astrophysics Data System (ADS)
Wagner, Jenny; Liesenborgs, Jori; Tessore, Nicolas
2018-04-01
Context. Local gravitational lensing properties, such as convergence and shear, determined at the positions of multiply imaged background objects, yield valuable information on the smaller-scale lensing matter distribution in the central part of galaxy clusters. Highly distorted multiple images with resolved brightness features like the ones observed in CL0024 allow us to study these local lensing properties and to tighten the constraints on the properties of dark matter on sub-cluster scale. Aim. We investigate to what precision local magnification ratios, J, ratios of convergences, f, and reduced shears, g = (g1, g2), can be determined independently of a lens model for the five resolved multiple images of the source at zs = 1.675 in CL0024. We also determine if a comparison to the respective results obtained by the parametric modelling tool Lenstool and by the non-parametric modelling tool Grale can detect biases in the models. For these lens models, we analyse the influence of the number and location of the constraints from multiple images on the lens properties at the positions of the five multiple images of the source at zs = 1.675. Methods: Our model-independent approach uses a linear mapping between the five resolved multiple images to determine the magnification ratios, ratios of convergences, and reduced shears at their positions. With constraints from up to six multiple image systems, we generate Lenstool and Grale models using the same image positions, cosmological parameters, and number of generated convergence and shear maps to determine the local values of J, f, and g at the same positions across all methods. Results: All approaches show strong agreement on the local values of J, f, and g. We find that Lenstool obtains the tightest confidence bounds even for convergences around one using constraints from six multiple-image systems, while the best Grale model is generated only using constraints from all multiple images with resolved brightness features and adding limited small-scale mass corrections. Yet, confidence bounds as large as the values themselves can occur for convergences close to one in all approaches. Conclusions: Our results agree with previous findings, support the light-traces-mass assumption, and the merger hypothesis for CL0024. Comparing the different approaches can detect model biases. The model-independent approach determines the local lens properties to a comparable precision in less than one second.
Stoeckel, D.M.; Stelzer, E.A.; Stogner, R.W.; Mau, D.P.
2011-01-01
Protocols for microbial source tracking of fecal contamination generally are able to identify when a source of contamination is present, but thus far have been unable to evaluate what portion of fecal-indicator bacteria (FIB) came from various sources. A mathematical approach to estimate relative amounts of FIB, such as Escherichia coli, from various sources based on the concentration and distribution of microbial source tracking markers in feces was developed. The approach was tested using dilute fecal suspensions, then applied as part of an analytical suite to a contaminated headwater stream in the Rocky Mountains (Upper Fountain Creek, Colorado). In one single-source fecal suspension, a source that was not present could not be excluded because of incomplete marker specificity; however, human and ruminant sources were detected whenever they were present. In the mixed-feces suspension (pet and human), the minority contributor (human) was detected at a concentration low enough to preclude human contamination as the dominant source of E. coli to the sample. Without the semi-quantitative approach described, simple detects of human-associated marker in stream samples would have provided inaccurate evidence that human contamination was a major source of E. coli to the stream. In samples from Upper Fountain Creek the pattern of E. coli, general and host-associated microbial source tracking markers, nutrients, and wastewater-associated chemical detections, augmented with local observations and land-use patterns, indicated that, contrary to expectations, birds rather than humans or ruminants were the predominant source of fecal contamination to Upper Fountain Creek. This new approach to E. coli allocation, validated by a controlled study and tested by application in a relatively simple setting, represents a widely applicable step forward in the field of microbial source tracking of fecal contamination. © 2011 Elsevier Ltd.
Treatment of Atrial Fibrillation By The Ablation Of Localized Sources
Narayan, Sanjiv M.; Krummen, David E.; Shivkumar, Kalyanam; Clopton, Paul; Rappel, Wouter-Jan; Miller, John M.
2012-01-01
Objectives We hypothesized that human atrial fibrillation (AF) may be sustained by localized sources (electrical rotors and focal impulses), whose elimination (Focal Impulse and Rotor Modulation, FIRM) may improve outcome from AF ablation. Background Catheter ablation for AF is a promising therapy, whose success is limited in part by uncertainty in the mechanisms that sustain AF. We developed a computational approach to map whether AF is sustained by several meandering waves (the prevailing hypothesis) or localized sources, then prospectively tested whether targeting patient-specific mechanisms revealed by mapping would improve AF ablation outcome. Methods We recruited 92 individuals during 107 consecutive ablation procedures for paroxysmal or persistent (72%) AF. Cases were prospectively treated, in a 2-arm 1:2 design, by ablation at sources (FIRM-Guided) followed by conventional ablation (n=36), or conventional ablation alone (n=71; FIRM-Blinded). Results Localized rotors or focal impulses were detected in 98 (97%) of 101 cases with sustained AF, each exhibiting 2.1±1.0 sources. The acute endpoint (AF termination or consistent slowing) was achieved in 86% of FIRM-guided versus 20% of FIRM-Blinded cases (p<0.001). FIRM ablation alone at the primary source terminated AF in 2.5 minutes (median; IQR 1.0–3.1). Total ablation time did not differ between groups (57.8±22.8 versus 52.1±17.8 minutes, p=0.16). During 273 days (median; IQR 132–681 days) after a single procedure, FIRM-Guided cases had higher freedom from AF (82.4% versus 44.9%; p<0.001) after a single procedure than FIRM-blinded cases with rigorous, often implanted, ECG monitoring. Adverse events did not differ between groups. CONCLUSIONS Localized electrical rotors and focal impulse sources are prevalent sustaining-mechanisms for human AF. FIRM ablation at patient-specific sources acutely terminated or slowed AF, and improved outcome. These results offer a novel mechanistic framework and treatment paradigm for AF. (ClinicalTrials.gov number, NCT01008722) PMID:22818076
Innovative Funding for Intercity Modes: A Casebook of State, Local, and Private Approaches
DOT National Transportation Integrated Search
1987-07-01
The document reviews non-Federal funding sources for intercity transportation services. The report examines the structure of intercity passenger and freight transportation services, focusing on bus, rail, and short-haul air. It explores public-privat...
NASA Astrophysics Data System (ADS)
Sakala, E.; Fourie, F.; Gomo, M.; Coetzee, H.
2018-01-01
In the last 20 years, the popular mineral systems approach has been used successfully for the exploration of various mineral commodities at various scales owing to its scientific soundness, cost effectiveness and simplicity in mapping the critical processes required for the formation of deposits. In the present study, this approach was modified for the assessment of groundwater vulnerability. In terms of the modified approach, water drives the pollution migration processes, with various analogies having been derived from the mineral systems approach. The modified approach is illustrated here by the discussion of a case study of acid mine drainage (AMD) pollution in the Witbank, Ermelo and Highveld coalfields of the Mpumalanga and KwaZulu-Natal Provinces in South Africa. Many AMD cases have been reported in these provinces in recent years and are a cause of concern for local municipalities, mining and environmental agencies. In the Witbank, Ermelo and Highveld coalfields, several areas have been mined out while mining has not yet started in others, hence the need to identify groundwater regions prone to AMD pollution in order to avoid further impacts on the groundwater resources. A knowledge-based fuzzy expert system was built using vulnerability factors (energy sources, ligand sources, pollutant sources, transportation pathways and traps) to generate a groundwater vulnerability model of the coalfields. Highly vulnerable areas were identified in the Witbank coalfield and the eastern part of the Ermelo coalfield, which are characterised by the presence of AMD sources and good subsurface transport coupled with poor AMD pollution trapping properties. The results from the analysis indicate significant correlations between model values and both groundwater sulphate concentrations and pH. This shows that the proposed approach can indeed be used as an alternative to traditional methods of groundwater vulnerability assessment. The methodology only considers the AMD pollution attenuation and migration at a regional scale and does not account for local-scale sources of pollution and attenuation. Further research to refine the approach may include the incorporation of groundwater flow direction, rock-pollution reaction time, and temporal datasets for the future prediction of groundwater vulnerability. The approach may be applied to other coalfields to assess its robustness to changing hydrogeological conditions.
Treatment of internal sources in the finite-volume ELLAM
Healy, R.W.
2000-01-01
The finite-volume Eulerian-Lagrangian localized adjoint method (FVELLAM) is a mass-conservative approach for solving the advection-dispersion equation. The method has been shown to be accurate and efficient for solving advection-dominated problems of solute transport in ground water in 1, 2, and 3 dimensions. Previous implementations of FVELLAM have had difficulty in representing internal sources because the standard assumption of lowest order Raviart-Thomas velocity field does not hold for source cells. Therefore, tracking of particles within source cells is problematic. A new approach has been developed to account for internal sources in FVELLAM. It is assumed that the source is uniformly distributed across a grid cell and that instantaneous mixing takes place within the cell, such that concentration is uniform across the cell at any time. Sub-time steps are used in the time-integration scheme to track mass outflow from the edges of the source cell. This avoids the need for tracking within the source cell. We describe the new method and compare results for a test problem with a wide range of cell Peclet numbers.
A new phase-correlation-based iris matching for degraded images.
Krichen, Emine; Garcia-Salicetti, Sonia; Dorizzi, Bernadette
2009-08-01
In this paper, we present a new phase-correlation-based iris matching approach in order to deal with degradations in iris images due to unconstrained acquisition procedures. Our matching system is a fusion of global and local Gabor phase-correlation schemes. The main originality of our local approach is that we consider not only the correlation peak amplitudes but also their locations in different regions of the images. Results on several degraded databases, namely, the CASIA-BIOSECURE and Iris Challenge Evaluation 2005 databases, show the improvement of our method compared to two available reference systems, Masek and Open Source for Iris (OSIRIS), in verification mode.
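The core phase-correlation operation used in such schemes can be sketched as follows; this is a generic implementation of the standard technique, not the paper's Gabor-filtered global/local fusion, and the image patches are synthetic.

```python
import numpy as np

# Minimal phase correlation: the normalized cross-power spectrum of two image
# patches yields a correlation surface whose peak location gives the
# translation between them and whose amplitude indicates match quality.

def phase_correlation(a, b):
    A, B = np.fft.fft2(a), np.fft.fft2(b)
    cross = A * np.conj(B)
    cross /= np.abs(cross) + 1e-12          # keep phase information only
    surface = np.real(np.fft.ifft2(cross))  # correlation surface
    peak = np.unravel_index(np.argmax(surface), surface.shape)
    return surface.max(), peak              # peak amplitude and location

rng = np.random.default_rng(3)
patch = rng.normal(size=(64, 64))
shifted = np.roll(patch, shift=(5, -3), axis=(0, 1))  # known displacement
amp, loc = phase_correlation(shifted, patch)
print(amp, loc)   # peak near (5, 61), i.e. a (5, -3) circular shift
```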
NASA Astrophysics Data System (ADS)
Moon, N.; Kim, S.; Seo, J.; Lee, Y. J.
2017-12-01
Recently, the Korean government is focusing on solving air pollution problems such as fine particulate matter and ozone. Korea has a high population density and concentrated industrial complexes in its limited land area. For better air quality management, it is important to understand the source-contribution relationship for each target pollutant. Air quality analysis that represents the mutual contributions among local regions makes it possible to understand the state of a region's air quality in association with its neighboring regions. Against this background, the source apportionment of PM10, PM2.5, O3, NO2, and SO2 using WRF and CMAQ/BFM was analyzed over Korea, with BFM applied to mobile, area, and point sources in each local government. The contribution from neighboring regions showed a different pattern for each pollutant. For primary pollutants such as NO2 and SO2, the local source contribution is dominant; for secondary pollutants, especially O3, the contribution from neighboring regions is higher than that from the source region itself. The local source contribution to PM10 was 20-25%, and the contribution to O3 varies considerably with meteorological conditions from year to year. From this study, we sought to estimate the conversion rates between sources (NOx, VOC, SO2, NH3, PMC, PM2.5, CO) and concentrations (PM10, PM2.5, O3, NO2, SO2) by regional group over Korea. The results can contribute to decision-making for important national planning related to large-scale industrial development and energy supply policies (e.g., operation of coal-fired power plants and diesel cars) and emission control plans, where many controversies and concerns are currently concentrated among local governments in Korea. With this kind of approach, various environmental and social problems related to air quality can also be identified early, so that a sustainable and environmentally sound plan can be established by providing data infrastructure for central government agencies, local governments, and even the private sector.
The energy and emissions footprint of water supply for Southern California
NASA Astrophysics Data System (ADS)
Fang, A. J.; Newell, Joshua P.; Cousins, Joshua J.
2015-11-01
Due to climate change and ongoing drought, California and much of the American West face critical water supply challenges. California’s water supply infrastructure sprawls for thousands of miles, from the Colorado River to the Sacramento Delta. Bringing water to growing urban centers in Southern California is especially energy intensive, pushing local utilities to balance water security with factors such as the cost and carbon footprint of the various supply sources. To enhance water security, cities are expanding efforts to increase local water supply. But do these local sources have a smaller carbon footprint than imported sources? To answer this question and others related to the urban water-energy nexus, this study uses spatially explicit life cycle assessment to estimate the energy and emissions intensity of water supply for two utilities in Southern California: Los Angeles Department of Water and Power, which serves Los Angeles, and the Inland Empire Utility Agency, which serves the San Bernardino region. This study differs from previous research in two significant ways: (1) emissions factors are based not on regional averages but on the specific electric utility and generation sources supplying energy throughout transport, treatment, and distribution phases of the water supply chain; (2) upstream (non-combustion) emissions associated with the energy sources are included. This approach reveals that in the case of water supply to Los Angeles, local recycled water has a higher carbon footprint than water imported from the Colorado River. In addition, by excluding upstream emissions, the carbon footprint of water supply is potentially underestimated by up to 30%. These results have wide-ranging implications for how carbon footprints are traditionally calculated at local and regional levels. Reducing the emissions intensity of local water supply hinges on transitioning the energy used to treat and distribute water away from fossil fuel sources, such as coal.
A simple method for EEG guided transcranial electrical stimulation without models
NASA Astrophysics Data System (ADS)
Cancelli, Andrea; Cottone, Carlo; Tecchio, Franca; Truong, Dennis Q.; Dmochowski, Jacek; Bikson, Marom
2016-06-01
Objective. There is longstanding interest in using EEG measurements to inform transcranial Electrical Stimulation (tES) but adoption is lacking because users need a simple and adaptable recipe. The conventional approach is to use anatomical head-models for both source localization (the EEG inverse problem) and current flow modeling (the tES forward model), but this approach is computationally demanding, requires an anatomical MRI, and makes strict assumptions about the target brain regions. We evaluate techniques whereby tES dose is derived from EEG without the need for an anatomical head model, target assumptions, difficult case-by-case conjecture, or many stimulation electrodes. Approach. We developed a simple two-step approach to EEG-guided tES that, based on the topography of the EEG, (1) selects locations to be used for stimulation and (2) determines the current applied to each electrode. Each step is performed based solely on the EEG with no need for head models or source localization. Cortical dipoles represent idealized brain targets. EEG-guided tES strategies are verified using a finite element method simulation of the EEG generated by a dipole, oriented either tangential or radial to the scalp surface, and then simulating the tES-generated electric field produced by each model-free technique. These model-free approaches are compared to a ‘gold standard’ numerically optimized dose of tES that assumes perfect understanding of the dipole location and head anatomy. We vary the number of electrodes from a few to over three hundred, with focality or intensity as the optimization criterion. Main results. Model-free approaches evaluated include (1) voltage-to-voltage, (2) voltage-to-current; (3) Laplacian; and two Ad-Hoc techniques (4) dipole sink-to-sink; and (5) sink to concentric. Our results demonstrate that simple ad hoc approaches can achieve reasonable targeting for the case of a cortical dipole, remarkably with only 2-8 electrodes and no need for a model of the head. Significance. Our approach is verified directly only for a theoretically localized source, but may be potentially applied to an arbitrary EEG topography. For its simplicity and linearity, our recipe for model-free EEG guided tES lends itself to broad adoption and can be applied to static (tDCS), time-variant (e.g., tACS, tRNS, tPCS), or closed-loop tES.
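As an illustration of how little machinery such a model-free mapping needs, the sketch below implements a voltage-to-current style rule under stated assumptions: currents proportional to the demeaned topography at a handful of electrodes, constrained to sum to zero. The channel count, electrode-selection rule, and scaling are placeholders, not the authors' exact recipe.

```python
import numpy as np

# Illustrative model-free mapping from an EEG topography to a tES montage:
# stimulation currents proportional to the demeaned potentials at the few
# electrodes with the largest absolute values, with the zero-net-current
# constraint required of any tES montage.

def eeg_to_tes_currents(topography, n_electrodes=4, total_ma=2.0):
    v = topography - topography.mean()
    idx = np.argsort(np.abs(v))[-n_electrodes:]   # strongest electrodes
    i = np.zeros_like(v)
    i[idx] = v[idx]
    i[idx] -= i[idx].mean()                       # enforce sum of currents = 0
    i *= total_ma / np.abs(i).sum() * 2.0         # total injected current = total_ma
    return i

rng = np.random.default_rng(4)
topo = rng.normal(size=64)                        # hypothetical 64-channel topography
currents = eeg_to_tes_currents(topo, n_electrodes=4, total_ma=2.0)
print(np.round(currents[np.abs(currents) > 0], 3), currents.sum())
```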
Soft-tissue and phase-contrast imaging at the Swiss Light Source
NASA Astrophysics Data System (ADS)
Schneider, Philipp; Mohan, Nishant; Stampanoni, Marco; Muller, Ralph
2004-05-01
Recent results show that bone vasculature is a major contributor to local tissue porosity, and therefore can be directly linked to the mechanical properties of bone tissue. With the advent of third generation synchrotron radiation (SR) sources, micro-computed tomography (μCT) with resolutions in the order of 1 μm and better has become feasible. This technique has been employed frequently to analyze trabecular architecture and local bone tissue properties, i.e. the hard or mineralized bone tissue. Nevertheless, less is known about the soft tissues in bone, mainly due to inadequate imaging capabilities. Here, we discuss three different methods and applications to visualize soft tissues. The first approach is referred to as negative imaging. In this case the material around the soft tissue provides the absorption contrast necessary for X-ray based tomography. Bone vasculature from two different mouse strains was investigated and compared qualitatively. Differences were observed in terms of local vessel number and vessel orientation. The second technique is corrosion casting, which is principally adapted for imaging of vascular systems. The technique of corrosion casting has already been applied successfully at the Swiss Light Source. Using this technology, we were able to show that pathological features reminiscent of Alzheimer's disease could be distinguished in the brain vasculature of APP transgenic mice. The third technique discussed here is phase contrast imaging exploiting the high degree of coherence of third generation synchrotron light sources, which provide the necessary physical conditions for phase contrast. The in-line approach followed here for phase contrast retrieval is a modification of the Gerchberg-Saxton-Fienup type. Several measurements and theoretical considerations concerning phase contrast imaging are presented, including mathematical phase retrieval. Although up to now only phase images have been computed, the approach is now ready to retrieve the phase for a large number of angular positions of the specimen, allowing application of holotomography, which is the three-dimensional reconstruction of phase images.
Wheat landraces: A mini review
USDA-ARS?s Scientific Manuscript database
Farmers developed and utilized diverse wheat landraces to meet the complexity of a multitude of spatio-temporal, agro-ecological systems and to provide reliable sustenance and a sustainable food source to local communities. The genetic structure of wheat landraces is an evolutionary approach to surv...
NASA Astrophysics Data System (ADS)
Chandra, Rohit; Balasingham, Ilangko
2015-05-01
Localization of a wireless capsule endoscope finds many clinical applications from diagnostics to therapy. There are potentially two approaches to electromagnetic-wave-based localization: a) signal-propagation-model-based localization using a priori information about the person's dielectric channels, and b) recently developed microwave-imaging-based localization without using any a priori information about the person's dielectric channels. In this paper, we study the second approach in terms of a variety of frequencies and signal-to-noise ratios for localization accuracy. To this end, we select a 2-D anatomically realistic numerical phantom for microwave imaging at different frequencies. The selected frequencies are 13.56 MHz, 431.5 MHz, 920 MHz, and 2380 MHz, which are typically considered for medical applications. Microwave imaging of a phantom will provide us with an electromagnetic model with electrical properties (relative permittivity and conductivity) of the internal parts of the body and can be useful as a foundation for localization of an in-body RF source. Low-frequency imaging at 13.56 MHz provides a low resolution image with high contrast in the dielectric properties. However, at high frequencies, the imaging algorithm is able to image only the outer boundaries of the tissues due to low penetration depth, as higher frequency means higher attenuation. Furthermore, the recently developed localization method based on microwave imaging is used for estimating the localization accuracy at different frequencies and signal-to-noise ratios. Statistical evaluation of the localization error is performed using the cumulative distribution function (CDF). Based on our results, we conclude that the localization accuracy is minimally affected by the frequency or the noise. However, the choice of the frequency will become critical if the purpose of the method is to image the internal parts of the body for tumor and/or cancer detection.
Grabowski, Krzysztof; Gawronski, Mateusz; Baran, Ireneusz; Spychalski, Wojciech; Staszewski, Wieslaw J; Uhl, Tadeusz; Kundu, Tribikram; Packo, Pawel
2016-05-01
Acoustic Emission, as used in Non-Destructive Testing, is focused on the analysis of elastic waves propagating in mechanical structures. The information carried by the generated acoustic waves, recorded by a set of transducers, then allows the integrity of these structures to be determined. It is clear that material properties and geometry strongly impact the result. In this paper, a method for Acoustic Emission source localization in thin plates is presented. The approach is based on the Time-Distance Domain Transform, a wavenumber-frequency mapping technique for precise event localization. The major advantage of the technique is dispersion compensation through phase-shifting of the investigated waveforms in order to acquire the most accurate output, allowing for source-sensor distance estimation using a single transducer. The accuracy and robustness of the above process are also investigated. This includes the study of the influence of the Young's modulus value and numerical parameters on damage detection. By merging the Time-Distance Domain Transform with an optimal distance selection technique, an identification-localization algorithm is achieved. The method is investigated analytically, numerically and experimentally. The latter involves both laboratory and large-scale industrial tests. Copyright © 2016 Elsevier B.V. All rights reserved.
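The dispersion-compensation idea can be sketched as below, under explicit assumptions: straight back-propagation of the spectrum by exp(+i k(ω) d) over candidate distances, a made-up dispersion relation, and peak re-compression as the selection criterion. This is a generic illustration, not the paper's Time-Distance Domain Transform.

```python
import numpy as np

# Phase-shift dispersion compensation for single-sensor source-distance
# estimation: the distance whose back-propagation best re-compresses the wave
# packet (largest peak amplitude) is taken as the estimate. The dispersion
# relation k_of_omega below is a hypothetical placeholder.

def k_of_omega(omega):
    f = omega / (2.0 * np.pi)
    c_phase = 20.0 * np.sqrt(np.maximum(f, 1.0))   # hypothetical phase velocity (m/s)
    return omega / c_phase

def estimate_distance(signal, dt, candidates):
    n = signal.size
    omega = 2.0 * np.pi * np.fft.rfftfreq(n, dt)
    spectrum = np.fft.rfft(signal)
    scores = [np.max(np.abs(np.fft.irfft(spectrum * np.exp(1j * k_of_omega(omega) * d), n)))
              for d in candidates]
    return candidates[int(np.argmax(scores))]

# Synthesize a broadband pulse and propagate it 1.0 m through the dispersive plate.
dt, n = 1e-6, 4096
t = np.arange(n) * dt
pulse = np.exp(-((t - 1e-4) / 2e-6) ** 2) * np.sin(2.0 * np.pi * 1.5e5 * t)
omega = 2.0 * np.pi * np.fft.rfftfreq(n, dt)
received = np.fft.irfft(np.fft.rfft(pulse) * np.exp(-1j * k_of_omega(omega) * 1.0), n)

# Should recover a source-sensor distance close to 1.0 m.
print(estimate_distance(received, dt, candidates=np.linspace(0.1, 2.0, 96)))
```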
Elastic interactions between two-dimensional geometric defects
NASA Astrophysics Data System (ADS)
Moshe, Michael; Sharon, Eran; Kupferman, Raz
2015-12-01
In this paper, we introduce a methodology applicable to a wide range of localized two-dimensional sources of stress. This methodology is based on a geometric formulation of elasticity. Localized sources of stress are viewed as singular defects—point charges of the curvature associated with a reference metric. The stress field in the presence of defects can be solved using a scalar stress function that generalizes the classical Airy stress function to the case of materials with nontrivial geometry. This approach allows the calculation of interaction energies between various types of defects. We apply our methodology to two physical systems: shear-induced failure of amorphous materials and the mechanical interaction between contracting cells.
Liu, Baoshuang; Li, Tingkun; Yang, Jiamei; Wu, Jianhui; Wang, Jiao; Gao, Jixin; Bi, Xiaohui; Feng, Yinchang; Zhang, Yufen; Yang, Haihang
2017-04-01
A novel approach was developed to estimate regional contributions to ambient PM2.5 in Haikou, China. In this paper, the investigation was divided into two main steps. The first step: analysing the characteristics of the chemical compositions of ambient PM2.5, as well as the source profiles, and then conducting source apportionments by using the CMB and CMB-Iteration models. The second step: the development of estimation approaches for regional contributions in terms of local features of Haikou and the results of source apportionment, and estimating regional contributions to ambient PM2.5 in Haikou by this new approach. The results indicate that secondary sulphate, resuspended dust and vehicle exhaust were the major sources of ambient PM2.5 in Haikou, contributing 9.9-21.4%, 10.1-19.0% and 10.5-20.2%, respectively. Regional contributions to ambient PM2.5 in Haikou in spring, autumn and winter were 22.5%, 11.6% and 32.5%, respectively. The regional contribution in summer was assumed to be zero according to the better atmospheric quality and assumptions of this new estimation approach. The higher regional contribution in winter might be mainly attributable to the transport of polluted air originating in mainland China, especially from the north, where coal is burned for heating in winter. Copyright © 2017 Elsevier Ltd. All rights reserved.
As above, so below? Towards understanding inverse models in BCI
NASA Astrophysics Data System (ADS)
Lindgren, Jussi T.
2018-02-01
Objective. In brain-computer interfaces (BCI), measurements of the user’s brain activity are classified into commands for the computer. With EEG-based BCIs, the origins of the classified phenomena are often considered to be spatially localized in the cortical volume and mixed in the EEG. We investigate if more accurate BCIs can be obtained by reconstructing the source activities in the volume. Approach. We contrast the physiology-driven source reconstruction with data-driven representations obtained by statistical machine learning. We explain these approaches in a common linear dictionary framework and review the different ways to obtain the dictionary parameters. We consider the effect of source reconstruction on some major difficulties in BCI classification, namely information loss, feature selection and nonstationarity of the EEG. Main results. Our analysis suggests that the approaches differ mainly in their parameter estimation. Physiological source reconstruction may thus be expected to improve BCI accuracy if machine learning is not used or where it produces less optimal parameters. We argue that the considered difficulties of surface EEG classification can remain in the reconstructed volume and that data-driven techniques are still necessary. Finally, we provide some suggestions for comparing approaches. Significance. The present work illustrates the relationships between source reconstruction and machine learning-based approaches for EEG data representation. The provided analysis and discussion should help in understanding, applying, comparing and improving such techniques in the future.
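The "common linear dictionary framework" mentioned above is typically written as a generative model of the sensor data; the notation below is generic rather than the paper's. The columns of A are either physiology-driven (lead fields of a head model, as in source reconstruction) or learned from data (e.g., ICA or CSP spatial patterns), and W is a linear unmixing or inverse operator whose estimation is where the approaches differ.

```latex
% Generic linear dictionary model for EEG: sensor data x(t) as a mixture of
% component time courses s(t) through a dictionary A, plus noise n(t).
x(t) = A\, s(t) + n(t), \qquad \hat{s}(t) = W\, x(t).
```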
NASA Astrophysics Data System (ADS)
Lovette, J. P.; Duncan, J. M.; Band, L. E.
2016-12-01
Watershed management requires information on the hydrologic impacts of local to regional land use, land cover and infrastructure conditions. Management of runoff volumes, storm flows, and water quality can benefit from large scale, "top-down" screening tools, using readily available information, as well as more detailed, "bottom-up" process-based models that explicitly track local runoff production and routing from sources to receiving water bodies. Regional scale data, available nationwide through the NHD+, and top-down models based on aggregated catchment information provide useful tools for estimating regional patterns of peak flows, volumes and nutrient loads at the catchment level. Management impacts can be estimated with these models, but the models have limited ability to resolve impacts beyond simple changes to land cover proportions. Alternatively, distributed process-based models provide more flexibility in modeling management impacts by resolving spatial patterns of nutrient source, runoff generation, and uptake. This bottom-up approach can incorporate explicit patterns of land cover, drainage connectivity, and vegetation extent, but is typically applied over smaller areas. Here, we first model peak flood flows and nitrogen loads across North Carolina's 70,000 NHD+ catchments using USGS regional streamflow regression equations and the SPARROW model. We also estimate management impact by altering aggregated sources in each of these models. To address the missing spatial implications of the top-down approach, we further explore the demand for riparian buffers as a management strategy, simulating the accumulation of nutrient sources along flow paths and the potential mitigation of these sources through forested buffers. We use the Regional Hydro-Ecological Simulation System (RHESSys) to model changes across several basins in North Carolina's Piedmont and Blue Ridge regions, ranging in size from 15-1,130 km2. The two approaches provide a complementary set of tools for large area screening, followed by smaller, more process-based assessment and design tools.
Evaluation of the site effect with Heuristic Methods
NASA Astrophysics Data System (ADS)
Torres, N. N.; Ortiz-Aleman, C.
2017-12-01
The seismic site response in an area depends mainly on the local geological and topographical conditions. Estimation of variations in ground motion can contribute significantly to seismic hazard assessment and help reduce human and economic losses. Site response estimation can be posed as a parameterized inversion problem, which allows source and path effects to be separated. The generalized inversion (Field and Jacob, 1995) represents one of the alternative methods to estimate the local seismic response, which involves solving a strongly non-linear multiparametric problem. In this work, local seismic response was estimated using global optimization methods (Genetic Algorithms and Simulated Annealing) which allowed us to increase the range of explored solutions in a nonlinear search, as compared to other conventional linear methods. By using the VEOX Network velocity records, collected from August 2007 to March 2009, source, path and site parameters corresponding to the amplitude spectra of the S wave of the velocity seismic records are estimated. We can establish that inverted parameters resulting from this simultaneous inversion approach show excellent agreement, not only in terms of the fit between observed and calculated spectra, but also when compared to previous work from several authors.
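Generalized inversions of this kind are commonly based on a multiplicative spectral model of the following schematic form (generic symbols): the observed S-wave amplitude spectrum of event i at station j factors into source, path, and site terms, and taking logarithms turns the factorization into a sum whose parameters can be searched with global optimizers such as genetic algorithms or simulated annealing.

```latex
% Schematic factorization behind the generalized inversion (generic symbols).
O_{ij}(f) = S_i(f)\, P_{ij}(f)\, Z_j(f)
\quad\Longrightarrow\quad
\ln O_{ij}(f) = \ln S_i(f) + \ln P_{ij}(f) + \ln Z_j(f).
```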
Localization of marine mammals near Hawaii using an acoustic propagation model
NASA Astrophysics Data System (ADS)
Tiemann, Christopher O.; Porter, Michael B.; Frazer, L. Neil
2004-06-01
Humpback whale songs were recorded on six widely spaced receivers of the Pacific Missile Range Facility (PMRF) hydrophone network near Hawaii during March of 2001. These recordings were used to test a new approach to localizing the whales that exploits the time-difference of arrival (time lag) of their calls as measured between receiver pairs in the PMRF network. The usual technique for estimating source position uses the intersection of hyperbolic curves of constant time lag, but a drawback of this approach is its assumption of a constant wave speed and straight-line propagation to associate acoustic travel time with range. In contrast to hyperbolic fixing, the algorithm described here uses an acoustic propagation model to account for waveguide and multipath effects when estimating travel time from hypothesized source positions. A comparison between predicted and measured time lags forms an ambiguity surface, or visual representation of the most probable whale position in a horizontal plane around the array. This is an important benefit because it allows for automated peak extraction to provide a location estimate. Examples of whale localizations using real and simulated data in algorithms of increasing complexity are provided.
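A toy version of the ambiguity-surface idea is sketched below; for simplicity it substitutes straight-line propagation at a constant sound speed for the acoustic propagation model used in the study, and all geometry and noise values are hypothetical.

```python
import numpy as np

# Toy ambiguity-surface localization from time lags: compare measured
# time-differences of arrival with those predicted for a grid of hypothesized
# positions; the surface peak marks the most probable source location.

c = 1500.0                                   # nominal sound speed (m/s)
receivers = np.array([[0.0, 0.0], [4000.0, 0.0], [0.0, 4000.0], [4000.0, 4000.0]])
pairs = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
true_source = np.array([2600.0, 1300.0])

def predicted_lags(src):
    travel = np.linalg.norm(receivers - src, axis=1) / c
    return np.array([travel[i] - travel[j] for i, j in pairs])

measured = predicted_lags(true_source) + 1e-3 * np.random.default_rng(5).normal(size=len(pairs))

xs = np.linspace(0.0, 4000.0, 161)
ys = np.linspace(0.0, 4000.0, 161)
surface = np.zeros((ys.size, xs.size))
for iy, y in enumerate(ys):
    for ix, x in enumerate(xs):
        surface[iy, ix] = -np.sum((predicted_lags(np.array([x, y])) - measured) ** 2)

iy, ix = np.unravel_index(np.argmax(surface), surface.shape)
print("estimated position:", xs[ix], ys[iy])   # should land near (2600, 1300)
```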
NASA Astrophysics Data System (ADS)
Saracco, Ginette; Moreau, Frédérique; Mathé, Pierre-Etienne; Hermitte, Daniel; Michel, Jean-Marie
2007-10-01
We have previously developed a method for characterizing and localizing 'homogeneous' buried sources from measurements of potential anomalies at a fixed height above ground (magnetic, electric and gravity). This method is based on potential theory and uses the properties of the Poisson kernel (real by definition) and continuous wavelet theory. Here, we relax the assumption on sources and introduce a method that we call the 'multiscale tomography'. Our approach is based on the harmonic extension of the observed magnetic field to produce a complex source by use of a complex Poisson kernel solution of the Laplace equation for a complex potential field. A phase and modulus are defined. We show that the phase provides additional information on the total magnetic inclination and the structure of sources, while the modulus allows us to characterize their spatial location, depth and 'effective degree'. This method is compared to the 'complex dipolar tomography', an extension of the Patella method that we previously developed. We applied both methods and a classical electrical resistivity tomography to detect and localize buried archaeological structures, such as ancient ovens, from magnetic measurements at the Fox-Amphoux site (France). The estimates are then compared with the results of excavations.
Recording and quantification of ultrasonic echolocation clicks from free-ranging toothed whales
NASA Astrophysics Data System (ADS)
Madsen, P. T.; Wahlberg, M.
2007-08-01
Toothed whales produce short, ultrasonic clicks of high directionality and source level to probe their environment acoustically. This process, termed echolocation, is to a large part governed by the properties of the emitted clicks. Therefore derivation of click source parameters from free-ranging animals is of increasing importance to understand both how toothed whales use echolocation in the wild and how they may be monitored acoustically. This paper addresses how source parameters can be derived from free-ranging toothed whales in the wild using calibrated multi-hydrophone arrays and digital recorders. We outline the properties required of hydrophones, amplifiers and analog to digital converters, and discuss the problems of recording echolocation clicks on the axis of a directional sound beam. For accurate localization the hydrophone array apertures must be adapted and scaled to the behavior of, and the range to, the clicking animal, and precise information on hydrophone locations is critical. We provide examples of localization routines and outline sources of error that lead to uncertainties in localizing clicking animals in time and space. Furthermore we explore approaches to time series analysis of discrete versions of toothed whale clicks that are meaningful in a biosonar context.
Making Sense of Clinical Practice: Order Set Design Strategies in CPOE
Novak, Laurie L.
2007-01-01
A case study was conducted during the customization phase of a commercial CPOE system at a multi-hospital, academic health system. The study focused on the development of order sets. Three distinct approaches to order set development were observed: Empirical, Local Consensus and Departmental. The three approaches are first described and then examined using the framework of sensemaking. Different approaches to sensemaking in the context of order set development reflect variations in sources of knowledge related to the standardization of care. PMID:18693900
A Mobile Sensing Approach for Regional Surveillance of ...
This paper discusses path-planning and automated inspection concepts for localizing individual leaks and quantifying their source rates over regions with active oil and gas well pads and pipelines. This is a peer-reviewed journal article submission that advances GMAP OTM 33 mobile measurement and fixed-place fence-line NGAM topics. The paper is focused on methods development and does not contain new or unpublished source emissions results regarding oil and gas.
NASA Astrophysics Data System (ADS)
Tringali, C.; Re, V.; Siciliano, G.; Chkir, N.; Tuci, C.; Zouari, K.
2017-08-01
Sustainable groundwater management strategies in water-scarce countries need to guide future decision-making processes pragmatically, by simultaneously considering local needs, environmental problems and economic development. The socio-hydrogeological approach named 'Bir Al-Nas' has been tested in the Grombalia region (Cap Bon Peninsula, Tunisia), to evaluate the effectiveness of complementing hydrogeochemical and hydrogeological investigations with the social dimension of the issue at stake (which, in this case, is the identification of groundwater pollution sources). Within this approach, the social appraisal, performed through social network analysis and public engagement of water end-users, allowed hydrogeologists to get acquainted with the institutional dimension of local groundwater management, identifying issues, potential gaps (such as weak knowledge transfer among concerned stakeholders), and the key actors likely to support the implementation of the new science-based management practices resulting from the ongoing hydrogeological investigation. Results, hence, go beyond the specific relevance for the Grombalia basin, showing the effectiveness of the proposed approach and the importance of including social assessment in any given hydrogeological research aimed at supporting local development through groundwater protection measures.
NASA Astrophysics Data System (ADS)
Bradley, Eliza Swan
Methane is an important greenhouse gas for which uncertainty in local emission strengths necessitates improved source characterizations. Although CH4 plume mapping did not motivate the NASA Airborne Visible InfraRed Imaging Spectrometer (AVIRIS) design and municipal air quality monitoring stations were not intended for studying marine geological seepage, these assets have capabilities that can make them viable for studying concentrated (high flux, highly heterogeneous) CH4 sources, such as the Coal Oil Point (COP) seep field (~0.015 Tg CH4 yr-1) offshore Santa Barbara, California. Hourly total hydrocarbon (THC) data, spanning 1990 to 2008 from an air pollution station located near COP, were analyzed and showed geologic CH4 emissions as the dominant local source. A band ratio approach was developed and applied to high glint AVIRIS data over COP, resulting in local-scale mapping of natural atmospheric CH4 plumes. A Cluster-Tuned Matched Filter (CTMF) technique was applied to Gulf of Mexico AVIRIS data to detect CH4 venting from offshore platforms. Review of 744 platform-centered CTMF subsets was facilitated through a flexible PHP-based web portal. This dissertation demonstrates the value of investigating municipal air quality data and imaging spectrometry for gathering insight into concentrated methane source emissions and highlights how flexible web-based solutions can help facilitate remote sensing research.
NASA Astrophysics Data System (ADS)
Nisyawati, Aini, R. N.; Silalahi, M.; Purba, E. C.; Avifah, N.
2017-07-01
Research on the local knowledge of food plants used by the Karo ethnic group in Semangat Gunung Village, North Sumatra, has been conducted. The aim of this study is to identify the plant species used as food by the Karo people. We used an ethnobotanical approach that included open-ended and semi-structured interviews and an exploration method. One village elder, 2 traditional healers, and 30 respondents were selected as sources of information. Descriptive statistics were used to analyze the gathered data. A total of 109 species belonging to 83 genera and 45 families are known to be used as food sources by the Karo people. Four families have the highest number of food plant species: Solanaceae (8 species), Poaceae (7 species), Fabaceae (6 species), and Zingiberaceae (6 species). All of these families are found in the village, both wild and cultivated. Solanaceae is used as a source of fruits, vegetables, and spices. Poaceae is used as the source of the staple food, alternative food sources, snacks, spices, and traditional foods. Fabaceae is used as a source of vegetables and traditional foods. Zingiberaceae is used as a source of spices.
Tahmasbi, Amir; Ward, E. Sally; Ober, Raimund J.
2015-01-01
Fluorescence microscopy is a photon-limited imaging modality that allows the study of subcellular objects and processes with high specificity. The best possible accuracy (standard deviation) with which an object of interest can be localized when imaged using a fluorescence microscope is typically calculated using the Cramér-Rao lower bound, that is, the inverse of the Fisher information. However, the current approach for the calculation of the best possible localization accuracy relies on an analytical expression for the image of the object. This can pose practical challenges since it is often difficult to find appropriate analytical models for the images of general objects. In this study, we instead develop an approach that directly uses an experimentally collected image set to calculate the best possible localization accuracy for a general subcellular object. In this approach, we fit splines, i.e. smoothly connected piecewise polynomials, to the experimentally collected image set to provide a continuous model of the object, which can then be used for the calculation of the best possible localization accuracy. Due to its practical importance, we investigate in detail the application of the proposed approach in single molecule fluorescence microscopy. In this case, the object of interest is a point source and, therefore, the acquired image set pertains to an experimental point spread function. PMID:25837101
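For orientation, the Fisher-information bound referred to above takes the following standard form under a Poisson photon-detection model (generic notation; as the abstract describes, the spline fit supplies the continuous image model μ_θ in place of an analytical expression):

```latex
% Localization bound from the Fisher information of a Poisson detection model
% with mean image mu_theta(r) depending on the object position theta.
I(\theta) = \int \frac{1}{\mu_\theta(\mathbf{r})}
  \left( \frac{\partial \mu_\theta(\mathbf{r})}{\partial \theta} \right)^{\!2}
  d\mathbf{r},
\qquad
\delta_\theta \ge \frac{1}{\sqrt{I(\theta)}} .
```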
A Multiple-Tracer Approach for Identifying Sewage Sources to an Urban Stream System
Hyer, Kenneth Edward
2007-01-01
The presence of human-derived fecal coliform bacteria (sewage) in streams and rivers is recognized as a human health hazard. The source of these human-derived bacteria, however, is often difficult to identify and eliminate, because sewage can be delivered to streams through a variety of mechanisms, such as leaking sanitary sewers or private lateral lines, cross-connected pipes, straight pipes, sewer-line overflows, illicit dumping of septic waste, and vagrancy. A multiple-tracer study was conducted to identify site-specific sources of sewage in Accotink Creek, an urban stream in Fairfax County, Virginia, that is listed on the Commonwealth's priority list of impaired streams for violations of the fecal coliform bacteria standard. Beyond developing this multiple-tracer approach for locating sources of sewage inputs to Accotink Creek, the second objective of the study was to demonstrate how the multiple-tracer approach can be applied to other streams affected by sewage sources. The tracers used in this study were separated into indicator tracers, which are relatively simple and inexpensive to apply, and confirmatory tracers, which are relatively difficult and expensive to analyze. Indicator tracers include fecal coliform bacteria, surfactants, boron, chloride, chloride/bromide ratio, specific conductance, dissolved oxygen, turbidity, and water temperature. Confirmatory tracers include 13 organic compounds that are associated with human waste, including caffeine, cotinine, triclosan, a number of detergent metabolites, several fragrances, and several plasticizers. To identify sources of sewage to Accotink Creek, a detailed investigation of the Accotink Creek main channel, tributaries, and flowing storm drains was undertaken from 2001 to 2004. Sampling was conducted in a series of eight synoptic sampling events, each of which began at the most downstream site and extended upstream through the watershed and into the headwaters of each tributary. Using the synoptic sampling approach, 149 sites were sampled at least one time for indicator tracers; 52 of these sites also were sampled for confirmatory tracers at least one time. Through the analysis of multiple-tracer levels in the synoptic samples, three major sewage sources to the Accotink Creek stream network were identified, and several other minor sewage sources to the Accotink Creek system likely deserve additional investigation. Near the end of the synoptic sampling activities, three additional sampling methods were used to gain better understanding of the potential for sewage sources to the watershed. These additional sampling methods included optical brightener monitoring, intensive stream sampling using automated samplers, and additional sampling of several storm-drain networks. The samples obtained by these methods provided further understanding of possible sewage sources to the streams and a better understanding of the variability in the tracer concentrations at a given sampling site. Collectively, these additional sampling methods were a valuable complement to the synoptic sampling approach that was used for the bulk of this study. The study results provide an approach for local authorities to use in applying a relatively simple and inexpensive collection of tracers to locate sewage sources to streams. 
Although this multiple-tracer approach is effective in detecting sewage sources to streams, additional research is needed to better detect extremely low-volume sewage sources and better enable local authorities to identify the specific sources of the sewage once it is detected in a stream reach.
Harmony: EEG/MEG Linear Inverse Source Reconstruction in the Anatomical Basis of Spherical Harmonics
Petrov, Yury
2012-01-01
EEG/MEG source localization based on a “distributed solution” is severely underdetermined, because the number of sources is much larger than the number of measurements. In particular, this makes the solution strongly affected by sensor noise. A new way to constrain the problem is presented. By using the anatomical basis of spherical harmonics (or spherical splines) instead of single dipoles, the dimensionality of the inverse solution is greatly reduced without sacrificing the quality of the data fit. The smoothness of the resulting solution reduces the surface bias and scatter of the sources (incoherency) compared to the popular minimum-norm algorithms where a single-dipole basis is used (MNE, depth-weighted MNE, dSPM, sLORETA, LORETA, IBF) and allows the effect of sensor noise to be reduced efficiently. This approach, termed Harmony, performed well when applied to experimental data (two exemplars of early evoked potentials) and showed better localization precision and solution coherence than the other tested algorithms when applied to realistically simulated data. PMID:23071497
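A common way to write such a basis-constrained linear inverse (generic notation, not necessarily the exact Harmony formulation) is a regularized least-squares problem over the basis coefficients, where L is the lead-field matrix, B the anatomical basis (e.g., spherical harmonics restricted to the cortical surface), and y the measurements:

```latex
% Basis-constrained, regularized minimum-norm inverse (schematic form).
\hat{\mathbf{w}} = \arg\min_{\mathbf{w}}
  \left\| \mathbf{y} - \mathbf{L}\mathbf{B}\mathbf{w} \right\|^2
  + \lambda \left\| \mathbf{w} \right\|^2,
\qquad
\hat{\mathbf{s}} = \mathbf{B}\hat{\mathbf{w}} .
```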
Lai, Chen-Yen; Chien, Chih-Chun
2016-01-01
While batteries offer an electronic source and sink for electronic devices, atomic analogues of source and sink and their theoretical descriptions have been a challenge in cold-atom systems. Here we consider dynamically emerged local potentials as a controllable source and sink for bosonic atoms. Although a sink potential can collect bosons in equilibrium, indicating its usefulness in the adiabatic limit, sudden switching of the potential exhibits low effectiveness in pushing bosons into it. This is due to conservation of energy and particle number in isolated systems such as cold atoms. By varying the potential depth and interaction strength, the systems can further exhibit an averse response, where a deeper emerged potential attracts fewer bosonic atoms into it. To explore possibilities for improving the effectiveness, we investigate what types of system-environment coupling can help bring bosons into a dynamically emerged sink, and a Lindblad operator corresponding to local cooling is found to serve the purpose. PMID:27849034
Xu, Shenlai
2009-04-01
A landscape index (LI) is proposed to evaluate the intensity of the daytime surface urban heat island (SUHI) effect at a local scale. Three aspects of this landscape index are crucial: the source landscape, the sink landscape, and the contribution of source and sink landscapes to the intensity of the SUHI. Source and sink landscape types are identified using the thermal band of Landsat 7, with a spatial resolution of 60 m, along with appropriate threshold values for the Normalized Difference Vegetation Index, Modified Normalized Difference Water Index, and Normalized Difference Built-up Index. The landscape index was defined as the ratio of the contributions of the source and sink landscapes to the intensity of the SUHI. The intensity of the daytime SUHI is assessed with the help of the landscape index. Our analysis indicates the landscape index can be used to evaluate and compare the intensity of the daytime SUHI for different areas.
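In symbols (the notation is ours, not the authors'), the definition given in the abstract amounts to

```latex
\mathrm{LI} \;=\; \frac{C_{\mathrm{source}}}{C_{\mathrm{sink}}},
```

where $C_{\mathrm{source}}$ and $C_{\mathrm{sink}}$ denote the contributions of the source and sink landscape types to the SUHI intensity, so values above 1 indicate that heat-source landscapes dominate the local thermal balance.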
Improving Empirical Approaches to Estimating Local Greenhouse Gas Emissions
NASA Astrophysics Data System (ADS)
Blackhurst, M.; Azevedo, I. L.; Lattanzi, A.
2016-12-01
Evidence increasingly indicates our changing climate will have significant global impacts on public health, economies, and ecosystems. As a result, local governments have become increasingly interested in climate change mitigation. In the U.S., cities and counties representing nearly 15% of the domestic population plan to reduce greenhouse gas emissions by 300 million metric tons over the next 40 years (or approximately 1 ton per capita). Local governments estimate greenhouse gas emissions to establish greenhouse gas mitigation goals and select supporting mitigation measures. However, current practices produce greenhouse gas estimates - also known as a "greenhouse gas inventory" - of empirical quality often insufficient for robust mitigation decision making. Namely, current mitigation planning uses sporadic, annual, and deterministic estimates disaggregated by broad end-use sector, obscuring sources of emissions uncertainty, variability, and exogeneity that influence mitigation opportunities. As part of AGU's Thriving Earth Exchange, Ari Lattanzi of the City of Pittsburgh, PA, recently partnered with Dr. Inez Lima Azevedo (Carnegie Mellon University) and Dr. Michael Blackhurst (University of Pittsburgh) to improve the empirical approach to characterizing Pittsburgh's greenhouse gas emissions. The project will produce first-order estimates of the underlying sources of uncertainty, variability, and exogeneity influencing Pittsburgh's greenhouse gases and discuss implications for mitigation decision making. The results of the project will enable local governments to collect more robust greenhouse gas inventories to better support their mitigation goals and improve measurement and verification efforts.
A simple method for EEG guided transcranial electrical stimulation without models.
Cancelli, Andrea; Cottone, Carlo; Tecchio, Franca; Truong, Dennis Q; Dmochowski, Jacek; Bikson, Marom
2016-06-01
There is longstanding interest in using EEG measurements to inform transcranial Electrical Stimulation (tES), but adoption is lacking because users need a simple and adaptable recipe. The conventional approach is to use anatomical head-models for both source localization (the EEG inverse problem) and current flow modeling (the tES forward model), but this approach is computationally demanding, requires an anatomical MRI, and relies on strict assumptions about the target brain regions. We evaluate techniques whereby tES dose is derived from EEG without the need for an anatomical head model, target assumptions, difficult case-by-case conjecture, or many stimulation electrodes. We developed a simple two-step approach to EEG-guided tES that, based on the topography of the EEG, (1) selects locations to be used for stimulation and (2) determines the current applied to each electrode. Each step is performed based solely on the EEG with no need for head models or source localization. Cortical dipoles represent idealized brain targets. EEG-guided tES strategies are verified using a finite element method simulation of the EEG generated by a dipole, oriented either tangentially or radially to the scalp surface, and then simulating the tES-generated electric field produced by each model-free technique. These model-free approaches are compared to a 'gold standard' numerically optimized dose of tES that assumes perfect understanding of the dipole location and head anatomy. We vary the number of electrodes from a few to over three hundred, with focality or intensity as the optimization criterion. Model-free approaches evaluated include (1) voltage-to-voltage, (2) voltage-to-current, (3) Laplacian, and two ad hoc techniques: (4) dipole sink-to-sink and (5) sink-to-concentric. Our results demonstrate that simple ad hoc approaches can achieve reasonable targeting for the case of a cortical dipole, remarkably with only 2-8 electrodes and no need for a model of the head. Our approach is verified directly only for a theoretically localized source, but may be potentially applied to an arbitrary EEG topography. For its simplicity and linearity, our recipe for model-free EEG-guided tES lends itself to broad adoption and can be applied to static (tDCS), time-variant (e.g., tACS, tRNS, tPCS), or closed-loop tES.
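One plausible reading of the "voltage-to-current" recipe is to reuse the EEG topography directly as a stimulation montage. The electrode-selection rule, zero-sum constraint, and current cap in the sketch below are our assumptions for illustration, not the authors' exact procedure.

```python
import numpy as np

def voltage_to_current(eeg_topo, n_electrodes=8, max_total_ma=2.0):
    """Assign tES currents proportional to an EEG voltage topography (sketch).

    eeg_topo     : (n_channels,) time-averaged EEG voltage at each scalp electrode
    n_electrodes : number of stimulation electrodes to keep
    max_total_ma : cap on the summed absolute injected current (mA)

    Returns a (n_channels,) current vector whose active entries sum to zero.
    """
    idx = np.argsort(np.abs(eeg_topo))[-n_electrodes:]     # keep the strongest channels
    I = np.zeros(len(eeg_topo))
    I[idx] = eeg_topo[idx] - eeg_topo[idx].mean()          # enforce zero net injected current
    total = np.sum(np.abs(I))
    if total > 0:
        I *= max_total_ma / total                          # scale to the safety cap
    return I
```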
Source and listener directivity for interactive wave-based sound propagation.
Mehra, Ravish; Antani, Lakulish; Kim, Sujeong; Manocha, Dinesh
2014-04-01
We present an approach to model dynamic, data-driven source and listener directivity for interactive wave-based sound propagation in virtual environments and computer games. Our directional source representation is expressed as a linear combination of elementary spherical harmonic (SH) sources. In the preprocessing stage, we precompute and encode the propagated sound fields due to each SH source. At runtime, we perform the SH decomposition of the varying source directivity interactively and compute the total sound field at the listener position as a weighted sum of precomputed SH sound fields. We propose a novel plane-wave decomposition approach based on higher-order derivatives of the sound field that enables dynamic HRTF-based listener directivity at runtime. We provide a generic framework to incorporate our source and listener directivity in any offline or online frequency-domain wave-based sound propagation algorithm. We have integrated our sound propagation system in Valve's Source game engine and use it to demonstrate realistic acoustic effects such as sound amplification, diffraction low-passing, scattering, localization, externalization, and spatial sound, generated by wave-based propagation of directional sources and listener in complex scenarios. We also present results from our preliminary user study.
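The runtime step described above reduces to a weighted sum of precomputed fields. A minimal sketch (array shapes and names are ours, not the paper's API) is:

```python
import numpy as np

def directional_field(sh_coeffs, precomputed_fields):
    """Combine precomputed spherical-harmonic (SH) source fields at runtime (sketch).

    sh_coeffs          : (n_sh,) SH decomposition of the current source directivity
    precomputed_fields : (n_sh, n_listener_points) complex pressure per elementary SH source

    Returns the total complex pressure at each listener point.
    """
    return sh_coeffs @ precomputed_fields   # weighted sum of the precomputed SH fields
```

At each frame only the coefficient vector changes, so the propagation cost is paid once in preprocessing.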
Models, Measurements, and Local Decisions: Assessing and ...
This presentation includes a combination of modeling and measurement results to characterize near-source air quality in Newark, New Jersey, with consideration of how this information could be used to inform decision making to reduce the risk of health impacts. Decisions could include either exposure or emissions reduction and could involve a host of stakeholders, including residents, academics, NGOs, and local and federal agencies. The presentation also includes results from the C-PORT modeling system and from a citizen science project in the local area. The National Exposure Research Laboratory (NERL) Computational Exposure Division (CED) develops and evaluates data, decision-support tools, and models to be applied to media-specific or receptor-specific problem areas. CED uses modeling-based approaches to characterize exposures, evaluate fate and transport, and support environmental diagnostics/forensics with input from multiple data sources. It also develops media- and receptor-specific models, process models, and decision support tools for use both within and outside of EPA.
NASA Astrophysics Data System (ADS)
Park, Dubok; Han, David K.; Ko, Hanseok
2017-05-01
Optical imaging systems are often degraded by scattering due to atmospheric particles, such as haze, fog, and mist. Imaging under nighttime haze conditions may suffer especially from the glows near active light sources as well as scattering. We present a methodology for nighttime image dehazing based on an optical imaging model which accounts for varying light sources and their glow. First, glow effects are decomposed using relative smoothness. Atmospheric light is then estimated by assessing global and local atmospheric light using a local atmospheric selection rule. The transmission of light is then estimated by maximizing an objective function designed on the basis of weighted entropy. Finally, haze is removed using two estimated parameters, namely, atmospheric light and transmission. The visual and quantitative comparison of the experimental results with the results of existing state-of-the-art methods demonstrates the significance of the proposed approach.
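The final removal step in methods of this kind typically inverts the standard atmospheric scattering model. As a hedged summary (the paper's glow decomposition and weighted-entropy estimators are not reproduced here):

```latex
I(x) = J(x)\,t(x) + A\bigl(1 - t(x)\bigr)
\quad\Longrightarrow\quad
\hat{J}(x) = \frac{I(x) - A}{\max\{t(x),\, t_0\}} + A,
```

where $I$ is the (glow-removed) observed image, $J$ the recovered scene radiance, $A$ the estimated atmospheric light, $t$ the estimated transmission, and $t_0$ a small lower bound that prevents noise amplification in dense haze.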
Koush, Yury; Ashburner, John; Prilepin, Evgeny; Sladky, Ronald; Zeidman, Peter; Bibikov, Sergei; Scharnowski, Frank; Nikonorov, Artem; De Ville, Dimitri Van
2017-08-01
Neurofeedback based on real-time functional magnetic resonance imaging (rt-fMRI) is a novel and rapidly developing research field. It allows for training of voluntary control over localized brain activity and connectivity and has demonstrated promising clinical applications. Because of the rapid technical development of MRI techniques and the availability of high-performance computing, new methodological advances in rt-fMRI neurofeedback have become possible. Here we outline the core components of a novel open-source neurofeedback framework, termed Open NeuroFeedback Training (OpenNFT), which efficiently integrates these new developments. This framework is implemented using Python and Matlab source code to allow for diverse functionality, high modularity, and rapid extendibility of the software depending on the user's needs. In addition, it provides an easy interface to the functionality of Statistical Parametric Mapping (SPM), which is also open-source and one of the most widely used fMRI data analysis packages. We demonstrate the functionality of our new framework by describing case studies that include neurofeedback protocols based on brain activity levels, effective connectivity models, and pattern classification approaches. This open-source initiative provides a suitable framework to actively engage in the development of novel neurofeedback approaches, so that local methodological developments can be easily made accessible to a wider range of users. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.
NASA Technical Reports Server (NTRS)
Tian, Jialin; Madaras, Eric I.
2009-01-01
The development of a robust and efficient leak detection and localization system within a space station environment presents a unique challenge. A plausible approach includes the implementation of an acoustic sensor network system that can successfully detect the presence of a leak and determine the location of the leak source. Traditional acoustic detection and localization schemes rely on the phase and amplitude information collected by the sensor array system. Furthermore, the acoustic source signals are assumed to be airborne and far-field. Similar applications exist in sonar. In solids, there are specialized methods for locating events, used in geology and in acoustic emission testing, that involve sensor arrays and depend on a discernible phase front in the received signal. These methods are ineffective if applied to a sensor detection system within the space station environment. In this particular setting, there are significant baffling and structural impediments to the sound path, and the source could be in the near field of a sensor.
Mining and harnessing natural variation - a little MAGIC
USDA-ARS?s Scientific Manuscript database
As has been frequently noted, exotic germplasm (lines unadapted to local conditions) can be sources of very beneficial genes. The trouble is that it's often difficult to identify these genes. We propose an approach in which mutations can be used to uncover useful variants of natural genes....
Near-Field Source Localization by Using Focusing Technique
NASA Astrophysics Data System (ADS)
He, Hongyang; Wang, Yide; Saillard, Joseph
2008-12-01
We discuss two fast algorithms to localize multiple sources in the near field. The symmetry-based method proposed by Zhi and Chia (2007) is first improved by implementing a search-free procedure to reduce the computational cost. We then present a focusing-based method which does not require a symmetric array configuration. By using the focusing technique, the near-field signal model is transformed into a model possessing the same structure as in the far-field situation, which allows bearing estimation with well-studied far-field methods. With the estimated bearing, the range estimate of each source is consequently obtained by using the 1D MUSIC method without parameter pairing. The performance of the improved symmetry-based method and the proposed focusing-based method is compared by Monte Carlo simulations and against the Cramér-Rao bound. Unlike other near-field algorithms, these two approaches require neither high computational cost nor high-order statistics.
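For reference, the far-field bearing-estimation step that the focusing transformation enables can be sketched as a standard MUSIC pseudospectrum for a uniform linear array. The array geometry, spacing, and covariance estimate below are illustrative assumptions; the focusing matrices themselves are not shown.

```python
import numpy as np

def music_spectrum(X, n_sources, d=0.5, angles=np.linspace(-90, 90, 361)):
    """Far-field MUSIC pseudospectrum for a uniform linear array (illustrative sketch).

    X         : (n_sensors, n_snapshots) complex array snapshots
    n_sources : assumed number of sources
    d         : element spacing in wavelengths
    angles    : candidate bearings in degrees
    """
    R = X @ X.conj().T / X.shape[1]                   # sample covariance
    _, vecs = np.linalg.eigh(R)                       # eigenvalues in ascending order
    En = vecs[:, : X.shape[0] - n_sources]            # noise subspace
    m = np.arange(X.shape[0])
    p = []
    for th in np.deg2rad(angles):
        a = np.exp(2j * np.pi * d * m * np.sin(th))   # ULA steering vector
        p.append(1.0 / np.real(a.conj() @ En @ En.conj().T @ a))
    return angles, np.array(p)                        # peaks indicate source bearings
```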
Adjoint-tomography for a Local Surface Structure: Methodology and a Blind Test
NASA Astrophysics Data System (ADS)
Kubina, Filip; Michlik, Filip; Moczo, Peter; Kristek, Jozef; Stripajova, Svetlana
2017-04-01
We have developed a multiscale full-waveform adjoint-tomography method for local surface sedimentary structures with complicated interference wavefields. Local surface sedimentary basins and valleys are often responsible for anomalous earthquake ground motions and corresponding damage in earthquakes. In many cases only a relatively small number of records of a few local earthquakes is available for a site of interest. Consequently, prediction of earthquake ground motion at the site has to include numerical modeling for a realistic model of the local structure. Though limited, the information about the local structure encoded in the records is important and irreplaceable. It is therefore reasonable to have a method capable of using the limited information in records for improving a model of the local structure. A local surface structure and its interference wavefield require a specific multiscale approach. In order to verify our inversion method, we performed a blind test. We obtained synthetic seismograms at 8 receivers for 2 local sources, a complete description of the sources, the positions of the receivers, and the material parameters of the bedrock. We considered the simplest possible starting model - a homogeneous halfspace made of the bedrock. Using our inversion method we obtained an inverted model. Given the starting model, synthetic seismograms simulated for the inverted model are surprisingly close to the synthetic seismograms simulated for the true structure in the target frequency range up to 4.5 Hz. We quantify the level of agreement between the true and inverted seismograms using the L2 and time-frequency misfits, and, more importantly for earthquake-engineering applications, also using the goodness-of-fit criteria based on the earthquake-engineering characteristics of earthquake ground motion. We also verified the inverted model for other source-receiver configurations not used in the inversion.
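The L2 misfit used to quantify agreement between true and inverted seismograms can be written compactly; the normalization below is one common convention and may differ from the authors' exact definition (the time-frequency and goodness-of-fit criteria are not reproduced).

```python
import numpy as np

def l2_misfit(obs, syn):
    """Normalized L2 misfit between observed and synthetic seismograms.

    obs, syn : (n_receivers, n_samples) arrays sampled on a common time axis
    """
    return np.sum((syn - obs) ** 2) / np.sum(obs ** 2)
```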
Regulatory and Non-regulatory Responses to Hydraulic Fracturing in Local Communities
NASA Astrophysics Data System (ADS)
Phartiyal, P.
2015-12-01
The practice of extracting oil and gas from tight rock formations using advances in technology, such as hydraulic fracturing and directional drilling, has expanded exponentially in states and localities across the country. As scientific data collection and analysis catch up with the many potential impacts of this unconventional oil and gas development, communities are turning to their local officials to make decisions on whether and how fracking should proceed. While most regulatory authority on the issue rests with the state agencies, local officials have experimented with a wide range of regulatory, non-regulatory, and fiscal tools to manage the impacts of fracking. These impacts can affect local air, water, seismicity, soil, roads, and schools, as well as residents, on-site workers, and emergency and social services. Local officials' approaches are often influenced by their prior experience with mineral extraction in their localities. The speaker will present examples of the kinds of information sources, tools, and approaches communities across the country are using, from noise barriers to setback requirements to information sharing, in order to balance the promise and perils of oil and gas development in their jurisdictions.
Multiscale modeling of lithium ion batteries: thermal aspects
Zausch, Jochen
2015-01-01
Summary: The thermal behavior of lithium ion batteries has a huge impact on their lifetime and the initiation of degradation processes. The development of hot spots or large local overpotentials leading, e.g., to lithium metal deposition depends on material properties as well as on the nano- and microstructure of the electrodes. In recent years a theoretical structure has emerged, which opens the possibility of establishing a systematic modeling strategy from the atomistic to the continuum scale to capture and couple the relevant phenomena on each scale. We outline the building blocks for such a systematic approach and discuss in detail a rigorous approach for the continuum scale based on rational thermodynamics and homogenization theories. Our focus is on the development of a systematic thermodynamically consistent theory for thermal phenomena in batteries at the microstructure scale and at the cell scale. We discuss the importance of carefully defining the continuum fields for being able to compare seemingly different phenomenological theories and for obtaining rules to determine unknown parameters of the theory by experiments or lower-scale theories. The resulting continuum models for the microscopic and the cell scale are numerically solved in full 3D resolution. The complex, very localized distributions of heat sources in a battery microstructure and the problems of mapping these localized sources onto an averaged porous electrode model are discussed by comparing the detailed 3D microstructure-resolved simulations of the heat distribution with the result of the upscaled porous electrode model. It is shown that not all heat sources that exist on the microstructure scale are represented in the averaged theory, due to subtle cancellation effects of interface and bulk heat sources. Nevertheless, we find that in special cases the averaged thermal behavior can be captured very well by porous electrode theory. PMID:25977870
PMF5.0 vs. CMB8.2: An inter-comparison study based on the new European SPECIEUROPE database
NASA Astrophysics Data System (ADS)
Bove, Maria Chiara; Massabò, Dario; Prati, Paolo
2018-03-01
Receptor models are widely adopted tools in source apportionment studies. We describe here an experiment in which we integrated two different approaches, i.e., Positive Matrix Factorization (PMF) and Chemical Mass Balance (CMB), to apportion a set of PM10 (i.e., particulate matter with aerodynamic diameter lower than 10 μm) concentration values. The study was performed in the city of Genoa (Italy): a sampling campaign was carried out collecting daily PM10 samples for about two months at an urban background site. PM10 was collected on quartz fiber filters by a low-volume sampler. A quite complete speciation of the PM samples was obtained via Energy Dispersive X-Ray Fluorescence (ED-XRF, for elements), ion chromatography (IC, for major ions and levoglucosan), and thermo-optical analysis (TOT, for organic and elemental carbon). The chemical analyses provided the input database for source apportionment by both PMF and CMB. Source profiles were directly calculated from the input data by PMF, while in the CMB runs they were first calculated by averaging the profiles of similar sources collected in the European database SPECIEUROPE. Differences between the two receptor models emerged in particular for PM10 sources linked to very local processes. For this reason, PMF source profiles were adopted in refined CMB runs, thus testing a new hybrid approach. Finally, PMF and the "tuned" CMB showed better agreement, even if some discrepancies could not be completely resolved. In this work, we compared the results from the latest available PMF and CMB versions applied to a set of PM10 samples. Input profiles used in the CMB analysis were obtained by averaging the profiles of the new European SPECIEUROPE database. The main differences between PMF and CMB results were linked to very local processes: we obtained the best solution by integrating the two different approaches, using some of the PMF output profiles in the CMB runs.
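At its core, the CMB step solves a linear mixing problem for source contributions. A minimal stand-in is shown below; EPA's CMB8.2 actually uses an effective-variance weighted solution, and the species list and profiles here are placeholders rather than SPECIEUROPE entries.

```python
import numpy as np
from scipy.optimize import nnls

def cmb_contributions(profiles, ambient):
    """Solve the chemical-mass-balance equations by non-negative least squares (sketch).

    profiles : (n_species, n_sources) source profiles (mass fraction of each species)
    ambient  : (n_species,) measured ambient concentrations for one PM10 sample
    Returns the estimated mass contribution of each source.
    """
    contributions, _residual = nnls(profiles, ambient)
    return contributions
```

In the hybrid approach described above, selected columns of `profiles` would simply be replaced by PMF output profiles before re-running the fit.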
Biological data integration: wrapping data and tools.
Lacroix, Zoé
2002-06-01
Nowadays scientific data is inevitably digital and stored in a wide variety of formats in heterogeneous systems. Scientists need to access an integrated view of remote or local heterogeneous data sources with advanced data accessing, analyzing, and visualization tools. Building a digital library for scientific data requires accessing and manipulating data extracted from flat files or databases, documents retrieved from the Web, as well as data generated by software. We present an approach to wrapping web data sources, databases, flat files, or data generated by tools through a database view mechanism. Generally, a wrapper has two tasks: it first sends a query to the source to retrieve data and, second, builds the expected output with respect to the virtual structure. Our wrappers are composed of a retrieval component, based on an intermediate object view mechanism called search views that maps source capabilities to attributes, and an eXtensible Markup Language (XML) engine, which perform these two tasks, respectively. The originality of the approach consists of: 1) a generic view mechanism to seamlessly access data sources with limited capabilities, and 2) the ability to wrap data sources as well as the useful specific tools they may provide. Our approach has been developed and demonstrated as part of the multidatabase system supporting queries via uniform object protocol model (OPM) interfaces.
Source-space ICA for MEG source imaging.
Jonmohamadi, Yaqub; Jones, Richard D
2016-02-01
One of the most widely used approaches in electroencephalography/magnetoencephalography (EEG/MEG) source imaging is the application of an inverse technique (such as dipole modelling or sLORETA) to the components extracted by independent component analysis (ICA) (sensor-space ICA + inverse technique). The advantage of this approach over an inverse technique alone is that it can identify and localize multiple concurrent sources. Among inverse techniques, the minimum-variance beamformers offer a high spatial resolution. However, to obtain both the high spatial resolution of the beamformer and the ability to handle multiple concurrent sources, sensor-space ICA + beamformer is not an ideal combination. We propose source-space ICA for MEG as a powerful alternative approach which can provide the high spatial resolution of the beamformer and handle multiple concurrent sources. The concept of source-space ICA for MEG is to apply the beamformer first and then singular value decomposition + ICA. In this paper we have compared source-space ICA with sensor-space ICA in both simulation and real MEG. The simulations included two challenging scenarios of correlated/concurrent cluster sources. Source-space ICA provided superior performance in spatial reconstruction of source maps, even though both techniques performed equally from a temporal perspective. Real MEG from two healthy subjects with visual stimuli was also used to compare the performance of sensor-space ICA and source-space ICA. We have also proposed a new variant of the minimum-variance beamformer called weight-normalized linearly-constrained minimum-variance with orthonormal lead-field. As sensor-space ICA-based source reconstruction is popular in EEG and MEG imaging, and given that source-space ICA has superior spatial performance, it is expected that source-space ICA will supersede its predecessor in many applications.
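The source-space ICA pipeline (beamformer first, then SVD + ICA) can be sketched as follows. The beamformer weights are assumed to be precomputed, and the proposed weight-normalized LCMV variant with orthonormal lead field is not reproduced here.

```python
import numpy as np
from sklearn.decomposition import FastICA

def source_space_ica(sensor_data, bf_weights, n_keep=20):
    """Source-space ICA sketch: beamformer projection, then SVD + ICA.

    sensor_data : (n_sensors, n_samples) MEG recordings
    bf_weights  : (n_voxels, n_sensors) precomputed beamformer spatial-filter weights
    Returns (spatial_maps, component_timecourses).
    """
    voxel_ts = bf_weights @ sensor_data                    # scan the source space
    U, S, Vt = np.linalg.svd(voxel_ts, full_matrices=False)
    reduced = U[:, :n_keep].T @ voxel_ts                   # SVD dimensionality reduction
    ica = FastICA(n_components=n_keep, random_state=0, max_iter=1000)
    comp_ts = ica.fit_transform(reduced.T).T               # component time courses
    spatial_maps = U[:, :n_keep] @ ica.mixing_             # back-project maps to voxels
    return spatial_maps, comp_ts
```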
Synchronization Tomography: Modeling and Exploring Complex Brain Dynamics
NASA Astrophysics Data System (ADS)
Fieseler, Thomas
2002-03-01
Phase synchronization (PS) plays an important role under both physiological and pathological conditions. With standard averaging techniques for MEG data, it is difficult to reliably detect cortico-cortical and cortico-muscular PS processes that are not time-locked to an external stimulus. For this reason, novel synchronization analysis techniques were developed and directly applied to MEG signals. Of course, due to the lack of inverse modeling (i.e., source localization), the spatial resolution of this approach was limited. To detect and localize cerebral PS, we here present synchronization tomography (ST): for this, we first estimate the cerebral current source density by means of magnetic field tomography (MFT). We then apply the single-run PS analysis to the current source density in each voxel of the reconstruction space. In this way we study simulated PS, voxel by voxel, in order to determine the spatio-temporal resolution of the ST. To this end, different generators of ongoing rhythmic cerebral activity are simulated by current dipoles at different locations and directions, which are modeled by slightly detuned chaotic oscillators. MEG signals for these generators are simulated for a spherical head model and a whole-head MEG system. MFT current density solutions are calculated from these simulated signals within a hemispherical source space. We compare the spatial resolution of the ST with that of the MFT. Our results show that adjacent sources which are indistinguishable for the MFT can nevertheless be separated with the ST, provided they are not strongly phase synchronized. This clearly demonstrates the potential of combining spatial information (i.e., source localization) with temporal information for the anatomical localization of phase synchronization in the human brain.
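A common single-run phase-synchronization index that could be applied voxel-wise to reconstructed current densities is the phase-locking value. The sketch below assumes the inputs are already band-pass filtered and is not necessarily the exact index used in this study.

```python
import numpy as np
from scipy.signal import hilbert

def phase_locking_value(x, y):
    """Phase-locking value between two band-pass filtered signals (1.0 = perfect locking)."""
    phase_x = np.angle(hilbert(x))
    phase_y = np.angle(hilbert(y))
    return np.abs(np.mean(np.exp(1j * (phase_x - phase_y))))
```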
Feasibility of approaches combining sensor and source features in brain-computer interface.
Ahn, Minkyu; Hong, Jun Hee; Jun, Sung Chan
2012-02-15
Brain-computer interface (BCI) provides a new channel for communication between brain and computers through brain signals. Cost-effective EEG provides good temporal resolution, but its spatial resolution is poor and sensor information is blurred by inherent noise. To overcome these issues, spatial filtering and feature extraction techniques have been developed. Source imaging, the transformation of sensor signals into the source space through a source localizer, has gained attention as a new approach for BCI. It has been reported that source imaging yields some improvement in BCI performance. However, there exists no thorough investigation of how source imaging information overlaps with, and is complementary to, sensor information. We hypothesize that information from the source space may overlap with, as well as be exclusive to, information from the sensor space. Therefore, we can extract more information from the sensor and source spaces if our hypothesis is true, thereby contributing to more accurate BCI systems. In this work, features from each space (sensor or source), and two strategies combining sensor and source features, are assessed. The information distribution among the sensor, source, and combined spaces is discussed through a Venn diagram for 18 motor imagery datasets. An additional five motor imagery datasets from the BCI Competition III site were examined. The results showed that the addition of source information yielded about 3.8% classification improvement for the 18 motor imagery datasets and showed an average accuracy of 75.56% for the BCI Competition data. Our proposed approach is promising, and improved performance may be possible with a better head model. Copyright © 2011 Elsevier B.V. All rights reserved.
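Feature-level fusion of the two spaces can be as simple as concatenation followed by a linear classifier. The sketch below is an illustrative baseline with hypothetical inputs, not the authors' exact combination strategies.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

def combined_accuracy(sensor_feats, source_feats, labels):
    """Cross-validated accuracy of concatenated sensor + source features (sketch).

    sensor_feats : (n_trials, n_sensor_features) features computed in the sensor space
    source_feats : (n_trials, n_source_features) features computed in the source space
    labels       : (n_trials,) class labels (e.g., left vs. right motor imagery)
    """
    X = np.hstack([sensor_feats, source_feats])     # simple feature-level fusion
    clf = LinearDiscriminantAnalysis()
    return cross_val_score(clf, X, labels, cv=5).mean()
```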
NASA Astrophysics Data System (ADS)
Carnell, E. J.; Misselbrook, T. H.; Dore, A. J.; Sutton, M. A.; Dragosits, U.
2017-09-01
The effects of atmospheric nitrogen (N) deposition are evident in terrestrial ecosystems worldwide, with eutrophication and acidification leading to significant changes in species composition. Substantial reductions in N deposition from nitrogen oxides emissions have been achieved in recent decades. By contrast, ammonia (NH3) emissions from agriculture have not decreased substantially and are typically highly spatially variable, making efficient mitigation challenging. One solution is to target NH3 mitigation measures spatially in source landscapes to maximize the benefits for nature conservation. The paper develops an approach to link national scale data and detailed local data to help identify suitable measures for spatial targeting of local sources near designated Special Areas of Conservation (SACs). The methodology combines high-resolution national data on emissions, deposition and source attribution with local data on agricultural management and site conditions. Application of the methodology for the full set of 240 SACs in England found that agriculture contributes ∼45 % of total N deposition. Activities associated with cattle farming represented 54 % of agricultural NH3 emissions within 2 km of the SACs, making them a major contributor to local N deposition, followed by mineral fertiliser application (21 %). Incorporation of local information on agricultural management practices at seven example SACs provided the means to correct outcomes compared with national-scale emission factors. The outcomes show how national scale datasets can provide information on N deposition threats at landscape to national scales, while local-scale information helps to understand the feasibility of mitigation measures, including the impact of detailed spatial targeting on N deposition rates to designated sites.
Stoeckel, Donald M; Stelzer, Erin A; Stogner, Robert W; Mau, David P
2011-05-01
Protocols for microbial source tracking of fecal contamination generally are able to identify when a source of contamination is present, but thus far have been unable to evaluate what portion of fecal-indicator bacteria (FIB) came from various sources. A mathematical approach to estimate relative amounts of FIB, such as Escherichia coli, from various sources based on the concentration and distribution of microbial source tracking markers in feces was developed. The approach was tested using dilute fecal suspensions, then applied as part of an analytical suite to a contaminated headwater stream in the Rocky Mountains (Upper Fountain Creek, Colorado). In one single-source fecal suspension, a source that was not present could not be excluded because of incomplete marker specificity; however, human and ruminant sources were detected whenever they were present. In the mixed-feces suspension (pet and human), the minority contributor (human) was detected at a concentration low enough to preclude human contamination as the dominant source of E. coli to the sample. Without the semi-quantitative approach described, simple detects of human-associated marker in stream samples would have provided inaccurate evidence that human contamination was a major source of E. coli to the stream. In samples from Upper Fountain Creek the pattern of E. coli, general and host-associated microbial source tracking markers, nutrients, and wastewater-associated chemical detections--augmented with local observations and land-use patterns--indicated that, contrary to expectations, birds rather than humans or ruminants were the predominant source of fecal contamination to Upper Fountain Creek. This new approach to E. coli allocation, validated by a controlled study and tested by application in a relatively simple setting, represents a widely applicable step forward in the field of microbial source tracking of fecal contamination. Copyright © 2011 Elsevier Ltd. All rights reserved.
Sanderlin, J.S.; Waser, P.M.; Hines, J.E.; Nichols, J.D.
2012-01-01
Metapopulation ecology has historically been rich in theory, yet analytical approaches for inferring demographic relationships among local populations have been few. We show how reverse-time multi-state capture-recapture models can be used to estimate the importance of local recruitment and interpopulation dispersal to metapopulation growth. We use 'contribution metrics' to infer demographic connectedness among eight local populations of banner-tailed kangaroo rats, to assess their demographic closure, and to investigate sources of variation in these contributions. Using a 7 year dataset, we show that: (i) local populations are relatively independent demographically, and contributions to local population growth via dispersal within the system decline with distance; (ii) growth contributions via local survival and recruitment are greater for adults than juveniles, while contributions involving dispersal are greater for juveniles; (iii) central populations rely more on local recruitment and survival than peripheral populations; (iv) contributions involving dispersal are not clearly related to overall metapopulation density; and (v) estimated contributions from outside the system are unexpectedly large. Our analytical framework can classify metapopulations on a continuum between demographic independence and panmixia, detect hidden population growth contributions, and make inference about other population linkage forms, including rescue effects and source-sink structures. Finally, we discuss differences between demographic and genetic population linkage patterns for our system. © 2011 The Royal Society.
Dorsolateral Frontal Lobe Epilepsy
Lee, Ricky W.; Worrell, Greg A.
2012-01-01
Dorsolateral frontal lobe seizures often present as a diagnostic challenge. The diverse semiologies may not produce lateralizing or localizing signs, and can appear bizarre and suggest psychogenic events. Unfortunately, scalp EEG and MRI are often unsatisfactory. It is not uncommon that these traditional diagnostic studies are either unhelpful or even misleading. In some cases SPECT and PET imaging can be effective tools for identifying the origin of seizures. However, these techniques and other emerging techniques all have limitations, and new approaches are needed to improve source localization. PMID:23027094
Low-Cost Deposition Methods for Transparent Thin-Film Transistors
2003-09-26
NASA Astrophysics Data System (ADS)
Lee, Gang-San; Kim, Pyung-Rae; Han, Young-Ji; Holsen, Thomas M.; Seo, Yong-Seok; Yi, Seung-Muk
2016-03-01
As a global pollutant, mercury (Hg) is of particular concern in East Asia, where anthropogenic emissions are the largest. In this study, speciated Hg concentrations were measured on Yongheung Island, the westernmost island in Korea, located between China and the Korean mainland to identify the importance of local and regional Hg sources. Various tools including correlations with other pollutants, conditional probability function, and back-trajectory-based analysis consistently indicated that Korean sources were important for gaseous oxidized mercury (GOM) whereas, for total gaseous mercury (TGM) and particulate bound mercury (PBM), regional transport was also important. A trajectory cluster based approach, considering both Hg concentration and the fraction of time each cluster was impacting the site, was developed to quantify the effect of Korean sources and out-of-Korean sources. This analysis suggests that contributions from out-of-Korean sources were similar to Korean sources for TGM whereas Korean sources contributed slightly more to the concentration variations of GOM and PBM compared to out-of-Korean sources. The ratio of GOM/PBM decreased when the site was impacted by regional transport, suggesting that this ratio may be a useful tool for identifying the relative significance of local sources vs. regional transport. The secondary formation of PBM through gas-particle partitioning with GOM was found to be important at low temperatures and high relative humidity.
Development challenges for Low Temperature Plasma Sources ``from Idea to Prototype''
NASA Astrophysics Data System (ADS)
Gerling, T.; Baudler, J.-S.; Horn, S.; Schmidt, M.; Weltmann, K.-D.
2015-09-01
While plasma medicine is a well-motivated and intensively investigated topic, the requirements on the plasma sources change for individual applications. For example, in dermatology a large-scale treatment is favored, while in dentistry a localized application of plasma sources is required. Meanwhile, plasma source development is based on feasibility and not on the application. When a source is developed, it is usually motivated towards an application instead of considering an application and designing a plasma source to fit its needs. Each approach has its advantage and can lead to an advance in the field. With this contribution, we will present an approach from idea to prototype and show challenges in plasma source development, for example the consideration of legal regulations, the adaptation of the plasma source to a specific field of application, and the interplay of gas flow dynamics with the electric field distribution. The solution was developed within several iterations to optimize it for different requirements. The obstacles that occurred during the development process will be highlighted and discussed. Afterwards, the final source is characterized for a potential medical application and compared directly with a plasma source certified as a medical product. Acknowledged grants: AU 11 038; ESF/IV-BM-B35-0010/13.
High-Resolution Air Pollution Mapping with Google Street View Cars: Exploiting Big Data.
Apte, Joshua S; Messier, Kyle P; Gani, Shahzad; Brauer, Michael; Kirchstetter, Thomas W; Lunden, Melissa M; Marshall, Julian D; Portier, Christopher J; Vermeulen, Roel C H; Hamburg, Steven P
2017-06-20
Air pollution affects billions of people worldwide, yet ambient pollution measurements are limited for much of the world. Urban air pollution concentrations vary sharply over short distances (≪1 km) owing to unevenly distributed emission sources, dilution, and physicochemical transformations. Accordingly, even where present, conventional fixed-site pollution monitoring methods lack the spatial resolution needed to characterize heterogeneous human exposures and localized pollution hotspots. Here, we demonstrate a measurement approach to reveal urban air pollution patterns at 4-5 orders of magnitude greater spatial precision than possible with current central-site ambient monitoring. We equipped Google Street View vehicles with a fast-response pollution measurement platform and repeatedly sampled every street in a 30-km2 area of Oakland, CA, developing the largest urban air quality data set of its type. Resulting maps of annual daytime NO, NO2, and black carbon at 30 m-scale reveal stable, persistent pollution patterns with surprisingly sharp small-scale variability attributable to local sources, up to 5-8× within individual city blocks. Since local variation in air quality profoundly impacts public health and environmental equity, our results have important implications for how air pollution is measured and managed. If validated elsewhere, this readily scalable measurement approach could address major air quality data gaps worldwide.
A new approach to depict bone surfaces in finger imaging using photoacoustic tomography
NASA Astrophysics Data System (ADS)
Biswas, S. K.; van Es, P.; Steenbergen, W.; Manohar, S.
2015-03-01
Imaging the vasculature close to the finger joints is of interest in the field of rheumatology. Locally increased vasculature in the synovial membrane of these joints can be a marker for rheumatoid arthritis. In previous work we showed that part of the photoacoustically induced ultrasound from the epidermis reflects off the bone surface within the finger. These reflected signals could be wrongly interpreted as new photoacoustic sources. In this work we show that a conventional ultrasound reconstruction algorithm, which considers the skin as a collection of ultrasound transmitters and the PA tomography probe as the detector array, can be used to delineate the bone surfaces of a finger. This can in the future assist in the localization of the joint gaps, providing a landmark for localizing the region of the inflamed synovial membrane. We test the approach on finger-mimicking phantoms.
NASA Astrophysics Data System (ADS)
Yang, X. F.; Liu, H.; Man, H. Y.; He, K. B.
2014-06-01
Mobile source emission inventories serve as critical input for atmospheric chemical transport models, which are used to simulate air quality and understand the role of mobile source emissions. The significance of mobile sources is even greater in China because the country has the largest vehicle population in the world, and that population continues to grow rapidly. Estimating emissions from diesel trucks is a critical part of mobile source emission inventories because of both its importance and its difficulty. Although diesel trucks are major contributors of nitrogen oxides (NOx) and primary particulate matter smaller than 2.5 μm (PM2.5), there are still more obstacles in estimating diesel truck emissions than car emissions; long-range freight transportation activities are complicated, and much of the basic data remain unclear. Most existing inventories were based on local registration numbers. However, according to our research, a large number of trucks conduct long-distance inter-city or inter-province transportation. Instead of the local-registration-number-based approach, a road-emission-intensity-based (REIB) approach is introduced in this research. To provide efficient data for the REIB approach, 1060 questionnaire responses and approximately 1.7 million valid seconds of onboard GPS monitoring data were collected. Both the questionnaire answers and GPS monitoring results indicated that the driving conditions on different types of road have significant impacts on the emission levels of freight trucks. We present estimated emissions of NOx and primary PM2.5 from diesel freight trucks for China in 2011. Using the REIB approach, the activity level and distribution data are obtained from the questionnaire answers. Emission factors are calculated with the International Vehicle Emission (IVE) model, which interpolated local on-board measurement results in China according to the GPS monitoring data on different roads. According to the results of this research, the largest differences among the emission factors (in g km-1) on different roads exceed 70% and 50% for NOx and PM2.5, respectively. These differences were caused by the different driving conditions that we monitored via GPS. The estimated NOx and PM2.5 emissions from diesel freight trucks in China were 5.0 (4.8-7.2) million t and 0.20 (0.17-0.22) million t, respectively, via the REIB approach in 2011. Another implication of this research is that different road infrastructures have different impacts on NOx and PM2.5 emissions. A region with more inter-city freeways or national roads tends to have more NOx emissions, while urban streets play a more important role in primary PM2.5 emissions from freight trucks. Compared with former studies, which allocate emissions according to local truck registration numbers and neglect inter-region long-distance transport trips, the REIB approach has advantages regarding the allocation of diesel truck emissions among provinces. Furthermore, the different driving conditions on the different road types are no longer overlooked with this approach.
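The REIB idea of attaching road-type-specific emission intensities to road-type-specific activity can be illustrated with a toy aggregation; all numbers below are hypothetical placeholders, not values from the study.

```python
# Hypothetical road-type activity data and emission factors, for illustration only.
road_km_per_day = {"freeway": 1.2e6, "national": 8.0e5, "urban": 5.0e5}   # vehicle-km/day
ef_nox_g_per_km = {"freeway": 4.5, "national": 5.5, "urban": 7.5}         # g/km by road type

def daily_emissions_t(activity, emission_factors):
    """REIB-style aggregation: sum(activity x emission factor) over road types, in t/day."""
    return sum(activity[r] * emission_factors[r] for r in activity) / 1e6  # g -> t

print(f"NOx: {daily_emissions_t(road_km_per_day, ef_nox_g_per_km):.1f} t/day")
```

Because emissions are attached to where the driving happens rather than to where the truck is registered, long-distance inter-province trips are allocated to the roads actually used.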
Koblitz, Jens C.; Fleming, Theodore H.; Medellín, Rodrigo A.; Kalko, Elisabeth K. V.; Schnitzler, Hans-Ulrich; Tschapka, Marco
2016-01-01
Nectar-feeding bats show morphological, physiological, and behavioral adaptations for feeding on nectar. How they find and localize flowers is still poorly understood. While scent cues alone allow no precise localization of a floral target, the spatial properties of flower echoes are very precise and could play a major role, particularly at close range. The aim of this study is to understand the role of echolocation for classification and localization of flowers. We compared the approach behavior of Leptonycteris yerbabuenae to flowers of a columnar cactus, Pachycereus pringlei, to that to an acrylic hollow hemisphere that is acoustically conspicuous to bats, but has different acoustic properties and, contrary to the cactus flower, present no scent. For recording the flight and echolocation behaviour we used two infrared video cameras under stroboscopic illumination synchronized with ultrasound recordings. During search flights all individuals identified both targets as a possible food source and initiated an approach flight; however, they visited only the cactus flower. In experiments with the acrylic hemisphere bats aborted the approach at ca. 40–50 cm. In the last instant before the flower visit the bats emitted a long terminal group of 10–20 calls. This is the first report of this behaviour for a nectar-feeding bat. Our findings suggest that L. yerbabuenae use echolocation for classification and localization of cactus flowers and that the echo-acoustic characteristics of the flower guide the bats directly to the flower opening. PMID:27684373
Multiple approaches to microbial source tracking in tropical northern Australia
Neave, Matthew; Luter, Heidi; Padovan, Anna; Townsend, Simon; Schobben, Xavier; Gibb, Karen
2014-01-01
Microbial source tracking is an area of research in which multiple approaches are used to identify the sources of elevated bacterial concentrations in recreational lakes and beaches. At our study location in Darwin, northern Australia, water quality in the harbor is generally good; however, dry-season beach closures due to elevated Escherichia coli and enterococci counts are a cause for concern. The sources of these high bacteria counts are currently unknown. To address this, we sampled sewage outfalls, other potential inputs, such as urban rivers and drains, and surrounding beaches, and used genetic fingerprints from E. coli and enterococci communities, fecal markers, and 454 pyrosequencing to track contamination sources. A sewage effluent outfall (Larrakeyah discharge) was a source of bacteria, including fecal bacteria that impacted nearby beaches. Two other treated effluent discharges did not appear to influence sites other than those directly adjacent. Several beaches contained fecal indicator bacteria that likely originated from urban rivers and creeks within the catchment. Generally, connectivity between the sites was observed within distinct geographical locations, and it appeared that most of the bacterial contamination on Darwin beaches was confined to local sources. PMID:25224738
NASA Astrophysics Data System (ADS)
Li, Xinya; Deng, Zhiqun Daniel; Rauchenstein, Lynn T.; Carlson, Thomas J.
2016-04-01
Locating the position of fixed or mobile sources (i.e., transmitters) based on measurements obtained from sensors (i.e., receivers) is an important research area that is attracting much interest. In this paper, we review several representative localization algorithms that use times of arrival (TOAs) and time differences of arrival (TDOAs) to achieve high signal source position estimation accuracy when a transmitter is in the line-of-sight of a receiver. Circular (TOA) and hyperbolic (TDOA) position estimation approaches both use nonlinear equations that relate the known locations of receivers and the unknown locations of transmitters. Estimating transmitter locations from the standard nonlinear equations may not be very accurate because of receiver location errors, receiver measurement errors, and the high computational burden of solving them. Least-squares and maximum-likelihood-based algorithms have become the most popular computational approaches to transmitter location estimation. In this paper, we summarize the computational characteristics and position estimation accuracies of various positioning algorithms. By improving both the methods for estimating the time of arrival of transmissions at receivers and the transmitter location estimation algorithms, transmitter location estimation may be applied across a range of applications and technologies such as radar, sonar, the Global Positioning System, wireless sensor networks, underwater animal tracking, mobile communications, and multimedia.
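A minimal sketch of the hyperbolic (TDOA) formulation is a nonlinear least-squares fit of range differences; the closed-form least-squares and maximum-likelihood estimators surveyed in the paper are more elaborate, and the propagation speed and starting point below are assumptions.

```python
import numpy as np
from scipy.optimize import least_squares

def tdoa_localize(receivers, tdoas, c=1500.0, x0=None):
    """Estimate a source position from TDOAs via nonlinear least squares (sketch).

    receivers : (n, 3) receiver coordinates; receiver 0 is the time reference
    tdoas     : (n-1,) arrival-time differences relative to receiver 0, in seconds
    c         : propagation speed in m/s (e.g., ~1500 in water, ~3e8 for radio)
    """
    def residuals(x):
        d = np.linalg.norm(receivers - x, axis=1)       # ranges to each receiver
        return (d[1:] - d[0]) - c * np.asarray(tdoas)   # hyperbolic constraints

    if x0 is None:
        x0 = receivers.mean(axis=0)                     # start from the array centroid
    return least_squares(residuals, x0).x
```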
Determining the sources of fine-grained sediment using the Sediment Source Assessment Tool (Sed_SAT)
Gorman Sanisaca, Lillian E.; Gellis, Allen C.; Lorenz, David L.
2017-07-27
A sound understanding of sources contributing to instream sediment flux in a watershed is important when developing total maximum daily load (TMDL) management strategies designed to reduce suspended sediment in streams. Sediment fingerprinting and sediment budget approaches are two techniques that, when used jointly, can qualify and quantify the major sources of sediment in a given watershed. The sediment fingerprinting approach uses trace element concentrations from samples in known potential source areas to determine a clear signature of each potential source. A mixing model is then used to determine the relative source contribution to the target suspended sediment samples. The computational steps required to apportion sediment for each target sample are quite involved and time intensive, a problem the Sediment Source Assessment Tool (Sed_SAT) addresses. Sed_SAT is a user-friendly statistical model that guides the user through the necessary steps in order to quantify the relative contributions of sediment sources in a given watershed. The model is written using the statistical software R (R Core Team, 2016b) and utilizes Microsoft Access® as a user interface but requires no prior knowledge of R or Microsoft Access® to run the model successfully. Sed_SAT identifies outliers, corrects for differences in size and organic content in the source samples relative to the target samples, evaluates the conservative behavior of tracers used in fingerprinting by applying a “Bracket Test,” identifies tracers with the highest discriminatory power, and provides robust error analysis through a Monte Carlo simulation following the mixing model. Quantifying sediment source contributions using the sediment fingerprinting approach provides local, State, and Federal land management agencies with important information needed to implement effective strategies to reduce sediment. Sed_SAT is designed to assist these agencies in applying the sediment fingerprinting approach to quantify sediment sources in the sediment TMDL framework.
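The core mixing-model step that such tools automate can be sketched as a constrained least-squares problem. The relative-error objective and equal starting proportions below are illustrative choices; Sed_SAT's outlier screening, size and organic-content corrections, Bracket Test, and Monte Carlo error analysis are not reproduced here.

```python
import numpy as np
from scipy.optimize import minimize

def unmix(target, sources):
    """Estimate source proportions for one target sediment sample (sketch).

    target  : (n_tracers,) tracer concentrations in the suspended-sediment sample (nonzero)
    sources : (n_sources, n_tracers) mean tracer concentrations of each source group
    Returns proportions that are non-negative and sum to 1.
    """
    n = sources.shape[0]

    def objective(p):
        predicted = p @ sources
        return np.sum(((target - predicted) / target) ** 2)   # relative squared error

    constraint = {"type": "eq", "fun": lambda p: p.sum() - 1.0}
    result = minimize(objective, np.full(n, 1.0 / n),
                      bounds=[(0.0, 1.0)] * n, constraints=constraint)
    return result.x
```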
NASA Astrophysics Data System (ADS)
Safieddine, Doha; Kachenoura, Amar; Albera, Laurent; Birot, Gwénaël; Karfoul, Ahmad; Pasnicu, Anca; Biraben, Arnaud; Wendling, Fabrice; Senhadji, Lotfi; Merlet, Isabelle
2012-12-01
Electroencephalographic (EEG) recordings are often contaminated with muscle artifacts. This disturbing myogenic activity not only strongly affects the visual analysis of EEG, but also most surely impairs the results of EEG signal processing tools such as source localization. This article focuses on the particular context of the contamination of epileptic signals (interictal spikes) by muscle artifact, as EEG is a key diagnostic tool for this pathology. In this context, our aim was to compare the ability of two stochastic approaches to blind source separation, namely independent component analysis (ICA) and canonical correlation analysis (CCA), and of two deterministic approaches, namely empirical mode decomposition (EMD) and the wavelet transform (WT), to remove muscle artifacts from EEG signals. To quantitatively compare the performance of these four algorithms, epileptic spike-like EEG signals were simulated from two different source configurations and artificially contaminated with different levels of real EEG-recorded myogenic activity. The efficiency of CCA, ICA, EMD, and WT to correct the muscular artifact was evaluated both by calculating the normalized mean-squared error between denoised and original signals and by comparing the results of source localization obtained from artifact-free as well as noisy signals, before and after artifact correction. Tests on real data recorded in an epileptic patient are also presented. The results obtained in the context of simulations and real data show that EMD outperformed the three other algorithms for the denoising of data highly contaminated by muscular activity. For less noisy data, and when spikes arose from a single cortical source, the myogenic artifact was best corrected with CCA and ICA. Otherwise, when spikes originated from two distinct sources, either EMD or ICA offered the most reliable denoising result for highly noisy data, while WT offered the best denoising result for less noisy data. These results suggest that the performance of muscle artifact correction methods strongly depends on the level of data contamination and on the source configuration underlying the EEG signals. Finally, some insights into the numerical complexity of these four algorithms are given.
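Of the four approaches compared, the wavelet-transform route is the simplest to sketch. The wavelet family, decomposition level, and universal threshold below are common defaults, not necessarily the settings used in the article.

```python
import numpy as np
import pywt

def wavelet_denoise(eeg, wavelet="sym4", level=5):
    """Soft-threshold wavelet denoising of a single EEG channel (WT approach sketch).

    eeg : (n_samples,) single-channel EEG contaminated by myogenic activity
    """
    coeffs = pywt.wavedec(eeg, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745            # noise estimate from finest scale
    thr = sigma * np.sqrt(2 * np.log(len(eeg)))                # universal threshold
    denoised = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(denoised, wavelet)[: len(eeg)]
```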
CIRSS vertical data integration, San Bernardino study
NASA Technical Reports Server (NTRS)
Hodson, W.; Christenson, J.; Michel, R. (Principal Investigator)
1982-01-01
The creation and use of a vertically integrated data base, including LANDSAT data, for local planning purposes in a portion of San Bernardino County, California are described. The project illustrates that a vertically integrated approach can benefit local users, can be used to identify and rectify discrepancies in various data sources, and that the LANDSAT component can be effectively used to identify change, perform initial capability/suitability modeling, update existing data, and refine existing data in a geographic information system. Local analyses were developed which produced data of value to planners in the San Bernardino County Planning Department and the San Bernardino National Forest staff.
Detecting black bear source-sink dynamics using individual-based genetic graphs.
Draheim, Hope M; Moore, Jennifer A; Etter, Dwayne; Winterstein, Scott R; Scribner, Kim T
2016-07-27
Source-sink dynamics affects population connectivity, spatial genetic structure, and population viability for many species. We introduce a novel approach that uses individual-based genetic graphs to identify source-sink areas within a continuously distributed population of black bears (Ursus americanus) in the northern lower peninsula (NLP) of Michigan, USA. Black bear harvest samples (n = 569, from 2002, 2006 and 2010) were genotyped at 12 microsatellite loci, and locations were compared across years to identify areas of consistent occupancy over time. We compared graph metrics estimated for a genetic model with metrics from 10 ecological models to identify ecological factors that were associated with sources and sinks. We identified 62 source nodes (16 of which represent important source areas, with net flux > 0.7) and 79 sink nodes. Source strength was significantly correlated with local bear harvest density (a proxy for bear density) and habitat suitability. Additionally, resampling simulations showed our approach is robust to potential sampling bias from uneven sample dispersion. Findings demonstrate that black bears in the NLP exhibit asymmetric gene flow, and that individual-based genetic graphs can characterize source-sink dynamics in continuously distributed species in the absence of discrete habitat patches. Our findings warrant consideration of undetected source-sink dynamics and their implications for harvest management of game species. © 2016 The Author(s).
Combining Radiography and Passive Measurements for Radiological Threat Localization in Cargo
NASA Astrophysics Data System (ADS)
Miller, Erin A.; White, Timothy A.; Jarman, Kenneth D.; Kouzes, Richard T.; Kulisek, Jonathan A.; Robinson, Sean M.; Wittman, Richard A.
2015-10-01
Detecting shielded special nuclear material (SNM) in a cargo container is a difficult problem, since shielding reduces the amount of radiation escaping the container. Radiography provides information that is complementary to that provided by passive gamma-ray detection systems: while not directly sensitive to radiological materials, radiography can reveal highly shielded regions that may mask a passive radiological signal. Combining these measurements has the potential to improve SNM detection, either through improved sensitivity or by providing a solution to the inverse problem to estimate source properties (strength and location). We present a data-fusion method that uses a radiograph to provide an estimate of the radiation-transport environment for gamma rays from potential sources. This approach makes quantitative use of radiographic images without relying on image interpretation, and results in a probabilistic description of likely source locations and strengths. We present results for this method for a modeled test case of a cargo container passing through a plastic-scintillator-based radiation portal monitor and a transmission-radiography system. We find that a radiograph-based inversion scheme allows for localization of a low-noise source placed randomly within the test container to within 40 cm, compared to 70 cm for triangulation alone, while strength estimation accuracy is improved by a factor of six. Improvements are seen in regions of both high and low shielding, but are most pronounced in highly shielded regions. The approach proposed here combines transmission and emission data in a manner that has not been explored in the cargo-screening literature, advancing the ability to accurately describe a hidden source based on currently-available instrumentation.
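The data-fusion idea can be illustrated with a toy Bayesian inversion: given an attenuation map derived from a radiograph and Poisson counts at a few passive detectors, evaluate a posterior over candidate 2D source positions. The geometry, the crude sampling of the line integral, the assumed source strength and the background rate below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def source_posterior(counts, det_pos, mu_map, cell, strength, bkg=1.0):
    """Grid posterior over 2D source position, assuming Poisson counts at each
    detector and an attenuation map (mu per cell) estimated from a radiograph."""
    ny, nx = mu_map.shape
    logpost = np.zeros((ny, nx))
    for iy in range(ny):
        for ix in range(nx):
            src = (np.array([ix, iy]) + 0.5) * cell          # candidate source position
            for k, rk in enumerate(det_pos):
                d = np.linalg.norm(rk - src) + 1e-9
                # crude line integral: sample mu along the source-detector ray
                ts = np.linspace(0.0, 1.0, 50)
                pts = src + np.outer(ts, rk - src)
                jx = np.clip((pts[:, 0] / cell).astype(int), 0, nx - 1)
                jy = np.clip((pts[:, 1] / cell).astype(int), 0, ny - 1)
                optical_depth = np.mean(mu_map[jy, jx]) * d
                lam = bkg + strength * np.exp(-optical_depth) / (4 * np.pi * d**2)
                logpost[iy, ix] += counts[k] * np.log(lam) - lam   # Poisson log-likelihood
    logpost -= logpost.max()
    post = np.exp(logpost)
    return post / post.sum()                                  # normalized location posterior
```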
Multi-Disciplinary Approach to Trace Contamination of Streams and Beaches
Nickles, James
2008-01-01
Concentrations of fecal-indicator bacteria in urban streams and ocean beaches in and around Santa Barbara occasionally can exceed public-health standards for recreation. The U.S. Geological Survey (USGS), working with the City of Santa Barbara, has used multi-disciplinary science to trace the sources of the bacteria. This research is helping local agencies take steps to improve recreational water quality. The USGS used an approach that combined traditional hydrologic and microbiological data, with state-of-the-art genetic, molecular, and chemical tracer analysis. This research integrated physical data on streamflow, ground water, and near-shore oceanography, and made extensive use of modern geophysical and isotopic techniques. Using those techniques, the USGS was able to evaluate the movement of water and the exchange of ground water with near-shore ocean water. The USGS has found that most fecal bacteria in the urban streams came from storm-drain discharges, with the highest concentrations occurring during storm flow. During low streamflow, the concentrations varied as much as three-fold, owing to variable contribution of non-point sources such as outdoor water use and urban runoff to streamflow. Fecal indicator bacteria along ocean beaches were from both stream discharge to the ocean and from non-point sources such as bird fecal material that accumulates in kelp and sand at the high-tide line. Low levels of human-specific Bacteroides, suggesting fecal material from a human source, were consistently detected on area beaches. One potential source, a local sewer line buried beneath the beach, was found not to be responsible for the fecal bacteria.
Parallelization of sequential Gaussian, indicator and direct simulation algorithms
NASA Astrophysics Data System (ADS)
Nunes, Ruben; Almeida, José A.
2010-08-01
Improving the performance and robustness of algorithms on new high-performance parallel computing architectures is a key issue in efficiently performing 2D and 3D studies with large amounts of data. In geostatistics, sequential simulation algorithms are good candidates for parallelization. When compared with other computational applications in geosciences (such as fluid flow simulators), sequential simulation software is not extremely computationally intensive, but parallelization can make it more efficient and creates alternatives for its integration in inverse modelling approaches. This paper describes the implementation and benchmarking of a parallel version of the three classic sequential simulation algorithms: direct sequential simulation (DSS), sequential indicator simulation (SIS) and sequential Gaussian simulation (SGS). For this purpose, the source used was GSLIB, but the entire code was extensively modified to take into account the parallelization approach and was also rewritten in the C programming language. The paper also explains in detail the parallelization strategy and the main modifications. Regarding the integration of secondary information, the DSS algorithm is able to perform simple kriging with local means, kriging with an external drift and collocated cokriging with both local and global correlations. SIS includes a local correction of probabilities. Finally, a brief comparison is presented of simulation results using one, two and four processors. All performance tests were carried out on 2D soil data samples. The source code is completely open source and easy to read. It should be noted that the code is only fully compatible with Microsoft Visual C and should be adapted for other systems/compilers.
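For readers unfamiliar with the algorithms being parallelized, the following is a minimal, single-process sketch of the core sequential Gaussian simulation loop on a 1D grid: visit cells along a random path, perform simple kriging from the nearest already-simulated cells with an exponential covariance, and draw from the resulting conditional Gaussian. It is an unconditional, illustrative version, not the parallelized GSLIB-derived C code described in the paper.

```python
import numpy as np

def sgs_1d(n, rng, corr_len=5.0, n_neigh=8):
    """Unconditional sequential Gaussian simulation of a standard-normal field on a 1D grid."""
    cov = lambda h: np.exp(-np.abs(h) / corr_len)   # unit-sill exponential covariance model
    values = np.full(n, np.nan)
    for idx in rng.permutation(n):                  # random simulation path
        known = np.flatnonzero(~np.isnan(values))
        if known.size == 0:
            values[idx] = rng.standard_normal()
            continue
        near = known[np.argsort(np.abs(known - idx))[:n_neigh]]
        C = cov(near[:, None] - near[None, :])      # data-to-data covariances
        c = cov(near - idx)                         # data-to-target covariances
        lam = np.linalg.solve(C, c)                 # simple kriging weights
        mean = lam @ values[near]
        var = max(1.0 - lam @ c, 1e-12)             # simple kriging variance
        values[idx] = mean + np.sqrt(var) * rng.standard_normal()
    return values

field = sgs_1d(200, np.random.default_rng(0))
```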
USDA-ARS?s Scientific Manuscript database
The approach of anaerobic soil disinfestation (ASD) in Florida, a method for pre-plant soil treatment, consists of combining the application of molasses (C source) with the application of composted poultry litter (CPL) as an organic amendment. However, CPL is not always available locally and is...
Hispanic Youth Employment Guidebook: Local Government Approaches Using Public-Private Partnerships.
ERIC Educational Resources Information Center
Nieto, Margaret; Kubo, Christine, Ed.
This guidebook was developed as an aid to communities seeking to create partnerships between public and private sector sources to reduce the youth unemployment among Hispanics. The International City Management Association (ICMA), as part of a project sponsored by the Department of Health and Human Services, identifies six model ventures using the…
Evaluation of coded aperture radiation detectors using a Bayesian approach
NASA Astrophysics Data System (ADS)
Miller, Kyle; Huggins, Peter; Labov, Simon; Nelson, Karl; Dubrawski, Artur
2016-12-01
We investigate tradeoffs arising from the use of coded aperture gamma-ray spectrometry to detect and localize sources of harmful radiation in the presence of noisy background. Using an example application scenario of area monitoring and search, we empirically evaluate weakly supervised spectral, spatial, and hybrid spatio-spectral algorithms for scoring individual observations, and two alternative methods of fusing evidence obtained from multiple observations. Results of our experiments confirm the intuition that the directional information provided by spectrometers masked with a coded aperture enables gains in source localization accuracy, but at the expense of a reduced probability of detection. Losses in detection performance can, however, be reclaimed to a substantial extent by using our new spatial and spatio-spectral scoring methods, which rely on realistic assumptions regarding masking and its impact on measured photon distributions.
On Fusing Recursive Traversals of K-d Trees
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rajbhandari, Samyam; Kim, Jinsung; Krishnamoorthy, Sriram
Loop fusion is a key program transformation for data locality optimization that is implemented in production compilers. But optimizing compilers currently cannot exploit fusion opportunities across a set of recursive tree traversal computations with producer-consumer relationships. In this paper, we develop a compile-time approach to dependence characterization and program transformation to enable fusion across recursively specified traversals over k-ary trees. We present the FuseT source-to-source code transformation framework to automatically generate fused composite recursive operators from an input program containing a sequence of primitive recursive operators. We use our framework to implement fused operators for MADNESS, the Multiresolution Adaptive Numerical Environment for Scientific Simulation. We show that locality optimization through fusion can offer more than an order of magnitude performance improvement.
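As a toy illustration of the kind of fusion meant here (not the FuseT framework itself, which transforms C++ source), the sketch below fuses a producer traversal and a consumer traversal over a small binary tree into a single pass, so the intermediate tree is never materialized.

```python
class Node:
    def __init__(self, value, left=None, right=None):
        self.value, self.left, self.right = value, left, right

# Primitive recursive operators: 'scale' produces a new tree, 'total' consumes one.
def scale(node, c):
    if node is None:
        return None
    return Node(node.value * c, scale(node.left, c), scale(node.right, c))

def total(node):
    return 0 if node is None else node.value + total(node.left) + total(node.right)

# Fused composite operator: one traversal does both, improving data locality.
def scaled_total(node, c):
    if node is None:
        return 0
    return node.value * c + scaled_total(node.left, c) + scaled_total(node.right, c)

t = Node(1, Node(2), Node(3, Node(4)))
assert total(scale(t, 2)) == scaled_total(t, 2)   # both evaluate to 20
```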
Ray propagation in oblate atmospheres. [for Jupiter
NASA Technical Reports Server (NTRS)
Hubbard, W. B.
1976-01-01
Phinney and Anderson's (1968) exact theory for the inversion of radio-occultation data for planetary atmospheres breaks down seriously when applied to occultations by oblate atmospheres because of departures from Bouguer's law. It has been proposed that this breakdown can be overcome by transforming the theory to a local spherical symmetry which osculates a ray's point of closest approach. The accuracy of this transformation procedure is assessed by evaluating the size of terms which are intrinsic to an oblate atmosphere and which are not eliminated by a local spherical approximation. The departures from Bouguer's law are analyzed, and it is shown that in the lowest-order deviation from that law, the plane of refraction is defined by the normal to the atmosphere at closest approach. In the next order, it is found that the oblateness of the atmosphere 'warps' the ray path out of a single plane, but the effect appears to be negligible for most purposes. It is concluded that there seems to be no source of serious error in making an approximation of local spherical symmetry with the refraction plane defined by the normal at closest approach.
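For reference, the invariant whose breakdown is at issue, Bouguer's law for a spherically stratified medium, can be written in its standard form as

\[ n(r)\, r \sin\theta = \mathrm{const}, \]

where n(r) is the refractive index at radius r and θ is the angle between the ray and the local radial direction; in an oblate atmosphere no single such invariant holds exactly along the ray, which is what the local spherical approximation tries to recover.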
Energy storage requirements of dc microgrids with high penetration renewables under droop control
Weaver, Wayne W.; Robinett, Rush D.; Parker, Gordon G.; ...
2015-01-09
Energy storage is an important design component in microgrids with high penetration of renewable sources, needed to maintain the system because of the highly variable and sometimes stochastic nature of the sources. Storage devices can be distributed close to the sources and/or at the microgrid bus. In addition, storage requirements can be minimized with a centralized control architecture, but this creates a single point of failure. Distributed droop control enables a completely decentralized architecture, but the energy storage optimization becomes more difficult. Our paper presents an approach to droop control that enables the local and bus storage requirements to be determined. Given a priori knowledge of the design structure of a microgrid and the basic cycles of the renewable sources, we found that the droop settings of the sources are such that they minimize both the bus voltage variations and the overall energy storage capacity required in the system. This approach can be used in the design phase of a microgrid with a decentralized control structure to determine appropriate droop settings as well as the sizing of energy storage devices.
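A droop law of the kind referred to here typically takes the form (an illustrative voltage-power droop; the paper's exact control law may differ)

\[ V_i = V_{\mathrm{ref}} - m_i \left( P_i - P_{i,\mathrm{ref}} \right), \]

where V_i and P_i are the output voltage and power of source i and m_i is its droop gain; choosing the m_i trades bus-voltage deviation against how much of the renewable variability each local storage device must absorb.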
Comparison of Phase-Based 3D Near-Field Source Localization Techniques for UHF RFID.
Parr, Andreas; Miesen, Robert; Vossiek, Martin
2016-06-25
In this paper, we present multiple techniques for phase-based narrowband backscatter tag localization in three-dimensional space with planar antenna arrays or synthetic apertures. Beamformer and MUSIC localization algorithms, known from near-field source localization and direction-of-arrival estimation, are applied to the 3D backscatter scenario and their performance in terms of localization accuracy is evaluated. We discuss the impact of different transceiver modes known from the literature, which evaluate different send and receive antenna path combinations for a single localization, as in multiple input multiple output (MIMO) systems. Furthermore, we propose a new Singledimensional-MIMO (S-MIMO) transceiver mode, which is especially suited for use with mobile robot systems. Monte-Carlo simulations based on a realistic multipath error model ensure spatial correlation of the simulated signals, and serve to critically appraise the accuracies of the different localization approaches. A synthetic uniform rectangular array created by a robotic arm is used to evaluate selected localization techniques. We use an Ultra High Frequency (UHF) Radiofrequency Identification (RFID) setup to compare measurements with the theory and simulation. The results show how a mean localization accuracy of less than 30 cm can be reached in an indoor environment. Further simulations demonstrate how the distance between aperture and tag affects the localization accuracy and how the size and grid spacing of the rectangular array need to be adapted to improve the localization accuracy down to orders of magnitude in the centimeter range, and to maximize array efficiency in terms of localization accuracy per number of elements.
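A minimal sketch of the near-field MUSIC evaluation described above, assuming complex baseband snapshots, a known wavelength, and a round-trip (backscatter) phase model; the array geometry, the search grid, and the single-tag assumption are placeholders rather than the paper's configuration.

```python
import numpy as np

def music_map(snapshots, ant_pos, grid, wavelength, n_src=1):
    """Near-field MUSIC pseudospectrum over candidate 3D tag positions.

    snapshots: (n_ant, n_snap) complex baseband samples
    ant_pos:   (n_ant, 3) antenna positions; grid: (n_pts, 3) candidate positions
    Assumes a round-trip (backscatter) phase of 4*pi*d/lambda per antenna.
    """
    R = snapshots @ snapshots.conj().T / snapshots.shape[1]   # sample covariance matrix
    w, v = np.linalg.eigh(R)                                  # eigenvalues in ascending order
    En = v[:, : R.shape[0] - n_src]                           # noise-subspace eigenvectors
    spectrum = np.empty(len(grid))
    for i, p in enumerate(grid):
        d = np.linalg.norm(ant_pos - p, axis=1)
        a = np.exp(-1j * 4 * np.pi * d / wavelength)          # near-field steering vector
        a /= np.linalg.norm(a)
        spectrum[i] = 1.0 / np.real(a.conj() @ En @ En.conj().T @ a)
    return spectrum                                           # peak ~ estimated tag position
```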
A detection method for X-ray images based on wavelet transforms: the case of the ROSAT PSPC.
NASA Astrophysics Data System (ADS)
Damiani, F.; Maggio, A.; Micela, G.; Sciortino, S.
1996-02-01
The authors have developed a method based on wavelet transforms (WT) to detect efficiently sources in PSPC X-ray images. The multiscale approach typical of WT can be used to detect sources with a large range of sizes, and to estimate their size and count rate. Significance thresholds for candidate detections (found as local WT maxima) have been derived from a detailed study of the probability distribution of the WT of a locally uniform background. The use of the exposure map allows good detection efficiency to be retained even near PSPC ribs and edges. The algorithm may also be used to get upper limits to the count rate of undetected objects. Simulations of realistic PSPC images containing either pure background or background+sources were used to test the overall algorithm performances, and to assess the frequency of spurious detections (vs. detection threshold) and the algorithm sensitivity. Actual PSPC images of galaxies and star clusters show the algorithm to have good performance even in cases of extended sources and crowded fields.
Asymptotic-preserving Lagrangian approach for modeling anisotropic transport in magnetized plasmas
NASA Astrophysics Data System (ADS)
Chacon, Luis; Del-Castillo-Negrete, Diego
2012-03-01
Modeling electron transport in magnetized plasmas is extremely challenging due to the extreme anisotropy between parallel (to the magnetic field) and perpendicular directions (the transport-coefficient ratio χ∥/χ⊥ ~ 10^10 in fusion plasmas). Recently, a novel Lagrangian Green's function method has been proposed [D. del-Castillo-Negrete, L. Chacón, PRL 106, 195004 (2011); D. del-Castillo-Negrete, L. Chacón, Phys. Plasmas, submitted (2011)] to solve the local and non-local purely parallel transport equation in general 3D magnetic fields. The approach avoids numerical pollution, is inherently positivity-preserving, and is scalable algorithmically (i.e., work per degree-of-freedom is grid-independent). In this poster, we discuss the extension of the Lagrangian Green's function approach to include perpendicular transport terms and sources. We present an asymptotic-preserving numerical formulation, which ensures a consistent numerical discretization temporally and spatially for arbitrary χ∥/χ⊥ ratios. We will demonstrate the potential of the approach with various challenging configurations, including the case of transport across a magnetic island in cylindrical geometry.
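The transport equation at issue can be written in the standard anisotropic form (notation assumed here, with the unit vector b̂ along the magnetic field and S a source term):

\[
\frac{\partial T}{\partial t} \;=\; \nabla\cdot\Big[\,\chi_\parallel\,\hat{\mathbf b}\,(\hat{\mathbf b}\cdot\nabla T)\;+\;\chi_\perp\big(\nabla T-\hat{\mathbf b}\,(\hat{\mathbf b}\cdot\nabla T)\big)\Big]\;+\;S .
\]

The asymptotic-preserving requirement is that the discretization remain accurate and well conditioned as χ∥/χ⊥ → ∞, rather than degrading as the anisotropy grows.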
NASA Astrophysics Data System (ADS)
Gao, Y.
2017-12-01
Regional precipitation recycling (i.e., the contribution of local evaporation to local precipitation) is an important component of the water cycle over the Tibetan Plateau (TP). Two methods were used to investigate regional precipitation recycling: 1) tracking of tagged atmospheric water parcels originating from evaporation in a source region (i.e., E-tagging), and 2) a back-trajectory approach to track the evaporative sources contributing to precipitation in a specific region. These two methods were applied to Weather Research and Forecasting (WRF) regional climate simulations to quantify the precipitation recycling ratio in the TP for three selected years: a climatologically normal, a dry and a wet year. The simulation region is characterized by a high average elevation above 4000 m and complex terrain. The back-trajectory approach is also calculated over three sub-regions of the TP, namely the western, northeastern and southeastern TP, while the E-tagging approach provides recycling-ratio distributions over the whole TP. Three aspects are investigated to characterize the precipitation recycling: annual mean, seasonal variations and spatial distributions. Averaged over the TP, the precipitation recycling ratio estimated by the E-tagging approach is higher than that from the back-trajectory method. The back-trajectory approach uses a precipitation threshold defined as the total precipitation in five days divided by a random number, and this number was set to 500 as a trade-off between equilibrium and computational efficiency. The lower recycling ratio derived from the back-trajectory approach is related to the precipitation threshold used. The E-tagging, however, tracks every air parcel of evaporation regardless of the precipitation amount. There is no obvious seasonal variation in the recycling ratio using either method. The E-tagging approach shows high recycling ratios in the central TP, indicating stronger land-atmosphere interactions there than elsewhere.
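In both approaches the quantity being estimated is, in essence, the regional recycling ratio

\[ \rho \;=\; \frac{P_{\mathrm{local}}}{P_{\mathrm{total}}}, \]

i.e. the fraction of precipitation falling in the region (here the TP or one of its sub-regions) that originates as evaporation from that same region; the two methods differ only in how the locally evaporated fraction P_local is attributed to each precipitation event.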
NASA Astrophysics Data System (ADS)
Lee, G.-S.; Kim, P.-R.; Han, Y.-J.; Holsen, T. M.; Seo, Y.-S.; Yi, S.-M.
2015-11-01
As a global pollutant, mercury (Hg) is of particular concern in East Asia, where anthropogenic emissions are the largest. In this study, speciated Hg concentrations were measured on the westernmost island in Korea, located between China and the Korean mainland, to identify the importance of local, regional and distant Hg sources. Various tools, including correlations with other pollutants, a conditional probability function, and back-trajectory based analysis, consistently indicated that Korean sources were important for gaseous oxidized mercury (GOM) whereas, for total gaseous mercury (TGM) and particulate bound mercury (PBM), long-range and regional transport were also important. A trajectory cluster based approach considering both the Hg concentration and the fraction of time each cluster was impacting the site was developed to quantify the effect of Korean and out-of-Korean sources. This analysis suggests that Korean sources contributed approximately 55 % of the GOM and PBM, while there were approximately equal contributions from Korean and out-of-Korean sources for the TGM measured at the site. The GOM / PBM ratio decreased when the site was impacted by long-range transport, suggesting that this ratio may be a useful tool for identifying the relative significance of local sources vs. long-range transport. The secondary formation of PBM through gas-particle partitioning with GOM was found to be important at low temperatures and high relative humidity.
Can two dots form a Gestalt? Measuring emergent features with the capacity coefficient.
Hawkins, Robert X D; Houpt, Joseph W; Eidels, Ami; Townsend, James T
2016-09-01
While there is widespread agreement among vision researchers on the importance of some local aspects of visual stimuli, such as hue and intensity, there is no general consensus on a full set of basic sources of information used in perceptual tasks or how they are processed. Gestalt theories place particular value on emergent features, which are based on the higher-order relationships among elements of a stimulus rather than local properties. Thus, arbitrating between different accounts of features is an important step in arbitrating between local and Gestalt theories of perception in general. In this paper, we present the capacity coefficient from Systems Factorial Technology (SFT) as a quantitative approach for formalizing and rigorously testing predictions made by local and Gestalt theories of features. As a simple, easily controlled domain for testing this approach, we focus on the local feature of location and the emergent features of Orientation and Proximity in a pair of dots. We introduce a redundant-target change detection task to compare our capacity measure on (1) trials where the configuration of the dots changed along with their location against (2) trials where the amount of local location change was exactly the same, but there was no change in the configuration. Our results, in conjunction with our modeling tools, favor the Gestalt account of emergent features. We conclude by suggesting several candidate information-processing models that incorporate emergent features, which follow from our approach. Copyright © 2015 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Bradley, E. S.; Leifer, I.; Roberts, D.; Dennison, P. E.; Margolis, J.; Moritsch, M.; Diskin, G. S.; Sachse, G. W.
2009-12-01
The Coal Oil Point (COP) hydrocarbon seep field off the coast of Santa Barbara, CA is one of the most active and best-studied marine geologic methane sources in the world and contributes to elevated terrestrial methane concentrations downwind. In this study, we investigate the spatiotemporal variability of this local source and the influence of meteorological conditions on transport and concentration. A methane plume emanating from Trilogy Seep was mapped with the Airborne Visible Infrared Imaging Spectrometer at a 7.5 m resolution with a short-wave infrared band ratio technique. This structure agrees with the local wind speed and direction and is orthogonal to the surface currents. ARCTAS-CARB aircraft in situ sampling of lower-troposphere methane is compared to sub-hour total hydrocarbon concentration (THC) measurements from the Santa Barbara Air Pollution Control District (SBAPCD) station located near COP. Hourly SBAPCD THC values from 1980-2008 demonstrate a decrease in seep source strength until the late 1990s, followed by a consistent increase. The occurrence of elevated SBAPCD THC values for onshore wind conditions as well as numerous positive outliers as high as 17 ppm suggests that seep field emissions are both quasi-steady state and transient, direct (bubble) and diffuse (outgassing). As demonstrated for the COP seeps, the combination of imaging spectrometry, aircraft in situ sampling, and ground-based monitoring provides a powerful approach for understanding local methane sources and transport processes.
Survey on the Performance of Source Localization Algorithms.
Fresno, José Manuel; Robles, Guillermo; Martínez-Tarifa, Juan Manuel; Stewart, Brian G
2017-11-18
The localization of emitters using an array of sensors or antennas is a prevalent issue approached in several applications. There exist different techniques for source localization, which can be classified into multilateration, received signal strength (RSS) and proximity methods. The performance of multilateration techniques relies on measured time variables: the time of flight (ToF) of the emission from the emitter to the sensor, the time differences of arrival (TDoA) of the emission between sensors and the pseudo-time of flight (pToF) of the emission to the sensors. The multilateration algorithms presented and compared in this paper can be classified as iterative and non-iterative methods. Both standard least squares (SLS) and hyperbolic least squares (HLS) are iterative and based on the Newton-Raphson technique to solve the non-linear equation system. The metaheuristic technique particle swarm optimization (PSO) used for source localisation is also studied. This optimization technique estimates the source position as the optimum of an objective function based on HLS and is also iterative in nature. Three non-iterative algorithms, namely the hyperbolic positioning algorithms (HPA), the maximum likelihood estimator (MLE) and Bancroft algorithm, are also presented. A non-iterative combined algorithm, MLE-HLS, based on MLE and HLS, is further proposed in this paper. The performance of all algorithms is analysed and compared in terms of accuracy in the localization of the position of the emitter and in terms of computational time. The analysis is also undertaken with three different sensor layouts since the positions of the sensors affect the localization; several source positions are also evaluated to make the comparison more robust. The analysis is carried out using theoretical time differences, as well as including errors due to the effect of digital sampling of the time variables. It is shown that the most balanced algorithm, yielding better results than the other algorithms in terms of accuracy and short computational time, is the combined MLE-HLS algorithm.
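A minimal sketch of the iterative hyperbolic least-squares idea described above (Gauss-Newton/Newton-Raphson refinement of the emitter position from TDoA-derived range differences); the initial guess, propagation speed, and convergence tolerance are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def hls_locate(sensors, tdoa, c=343.0, x0=None, n_iter=20):
    """Estimate an emitter position from TDoAs via Gauss-Newton iterations.

    sensors: (n, dim) sensor coordinates; tdoa[i-1] = t_i - t_0 for i = 1..n-1.
    Range differences: d_i = c*tdoa = |x - s_i| - |x - s_0|.
    """
    x = sensors.mean(axis=0) if x0 is None else np.asarray(x0, float)
    d = c * np.asarray(tdoa)
    s0, si = sensors[0], sensors[1:]
    for _ in range(n_iter):
        r0 = np.linalg.norm(x - s0)
        ri = np.linalg.norm(x - si, axis=1)
        res = (ri - r0) - d                            # hyperbolic residuals
        J = (x - si) / ri[:, None] - (x - s0) / r0     # Jacobian of the residuals
        step, *_ = np.linalg.lstsq(J, -res, rcond=None)
        x = x + step
        if np.linalg.norm(step) < 1e-9:                # stop when the update is negligible
            break
    return x
```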
Dorsolateral frontal lobe epilepsy.
Lee, Ricky W; Worrell, Greg A
2012-10-01
Dorsolateral frontal lobe seizures often present a diagnostic challenge. The diverse semiologies may not produce lateralizing or localizing signs and can appear bizarre and suggest psychogenic events. Unfortunately, scalp electroencephalography (EEG) and magnetic resonance imaging (MRI) are often unsatisfactory. It is not uncommon that these traditional diagnostic studies are either unhelpful or even misleading. In some cases, SPECT and positron emission tomography imaging can be effective tools to identify the origin of seizures. However, these and other emerging techniques all have limitations, and new approaches are needed to improve source localization.
NASA Astrophysics Data System (ADS)
Gerardy, I.; Rodenas, J.; Van Dycke, M.; Gallardo, S.; Tondeur, F.
2008-02-01
Brachytherapy is a radiotherapy treatment where encapsulated radioactive sources are introduced within a patient. Depending on the technique used, such sources can produce high, medium or low local dose rates. The Monte Carlo method is a powerful tool to simulate sources and devices in order to help physicists in treatment planning. In multiple types of gynaecological cancer, intracavitary brachytherapy (HDR Ir-192 source) is used combined with other therapy treatment to give an additional local dose to the tumour. Different types of applicators are used in order to increase the dose imparted to the tumour and to limit the effect on healthy surrounding tissues. The aim of this work is to model both applicator and HDR source in order to evaluate the dose at a reference point as well as the effect of the materials constituting the applicators on the near field dose. The MCNP5 code based on the Monte Carlo method has been used for the simulation. Dose calculations have been performed with *F8 energy deposition tally, taking into account photons and electrons. Results from simulation have been compared with experimental in-phantom dose measurements. Differences between calculations and measurements are lower than 5%. The importance of the source position has been underlined.
A hybrid modeling with data assimilation to evaluate human exposure level
NASA Astrophysics Data System (ADS)
Koo, Y. S.; Cheong, H. K.; Choi, D.; Kim, A. L.; Yun, H. Y.
2015-12-01
Exposure models are designed to better represent human contact with PM (Particulate Matter) and other air pollutants such as CO, SO2, O3, and NO2. The exposure concentrations of the air pollutants to humans are determined by long-range transport at global and regional scales, from Europe and China, as well as by local emissions from urban and road vehicle sources. To assess the exposure level in detail, the multiple-scale influence from background to local sources should be considered. A hybrid air quality modeling methodology combining a grid-based chemical transport model with a local plume dispersion model was used to provide spatially and temporally resolved air quality concentrations for human exposure levels in Korea. In the hybrid modeling approach, concentrations from a grid-based chemical transport model and a local plume dispersion model are added to provide contributions from photochemical interactions, long-range (regional) transport and local-scale dispersion. The CAMx (Comprehensive Air quality Model with Extensions) was used for the background concentrations from anthropogenic and natural emissions in East Asia including Korea, while the road dispersion from vehicle emissions was calculated by the CALPUFF model. The total exposure level of the pollutants was finally assessed by summing the background and road contributions. In the hybrid modeling, a data assimilation method based on optimal interpolation was applied to overcome the discrepancies between the model-predicted concentrations and observations. The observations were taken from the air quality monitoring stations in Korea. The spatial resolution of the hybrid model was 50 m for the Seoul Metropolitan Area. This example clearly demonstrates that the exposure level could be estimated at a fine scale for exposure assessment by using the hybrid modeling approach with data assimilation.
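The optimal interpolation step mentioned here takes the standard analysis form (notation assumed):

\[
\mathbf{x}_a \;=\; \mathbf{x}_b + \mathbf{K}\,(\mathbf{y} - \mathbf{H}\mathbf{x}_b),
\qquad
\mathbf{K} \;=\; \mathbf{B}\mathbf{H}^{\mathsf T}\big(\mathbf{H}\mathbf{B}\mathbf{H}^{\mathsf T} + \mathbf{R}\big)^{-1},
\]

where x_b is the modelled (background) concentration field, y the station observations, H the observation operator, and B and R the background- and observation-error covariance matrices.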
Refsgaard, A; Jacobsen, T; Jacobsen, B; Ørum, J-E
2007-01-01
The EU Water Framework Directive (WFD) requires an integrated approach to river basin management in order to meet environmental and ecological objectives. This paper presents concepts and a full-scale application of an integrated modelling framework. The Ringkoebing Fjord basin is characterized by intensive agricultural production, and leakage of nitrate constitutes a major pollution problem with respect to groundwater aquifers (drinking water), fresh surface water systems (water quality of lakes) and coastal receiving waters (eutrophication). The case study presented illustrates an advanced modelling approach applied in river basin management. Point sources (e.g. sewage treatment plant discharges) and distributed diffuse sources (nitrate leakage) are included to provide a modelling tool capable of simulating pollution transport from source to recipient, in order to analyse the effects of specific, localized basin water management plans. The paper also includes a land rent modelling approach which can be used to choose the most cost-effective measures and the location of these measures. As a forerunner to the use of basin-scale models in WFD basin water management plans, this project demonstrates the potential and limitations of comprehensive, integrated modelling tools.
NASA Astrophysics Data System (ADS)
Amodio, M.; Andriani, E.; Daresta, B. E.; de Gennaro, G.; di Gilio, A.; Ielpo, P.; Placentino, C. M.; Trizio, L.; Tutino, M.
2010-05-01
Several epidemiological studies have shown the negative effects of air pollution on human health, which range from respiratory and cardiovascular disease to neurotoxic effects and cancer. Most recent investigations have focused on the health-related toxicological features of Particulate Matter (PM) and its interactions with other pollutants: it was found that fine particles (PM2.5) can be an effective medium to transport these pollutants deep into the lung and to cause many kinds of reactions, including oxidative stress and local pulmonary and systemic inflammatory responses (Künzli and Perez, 2009). Based on these implications for public health, many countries have developed plans to suggest effective control strategies which involve the identification of Particulate Matter sources, the quantitative estimation of the emission rates of the pollutants, the understanding of PM transport, mixing and transformation processes, and the identification of the main factors influencing PM concentrations. In this field, receptor models can be useful tools to estimate source contributions to PM collected in an area under investigation. Different approaches to receptor model analysis can be distinguished on the basis of whether the chemical characteristics of emission sources are required to be known before the source apportionment. The multivariate approach may be preferred when information concerning source profiles is lacking (Hopke, 2003). In this work, the results obtained by applying an integrated approach to the monitoring of PM using several types of instrumentation will be shown. A prototype for the determination of the contributions of a single source ('fugitive emission') to the fine PM concentrations has been developed: it consists of a Swam dual-channel sampler, an OPC monitor, a sonic anemometer and a PBL mixing monitor. The site chosen for the application of the prototype will be the iron and steel pole of Taranto (Apulia Region, South of Italy). The fugitive emission campaign will be performed using three different positions around the Taranto industrial area; the main interest in Taranto is due to the presence of several high-impact activities, such as the very wide industrial area close to the town and the numerous maritime and military activities in the harbour area (Amodio et al., 2008). The aim is to triangulate the area of the examined source on the basis of the prevalent wind directions. The investigation will be completed by chemical-physical characterization of the PM2.5 and PM10 samples collected by the prototype, in order to have additional information about the possible emission sources. Statistical analysis, performed by Principal Component Analysis (PCA) and Positive Matrix Factorization (PMF), will be used for a detailed study of the impact of the local emission source on the neighbouring areas. Finally, the prototype will make it possible to identify and distinguish long-range transport, regional and other local contributions to the fine PM concentrations. This work was supported by the Strategic Project PS_122 funded by Apulia Region. References: Künzli, N., Perez, L., 2009. Swiss Medical Weekly 139(17-18), 242-250. Hopke, P.K., 2003. Journal of Chemometrics 17(5), 255-265. Amodio, M., Caselli, M., Daresta, B.E., de Gennaro, G., Ielpo, P., Placentino, C.M., Tutino, M., 2008. Chemical Engineering Transactions 16, 193-199.
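Both receptor-model techniques mentioned (PCA and PMF) rest on a bilinear source-apportionment model; in the standard PMF notation (assumed here):

\[
x_{ij} \;=\; \sum_{k=1}^{p} g_{ik}\, f_{kj} + e_{ij},
\qquad
\min_{G,\,F\,\ge\,0}\; Q \;=\; \sum_{i,j}\left(\frac{e_{ij}}{\sigma_{ij}}\right)^{2},
\]

where x_ij is the concentration of species j in sample i, g_ik the contribution of source k to sample i, f_kj the chemical profile of source k, and σ_ij the measurement uncertainty; PMF imposes the non-negativity constraints, while PCA works on the correlation structure without them.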
A Child Abuse Assessment Center: Alternative Investigative Approaches.
ERIC Educational Resources Information Center
Hiester, Douglas S.
A child abuse assessment center was created in Dade County, Florida, and was funded by state and local government sources. Staff includes a project director, two clinical social workers, a follow-up case monitor, clerical support, and a psychologist. The center attempts to minimize trauma to the child victim of sexual and physical abuse by a…
Opendf - An Implementation of the Dual Fermion Method for Strongly Correlated Systems
NASA Astrophysics Data System (ADS)
Antipov, Andrey E.; LeBlanc, James P. F.; Gull, Emanuel
The dual fermion method is a multiscale approach for solving lattice problems of interacting strongly correlated systems. In this paper, we present the opendf code, an open-source implementation of the dual fermion method applicable to fermionic single-orbital lattice models in dimensions D = 1, 2, 3 and 4. The method is built on a dynamical mean field starting point, which neglects all non-local correlations, and perturbatively adds spatial correlations. Our code is distributed as an open-source package under the GNU public license version 2.
In vivo time-gated diffuse correlation spectroscopy at quasi-null source-detector separation.
Pagliazzi, M; Sekar, S Konugolu Venkata; Di Sieno, L; Colombo, L; Durduran, T; Contini, D; Torricelli, A; Pifferi, A; Mora, A Dalla
2018-06-01
We demonstrate time domain diffuse correlation spectroscopy at quasi-null source-detector separation by using a fast time-gated single-photon avalanche diode without the need of time-tagging electronics. This approach allows for increased photon collection, simplified real-time instrumentation, and reduced probe dimensions. Depth discriminating, quasi-null distance measurement of blood flow in a human subject is presented. We envision the miniaturization and integration of matrices of optical sensors of increased spatial resolution and the enhancement of the contrast of local blood flow changes.
Localizing the sources of two independent noises: Role of time varying amplitude differences
Yost, William A.; Brown, Christopher A.
2013-01-01
Listeners localized the free-field sources of either one or two simultaneous and independently generated noise bursts. Listeners' localization performance was better when localizing one rather than two sound sources. With two sound sources, localization performance was better when the listener was provided prior information about the location of one of them. Listeners also localized two simultaneous noise bursts that had sinusoidal amplitude modulation (AM) applied, in which the modulation envelope was in-phase across the two source locations or was 180° out-of-phase. The AM was employed to investigate a hypothesis as to what process listeners might use to localize multiple sound sources. The results supported the hypothesis that localization of two sound sources might be based on temporal-spectral regions of the combined waveform in which the sound from one source was more intense than that from the other source. The interaural information extracted from such temporal-spectral regions might provide reliable estimates of the sound source location that produced the more intense sound in that temporal-spectral region. PMID:23556597
A Markov model for blind image separation by a mean-field EM algorithm.
Tonazzini, Anna; Bedini, Luigi; Salerno, Emanuele
2006-02-01
This paper deals with blind separation of images from noisy linear mixtures with unknown coefficients, formulated as a Bayesian estimation problem. This is a flexible framework, where any kind of prior knowledge about the source images and the mixing matrix can be accounted for. In particular, we describe local correlation within the individual images through the use of Markov random field (MRF) image models. These are naturally suited to express the joint pdf of the sources in a factorized form, so that the statistical independence requirements of most independent component analysis approaches to blind source separation are retained. Our model also includes edge variables to preserve intensity discontinuities. MRF models have proved to be very efficient in many visual reconstruction problems, such as blind image restoration, and allow separation and edge detection to be performed simultaneously. We propose an expectation-maximization algorithm with the mean field approximation to derive a procedure for estimating the mixing matrix, the sources, and their edge maps. We tested this procedure on both synthetic and real images, in the fully blind case (i.e., no prior information on mixing is exploited), and found that a source model accounting for local autocorrelation is able to increase robustness against noise, even when the noise is space-variant. Furthermore, when the model closely fits the source characteristics, independence is no longer a strict requirement, and cross-correlated sources can be separated as well.
Quantum Theory of Three-Dimensional Superresolution Using Rotating-PSF Imagery
NASA Astrophysics Data System (ADS)
Prasad, S.; Yu, Z.
The inverse of the quantum Fisher information (QFI) matrix (and extensions thereof) provides the ultimate lower bound on the variance of any unbiased estimation of a parameter from statistical data, whether of intrinsically quantum mechanical or classical character. We calculate the QFI for Poisson-shot-noise-limited imagery using the rotating PSF, which can localize and resolve point sources fully in all three dimensions. We also propose an experimental approach based on the use of a computer-generated hologram and projective measurements to realize the QFI-limited variance for the problem of super-resolving a closely spaced pair of point sources at a highly reduced photon cost. The paper presents a preliminary analysis of the quantum-limited three-dimensional (3D) pair optical super-resolution (OSR) problem, with potential applications to astronomical imaging and 3D space-debris localization.
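The bound underlying this analysis is the quantum Cramér-Rao inequality, written here in its standard multi-parameter form (notation assumed):

\[
\mathrm{Cov}\big(\hat{\boldsymbol{\theta}}\big) \;\ge\; \frac{1}{N}\,\mathbf{F}_Q^{-1}(\boldsymbol{\theta}),
\]

where F_Q is the quantum Fisher information matrix per detected photon and N the number of detected photons; the rotating-PSF calculation asks how closely a practical projective measurement can approach this limit for the 3D pair-separation parameters.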
Tracing freshwater nitrate sources in pre-alpine groundwater catchments using environmental tracers
NASA Astrophysics Data System (ADS)
Stoewer, M. M.; Knöller, K.; Stumpp, C.
2015-05-01
Groundwater is one of the main resources for drinking water. Its quality is still threatened by the widespread contaminant nitrate (NO3-). In order to manage groundwater resources in a sustainable manner, we need to find options for lowering nitrate input. In particular, a comprehensive knowledge of nitrate sources is required in areas which are important current and future drinking water reservoirs, such as pre-alpine aquifers covered with permanent grassland. The objective of the present study was to identify major sources of nitrate in groundwater with low mean nitrate concentrations (8 ± 2 mg/L). To achieve this objective, we used environmental tracer approaches in four pre-alpine groundwater catchments. The stable isotope composition and tritium content of water were used to study the hydrogeology and transit times. Furthermore, nitrate stable isotope methods were applied to trace nitrogen from its sources to groundwater. The results of the nitrate isotope analysis showed that groundwater nitrate was derived from nitrification of a variety of ammonium sources such as atmospheric deposition, mineral and organic fertilizers and soil organic matter. A direct influence of mineral fertilizer, atmospheric deposition and sewage was excluded. Since temporal variations in the stable isotopes of nitrate were detected only in surface water and locally at one groundwater monitoring well, the aquifers appeared to be well mixed and influenced by a continuous nitrate input, mainly from soil-derived nitrogen. Hydrogeological analysis supported that the investigated aquifers were less vulnerable to rapid impacts due to long average transit times, ranging from 5 to 21 years. Our study revealed the importance of combining environmental tracer approaches and a comprehensive sampling campaign (local sources of nitrate, soil water, river water, and groundwater) to identify the nitrate sources in groundwater and its vulnerability. In the future, these results will help develop targeted strategies for a sustainable groundwater management focusing more on soil nitrogen storage.
Global Infrasound Association Based on Probabilistic Clutter Categorization
NASA Astrophysics Data System (ADS)
Arora, N. S.; Mialle, P.
2015-12-01
The IDC collects waveforms from a global network of infrasound sensors maintained by the IMS, and automatically detects signal onsets and associates them to form event hypotheses. However, a large number of signal onsets are due to local clutter sources such as microbaroms (from standing waves in the oceans), waterfalls, dams, gas flares, surf (ocean breaking waves), etc. These sources are either too diffuse or too local to form events. Worse still, the repetitive nature of this clutter leads to a large number of false event hypotheses due to the random matching of clutter at multiple stations. Previous studies, for example [1], have worked on categorization of clutter using long-term trends in detection azimuth, frequency, and amplitude at each station. In this work we continue the same line of reasoning to build a probabilistic model of clutter that is used as part of NET-VISA [2], a Bayesian approach to network processing. The resulting model is a fusion of seismic, hydro-acoustic and infrasound processing built on a unified probabilistic framework. Notes: The attached figure shows all the unassociated arrivals detected at IMS station I09BR for 2012, distributed by azimuth and center frequency (the title displays the bandwidth of the kernel density estimate along the azimuth and frequency dimensions). This plot shows multiple microbarom sources as well as other sources of infrasound clutter. A diverse clutter field such as this one is quite common for most IMS infrasound stations, and it highlights the dangers of forming events without due consideration of this source of noise. References: [1] Infrasound categorization: Towards a statistics-based approach. J. Vergoz, P. Gaillard, A. Le Pichon, N. Brachet, and L. Ceranna. ITW 2011. [2] NET-VISA: Network Processing Vertically Integrated Seismic Analysis. N. S. Arora, S. Russell, and E. Sudderth. BSSA 2013.
Local surface curvature analysis based on reflection estimation
NASA Astrophysics Data System (ADS)
Lu, Qinglin; Laligant, Olivier; Fauvet, Eric; Zakharova, Anastasia
2015-07-01
In this paper, we propose a novel reflection-based method to estimate the local orientation of a specular surface. For a calibrated scene with a fixed light band, the band is reflected by the surface onto the image plane of a camera. Then the local geometry between the surface and the reflected band is estimated. Firstly, in order to find the relationship linking the object position, the object surface orientation and the band reflection, we study the fundamental geometry between a specular mirror surface and a band source. Then we extend our approach to a spherical surface with arbitrary curvature. Experiments are conducted with a mirror surface and a spherical surface. Results show that our method is able to obtain the local surface orientation merely by measuring the displacement and the form of the reflection.
Chen, Wei; Wang, Weiping; Li, Qun; Chang, Qiang; Hou, Hongtao
2016-01-01
Indoor positioning based on existing Wi-Fi fingerprints is becoming more and more common. Unfortunately, the Wi-Fi fingerprint is susceptible to multiple path interferences, signal attenuation, and environmental changes, which leads to low accuracy. Meanwhile, with the recent advances in charge-coupled device (CCD) technologies and the processing speed of smartphones, indoor positioning using the optical camera on a smartphone has become an attractive research topic; however, the major challenge is its high computational complexity; as a result, real-time positioning cannot be achieved. In this paper we introduce a crowd-sourcing indoor localization algorithm via an optical camera and orientation sensor on a smartphone to address these issues. First, we use Wi-Fi fingerprint based on the K Weighted Nearest Neighbor (KWNN) algorithm to make a coarse estimation. Second, we adopt a mean-weighted exponent algorithm to fuse optical image features and orientation sensor data as well as KWNN in the smartphone to refine the result. Furthermore, a crowd-sourcing approach is utilized to update and supplement the positioning database. We perform several experiments comparing our approach with other positioning algorithms on a common smartphone to evaluate the performance of the proposed sensor-calibrated algorithm, and the results demonstrate that the proposed algorithm could significantly improve accuracy, stability, and applicability of positioning. PMID:27007379
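A minimal sketch of the KWNN coarse-estimation step described above, assuming a fingerprint database of (position, RSSI vector) pairs; the inverse-distance weighting and the value of K are common illustrative choices, not necessarily the paper's.

```python
import numpy as np

def kwnn_locate(rssi, db_rssi, db_pos, k=4, eps=1e-6):
    """K Weighted Nearest Neighbor position estimate from a Wi-Fi fingerprint.

    rssi:    query RSSI vector (one entry per access point)
    db_rssi: (n_points, n_aps) fingerprint database; db_pos: (n_points, 2) positions
    """
    d = np.linalg.norm(db_rssi - rssi, axis=1)      # distances in signal space
    nn = np.argsort(d)[:k]                          # K nearest fingerprints
    w = 1.0 / (d[nn] + eps)                         # inverse-distance weights
    return (w[:, None] * db_pos[nn]).sum(axis=0) / w.sum()
```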
NASA Astrophysics Data System (ADS)
Migliorelli, Carolina; Alonso, Joan F.; Romero, Sergio; Mañanas, Miguel A.; Nowak, Rafał; Russi, Antonio
2016-04-01
Objective. Medically intractable epilepsy is a common condition that affects 40% of epileptic patients, who generally have to undergo resective surgery. Magnetoencephalography (MEG) has been increasingly used to identify the epileptogenic foci through equivalent current dipole (ECD) modeling, one of the most accepted methods to obtain an accurate localization of interictal epileptiform discharges (IEDs). Modeling requires that MEG signals are adequately preprocessed to reduce interferences, a task that has been greatly improved by the use of blind source separation (BSS) methods. MEG recordings are highly sensitive to metallic interferences originating inside the head from implanted intracranial electrodes, dental prostheses, etc, and also coming from external sources such as pacemakers or vagal stimulators. To reduce these artifacts, a BSS-based fully automatic procedure was recently developed and validated, showing an effective reduction of metallic artifacts in simulated and real signals (Migliorelli et al 2015 J. Neural Eng. 12 046001). The main objective of this study was to evaluate its effects on the detection of IEDs and ECD modeling in patients with focal epilepsy and metallic interference. Approach. A comparison between the resulting positions of ECDs was performed: without removing metallic interference; rejecting only channels with large metallic artifacts; and after BSS-based reduction. Measures of dispersion and distance of ECDs were defined to analyze the results. Main results. The relationship between the artifact-to-signal ratio and ECD fitting showed that higher values of metallic interference produced highly scattered dipoles. Results revealed a significant reduction in dispersion using the BSS-based reduction procedure, yielding feasible locations of ECDs in contrast to the other two approaches. Significance. The automatic BSS-based method can be applied to MEG datasets affected by metallic artifacts as a processing step to improve the localization of epileptic foci.
Sparse reconstruction localization of multiple acoustic emissions in large diameter pipelines
NASA Astrophysics Data System (ADS)
Dubuc, Brennan; Ebrahimkhanlou, Arvin; Salamone, Salvatore
2017-04-01
A sparse reconstruction localization method is proposed, which is capable of localizing multiple acoustic emission events occurring closely in time. The events may be due to a number of sources, such as the growth of corrosion patches or cracks. Such acoustic emissions may yield localization failure if a triangulation method is used. The proposed method is implemented both theoretically and experimentally on large diameter thin-walled pipes. Experimental examples are presented, which demonstrate the failure of a triangulation method when multiple sources are present in this structure, while highlighting the capabilities of the proposed method. The examples are generated from experimental data of simulated acoustic emission events. The data correspond to helical guided ultrasonic waves generated in a 3 m long large diameter pipe by pencil lead breaks on its outer surface. Acoustic emission waveforms are recorded by six sparsely distributed low-profile piezoelectric transducers mounted on the outer surface of the pipe. The same array of transducers is used for both the proposed and the triangulation method. It is demonstrated that the proposed method is able to localize multiple events occurring closely in time. Furthermore, the matching pursuit algorithm and the basis pursuit denoising approach are each evaluated as potential numerical tools in the proposed sparse reconstruction method.
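As a sketch of the sparse-recovery idea, the snippet below runs a plain orthogonal matching pursuit over a synthetic dictionary whose columns stand in for candidate source locations; two simultaneously active atoms play the role of two closely timed acoustic emission events. The dictionary, noise level, and greedy solver are illustrative assumptions, not the paper's guided-wave model or its basis pursuit denoising variant.

```python
# Hedged sketch: orthogonal matching pursuit (OMP) as the sparse-recovery step
# of a grid-based localization. Each dictionary column is a (hypothetical)
# template associated with one candidate source position.
import numpy as np

def omp(D, y, n_sources):
    """Recover an n_sources-sparse coefficient vector x with y ~ D @ x."""
    residual, support = y.copy(), []
    for _ in range(n_sources):
        j = int(np.argmax(np.abs(D.T @ residual)))   # best-matching atom
        support.append(j)
        x_s, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ x_s
    x = np.zeros(D.shape[1])
    x[support] = x_s
    return x

rng = np.random.default_rng(0)
D = rng.normal(size=(60, 200))
D /= np.linalg.norm(D, axis=0)                       # unit-norm atoms (grid points)
truth = np.zeros(200)
truth[[30, 125]] = [1.0, 0.7]                        # two simultaneous events
x_hat = omp(D, D @ truth + 0.01 * rng.normal(size=60), n_sources=2)
print(np.nonzero(x_hat)[0])                          # indices of the localized grid points
```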
Approach to identifying pollutant source and matching flow field
NASA Astrophysics Data System (ADS)
Liping, Pang; Yu, Zhang; Hongquan, Qu; Tao, Hu; Wei, Wang
2013-07-01
Accidental pollution events often threaten people's health and lives, and it is necessary to identify a pollutant source rapidly so that prompt actions can be taken to prevent the spread of pollution. However, this identification is a difficult inverse problem. This paper carries out some studies on this issue. An approach using noisy single-sensor information was developed to identify a sudden continuous emission of a trace pollutant source in a steady velocity field. The approach first compares the characteristic distance of the measured concentration sequence to multiple hypothetical concentration sequences at the sensor position, which are obtained from multiple hypotheses on the three source parameters. The source is then identified by a global search for the optimal values, with the maximum location probability as the objective function. To reduce the large computational load of this global search, a local fine-mesh source search based on prior coarse-mesh location probabilities is further used to improve the efficiency of identification. The studies have shown that the flow field has a very important influence on source identification. Therefore, we also discuss the impact of non-matching flow fields, with estimation deviations, on identification. Based on this analysis, a method for matching an accurate flow field is presented to improve the accuracy of identification. To verify the practical application of the above methods, an experimental system simulating a sudden pollution process in a steady flow field was set up and experiments were conducted with a known diffusion coefficient. The studies showed that the three parameters of the pollutant source (position, emission strength and initial emission time) in the experiment can be estimated using the flow-field matching and source identification methods.
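A minimal sketch of the coarse-then-fine search idea follows, using a 1-D advection-diffusion puff superposition as a stand-in forward model and a sum-of-squares misfit instead of the paper's location-probability objective; the sensor position, flow speed, diffusivity, and search grids are invented for illustration.

```python
# Hedged sketch: coarse grid search over the three source parameters
# (position x0, emission strength q, start time t0), then a local fine-mesh
# search around the coarse optimum. The forward model is a simple stand-in.
import numpy as np

U, D, X_SENSOR = 0.5, 0.05, 12.0          # assumed flow speed, diffusivity, sensor position
T_OBS = np.arange(1.0, 60.0, 1.0)         # observation times at the single sensor

def forward(x0, q, t0, times, dt=0.5):
    """Concentration at the sensor for a continuous release starting at t0."""
    c = np.zeros_like(times)
    for i, t in enumerate(times):
        tau = np.arange(t0, t, dt)        # release instants before t
        if tau.size:
            age = t - tau
            c[i] = np.sum(q * dt / np.sqrt(4 * np.pi * D * age)
                          * np.exp(-(X_SENSOR - x0 - U * age) ** 2 / (4 * D * age)))
    return c

def grid_search(meas, x0s, qs, t0s):
    best, best_err = None, np.inf
    for x0 in x0s:
        for q in qs:
            for t0 in t0s:
                err = np.sum((forward(x0, q, t0, T_OBS) - meas) ** 2)
                if err < best_err:
                    best, best_err = (x0, q, t0), err
    return best

meas = forward(3.0, 2.0, 5.0, T_OBS) + np.random.default_rng(0).normal(0, 0.01, T_OBS.size)
# Coarse grid first ...
cx, cq, ct = grid_search(meas, np.arange(0, 10, 2.0), np.arange(0.5, 4, 1.0), np.arange(0, 20, 5.0))
# ... then a local fine-mesh search around the coarse optimum
print(grid_search(meas, np.arange(cx - 2, cx + 2, 0.25),
                  np.arange(cq - 1, cq + 1, 0.25), np.arange(ct - 5, ct + 5, 1.0)))
```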
Perceived sources of stress and resilience in men in an African American community.
Chung, Bowen; Meldrum, Marcia; Jones, Felica; Brown, Anthony; Jones, Loretta
2014-01-01
Little is known about the perceived causes of stress and the strategies African American men use to promote resiliency. Participatory research is recommended as an approach to engaging minority communities. A key goal of participatory research is to shift the locus of control to community partners. To understand perceived sources of stress and tools used to promote resiliency in African American men in South Los Angeles, our study used a community-partnered participatory research approach to collect and analyze open-ended responses from 295 African American men recruited at a local cultural festival in Los Angeles, using thematic analysis and the Levels of Racism framework. Almost all men (93.2%) reported stress. Of those reporting stress, 60.8% reported finances and money and 43.2% reported racism as a specific cause. More than 60% (63.4%) reported that they perceived available sources of help to deal with stress. Of those noting a specific source of help for stress (n = 76), 42.1% identified religious faith. Almost all participants (92.1%) mentioned specific sources of resiliency such as religion and family. Stress owing to psychosocial factors such as finances and racism is common among African American men, yet most men found support for resiliency to ameliorate stress in religion and family. Future work to engage African American men around alleviating stress and supporting resiliency should take into account the perceived causes of stress and incorporate culturally appropriate sources of resiliency support.
NASA Astrophysics Data System (ADS)
Grabtchak, Serge; Palmer, Tyler J.; Whelan, William M.
2011-07-01
Interstitial fiber-optic-based approaches used in both diagnostic and therapeutic applications rely on localized light-tissue interactions. We present an optical technique to identify spectrally and spatially specific exogenous chromophores in highly scattering turbid media. Point radiance spectroscopy is based on directional light collection at a single point with a side-firing fiber that can be rotated up to 360 deg. A side-firing fiber accepts light within a well-defined solid angle, thus potentially providing improved spatial resolution. Measurements were performed using an 800-μm diameter isotropic spherical diffuser coupled to a halogen light source and a 600 μm, ~43 deg cleaved fiber (i.e., radiance detector). The background liquid-based scattering phantom was fabricated using 1% Intralipid. Light was collected in 1 deg increments through a 360 deg segment. Gold nanoparticles, placed into a 3.5-mm diameter capillary tube, were used as localized scatterers and absorbers introduced into the liquid phantom both on- and off-axis between source and detector. The localized optical inhomogeneity was detectable as an angular-resolved variation in the radiance polar plots. This technique is being investigated as a potential noninvasive optical modality for prostate cancer monitoring.
[Data sources, the data used, and the modality for collection].
Mercier, G; Costa, N; Dutot, C; Riche, V-P
2018-03-01
The hospital costing process implies access to various sources of data. Whether a micro-costing or a gross-costing approach is used, the choice of the methodology is based on a compromise between the cost of data collection, data accuracy, and data transferability. This work describes the data sources available in France and the access modalities that are used, as well as the main advantages and shortcomings of: (1) the local unit costs, (2) the hospital analytical accounting, (3) the Angers database, (4) the National Health Cost Studies, (5) the INTER CHR/U databases, (6) the Program for Medicalizing Information Systems, and (7) the public health insurance databases. Copyright © 2018 Elsevier Masson SAS. All rights reserved.
Rigotti, N A
1999-01-01
To start smoking, young people need a supply of tobacco products. Reducing youth access to tobacco is a new approach to preventing tobacco use that has been a focus of federal, state, and local tobacco control efforts over the past decade. All 50 states ban tobacco sales to minors, but compliance is poor because laws are not enforced. Consequently, young people have little trouble obtaining tobacco products. Commercial sources of tobacco (stores and vending machines) are important for underage smokers, who often purchase their own cigarettes. Underage youths also obtain tobacco from noncommercial sources such as friends, relatives, older adolescents, and adults. Educating retailers about tobacco sales laws has not produced long-term improvement in their compliance. Active enforcement of tobacco sales laws changes retailer behavior, but whether this reduces young people's access to tobacco or their tobacco use is not clear. The effectiveness of new local, state, and federal actions that aim to reduce youth access to tobacco remains to be determined. Can enforcing tobacco sales laws reduce young people's access to tobacco? If so, will this prevent or delay the onset of their tobacco use? How will youths' sources of tobacco change as commercial sources are restricted? What are the social (noncommercial) sources of tobacco for minors and how can youths' access to tobacco from these sources be reduced? What is the impact of the new federal policies aimed at reducing youth access to tobacco? Do new state and local laws that ban youth possession or use of tobacco have a net positive or negative impact on youth attitudes, access to tobacco, or tobacco use? What is the relative effectiveness and cost-effectiveness of efforts to reduce the supply of tobacco compared to those that aim to reduce demand for tobacco? Will either work alone or are both necessary to achieve reductions in youth smoking?
Distributed XQuery-Based Integration and Visualization of Multimodality Brain Mapping Data
Detwiler, Landon T.; Suciu, Dan; Franklin, Joshua D.; Moore, Eider B.; Poliakov, Andrew V.; Lee, Eunjung S.; Corina, David P.; Ojemann, George A.; Brinkley, James F.
2008-01-01
This paper addresses the need for relatively small groups of collaborating investigators to integrate distributed and heterogeneous data about the brain. Although various national efforts facilitate large-scale data sharing, these approaches are generally too “heavyweight” for individual or small groups of investigators, with the result that most data sharing among collaborators continues to be ad hoc. Our approach to this problem is to create a “lightweight” distributed query architecture, in which data sources are accessible via web services that accept arbitrary query languages but return XML results. A Distributed XQuery Processor (DXQP) accepts distributed XQueries in which subqueries are shipped to the remote data sources to be executed, with the resulting XML integrated by DXQP. A web-based application called DXBrain accesses DXQP, allowing a user to create, save and execute distributed XQueries, and to view the results in various formats including a 3-D brain visualization. Example results are presented using distributed brain mapping data sources obtained in studies of language organization in the brain, but any other XML source could be included. The advantage of this approach is that it is very easy to add and query a new source, the tradeoff being that the user needs to understand XQuery and the schemata of the underlying sources. For small numbers of known sources this burden is not onerous for a knowledgeable user, leading to the conclusion that the system helps to fill the gap between ad hoc local methods and large scale but complex national data sharing efforts. PMID:19198662
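To illustrate the "ship subqueries, integrate XML" pattern that DXQP implements, here is a hedged Python sketch (rather than XQuery); the endpoint URLs, query payloads, and wrapper elements are hypothetical placeholders, not DXQP's actual interface.

```python
# Hedged sketch: send subqueries to remote web services and integrate their
# XML replies into one combined document. URLs and payloads are hypothetical.
import urllib.request
import xml.etree.ElementTree as ET

SOURCES = {  # hypothetical web services, each accepting a query string
    "fmri": "http://example.org/fmri/query",
    "cortex": "http://example.org/cortex/query",
}

def ship_subquery(url, query):
    """POST a subquery to a remote source and parse its XML reply."""
    req = urllib.request.Request(url, data=query.encode(), method="POST")
    with urllib.request.urlopen(req) as resp:
        return ET.fromstring(resp.read())

def integrate(results):
    """Wrap the per-source XML fragments in one combined document."""
    root = ET.Element("integrated")
    for name, tree in results.items():
        wrapper = ET.SubElement(root, "source", {"name": name})
        wrapper.append(tree)
    return root

# Usage (requires reachable endpoints, hence commented out):
# results = {name: ship_subquery(url, "<subquery/>") for name, url in SOURCES.items()}
# print(ET.tostring(integrate(results)))
```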
Surgical Site Infiltration for Abdominal Surgery: A Novel Neuroanatomical-based Approach
Janis, Jeffrey E.; Haas, Eric M.; Ramshaw, Bruce J.; Nihira, Mikio A.; Dunkin, Brian J.
2016-01-01
Background: Provision of optimal postoperative analgesia should facilitate postoperative ambulation and rehabilitation. An optimal multimodal analgesia technique would include the use of nonopioid analgesics, including local/regional analgesic techniques such as surgical site local anesthetic infiltration. This article presents a novel approach to surgical site infiltration techniques for abdominal surgery based upon neuroanatomy. Methods: Literature searches were conducted for studies reporting the neuroanatomical sources of pain after abdominal surgery. Also, studies identified by preceding search were reviewed for relevant publications and manually retrieved. Results: Based on neuroanatomy, an optimal surgical site infiltration technique would consist of systematic, extensive, meticulous administration of local anesthetic into the peritoneum (or preperitoneum), subfascial, and subdermal tissue planes. The volume of local anesthetic would depend on the size of the incision such that 1 to 1.5 mL is injected every 1 to 2 cm of surgical incision per layer. It is best to infiltrate with a 22-gauge, 1.5-inch needle. The needle is inserted approximately 0.5 to 1 cm into the tissue plane, and local anesthetic solution is injected while slowly withdrawing the needle, which should reduce the risk of intravascular injection. Conclusions: Meticulous, systematic, and extensive surgical site local anesthetic infiltration in the various tissue planes including the peritoneal, musculofascial, and subdermal tissues, where pain foci originate, provides excellent postoperative pain relief. This approach should be combined with use of other nonopioid analgesics with opioids reserved for rescue. Further well-designed studies are necessary to assess the analgesic efficacy of the proposed infiltration technique. PMID:28293525
Wardrop, N. A.; Jochem, W. C.; Bird, T. J.; Chamberlain, H. R.; Clarke, D.; Kerr, D.; Bengtsson, L.; Juran, S.; Seaman, V.; Tatem, A. J.
2018-01-01
Population numbers at local levels are fundamental data for many applications, including the delivery and planning of services, election preparation, and response to disasters. In resource-poor settings, recent and reliable demographic data at subnational scales can often be lacking. National population and housing census data can be outdated, inaccurate, or missing key groups or areas, while registry data are generally lacking or incomplete. Moreover, at local scales accurate boundary data are often limited, and high rates of migration and urban growth make existing data quickly outdated. Here we review past and ongoing work aimed at producing spatially disaggregated local-scale population estimates, and discuss how new technologies are now enabling robust and cost-effective solutions. Recent advances in the availability of detailed satellite imagery, geopositioning tools for field surveys, statistical methods, and computational power are enabling the development and application of approaches that can estimate population distributions at fine spatial scales across entire countries in the absence of census data. We outline the potential of such approaches as well as their limitations, emphasizing the political and operational hurdles for acceptance and sustainable implementation of new approaches, and the continued importance of traditional sources of national statistical data. PMID:29555739
NASA Astrophysics Data System (ADS)
Gu, C.; Toksoz, M. N.; Marzouk, Y.; Al-Enezi, A.; Al-Jeri, F.; Buyukozturk, O.
2016-12-01
The increasing seismic activity in regions of oil/gas fields due to fluid injection/extraction and hydraulic fracturing has drawn new attention in both academia and industry. The source mechanisms and triggering stresses of these induced earthquakes are of great importance for understanding the physics of the seismic processes in reservoirs, and for predicting ground motion in the vicinity of oil/gas fields. The induced seismicity data in our study are from the Kuwait National Seismic Network (KNSN). Historically, Kuwait has low local seismicity; however, in recent years the KNSN has monitored more and more local earthquakes. Since 1997, the KNSN has recorded more than 1000 earthquakes (Mw < 5). In 2015, two local earthquakes - Mw 4.5 on 03/21/2015 and Mw 4.1 on 08/18/2015 - were recorded by both the Incorporated Research Institutions for Seismology (IRIS) and the KNSN, and widely felt by people in Kuwait. These earthquakes happen repeatedly in the same locations, close to the oil/gas fields in Kuwait. The earthquakes are generally small (Mw < 5) and shallow, with focal depths of about 2 to 4 km. Such events are very common in oil/gas reservoirs all over the world, including North America, Europe, and the Middle East. We determined the locations and source mechanisms of these local earthquakes, with their uncertainties, using a Bayesian inversion method. The triggering stress of these earthquakes was calculated based on the source mechanism results. In addition, we modeled the ground motion in Kuwait due to these local earthquakes. Our results show that most likely these local earthquakes occurred on pre-existing faults and were triggered by oil field activities. These events are generally smaller than Mw 5; however, occurring in the reservoirs, they are very shallow, with focal depths of less than about 4 km. As a result, in Kuwait, where oil fields are close to populated areas, these induced earthquakes could produce ground accelerations high enough to cause damage to local structures built without seismic design criteria.
Sparse source configurations in radio tomography of asteroids
NASA Astrophysics Data System (ADS)
Pursiainen, S.; Kaasalainen, M.
2014-07-01
Our research targets progress in non-invasive imaging of asteroids to support future planetary research and extra-terrestrial mining activities. This presentation concerns principally radio tomography, in which the permittivity distribution inside an asteroid is to be recovered based on a radio-frequency signal transmitted from the asteroid's surface and gathered by an orbiter. The focus is on a sparse distribution (Pursiainen and Kaasalainen, 2013) of signal sources, which can be necessary in the challenging in situ environment and within tight payload limits. The general goal of our recent research has been to approximate the minimal number of source positions needed for robust localization of anomalies caused, for example, by an internal void. Characteristic of the localization problem are the large relative changes in signal speed caused by the high permittivity of typical asteroid minerals (e.g. basalt), meaning that a signal path can include strong refractions and reflections. This presentation introduces results of a laboratory experiment in which real travel-time data were inverted using a hierarchical Bayesian approach combined with the iterative alternating sequential (IAS) posterior exploration algorithm. Special attention was paid to the robustness of the inverse results with respect to changes of the prior model and source positioning. According to our results, strongly refractive anomalies can be detected with three or four sources independently of their positioning.
Bai, Mingsian R; Li, Yi; Chiang, Yi-Hao
2017-10-01
A unified framework is proposed for the analysis and synthesis of two-dimensional spatial sound fields in reverberant environments. In the sound field analysis (SFA) phase, an unbaffled 24-element circular microphone array is utilized to encode the sound field based on plane-wave decomposition. Depending on the sparsity of the sound sources, the SFA stage can be implemented in two manners. For sparse-source scenarios, a one-stage algorithm based on compressive sensing is utilized. Alternatively, a two-stage algorithm can be used, where the minimum power distortionless response beamformer is used to localize the sources and a Tikhonov regularization algorithm is used to extract the source amplitudes. In the sound field synthesis (SFS) phase, a 32-element rectangular loudspeaker array is employed to decode the target sound field using a pressure matching technique. To establish the room response model, as required in the pressure matching step of the SFS phase, an SFA technique for nonsparse-source scenarios is utilized. The choice of regularization parameters is vital to the reproduced sound field. In the SFS phase, three SFS approaches are compared in terms of localization performance and voice reproduction quality. Experimental results obtained in a reverberant room are presented and reveal that an accurate room response model is vital to immersive rendering of the reproduced sound field.
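The pressure-matching step with Tikhonov regularization can be sketched as a regularized least-squares solve for the loudspeaker driving weights; the transfer matrix and target pressures below are random placeholders where a measured room response model would be used.

```python
# Hedged sketch of Tikhonov-regularized pressure matching:
# given G (control points x loudspeakers) and a target pressure vector p,
# solve (G^H G + lam I) w = G^H p for the driving weights w.
import numpy as np

def pressure_matching(G, p, lam=1e-2):
    """G: (n_points, n_speakers) transfer matrix, p: (n_points,) target pressures."""
    A = G.conj().T @ G + lam * np.eye(G.shape[1])
    return np.linalg.solve(A, G.conj().T @ p)        # loudspeaker driving weights

rng = np.random.default_rng(0)
G = rng.normal(size=(64, 32)) + 1j * rng.normal(size=(64, 32))   # stand-in room responses
p = rng.normal(size=64) + 1j * rng.normal(size=64)               # target field samples
w = pressure_matching(G, p, lam=1e-1)
print(np.linalg.norm(G @ w - p) / np.linalg.norm(p))             # relative reproduction error
```

Larger values of the regularization parameter trade reproduction accuracy for lower, more robust driving weights, which is why the abstract stresses that its choice is vital.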
The Development of Local Private Primary and Secondary Schooling in Hong Kong, 1841-2012
ERIC Educational Resources Information Center
Cheung, Alan C. K.; Randall, E. Vance; Tam, Man Kwan
2016-01-01
Purpose: This paper is a historical review of the development of private primary and secondary education in Hong Kong from 1841-2012. The purpose of this paper is to examine the evolving relationship between the state and private schools in Hong Kong. Design/methodology/approach: This paper utilizes sources from published official documents,…
Tech Prep Implementation and Preliminary Student Outcomes for Eight Local Tech Prep Consortia.
ERIC Educational Resources Information Center
Bragg, Debra D.; Dare, Donna E.; Reger, W. M., IV; Ovaice, Ghazala; Zamani, Eboni M.; Layton, James D.; Dornsife, Carolyn J.; Vallee, Manuel; Brown, Carrie H.; Orr, Margaret Terry
The implementation and student outcomes of Tech Prep were examined in a study of eight consortia that represented a range of Tech Prep models and approaches in urban, suburban, and rural locations across the United States. Data were collected from the following sources: field visits; follow-up survey of Tech Prep participants and nonparticipants;…
Impact localization on composite structures using time difference and MUSIC approach
NASA Astrophysics Data System (ADS)
Zhong, Yongteng; Xiang, Jiawei
2017-05-01
A 1-D uniform linear array (ULA) suffers from the half-plane mirror effect: it cannot discriminate between a target placed above the array and one placed below it. This paper presents time-difference (TD) and multiple signal classification (MUSIC) based omni-directional impact localization on a large stiffened composite structure using an improved linear array, which is able to perform omni-directional 360° localization. This array contains 2M+3 PZT sensors, of which 2M+1 are arranged as a uniform linear array and the other two are placed above and below the array. Firstly, the arrival times of the impact signals observed by these two additional sensors are determined using the wavelet transform. By comparing them, the general direction range of the impact source can be decided: 0° to 180° or 180° to 360°. A two-dimensional multiple signal classification (2D-MUSIC) spatial spectrum formula using the uniform linear array is then applied for impact localization within that direction range. When the arrival time observed by the upper PZT equals that of the lower PZT, the source lies on the x axis (0° or 180°), and a time-difference-based MUSIC method is presented to locate the impact position. To verify it, the proposed approach is applied to a composite structure. The localization results are in good agreement with the actual impact positions.
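For orientation, a simplified MUSIC direction-of-arrival sketch on a uniform linear array is given below; it uses a narrowband far-field steering model and synthetic snapshots, and does not reproduce the paper's 2D-MUSIC over impact coordinates or its time-difference disambiguation.

```python
# Hedged sketch: MUSIC spatial spectrum for a uniform linear array.
import numpy as np

def music_spectrum(X, n_sources, d_over_lambda=0.5, angles=np.linspace(0, 180, 361)):
    """X: (n_sensors, n_snapshots) narrowband array snapshots."""
    R = X @ X.conj().T / X.shape[1]                  # sample covariance matrix
    _, eigvec = np.linalg.eigh(R)                    # eigenvalues in ascending order
    En = eigvec[:, :X.shape[0] - n_sources]          # noise subspace
    n = np.arange(X.shape[0])[:, None]
    A = np.exp(2j * np.pi * d_over_lambda * n * np.cos(np.radians(angles)))  # steering matrix
    P = 1.0 / np.sum(np.abs(En.conj().T @ A) ** 2, axis=0)                    # MUSIC spectrum
    return angles, P

# Toy usage: one source at 60 degrees impinging on a 7-element half-wavelength ULA
rng = np.random.default_rng(0)
true_angle, M, snapshots = 60.0, 7, 200
a = np.exp(2j * np.pi * 0.5 * np.arange(M)[:, None] * np.cos(np.radians(true_angle)))
s = rng.normal(size=(1, snapshots)) + 1j * rng.normal(size=(1, snapshots))
X = a @ s + 0.1 * (rng.normal(size=(M, snapshots)) + 1j * rng.normal(size=(M, snapshots)))
angles, P = music_spectrum(X, n_sources=1)
print(angles[np.argmax(P)])                          # estimated direction of arrival
```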
Facilitating Follow-up of LIGO-Virgo Events Using Rapid Sky Localization
NASA Astrophysics Data System (ADS)
Chen, Hsin-Yu; Holz, Daniel E.
2017-05-01
We discuss an algorithm for accurate and very low-latency (<1 s) localization of gravitational-wave (GW) sources using only the relative times of arrival, relative phases, and relative signal-to-noise ratios for pairs of detectors. The algorithm is independent of distances and masses to leading order, and can be generalized to all discrete (as opposed to stochastic and continuous) sources detected by ground-based detector networks. Our approach is similar to that of BAYESTAR with a few modifications, which result in increased computational efficiency. For the LIGO two-detector configuration (Hanford+Livingston) operating in O1 we find a median 50% (90%) localization of 143 deg2 (558 deg2) for binary neutron stars. We use our algorithm to explore the improvement in localization resulting from loud events, finding that the loudest out of the first 4 (or 10) events reduces the median sky-localization area by a factor of 1.9 (3.0) for the case of two GW detectors, and 2.2 (4.0) for three detectors. We also consider the case of multi-messenger joint detections in both the gravitational and the electromagnetic radiation, and show that joint localization can offer significant improvements (e.g., in the case of LIGO and Fermi/GBM joint detections). We show that a prior on the binary inclination, potentially arising from GRB observations, has a negligible effect on GW localization. Our algorithm is simple, fast, and accurate, and may be of particular utility in the development of multi-messenger astronomy.
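As a back-of-the-envelope illustration of timing-based localization with two detectors, the sketch below converts an arrival-time difference into the opening angle of the sky ring it allows, via cos(theta) = c*dt/d; the baseline length is approximate and the time differences are invented, so this is only a rough geometric illustration, not the algorithm described above.

```python
# Hedged sketch: sky ring implied by a two-detector arrival-time difference.
import numpy as np

C = 299_792_458.0          # speed of light, m/s
D_HL = 3.0e6               # approximate Hanford-Livingston baseline, m (~10 ms light travel)

def ring_angle(dt, baseline=D_HL):
    """Opening angle (deg) of the sky ring, measured from the detector baseline."""
    cos_theta = np.clip(C * dt / baseline, -1.0, 1.0)
    return np.degrees(np.arccos(cos_theta))

for dt in (0.0, 2e-3, 5e-3, 9e-3):                   # illustrative time differences, s
    print(f"dt = {dt*1e3:4.1f} ms -> ring at {ring_angle(dt):5.1f} deg from baseline")
```

Relative phases and signal-to-noise ratios, as used in the algorithm above, further narrow the allowed region along this ring.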
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chacon, Luis; del-Castillo-Negrete, Diego; Hauck, Cory D.
2014-09-01
We propose a Lagrangian numerical algorithm for a time-dependent, anisotropic temperature transport equation in magnetized plasmas in the large guide field regime. The approach is based on an analytical integral formal solution of the parallel (i.e., along the magnetic field) transport equation with sources, and it is able to accommodate both local and non-local parallel heat flux closures. The numerical implementation is based on an operator-split formulation, with two straightforward steps: a perpendicular transport step (including sources), and a Lagrangian (field-line integral) parallel transport step. Algorithmically, the first step is amenable to the use of modern iterative methods, while the second step has a fixed cost per degree of freedom (and is therefore scalable). Accuracy-wise, the approach is free from the numerical pollution introduced by the discrete parallel transport term when the perpendicular-to-parallel transport coefficient ratio X⊥/X∥ becomes arbitrarily small, and is shown to capture the correct limiting solution when ε = X⊥L∥²/(X∥L⊥²) → 0 (with L∥ and L⊥ the parallel and perpendicular diffusion length scales, respectively). Therefore, the approach is asymptotic-preserving. We demonstrate the capabilities of the scheme with several numerical experiments with varying magnetic field complexity in two dimensions, including the case of transport across a magnetic island.
Passafiume, Marco; Maddio, Stefano; Cidronali, Alessandro
2017-03-29
Assuming that a reliable and responsive spatial contextualization service is a must-have in IEEE 802.11 and 802.15.4 wireless networks, a suitable approach consists of implementing localization capabilities as an additional application layer on the communication protocol stack. Considering applicative scenarios where satellite-based positioning is denied, such as indoor environments, and excluding data packet arrival-time measurements due to lack of time resolution, received signal strength indicator (RSSI) measurements, obtained according to the IEEE 802.11 and 802.15.4 data access technologies, are the only data sources suitable for indoor geo-referencing using COTS devices. In the existing literature, many RSSI-based localization systems are introduced and experimentally validated; nevertheless, they require periodic calibrations and significant information fusion from different sensors, which dramatically decreases overall system reliability and effective availability. This motivates the work presented in this paper, which introduces an approach for RSSI-based calibration-free and real-time indoor localization. While switched-beam array-based hardware (compliant with IEEE 802.15.4 router functionality) has already been presented by the authors, the focus of this paper is the creation of an algorithmic layer for use with the pre-existing hardware, capable of enabling full localization and data contextualization over a standard 802.15.4 wireless sensor network using only RSSI information, without the need for a lengthy offline calibration phase. System validation reports the localization results in a typical indoor site, where the system has shown high accuracy, leading to a sub-meter overall mean error and almost 100% site coverage within 1 m localization error.
NASA Astrophysics Data System (ADS)
Luu, Thomas; Brooks, Eugene D.; Szőke, Abraham
2010-03-01
In the difference formulation for the transport of thermally emitted photons, the photon intensity is defined relative to a reference field, the black body at the local material temperature. This choice of reference field combines the separate emission and absorption terms that nearly cancel, thereby removing the dominant cause of noise in the Monte Carlo solution of thick systems, but introduces time and space derivative source terms that cannot be determined until the end of the time step. The space derivative source term can also lead to noise-induced crashes under certain conditions where the real physical photon intensity differs strongly from a black body at the local material temperature. In this paper, we consider a difference formulation relative to the material temperature at the beginning of the time step, or, in cases where an alternative temperature better describes the radiation field, that temperature. The result is a method where iterative solution of the material energy equation is efficient and noise-induced crashes are avoided. We couple our generalized reference field scheme with an ad hoc interpolation of the space derivative source, resulting in an algorithm that produces the correct flux between zones as the physical system approaches the thick limit.
NASA Astrophysics Data System (ADS)
Teng, Yichao; Zhang, Pin; Zhang, Baofu; Chen, Yiwang
2018-02-01
A scheme to realize low-phase-noise frequency-quadrupled microwave generation without any filter is demonstrated. In this scheme, a multimode optoelectronic oscillator is mainly built from a dual-parallel Mach-Zehnder modulator (DP-MZM), fiber, a photodetector, and a microwave amplifier. The local source signal modulates one child MZM (MZMa), which is biased at the maximum transmission point. By properly adjusting the bias voltages of the other child MZM (MZMb) and the parent MZM (MZMc), the optical carrier is effectively suppressed and the second-order sidebands are retained; the surviving optical signal is then fed back to the photodetector and MZMb to form an optoelectronic hybrid resonator and realize frequency-quadrupled signal generation. Due to the high Q-factor and mode-selection effect of the optoelectronic hybrid resonator, the generated frequency-quadrupled signal has a lower phase noise than the source signal. The approach has been verified by experiments, and 18, 22, and 26 GHz frequency-quadrupled signals are generated from 4.5, 5.5, and 6.5 GHz local source signals. Compared with the 4.5 GHz source signal, the phase noise of the generated 18 GHz signal at 10 kHz frequency offset is reduced by 26.5 dB.
Refraction and Shielding of Noise in Non-Axisymmetric Jets
NASA Technical Reports Server (NTRS)
Khavaran, Abbas
1996-01-01
This paper examines the shielding effect of the mean flow and the refraction of sound in non-axisymmetric jets. A general three-dimensional ray-acoustic approach is applied. The methodology is independent of the exit geometry and may account for jet spreading and transverse as well as streamwise flow gradients. We assume that noise is dominated by small-scale turbulence. The source correlation terms, as described by the acoustic analogy approach, are simplified, and a model is proposed that relates the source strength to the 7/2 power of the turbulence kinetic energy. Local characteristics of the source, such as its strength, time- or length-scale, convection velocity and characteristic frequency, are inferred from mean flow considerations. The compressible Navier-Stokes equations are solved with a k-ε turbulence model. Numerical predictions are presented for a Mach 1.5, aspect ratio 2:1 elliptic jet. The predicted sound pressure level directivity demonstrates favorable agreement with reported data, indicating a relatively quiet zone on the side of the major axis of the elliptic jet.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hermann, Laura
2013-07-01
The complex interplay of politics, economics and culture undermines attempts to define universal best practices for public engagement in the management of nuclear materials. In the international context, communicators must rely on careful adaptation and creative execution to make standard communication techniques succeed in their local communities. Nuclear professionals need an approach to assess and adapt culturally specific public engagement strategies to meet the demands of their particular political, economic and social structures. Using participant interviews and public sources, the Potomac Communications Group reviewed country-specific examples of nuclear-related communication efforts to provide insight into a proposed approach. The review considered a spectrum of cultural dimensions related to diversity, authority, conformity, proximity and time. Comparisons help to identify cross-cultural influences of various public engagement tactics and to inform a framework for communicators. While not prescriptive in its application, the framework offers a way for communicators to assess the salience of outreach tactics in specific situations. The approach can guide communicators to evaluate and tailor engagement strategies to achieve localized public outreach goals. (authors)
Comparative LCA of decentralized wastewater treatment alternatives for non-potable urban reuse.
Opher, Tamar; Friedler, Eran
2016-11-01
Municipal wastewater (WW) effluent represents a reliable and significant source of reclaimed water, which is very much needed nowadays. Water reclamation and reuse has become an attractive option for conserving and extending available water sources. The decentralized approach to domestic WW treatment benefits from the advantages of source separation, which enables simple small-scale systems and on-site reuse that can be constructed on a short time schedule and occasionally upgraded with new technological developments. In this study we perform a Life Cycle Assessment to compare the environmental impacts of four alternatives for a hypothetical city's water-wastewater service system. The baseline alternative is the most common, centralized approach to WW treatment, in which WW is conveyed to and treated in a large wastewater treatment plant (WWTP) and is then discharged to a stream. The three alternatives represent different scales of distribution of the WW treatment phase, along with urban irrigation and domestic non-potable water reuse (toilet flushing). The first alternative includes centralized treatment at a WWTP, with part of the reclaimed WW (RWW) supplied back to the urban consumers. The second and third alternatives implement decentralized greywater (GW) treatment with local reuse, one at cluster level (320 households) and one at building level (40 households). Life cycle impact assessment results show a consistent disadvantage of the prevailing centralized approach under local conditions in Israel, where seawater desalination is the marginal source of water supply. The alternative of source separation and GW reuse at cluster level seems to be the most preferable one, though its environmental performance is only slightly better than GW reuse at building level. Centralized WW treatment with urban reuse of WWTP effluents is not advantageous over decentralized treatment of GW because the supply of RWW back to consumers is very costly in materials and energy. Electricity is a major driver of the impacts in most categories, pertaining mostly to potable water production and supply. Infrastructure was found to have a notable effect on metal depletion, human toxicity, and freshwater and marine ecotoxicity. Sensitivity to major model parameters was analyzed. A shift to a larger share of renewable energy sources in the electricity mix results in a dramatic improvement in most impact categories. Switching to a mix of water sources, rather than the marginal source, leads to a significant reduction in most impacts. It is concluded that under the conditions tested, a decentralized approach to urban wastewater management is environmentally preferable to the common centralized system. It is worth exploring such options under different conditions as well, in cases in which new urban infrastructure is planned or replacement of old infrastructure is required. Copyright © 2016 Elsevier Ltd. All rights reserved.
Assessment of Stable Isotope Distribution in Complex Systems
NASA Astrophysics Data System (ADS)
He, Y.; Cao, X.; Wang, J.; Bao, H.
2017-12-01
Biomolecules in living organisms have the potential to approach chemical steady state and even apparent isotope equilibrium because enzymatic reactions are intrinsically reversible. If an apparent local equilibrium can be identified, enzymatic reversibility and its controlling factors may be quantified, which helps in understanding complex biochemical processes. Earlier research on isotope fractionation tends to focus on specific processes and compares mostly two different chemical species. Using linear regression, "thermodynamic order", which refers to correlated δ13C and 13β values, has been proposed by Galimov et al. to be present among many biomolecules. However, the concept of "thermodynamic order" they proposed and the approach they used have been questioned. Here, we propose that the deviation of a complex system from its equilibrium state can be rigorously described as a graph problem, as in discrete mathematics. The deviation of the isotope distribution from the equilibrium state and apparent local isotope equilibrium among a subset of biomolecules can be assessed using an apparent fractionation difference matrix (|Δα|). Applying the |Δα| matrix analysis to previously published amino acid data, we show the existence of apparent local equilibrium among different amino acids in potato and in a green alga. The existence of apparent local equilibrium is in turn consistent with the notion that enzymatic reactions can be reversible even in living systems. The result also implies that previous emphasis on external carbon source intake may be misplaced when studying isotope distribution in physiology. In addition to identifying local equilibrium among biomolecules, the difference matrix approach has the potential to explore chemical or isotope equilibrium states in extraterrestrial bodies, to distinguish living from non-living systems, and to classify living species. This approach will benefit from large amounts of systematic data and advanced pattern recognition techniques.
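A minimal sketch of an apparent fractionation difference matrix follows, assuming hypothetical δ13C values and equilibrium 13β factors for four compounds; it only illustrates how pairwise apparent fractionations can be compared against equilibrium values to flag near-equilibrium subsets, not the paper's actual data or thresholds.

```python
# Hedged sketch: apparent fractionation difference matrix |Delta alpha|.
# All numbers below are invented placeholders.
import numpy as np

delta13C = np.array([-25.0, -24.2, -26.1, -23.8])     # hypothetical d13C values (permil)
beta13 = np.array([1.195, 1.190, 1.201, 1.188])       # hypothetical equilibrium 13beta factors

alpha_obs = (1000.0 + delta13C[:, None]) / (1000.0 + delta13C[None, :])  # apparent alpha_ij
alpha_eq = beta13[:, None] / beta13[None, :]                              # equilibrium alpha_ij
delta_alpha = np.abs(alpha_obs - alpha_eq)            # deviation from isotope equilibrium

print(np.round(1000 * delta_alpha, 3))                # in permil; small entries suggest local equilibrium
```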
Identification and production of local carotene-rich foods to combat vitamin A malnutrition.
Solomons, N W; Bulux, J
1997-11-01
To address, with respect to the improvement of human vitamin A status by dietary approaches, the three theoretical postulates that: 1) the most practical and economical manner to increase the amount of dietary vitamin A available to low-income persons in low-income nations is through plant sources of provitamin A carotenoids; 2) there will be constraints and limitations to the efficiency of a given intervention approach related to behavioural, cultural, biological and botanical considerations; and 3) the nature of these constraints and limitations must be understood, and then overcome where possible, to maximize the impact of such interventions on the vitamin A status of developing-country populations. We review local plant sources of provitamin A that would be acceptable to the at-risk populations and outline six settings and scenarios for the processing of carotene-rich foods: 1) cooking for hygiene; 2) long-term preservation; 3) compacting to reduce volume; 4) formulation for specific consumers; 5) improving bioavailability and bioconversion; and 6) increasing 'value added' in commerce. We describe our experiences in Guatemala (with sweet potato flakes), and those of others in the Caribbean, the African Sahel, and East Africa (with solar-drying for the preservation of a variety of plants), and in Sri Lanka (with leaf concentrates) in promoting increased intake of carotene-rich foods, and the lessons learned from their evaluations. This overall approach to combating endemic hypovitaminosis A in developing countries is evaluated within the constraints of: 1) the volumes of plant-based foods required to satisfy vitamin A requirements; and 2) the controversy over the true bioconversion efficiency of provitamin A from plant sources into the biologically available active vitamin.
Where does streamwater come from in low-relief forested watersheds? A dual-isotope approach
DOE Office of Scientific and Technical Information (OSTI.GOV)
Klaus, J.; McDonnell, J. J.; Jackson, C. R.
The time and geographic sources of streamwater in low-relief watersheds are poorly understood. This is partly due to the difficult combination of low runoff coefficients and often damped streamwater isotopic signals precluding traditional hydrograph separation and convolution integral approaches. Here we present a dual-isotope approach involving 18O and 2H of water in a low-angle forested watershed to determine streamwater source components and then build a conceptual model of streamflow generation. We focus on three headwater lowland sub-catchments draining the Savannah River Site in South Carolina, USA. Our results for a 3-year sampling period show that the slopes of the meteoric water lines/evaporation water lines (MWLs/EWLs) of the catchment water sources can be used to extract information on runoff sources in ways not considered before. Our dual-isotope approach was able to identify unique hillslope, riparian and deep groundwater, and streamflow compositions. Thus, the streams showed strong evaporative enrichment compared to the local meteoric water line (δ2H = 7.15 · δ18O + 9.28‰), with slopes of 2.52, 2.84, and 2.86. Based on the unique and unambiguous slopes of the EWLs of the different water cycle components and the isotopic time series of the individual components, we were able to show how the riparian zone controls baseflow in this system and how the riparian zone "resets" the stable isotope composition of the observed streams in our low-angle, forested watersheds. Although this approach is limited in terms of quantifying mixing percentages between different end-members, our dual-isotope approach enabled the extraction of hydrologically useful information in a region with little change in individual isotope time series.
A unified approach to fluid-flow, geomechanical, and seismic modelling
NASA Astrophysics Data System (ADS)
Yarushina, Viktoriya; Minakov, Alexander
2016-04-01
Perturbations of pore pressure can generate seismicity. This is supported by observations from human activities that involve fluid injection into rocks at high pressure (hydraulic fracturing, CO2 storage, geothermal energy production) and by natural examples such as volcanic earthquakes, although the seismic signals that emerge during geotechnical operations are small in both amplitude and duration compared to their natural counterparts. A possible explanation for the earthquake source mechanism is based on a number of in situ stress measurements suggesting that crustal rocks are close to their plastic yield limit. Hence, a rapid increase of the pore pressure decreases the effective normal stress and can thus trigger seismic shear deformation. At the same time, little attention has been paid to the fact that the perturbation of fluid pressure itself represents an acoustic source. Moreover, non-double-couple source mechanisms are frequently reported from the analysis of microseismicity. A consistent formulation of the source mechanism describing microseismic events should include both a shear and an isotropic component. Thus, improved understanding of the interaction between fluid flow and seismic deformation is needed. With this study we aim to improve the integration of real-time microseismic monitoring with geomechanical modelling, such that there is a feedback loop between monitored deformation and stress-field modelling. We propose fully integrated seismic, geomechanical and reservoir modelling. Our mathematical formulation is based on a fundamental set of force balance, mass balance, and constitutive poro-elastoplastic equations for two-phase media consisting of a deformable solid rock frame and a viscous fluid. We consider a simplified 1D modelling setup for a consistent acoustic source and wave propagation in poro-elastoplastic media. In this formulation the seismic wave is generated by local changes of the stress field and pore pressure induced by, e.g., fault generation or strain localization. This approach gives a unified framework to characterize microseismicity of both class-I (pressure-induced) and class-II (stress-triggered) events. We consider two modelling setups. In the first setup the event is located within the reservoir and associated with a pressure/stress drop due to fracture initiation. In the second setup we assume that a seismic wave from a distant source hits the reservoir. The unified formulation of poro-elastoplastic deformation allows us to link the macroscopic stresses to local seismic instability.
A two-stage approach to removing noise from recorded music
NASA Astrophysics Data System (ADS)
Berger, Jonathan; Goldberg, Maxim J.; Coifman, Ronald C.
2004-05-01
A two-stage algorithm for removing noise from recorded music signals (first proposed in Berger et al., ICMC, 1995) is described and updated. The first stage selects the "best" local trigonometric basis for the signal and models noise as the part having high entropy [see Berger et al., J. Audio Eng. Soc. 42(10), 808-818 (1994)]. In the second stage, the original source and the model of the noise obtained from the first stage are expanded into dyadic trees of smooth local sine bases. The best basis for the source signal is extracted using a relative entropy function (the Kullback-Leibler distance) to compare the sum of the costs of the children nodes to the cost of their parent node; the energies of the noise in corresponding nodes of the model noise tree are used as weights. The talk will include audio examples of various stages of the method and proposals for further research.
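The split-or-keep decision at the heart of a best-basis search can be sketched as comparing an entropy cost of a block against the summed cost of its halves; the snippet below uses a plain DCT and the Shannon entropy of normalized coefficient energies as a simplified stand-in for the smooth local sine bases and the noise-weighted Kullback-Leibler cost described above.

```python
# Hedged sketch: entropy-based split-or-keep decision in a best-basis search.
import numpy as np
from scipy.fft import dct

def entropy_cost(x):
    """Shannon entropy of the normalized DCT coefficient energies of x."""
    e = dct(x, norm='ortho') ** 2
    total = e.sum()
    if total == 0:
        return 0.0
    p = e[e > 0] / total
    return -np.sum(p * np.log(p))

def should_split(block):
    """Return (split?, parent cost, summed children cost)."""
    half = len(block) // 2
    parent = entropy_cost(block)
    children = entropy_cost(block[:half]) + entropy_cost(block[half:])
    return children < parent, parent, children

t = np.linspace(0, 1, 1024, endpoint=False)
signal = np.sin(2 * np.pi * 50 * t) * (t > 0.5)        # tone present only in the second half
print(should_split(signal))                            # decision plus the two costs
```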
MHODE: a local-homogeneity theory for improved source-parameter estimation of potential fields
NASA Astrophysics Data System (ADS)
Fedi, Maurizio; Florio, Giovanni; Paoletti, Valeria
2015-08-01
We describe a multihomogeneity theory for source-parameter estimation of potential fields. Similar to what happens for random source models, where the monofractal scaling law has been generalized into a multifractal law, we propose to generalize the homogeneity law into a multihomogeneity law. This allows a theoretically correct approach to studying real-world potential fields, which are inhomogeneous and so do not show scale invariance, except in the asymptotic regions (very near to or very far from their sources). Since the scaling properties of inhomogeneous fields change with the scale of observation, we show that they may be better studied at a set of scales than at a single scale and that a multihomogeneous model is needed to explain their complex scaling behaviour. In order to perform this task, we first introduce fractional-degree homogeneous fields, to show that: (i) homogeneous potential fields may have fractional or integer degree; (ii) the source distributions for a fractional degree are not confined to a bounded region, similarly to some integer-degree models, such as the infinite line mass; and (iii) differently from the integer-degree case, the fractional-degree source distributions are no longer uniform density functions. Using this enlarged set of homogeneous fields, real-world anomaly fields are studied at different scales by a simple search, in any local window W, for the best homogeneous field of either integer or fractional degree, yielding a multiscale set of local homogeneity degrees and depth estimates which we call the multihomogeneous model. A new technique of source-parameter estimation (Multi-HOmogeneity Depth Estimation, MHODE) is thus defined, permitting retrieval of the source parameters of complex sources. We test the method with inhomogeneous fields of finite sources, such as faults or cylinders, and show its effectiveness also in a real-case example. These applications show the usefulness of the new concepts, multihomogeneity and fractional homogeneity degree, for obtaining valid estimates of the source parameters in a consistent theoretical framework, thus overcoming the limitations imposed by global homogeneity on widespread methods such as Euler deconvolution.
Automatic streak endpoint localization from the cornerness metric
NASA Astrophysics Data System (ADS)
Sease, Brad; Flewelling, Brien; Black, Jonathan
2017-05-01
Streaked point sources are a common occurrence when imaging unresolved space objects from both ground- and space-based platforms. Effective localization of streak endpoints is a key component of traditional techniques in space situational awareness related to orbit estimation and attitude determination. To further that goal, this paper derives a general detection and localization method for streak endpoints based on the cornerness metric. Corner detection involves searching an image for strong bi-directional gradients. These locations typically correspond to robust structural features in an image. In the case of unresolved imagery, regions with a high cornerness score correspond directly to the endpoints of streaks. This paper explores three approaches for global extraction of streak endpoints and applies them to an attitude and rate estimation routine.
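A small sketch of the underlying idea follows, assuming a synthetic streak image and the Harris cornerness implementation in scikit-image; the smoothing, thresholds, and peak selection are illustrative choices, not the paper's three extraction approaches.

```python
# Hedged sketch: Harris cornerness response on a synthetic streaked point
# source. Streak endpoints produce strong bi-directional gradients, so the two
# largest cornerness peaks approximate the endpoints.
import numpy as np
from scipy.ndimage import gaussian_filter
from skimage.feature import corner_harris, corner_peaks

# Synthetic image: one streak from (20, 15) to (80, 90), blurred and noisy
img = np.zeros((100, 100))
rr = np.linspace(20, 80, 300).round().astype(int)
cc = np.linspace(15, 90, 300).round().astype(int)
img[rr, cc] = 1.0
img = gaussian_filter(img, sigma=1.5) + 0.01 * np.random.default_rng(0).normal(size=img.shape)

response = corner_harris(img, sigma=2.0)              # cornerness metric
endpoints = corner_peaks(response, min_distance=5, num_peaks=2)
print(endpoints)                                      # (row, col) of the two detected endpoints
```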
Intervening to change the diets of low-income women.
Davies, Jennifer A; Damani, P; Margetts, Barrie M
2009-05-01
Diet-related sources of ill health, including Fe-deficiency anaemia, are prevalent in the local South Asian population. This population also has a high prevalence of low-birth-weight babies. A need for preventative measures that take a holistic view of dietary change was identified in a South Asian community in Southampton, UK. A peer-led approach was used, training and developing a local workforce to become community food assistants. This workforce, drawn from local black and minority ethnic communities, ran practical 'hands-on', culturally appropriate food-related activities within their communities that were successful in achieving long-term change in the diets of local women and their families. This model has the potential for achieving sustained behaviour change and is able to engage key target groups that can often be difficult to reach through more traditional routes.
Seismo-volcano source localization with triaxial broad-band seismic array
NASA Astrophysics Data System (ADS)
Inza, L. A.; Mars, J. I.; Métaxian, J. P.; O'Brien, G. S.; Macedo, O.
2011-10-01
Seismo-volcano source localization is essential to improve our understanding of eruptive dynamics and of magmatic systems. The lack of clear seismic wave phases prohibits the use of classical location methods. Seismic antennas composed of one-component (1C) seismometers provide a good estimate of the backazimuth of the wavefield; the depth, on the other hand, is difficult or impossible to determine. As in classical seismology, the use of three-component (3C) seismometers is now common in volcano studies. To determine the source location parameters (backazimuth and depth), we extend the 1C seismic antenna approach to 3C. This paper discusses a high-resolution location method using a 3C array survey (3C-MUSIC algorithm) with data from two seismic antennas installed on an andesitic volcano in Peru (Ubinas volcano). One of the main scientific questions related to the eruptive process of Ubinas volcano is the relationship between the magmatic explosions and long-period (LP) swarms. After introducing the 3C array theory, we evaluate the robustness of the location method on a full-wavefield 3-D synthetic data set generated using a digital elevation model of Ubinas volcano and a homogeneous velocity model. Results show that the backazimuth determined using the 3C array has a smaller error than with a 1C array, and only the 3C method allows recovery of the source depths. Finally, we applied the 3C approach to two seismic events recorded in 2009. Crossing the estimated backazimuth and incidence angles, we find sources located 1000 ± 660 m and 3000 ± 730 m below the bottom of the active crater for the explosion and the LP event, respectively. Extending 1C arrays to 3C arrays in volcano monitoring therefore allows a more accurate determination of the source epicentre and now an estimate of the depth.
Measurement of Fukushima Aerosol Debris in Sequim and Richland, WA and Ketchikan, AK
DOE Office of Scientific and Technical Information (OSTI.GOV)
Miley, Harry S.; Bowyer, Ted W.; Engelmann, Mark D.
2013-05-01
Aerosol collections were initiated at several locations by PNNL shortly after the Great East Japan Earthquake of March 2011. Aerosol samples were transferred to laboratory high-resolution gamma spectrometers for analysis. Similar to treaty monitoring stations operating across the Northern hemisphere, iodine and other isotopes which could be volatilized at high temperature were detected. Though these locations are not far apart, they have significant variations with respect to water, mountain-range placement, and local topography. Variation in computed source terms will be shown to bound the variability of this approach to source estimation.
Anderson, Jill T.; Perera, Nadeesha; Chowdhury, Bashira; Mitchell-Olds, Thomas
2015-01-01
Abiotic and biotic conditions often vary continuously across the landscape, imposing divergent selection on local populations. We used a provenance trial approach to examine microgeographic variation in local adaptation in Boechera stricta (Brassicaceae), a perennial forb native to the Rocky Mountains. In montane ecosystems, environmental conditions change considerably over short spatial scales, such that neighboring populations can be subject to different selective pressures. Using accessions from southern (Colorado) and northern (Idaho) populations, we characterized spatial variation in genetic similarity via microsatellite markers. We then transplanted genotypes from multiple local populations into common gardens in both regions. Continuous variation in local adaptation emerged for several components of fitness. In Idaho, genotypes from warmer environments (low-elevation or south-facing sites) were poorly adapted to the north-facing garden. In the high- and low-elevation Colorado gardens, susceptibility to insect herbivory increased with source elevation. In the high-elevation Colorado garden, germination success peaked for genotypes that evolved at similar elevations as the garden, and declined for genotypes from higher and lower elevations. We also found evidence for local maladaptation in survival and fecundity components of fitness in the low-elevation Colorado garden. This approach is a necessary first step in predicting how global change could affect evolutionary dynamics. PMID:26656218
Huang, Yingxiang; Lee, Junghye; Wang, Shuang; Sun, Jimeng; Liu, Hongfang; Jiang, Xiaoqian
2018-05-16
Data sharing has been a big challenge in biomedical informatics because of privacy concerns. Contextual embedding models have demonstrated a very strong representative capability to describe medical concepts (and their context), and they have shown promise as an alternative way to support deep-learning applications without the need to disclose original data. However, contextual embedding models acquired from individual hospitals cannot be directly combined because their embedding spaces are different, and naive pooling renders combined embeddings useless. The aim of this study was to present a novel approach to address these issues and to promote sharing representation without sharing data. Without sacrificing privacy, we also aimed to build a global model from representations learned from local private data and synchronize information from multiple sources. We propose a methodology that harmonizes different local contextual embeddings into a global model. We used Word2Vec to generate contextual embeddings from each source and Procrustes to fuse different vector models into one common space by using a list of corresponding pairs as anchor points. We performed prediction analysis with harmonized embeddings. We used sequential medical events extracted from the Medical Information Mart for Intensive Care III database to evaluate the proposed methodology in predicting the next likely diagnosis of a new patient using either structured data or unstructured data. Under different experimental scenarios, we confirmed that the global model built from harmonized local models achieves a more accurate prediction than local models and global models built from naive pooling. Such aggregation of local models using our unique harmonization can serve as the proxy for a global model, combining information from a wide range of institutions and information sources. It allows information unique to a certain hospital to become available to other sites, increasing the fluidity of information flow in health care. ©Yingxiang Huang, Junghye Lee, Shuang Wang, Jimeng Sun, Hongfang Liu, Xiaoqian Jiang. Originally published in JMIR Medical Informatics (http://medinform.jmir.org), 16.05.2018.
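A minimal sketch of the harmonization step described above, assuming two locally trained embedding matrices and a list of shared concepts used as anchor points; the array names, shapes, and toy data are illustrative, not the study's actual vocabularies.

```python
# Sketch: align a local embedding space to a shared space via orthogonal Procrustes.
import numpy as np
from scipy.linalg import orthogonal_procrustes

def harmonize(local_emb, shared_emb, anchor_rows_local, anchor_rows_shared):
    """Rotate local_emb so its anchor rows best match the shared space."""
    A = local_emb[anchor_rows_local]       # anchor concepts, local coordinates
    B = shared_emb[anchor_rows_shared]     # same concepts, shared coordinates
    R, _ = orthogonal_procrustes(A, B)     # orthogonal map with A @ R ~= B
    return local_emb @ R                   # entire local vocabulary, aligned

# Toy usage: two sites, 100-dimensional embeddings, 50 shared anchor concepts.
rng = np.random.default_rng(0)
site_a = rng.normal(size=(5000, 100))
site_b = rng.normal(size=(4000, 100))
aligned_a = harmonize(site_a, site_b, np.arange(50), np.arange(50))
```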
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ma, Po-Lun; Gattiker, J. R.; Liu, Xiaohong
2013-06-27
A Gaussian process (GP) emulator is applied to quantify the contribution of local and remote emissions of black carbon (BC) to the BC concentrations in different regions using a Latin Hypercube sampling strategy for emission perturbations in the offline version of the Community Atmosphere Model Version 5.1 (CAM5) simulations. The source-receptor relationships are computed based on simulations constrained by a standard free-running CAM5 simulation and the ERA-Interim reanalysis product. The analysis demonstrates that the emulator is capable of retrieving the source-receptor relationships based on a small number of CAM5 simulations. Most regions are found susceptible to their local emissions. The emulator also finds that the source-receptor relationships retrieved from the model-driven and the reanalysis-driven simulations are very similar, suggesting that the simulated circulation in CAM5 resembles the assimilated meteorology in ERA-Interim. The robustness of the results provides confidence for applying the emulator to detect dose-response signals in the climate system.
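A minimal sketch of the emulation workflow, with a cheap toy function standing in for the CAM5 runs; the Latin Hypercube design size, GP kernel, and sensitivity calculation shown here are illustrative assumptions, not those used in the study.

```python
# Sketch: GP emulator trained on a Latin Hypercube design of emission scaling factors.
import numpy as np
from scipy.stats import qmc
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

n_regions, n_runs = 4, 30
sampler = qmc.LatinHypercube(d=n_regions, seed=1)
X = qmc.scale(sampler.random(n_runs), [0.5] * n_regions, [1.5] * n_regions)

def toy_model(x):
    # Stand-in for a CAM5 run: receptor BC responds mostly to region 0 emissions.
    return 2.0 * x[0] + 0.3 * x[1] + 0.1 * x[2] + 0.05 * x[3]

y = np.array([toy_model(x) for x in X])
gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(), normalize_y=True)
gp.fit(X, y)

# Source-receptor sensitivities from finite differences of the emulator mean.
base = np.ones((1, n_regions))
for r in range(n_regions):
    pert = base.copy()
    pert[0, r] += 0.1
    sens = (gp.predict(pert)[0] - gp.predict(base)[0]) / 0.1
    print(f"region {r}: d(concentration)/d(emission scaling) ~ {sens:.2f}")
```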
NASA Astrophysics Data System (ADS)
Chen, X.; Abercrombie, R. E.; Pennington, C.
2017-12-01
Recorded seismic waveforms include contributions from earthquake source properties and propagation effects, leading to long-standing trade-off problems between site/path effects and source effects. With near-field recordings, the path effect is relatively small, so the trade-off problem can be simplified to one between source and site effects (the latter commonly referred to as the "kappa value"). This problem is especially significant for small earthquakes, whose corner frequencies fall within the frequency range affected by kappa, so direct spectrum fitting often leads to systematic biases that vary with corner frequency and magnitude. In response to the significantly increased seismicity rate in Oklahoma, several local networks have been deployed following major earthquakes: the Prague, Pawnee and Fairview earthquakes. Each network provides dense observations within 20 km surrounding the fault zone, recording tens of thousands of aftershocks between M1 and M3. Using near-field recordings in the Prague area, we apply a stacking approach to separate path/site and source effects. The resulting source parameters are consistent with parameters derived from ground motion and spectral ratio methods from other studies; they exhibit spatial coherence within the fault zone for different fault patches. We apply these source parameter constraints in an analysis of kappa values for stations within 20 km of the fault zone. The resulting kappa values show significantly reduced variability compared to those from direct spectral fitting without constraints on the source spectrum; they are not biased by earthquake magnitudes. With these improvements, we plan to apply the stacking analysis to other local arrays to analyze source properties and site characteristics. For selected individual earthquakes, we will also use individual-pair empirical Green's function (EGF) analysis to validate the source parameter estimations.
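A small illustration of the source-site trade-off mentioned above: jointly fitting a Brune omega-squared source spectrum and an exponential kappa term to a synthetic spectrum exposes the covariance between corner frequency and kappa. This is a generic spectral fit, not the stacking method of the study, and all parameter values are illustrative.

```python
# Sketch: joint fit of a Brune source spectrum and a site kappa attenuation term.
import numpy as np
from scipy.optimize import curve_fit

def brune_kappa(f, omega0, fc, kappa):
    """Omega-squared source spectrum attenuated by exp(-pi * f * kappa)."""
    return omega0 / (1.0 + (f / fc) ** 2) * np.exp(-np.pi * f * kappa)

rng = np.random.default_rng(0)
f = np.linspace(1.0, 50.0, 200)
observed = brune_kappa(f, 1.0, 8.0, 0.04) * rng.lognormal(0.0, 0.1, f.size)

popt, pcov = curve_fit(brune_kappa, f, observed, p0=[0.5, 5.0, 0.02])
print("fitted omega0, fc, kappa:", popt)
print("fc-kappa covariance (the trade-off):", pcov[1, 2])
```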
Kumar, M Kishore; Sreekanth, V; Salmon, Maëlle; Tonne, Cathryn; Marshall, Julian D
2018-08-01
This study uses spatiotemporal patterns in ambient concentrations to infer the contribution of regional versus local sources. We collected 12 months of monitoring data for outdoor fine particulate matter (PM2.5) in rural southern India. Rural India includes more than one-tenth of the global population and annually accounts for around half a million air pollution deaths, yet little is known about the relative contribution of local sources to outdoor air pollution. We measured 1-min averaged outdoor PM2.5 concentrations during June 2015-May 2016 in three villages, which varied in population size, socioeconomic status, and type and usage of domestic fuel. The daily geometric-mean PM2.5 concentration was ∼30 μg m-3 (geometric standard deviation: ∼1.5). Concentrations exceeded the Indian National Ambient Air Quality standards (60 μg m-3) during 2-5% of observation days. Average concentrations were ∼25 μg m-3 higher during winter than during monsoon and ∼8 μg m-3 higher during morning hours than the diurnal average. A moving average subtraction method based on 1-min average PM2.5 concentrations indicated that local contributions (e.g., nearby biomass combustion, brick kilns) were greater in the most populated village, and that overall the majority of ambient PM2.5 in our study was regional, implying that local air pollution control strategies alone may have limited influence on local ambient concentrations. We compared the relatively new moving average subtraction method against a more established approach. Both methods broadly agree on the relative contribution of local sources across the three sites. The moving average subtraction method has broad applicability across locations. Copyright © 2018 The Authors. Published by Elsevier Ltd. All rights reserved.
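A hedged sketch of a moving-average subtraction on 1-min PM2.5 data: a slowly varying baseline is treated as the regional contribution and short-lived excursions above it as local sources. The window length and the use of a rolling median are assumptions for illustration, not the study's exact settings.

```python
# Sketch: split 1-min PM2.5 into a slow "regional" baseline and "local" spikes.
import numpy as np
import pandas as pd

def split_local_regional(pm25, window="60min"):
    s = pd.Series(pm25)
    baseline = s.rolling(window, min_periods=1).median()  # regional background
    local = (s - baseline).clip(lower=0)                   # short-lived local spikes
    return local, baseline

idx = pd.date_range("2015-06-01", periods=1440, freq="1min")
pm = pd.Series(30 + 10 * np.random.rand(1440), index=idx)
local, regional = split_local_regional(pm)
print("estimated local share:", local.mean() / (local + regional).mean())
```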
A precedence effect resolves phantom sound source illusions in the parasitoid fly Ormia ochracea
Lee, Norman; Elias, Damian O.; Mason, Andrew C.
2009-01-01
Localizing individual sound sources under reverberant environmental conditions can be a challenge when the original source and its acoustic reflections arrive at the ears simultaneously from different paths that convey ambiguous directional information. The acoustic parasitoid fly Ormia ochracea (Diptera: Tachinidae) relies on a pair of ears exquisitely sensitive to sound direction to localize the 5-kHz tone pulsatile calling song of their host crickets. In nature, flies are expected to encounter a complex sound field with multiple sources and their reflections from acoustic clutter potentially masking temporal information relevant to source recognition and localization. In field experiments, O. ochracea were lured onto a test arena and subjected to small random acoustic asymmetries between 2 simultaneous sources. Most flies successfully localize a single source but some localize a ‘phantom’ source that is a summed effect of both source locations. Such misdirected phonotaxis can be elicited reliably in laboratory experiments that present symmetric acoustic stimulation. By varying onset delay between 2 sources, we test whether hyperacute directional hearing in O. ochracea can function to exploit small time differences to determine source location. Selective localization depends on both the relative timing and location of competing sources. Flies preferred phonotaxis to a forward source. With small onset disparities within a 10-ms temporal window of attention, flies selectively localize the leading source while the lagging source has minimal influence on orientation. These results demonstrate the precedence effect as a mechanism to overcome phantom source illusions that arise from acoustic reflections or competing sources. PMID:19332794
McCartan, Julia; Palermo, Claire
2017-04-01
To explore how an Australian rural food policy coalition acts to influence a local food environment, focusing specifically on its composition, functions and processes as well as its food-related strategies and policy outputs. A qualitative case study approach was undertaken. Three sources were used to triangulate data: eleven semi-structured in-depth interviews with coalition members, analysis of thirty-seven documents relating to the coalition and observation at one coalition meeting. Data were analysed using a thematic and constant comparison approach. Community Coalition Action Theory provided a theoretical framework from which to interpret findings. Two rural local government areas on the south-eastern coast of Victoria, Australia. Eleven members of the food policy coalition. Five themes emerged from the data analysis. The themes described the coalition's leadership processes, membership structure, function to pool resources for food system advocacy, focus on collaborative cross-jurisdictional strategies and ability to influence policy change. This Australian case study demonstrates that with strong leadership, a small-sized core membership and focus on collaborative strategies, food policy coalitions may be a mechanism to positively influence local food environments.
Predicting poverty and wealth from mobile phone metadata.
Blumenstock, Joshua; Cadamuro, Gabriel; On, Robert
2015-11-27
Accurate and timely estimates of population characteristics are a critical input to social and economic research and policy. In industrialized economies, novel sources of data are enabling new approaches to demographic profiling, but in developing countries, fewer sources of big data exist. We show that an individual's past history of mobile phone use can be used to infer his or her socioeconomic status. Furthermore, we demonstrate that the predicted attributes of millions of individuals can, in turn, accurately reconstruct the distribution of wealth of an entire nation or infer the asset distribution of microregions composed of just a few households. In resource-constrained environments where censuses and household surveys are rare, this approach creates an option for gathering localized and timely information at a fraction of the cost of traditional methods. Copyright © 2015, American Association for the Advancement of Science.
Wyczesany, Miroslaw; Ligeza, Tomasz S
2015-03-01
The locationist model of affect, which assumes separate brain structures devoted to particular discrete emotions, is currently being questioned as it has not received enough convincing experimental support. An alternative, constructionist approach suggests that our emotional states emerge from the interaction between brain functional networks, which are related to more general, continuous affective categories. In the study, we tested whether the three-dimensional model of affect based on valence, arousal, and dominance (VAD) can reflect brain activity in a more coherent way than the traditional locationist approach. Independent components of brain activity were derived from spontaneous EEG recordings and localized using the DIPFIT method. The correspondence between the spectral power of the revealed brain sources and a mood self-report quantified on the VAD space was analysed. Activation of four (out of nine) clusters of independent brain sources could be successfully explained by the specific combination of three VAD dimensions. The results support the constructionist theory of emotions.
NASA Technical Reports Server (NTRS)
Chang, Shih-Hung
1991-01-01
Two approaches are used to extend the essentially non-oscillatory (ENO) schemes to treat conservation laws with stiff source terms. One approach is the application of the Strang time-splitting method. Here the basic ENO scheme and the Harten modification using subcell resolution (SR), ENO/SR scheme, are extended this way. The other approach is a direct method and a modification of the ENO/SR. Here the technique of ENO reconstruction with subcell resolution is used to locate the discontinuity within a cell and the time evolution is then accomplished by solving the differential equation along characteristics locally and advancing in the characteristic direction. This scheme is denoted ENO/SRCD (subcell resolution - characteristic direction). All the schemes are tested on the equation of LeVeque and Yee (NASA-TM-100075, 1988) modeling reacting flow problems. Numerical results show that these schemes handle this intriguing model problem very well, especially with ENO/SRCD which produces perfect resolution at the discontinuity.
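A minimal sketch of the time-splitting idea for a scalar conservation law with a stiff source term, in the spirit of the LeVeque-Yee model problem; first-order upwind transport is used here instead of ENO purely for brevity, and the grid size, time step, and stiffness parameter are illustrative.

```python
# Sketch: Strang splitting for u_t + a u_x = -u(u-1)(u-1/2)/eps on a periodic grid.
import numpy as np

def upwind_step(u, a, dx, dt):
    return u - a * dt / dx * (u - np.roll(u, 1))   # first-order upwind advection

def source_step(u, eps, dt, nsub=50):
    # Integrate the stiff reaction ODE with small explicit sub-steps.
    for _ in range(nsub):
        u = u + (dt / nsub) * (-u * (u - 1.0) * (u - 0.5) / eps)
    return u

nx, a, eps = 200, 1.0, 1e-3
x = np.linspace(0.0, 1.0, nx, endpoint=False)
u = np.where(x < 0.3, 1.0, 0.0)                    # step profile
dx = x[1] - x[0]
dt = 0.5 * dx / a
for _ in range(100):
    u = source_step(u, eps, dt / 2)                # half step of the source
    u = upwind_step(u, a, dx, dt)                  # full advection step
    u = source_step(u, eps, dt / 2)                # half step of the source
```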
Statistical signatures of a targeted search by bacteria
NASA Astrophysics Data System (ADS)
Jashnsaz, Hossein; Anderson, Gregory G.; Pressé, Steve
2017-12-01
Chemoattractant gradients are rarely well-controlled in nature and recent attention has turned to bacterial chemotaxis toward typical bacterial food sources such as food patches or even bacterial prey. In environments with localized food sources reminiscent of a bacterium’s natural habitat, striking phenomena—such as the volcano effect or banding—have been predicted or expected to emerge from chemotactic models. However, in practice, from limited bacterial trajectory data it is difficult to distinguish targeted searches from an untargeted search strategy for food sources. Here we use a theoretical model to identify statistical signatures of a targeted search toward point food sources, such as prey. Our model is constructed on the basis that bacteria use temporal comparisons to bias their random walk, exhibit finite memory and are subject to random (Brownian) motion as well as signaling noise. The advantage with using a stochastic model-based approach is that a stochastic model may be parametrized from individual stochastic bacterial trajectories but may then be used to generate a very large number of simulated trajectories to explore average behaviors obtained from stochastic search strategies. For example, our model predicts that a bacterium’s diffusion coefficient increases as it approaches the point source and that, in the presence of multiple sources, bacteria may take substantially longer to locate their first source giving the impression of an untargeted search strategy.
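A hedged sketch of the kind of stochastic model described above: a run-and-tumble walker that biases its tumbling rate by temporal comparison of a noisy attractant signal over a finite memory, with added Brownian motion. All parameter values and the concentration field are illustrative, not the paper's calibration.

```python
# Sketch: run-and-tumble walker biased by temporal comparison of a noisy signal.
import numpy as np

def simulate(T=2000, dt=0.1, v=1.0, D=0.05, noise=0.1, memory=10, seed=0):
    rng = np.random.default_rng(seed)
    source = np.zeros(2)
    pos = np.array([20.0, 0.0])
    heading = rng.uniform(0.0, 2.0 * np.pi)
    recent = []                                    # finite memory of measurements
    traj = []
    for _ in range(T):
        c = 1.0 / (1.0 + np.linalg.norm(pos - source)) + rng.normal(0.0, noise)
        recent = (recent + [c])[-memory:]
        rising = len(recent) == memory and recent[-1] > recent[0]
        p_tumble = 0.05 if rising else 0.25        # temporal comparison biases tumbling
        if rng.random() < p_tumble:
            heading = rng.uniform(0.0, 2.0 * np.pi)
        step = v * dt * np.array([np.cos(heading), np.sin(heading)])
        pos = pos + step + rng.normal(0.0, np.sqrt(2.0 * D * dt), 2)  # Brownian noise
        traj.append(pos.copy())
    return np.array(traj)

traj = simulate()
print("final distance to source:", np.linalg.norm(traj[-1]))
```

Repeating such simulations many times is what allows average statistics, such as the apparent diffusion coefficient near the source, to be compared against observed trajectories.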
Berger, Christopher C; Gonzalez-Franco, Mar; Tajadura-Jiménez, Ana; Florencio, Dinei; Zhang, Zhengyou
2018-01-01
Auditory spatial localization in humans is performed using a combination of interaural time differences, interaural level differences, as well as spectral cues provided by the geometry of the ear. To render spatialized sounds within a virtual reality (VR) headset, either individualized or generic Head Related Transfer Functions (HRTFs) are usually employed. The former require arduous calibrations, but enable accurate auditory source localization, which may lead to a heightened sense of presence within VR. The latter obviate the need for individualized calibrations, but result in less accurate auditory source localization. Previous research on auditory source localization in the real world suggests that our representation of acoustic space is highly plastic. In light of these findings, we investigated whether auditory source localization could be improved for users of generic HRTFs via cross-modal learning. The results show that pairing a dynamic auditory stimulus, with a spatio-temporally aligned visual counterpart, enabled users of generic HRTFs to improve subsequent auditory source localization. Exposure to the auditory stimulus alone or to asynchronous audiovisual stimuli did not improve auditory source localization. These findings have important implications for human perception as well as the development of VR systems as they indicate that generic HRTFs may be enough to enable good auditory source localization in VR.
Local sources of black walnut recommended for planting in Maryland
Silas Little; Calvin F. Bey; Daniel McConaughy
1974-01-01
After 5 years, local black walnut seedlings were taller than those of 12 out-of-state sources in a Maryland planting. Seedlings from south-of-local sources outgrew trees from northern sources. Genetic influence on height was expressed early--with little change in ranking of sources after the third year.
Transported vs. local contributions from secondary and biomass burning sources to PM2.5
NASA Astrophysics Data System (ADS)
Kim, Bong Mann; Seo, Jihoon; Kim, Jin Young; Lee, Ji Yi; Kim, Yumi
2016-11-01
The concentration of fine particulates in Seoul, Korea has decreased over the past 10 years as a result of the city's efforts in implementing environmental control measures. Yet, the particulate concentration level in Seoul remains high as compared to other urban areas globally. In order to further improve fine particulate air quality in the Korea region and design a more effective control strategy, enhanced understanding of the sources and contribution of fine particulates along with their chemical compositions is necessary. In turn, relative contributions from local and transported sources to Seoul need to be established, as this city is particularly influenced by sources from upwind geographic areas. In this study, PM2.5 monitoring was conducted in Seoul from October 2012 to September 2013. PM2.5 mass concentrations, ions, metals, organic carbon (OC), elemental carbon (EC), water soluble OC (WSOC), humic-like substances of carbon (HULIS-C), and 85 organic compounds were chemically analyzed. The multivariate receptor model SMP was applied to the PM2.5 data, which then identified nine sources and estimated their source compositions as well as source contributions. Prior studies have identified and quantified the transported and local sources. However, no prior study has separated the contribution of an individual source into its transported and locally produced parts. We differentiated transported secondary and biomass burning sources from the locally produced secondary and biomass burning sources, which was supported with potential source contribution function (PSCF) analysis. Of the total secondary source contribution, 32% was attributed to transported secondary sources, and 68% was attributed to locally formed secondary sources. Meanwhile, the contribution from the transported biomass burning source amounted to 59% of the total biomass burning contribution, which was 1.5 times higher than that of the local biomass burning source. Four-season average source contributions from the transported and the local sources were 28% and 72%, respectively.
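A minimal sketch of a potential source contribution function (PSCF) calculation of the kind used to support the apportionment above, assuming back-trajectory endpoints have already been computed and paired with receptor concentrations; the inputs, threshold, and grid resolution are illustrative assumptions.

```python
# Sketch: PSCF on a lat/lon grid from back-trajectory endpoints.
import numpy as np

def pscf(end_lat, end_lon, conc_by_traj, traj_id, threshold, bins=36):
    """PSCF = (endpoints of high-concentration trajectories) / (all endpoints) per cell."""
    high = conc_by_traj[traj_id] > threshold            # flag endpoints of "dirty" days
    n_all, lat_edges, lon_edges = np.histogram2d(end_lat, end_lon, bins=bins)
    n_high, _, _ = np.histogram2d(end_lat[high], end_lon[high],
                                  bins=[lat_edges, lon_edges])
    with np.errstate(invalid="ignore", divide="ignore"):
        return np.where(n_all > 0, n_high / n_all, np.nan)
```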
Wicki, Melanie; Karabulut, Fatma; Auckenthaler, Adrian; Felleisen, Richard; Tanner, Marcel; Baumgartner, Andreas
2011-12-01
The localization of fecal input sites is important for water quality management. For this purpose, we have developed a new approach based on a three-step procedure, including a preparatory phase, the screening of multiresistant bacteria using selective agar plates, and a typing phase where selected Escherichia coli isolates are characterized by antibiotic resistance profiles and molecular fingerprinting techniques (pulsed-field gel electrophoresis [PFGE]). These two well-known source tracking methods were combined in order to reduce cost and effort. This approach was successfully applied under field conditions in a study area located in the north-western part of Switzerland. E. coli isolates from spring water and surface water samples collected in this area were screened with selective agar plates. In this way, 21 different groups, each consisting of strains with the same pattern of antibiotic resistance, were found. Of these, four groups were further analyzed using PFGE. Strains with identical PFGE profiles were detected repeatedly, demonstrating the suitability of this method for the localization of fecal input sites over an extended period of time. Identical PFGE patterns of strains detected in water from two different springs were also found in the stream flowing through the study area. These results demonstrated the applicability of the new approach for the examination of incidents of fecal contamination in drinking water. The advantages of the described approach over genotyping methods currently being used to identify sources of fecal contaminants are a reduction in time, costs, and the effort required. Identical isolates could be identified without the construction of large libraries.
Felix, J David; Elliott, Emily M; Gish, Timothy J; McConnell, Laura L; Shaw, Stephanie L
2013-10-30
Ammonia (NH3) emissions are a substantial source of nitrogen pollution to sensitive terrestrial, aquatic, and marine ecosystems and dependable quantification of NH3 sources is of growing importance due to recently observed increases in ammonium (NH4(+)) deposition rates. While determination of the nitrogen isotopic composition of NH3 (δ(15)N-NH3) can aid in the quantification of NH3 emission sources, existing methods have precluded a comprehensive assessment of δ(15)N-NH3 values from major emission sources. We report an approach for the δ(15)N-NH4(+) analysis of low concentration NH4(+) samples that couples the bromate oxidation of NH4(+) to NO2(-) and the microbial denitrifier method for δ(15)N-NO2(-) analysis. This approach reduces the required sample mass by 50-fold relative to standard elemental analysis (EA) procedures, is capable of high throughput, and eliminates toxic chemicals used in a prior method for the analysis of low concentration samples. Using this approach, we report a comprehensive inventory of δ(15)N-NH3 values from major emission sources (including livestock operations, marine sources, vehicles, fertilized cornfields) collected using passive sampling devices. The δ(15)N-NH4(+) analysis approach developed has a standard deviation of ±0.7‰ and was used to analyze passively collected NH3 emissions with a wide range of ambient NH3 concentrations (0.2 to 165.6 µg/m(3)). The δ(15)N-NH3 values reveal that the NH3 emitted from volatilized livestock waste and fertilizer has relatively low δ(15)N values (-56 to -23‰), allowing it to be differentiated from NH3 emitted from fossil fuel sources that are characterized by relatively high δ(15)N values (-15 to +2‰). The isotopic source signatures presented in this emission inventory can be used as an additional tool in identifying NH3 emission sources and tracing their transport across localized landscapes and regions. The insight into the transport of NH3 emissions provided by isotopic investigation is an important step in devising strategies to reduce future NH3 emissions, a mounting concern for air quality scientists, epidemiologists, and policy-makers. Copyright © 2013 John Wiley & Sons, Ltd.
Perceived Sources of Stress and Resilience in Men in an African-American Community
Chung, Bowen; Meldrum, Marcia; Jones, Felica; Brown, Anthony; Daaood, Rasudaan; Jones, Loretta
2014-01-01
Background Little is known about the perceived causes of stress and what strategies African-American men use to promote resiliency. Participatory research approaches are recommended as an approach to engage minority communities. A key goal of participatory research is to shift the locus of control to community partners. Objective To understand perceived sources of stress and tools used to promote resiliency in African American men in South Los Angeles. Methods Our study utilized a community-partnered participatory research (CPPR) approach to collect and analyze open-ended responses from 295 African American men recruited at a local, cultural festival in Los Angeles using thematic analysis and the Levels of Racism framework. Results Almost all (93.2%) men reported stress. Of those reporting stress, 60.8% reported finances and money and 43.2% reported racism as a specific cause. Over 60% (63.4%) reported that they perceived available sources of help to deal with stress. Of those noting a specific source of help for stress (n=76), 42.1% identified religious faith. Almost all of the participants (92.1%) mentioned specific sources of resiliency such as religion and family. Conclusions Stress due to psycho-social factors such as finances and racism are common in African American men. But at the same time, most men found support for resiliency to ameliorate stress in religion and family. Future work to engage African-American men around alleviating stress and supporting resiliency should both take into account the perceived causes of stress and incorporate culturally appropriate sources of resiliency support. PMID:25727976
Climate Change and Political Action: the Citizens' Climate Lobby
NASA Astrophysics Data System (ADS)
Nelson, P. H.; Secord, S.
2014-12-01
Recognizing the reality of global warming and its origin in greenhouse gas emissions, what does one do about it? Individual action is commendable, but inadequate. Collective action is necessary--Citizens' Climate Lobby proposes a "fee-and-dividend" approach in which a fee is imposed on carbon-based fuel at its sources of production. The fee increases annually in a predictable manner. The funds collected are paid out to consumers as monthly dividends. The approach is market-based, in that the cost of the fee to producers is passed on to consumers in the cost of carbon-based fuels. Downstream energy providers and consumers then make their choices regarding investments and purchases. Citizens' Climate Lobby (CCL) builds national consensus by growing local Chapters, led and populated by volunteers. The Chapters are charged with public education and presenting the fee-and-dividend proposal to their respective Representatives and Senators. CCL builds trust by its non-partisan approach, meeting with all members of Congress regardless of party affiliation and stance on climate-related issues. CCL also builds trust by a non-confrontational approach, seeking to understand rather than to oppose. CCL works both locally, through its local Chapters, and nationally, with an annual conference in Washington DC during which all Congressional offices are visited. CCL recognizes that a long-term, sustained effort is necessary to address climate change.
Seismic noise frequency dependent P and S wave sources
NASA Astrophysics Data System (ADS)
Stutzmann, E.; Schimmel, M.; Gualtieri, L.; Farra, V.; Ardhuin, F.
2013-12-01
Seismic noise in the period band 3-10 sec is generated in the oceans by the interaction of ocean waves. The noise signal is dominated by Rayleigh waves but body waves can be extracted using a beamforming approach. We select the TAPAS array deployed in South Spain between June 2008 and September 2009 and we use the vertical and horizontal components to extract noise P and S waves, respectively. Data are filtered in narrow frequency bands and we select beam azimuths and slownesses that correspond to the largest continuous sources per day. Our procedure automatically discards earthquakes, which appear as strongly localized sources of short duration. Using this approach, we detect many more noise P-waves than S-waves. Source locations are determined by back-projecting the detected slowness/azimuth. P and S waves are generated in nearby areas and both source locations are frequency dependent. Long-period sources lie dominantly in the South Atlantic and Indian Ocean whereas shorter-period sources lie mostly in the North Atlantic Ocean. We further show that the detected S-waves are dominantly Sv-waves. We model the observed body waves using an ocean wave model that takes into account all possible wave interactions including coastal reflection. We use the wave model to separate direct and multiply reflected phases for P and S waves respectively. We show that in the South Atlantic the complex source pattern can be explained by the existence of both coastal and pelagic sources whereas in the North Atlantic most body wave sources are pelagic. For each detected source, we determine the equivalent source magnitude which is compared to the model.
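A minimal sketch of narrow-band plane-wave beamforming over a slowness grid for a small array; the azimuth and slowness of the strongest beam would then be back-projected toward a source region as described above. The array geometry, frequency handling, and phase sign convention are illustrative assumptions, not the study's processing chain.

```python
# Sketch: narrow-band beam power over a horizontal slowness grid.
import numpy as np

def beam_power(data, coords, fs, freq, slow_max=0.5, n_grid=41):
    """data: (n_sta, n_samp); coords: (n_sta, 2) in km; slowness grid in s/km."""
    spectra = np.fft.rfft(data, axis=1)
    fidx = int(round(freq * data.shape[1] / fs))        # frequency bin of interest
    X = spectra[:, fidx]
    s_axis = np.linspace(-slow_max, slow_max, n_grid)
    power = np.zeros((n_grid, n_grid))
    for i, sx in enumerate(s_axis):
        for j, sy in enumerate(s_axis):
            delays = coords @ np.array([sx, sy])        # plane-wave delays per station
            beam = np.sum(X * np.exp(2j * np.pi * freq * delays))
            power[i, j] = np.abs(beam) ** 2
    return power, s_axis   # argmax gives the dominant slowness vector (azimuth, |s|)
```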
NASA Astrophysics Data System (ADS)
Bout-Roumazeilles, V.; Riboulleau, A.; du Châtelet, E. Armynot; Lorenzoni, L.; Tribovillard, N.; Murray, R. W.; Müller-Karger, F.; Astor, Y. M.
2013-02-01
The mineralogical composition of 95 surface sediment samples from the Cariaco Basin continental shelf and Orinoco delta was investigated in order to constrain the clay-mineral main provenance and distribution within the Cariaco Basin. The spatial variability of the data set was studied using a geo-statistical approach that allows drawing representative clay-mineral distribution maps. These maps are used to identify present-day dominant sources for each clay-mineral species in agreement with the geological characteristics of the main river watersheds emptying into the basin. This approach allows (1) identifying the most distinctive clay-mineral species/ratios that determine particle provenance, (2) evaluating the respective contribution of local rivers, and (3) confirming the minimal present-day influence of the Orinoco plume on the Cariaco Basin sedimentation. The Tuy, Unare, and Neveri Rivers are the main sources of clay particles to the Cariaco Basin sedimentation. At present, the Tuy River is the main contributor of illite to the western part of the southern Cariaco Basin continental shelf. The Unare River plume, carrying smectite and kaolinite, has a wide westward propagation, whereas the Neveri River contribution is less extended, providing kaolinite and illite toward the eastern Cariaco Basin. The Manzanares, Araya, Tortuga, and Margarita areas are secondary sources of local influence. These insights shed light on the origin of present-day terrigenous sediments of the Cariaco Basin and help to propose alternative explanations for the temporal variability of clay mineralogy observed in previously published studies.
MR-based source localization for MR-guided HDR brachytherapy
NASA Astrophysics Data System (ADS)
Beld, E.; Moerland, M. A.; Zijlstra, F.; Viergever, M. A.; Lagendijk, J. J. W.; Seevinck, P. R.
2018-04-01
For the purpose of MR-guided high-dose-rate (HDR) brachytherapy, a method for real-time localization of an HDR brachytherapy source was developed, which requires high spatial and temporal resolutions. MR-based localization of an HDR source serves two main aims. First, it enables real-time treatment verification by determination of the HDR source positions during treatment. Second, when using a dummy source, MR-based source localization provides an automatic detection of the source dwell positions after catheter insertion, allowing elimination of the catheter reconstruction procedure. Localization of the HDR source was conducted by simulation of the MR artifacts, followed by a phase correlation localization algorithm applied to the MR images and the simulated images, to determine the position of the HDR source in the MR images. To increase the temporal resolution of the MR acquisition, the spatial resolution was decreased, and a subpixel localization operation was introduced. Furthermore, parallel imaging (sensitivity encoding) was applied to further decrease the MR scan time. The localization method was validated by a comparison with CT, and the accuracy and precision were investigated. The results demonstrated that the described method could be used to determine the HDR source position with a high accuracy (0.4–0.6 mm) and a high precision (⩽0.1 mm), at high temporal resolutions (0.15–1.2 s per slice). This would enable real-time treatment verification as well as an automatic detection of the source dwell positions.
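A hedged sketch of the phase-correlation step: matching an acquired image against a simulated artifact template and refining the correlation peak to sub-pixel precision. A simple parabolic refinement is used here for illustration; the paper's exact sub-pixel operation may differ.

```python
# Sketch: phase correlation with parabolic sub-pixel peak refinement.
import numpy as np

def phase_correlate(image, template):
    F1, F2 = np.fft.fft2(image), np.fft.fft2(template)
    R = F1 * np.conj(F2)
    R /= np.abs(R) + 1e-12                       # keep only the phase information
    corr = np.fft.ifft2(R).real
    ny, nx = corr.shape
    py, px = np.unravel_index(np.argmax(corr), corr.shape)

    def parabolic(cm, c0, cp):                    # vertex of a parabola through 3 samples
        denom = cm - 2.0 * c0 + cp
        return 0.0 if denom == 0 else 0.5 * (cm - cp) / denom

    dy = py + parabolic(corr[(py - 1) % ny, px], corr[py, px], corr[(py + 1) % ny, px])
    dx = px + parabolic(corr[py, (px - 1) % nx], corr[py, px], corr[py, (px + 1) % nx])
    if dy > ny / 2: dy -= ny                      # large shifts wrap to negative offsets
    if dx > nx / 2: dx -= nx
    return dy, dx                                 # sub-pixel shift of image vs. template
```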
Large dynamic range terahertz spectrometers based on plasmonic photomixers (Conference Presentation)
NASA Astrophysics Data System (ADS)
Wang, Ning; Javadi, Hamid; Jarrahi, Mona
2017-02-01
Heterodyne terahertz spectrometers are highly in demand for space explorations and astrophysics studies. A conventional heterodyne terahertz spectrometer consists of a terahertz mixer that mixes a received terahertz signal with a local oscillator signal to generate an intermediate frequency signal in the radio frequency (RF) range, where it can be easily processed and detected by RF electronics. Schottky diode mixers, superconductor-insulator-superconductor (SIS) mixers and hot electron bolometer (HEB) mixers are the most commonly used mixers in conventional heterodyne terahertz spectrometers. While conventional heterodyne terahertz spectrometers offer high spectral resolution and high detection sensitivity levels at cryogenic temperatures, their dynamic range and bandwidth are limited by the low radiation power of existing terahertz local oscillators and narrow bandwidth of existing terahertz mixers. To address these limitations, we present a novel approach for heterodyne terahertz spectrometry based on plasmonic photomixing. The presented design replaces terahertz mixer and local oscillator of conventional heterodyne terahertz spectrometers with a plasmonic photomixer pumped by an optical local oscillator. The optical local oscillator consists of two wavelength-tunable continuous-wave optical sources with a terahertz frequency difference. As a result, the spectrometry bandwidth and dynamic range of the presented heterodyne spectrometer is not limited by radiation frequency and power restrictions of conventional terahertz sources. We demonstrate a proof-of-concept terahertz spectrometer with more than 90 dB dynamic range and 1 THz spectrometry bandwidth.
Geographical limits to species-range shifts are suggested by climate velocity.
Burrows, Michael T; Schoeman, David S; Richardson, Anthony J; Molinos, Jorge García; Hoffmann, Ary; Buckley, Lauren B; Moore, Pippa J; Brown, Christopher J; Bruno, John F; Duarte, Carlos M; Halpern, Benjamin S; Hoegh-Guldberg, Ove; Kappel, Carrie V; Kiessling, Wolfgang; O'Connor, Mary I; Pandolfi, John M; Parmesan, Camille; Sydeman, William J; Ferrier, Simon; Williams, Kristen J; Poloczanska, Elvira S
2014-03-27
The reorganization of patterns of species diversity driven by anthropogenic climate change, and the consequences for humans, are not yet fully understood or appreciated. Nevertheless, changes in climate conditions are useful for predicting shifts in species distributions at global and local scales. Here we use the velocity of climate change to derive spatial trajectories for climatic niches from 1960 to 2009 (ref. 7) and from 2006 to 2100, and use the properties of these trajectories to infer changes in species distributions. Coastlines act as barriers and locally cooler areas act as attractors for trajectories, creating source and sink areas for local climatic conditions. Climate source areas indicate where locally novel conditions are not connected to areas where similar climates previously occurred, and are thereby inaccessible to climate migrants tracking isotherms: 16% of global surface area for 1960 to 2009, and 34% of ocean for the 'business as usual' climate scenario (representative concentration pathway (RCP) 8.5) representing continued use of fossil fuels without mitigation. Climate sink areas are where climate conditions locally disappear, potentially blocking the movement of climate migrants. Sink areas comprise 1.0% of ocean area and 3.6% of land and are prevalent on coasts and high ground. Using this approach to infer shifts in species distributions gives global and regional maps of the expected direction and rate of shifts of climate migrants, and suggests areas of potential loss of species richness.
Olowoporoku, Dotun; Hayes, Enda; Longhurst, James; Parkhurst, Graham
2012-06-30
Regardless of its intent and purposes, the first decade of the Local Air Quality Management (LAQM) framework had little or no effect in reducing traffic-related air pollution in the UK. Apart from the impact of increased traffic volumes, the major factor attributed to this failure is that of policy disconnect between the process of diagnosing air pollution and its management, thereby limiting the capability of local authorities to control traffic-related sources of air pollution. Integrating air quality management into the Local Transport Plan (LTP) process therefore presents opportunities for enabling political will, funding and a joined-up policy approach to reduce this limitation. However, despite the increased access to resources for air quality measures within the LTP process, there are local institutional, political and funding constraints which reduce the impact of these policy interventions on air quality management. This paper illustrates the policy implementation gaps between central government policy intentions and the local government process by providing evidence of the deprioritisation of air quality management compared to the other shared priorities in the LTP process. We draw conclusions on the policy and practice of integrating air quality management into transport planning. The evidence thereby indicates the need for a policy shift from a solely localised hotspot management approach, in which the LAQM framework operates, to a more holistic management of vehicular emissions within wider spatial administrative areas. Copyright © 2012 Elsevier Ltd. All rights reserved.
Huang, Ming-Xiong; Anderson, Bill; Huang, Charles W.; Kunde, Gerd J.; Vreeland, Erika C.; Huang, Jeffrey W.; Matlashov, Andrei N.; Karaulanov, Todor; Nettles, Christopher P.; Gomez, Andrew; Minser, Kayla; Weldon, Caroline; Paciotti, Giulio; Harsh, Michael; Lee, Roland R.; Flynn, Edward R.
2017-01-01
Superparamagnetic Relaxometry (SPMR) is a highly sensitive technique for the in vivo detection of tumor cells and may improve early stage detection of cancers. SPMR employs superparamagnetic iron oxide nanoparticles (SPION). After a brief magnetizing pulse is used to align the SPION, SPMR measures the time decay of SPION using Superconducting Quantum Interference Device (SQUID) sensors. Substantial research has been carried out in developing the SQUID hardware and in improving the properties of the SPION. However, little research has been done in the pre-processing of sensor signals and post-processing source modeling in SPMR. In the present study, we illustrate new pre-processing tools that were developed to: 1) remove trials contaminated with artifacts, 2) evaluate and ensure that a single decay process associated with bounded SPION exists in the data, 3) automatically detect and correct flux jumps, and 4) accurately fit the sensor signals with different decay models. Furthermore, we developed an automated approach based on a multi-start dipole imaging technique to obtain the locations and magnitudes of multiple magnetic sources, without initial guesses from the users. A regularization process was implemented to solve the ambiguity issue related to the SPMR source variables. A procedure based on a reduced chi-square cost function was introduced to objectively obtain an adequate number of dipoles that describe the data. The new pre-processing tools and multi-start source imaging approach have been successfully evaluated using phantom data. In conclusion, these tools and the multi-start source modeling approach substantially enhance the accuracy and sensitivity in detecting and localizing sources from the SPMR signals. Furthermore, the multi-start approach with regularization provided robust and accurate solutions for a poor-SNR condition corresponding to an SPMR detection sensitivity on the order of 1000 cells. We believe such algorithms will help establish industrial standards for SPMR when applying the technique in pre-clinical and clinical settings. PMID:28072579
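A hedged sketch of a multi-start fit for a single magnetic point dipole, keeping the start with the lowest reduced chi-square; the SPMR pipeline described above extends this to multiple dipoles and a regularized inverse, which are omitted here, and the sensor geometry, search bounds, and noise level are illustrative.

```python
# Sketch: multi-start fit of one magnetic point dipole scored by reduced chi-square.
import numpy as np
from scipy.optimize import least_squares

MU0_OVER_4PI = 1e-7  # T m / A

def dipole_bz(params, sensors):
    """Bz at each sensor from a dipole at (x, y, z) with moment (mx, my, mz)."""
    pos, m = params[:3], params[3:]
    r = sensors - pos
    d = np.linalg.norm(r, axis=1, keepdims=True)
    B = MU0_OVER_4PI * (3.0 * r * np.sum(r * m, axis=1, keepdims=True) / d**2 - m) / d**3
    return B[:, 2]

def multi_start_fit(sensors, data, sigma, n_starts=20, seed=0):
    rng = np.random.default_rng(seed)
    best_x, best_chi2 = None, np.inf
    for _ in range(n_starts):
        x0 = np.concatenate([rng.uniform(-0.05, 0.05, 3),    # position guess (m)
                             rng.normal(0.0, 1e-8, 3)])      # moment guess (A m^2)
        res = least_squares(lambda p: (dipole_bz(p, sensors) - data) / sigma, x0)
        chi2 = np.sum(res.fun ** 2) / (data.size - 6)         # reduced chi-square
        if chi2 < best_chi2:
            best_x, best_chi2 = res.x, chi2
    return best_x, best_chi2
```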
The quandary of local people—Park relations in Nepal's Royal Chitwan National Park
NASA Astrophysics Data System (ADS)
Nepal, Sanjay K.; Weber, Karl E.
1995-11-01
This paper analyzes five major causes of park-people conflicts that have occurred in Nepal's Royal Chitwan National Park. The causes include illegal transactions of forest products from the park, livestock grazing in the park, illegal hunting and fishing, crop damage, and threats to human and animal life caused by wild animals from the park. The conflicts indicate a reciprocal relationship between the park and local people. They reflect the attitudes of local people and representatives of the park authority whose priorities and objectives largely diverge. The results show that people settled adjacent to the park are heavily dependent on its resources. Even in places where some, albeit few, alternative sources exist, local people continue to trespass the park boundary as these sources are inadequate to ensure the fulfillment of local people's resource needs. Illegal transactions of resources continue throughout the year; however, they are less intense during summer due to flooding caused by the Rapti River, which forms the park boundary towards the northern section where this study was conducted. The frequency of local people's visits to the park is mainly determined by their age, distance between homesteads and park, and volume of crop loss caused by wild animals. Crop damage is a function of size of landholding, distance, and frequency of crop raid. Local people claim that they have no intention of letting their livestock graze in the park; however, the dense vegetation of the park attracts livestock grazing on riverbanks just outside the open park boundary. Many head of livestock are killed by carnivores of the park. Human casualties are mainly caused by sloth bear (Melursus ursinus), tiger (Panthera tigris), wild pig (Sus scrofa), and rhinoceros (Rhinoceros unicornis). There had been some earlier attempts to reconcile the conflicts by offering local people different kinds of compensation; however, these measures were unsuccessful. An integrated approach is essential if efforts to resolve the park-people conflicts are to succeed. The government is in the process of launching a project that aims to resolve the inherent problems with such an approach. Suggestions are made to incorporate some key elements, such as maintaining effective communication between various parties and the potential for wildlife conservation among local people.
"Closing the Loop": Overcoming barriers to locally sourcing food in Fort Collins, Colorado
NASA Astrophysics Data System (ADS)
DeMets, C. M.
2012-12-01
Environmental sustainability has become a focal point for many communities in recent years, and restaurants are seeking creative ways to become more sustainable. As many chefs realize, sourcing food locally is an important step towards sustainability and towards building a healthy, resilient community. Review of literature on sustainability in restaurants and the local food movement revealed that chefs face many barriers to sourcing their food locally, but that there are also many solutions for overcoming these barriers that chefs are in the early stages of exploring. Therefore, the purpose of this research is to identify barriers to local sourcing and investigate how some restaurants are working to overcome those barriers in the city of Fort Collins, Colorado. To do this, interviews were conducted with four subjects who guide purchasing decisions for restaurants in Fort Collins. Two of these restaurants have created successful solutions and are able to source most of their food locally. The other two are interested in and working towards sourcing locally but have not yet been able to overcome barriers, and therefore only source a few local items. Findings show that there are four barriers and nine solutions commonly identified by each of the subjects. The research found differences between those who source most of their food locally and those who have not made as much progress in local sourcing. Based on these results, two solution flowcharts were created, one for primary barriers and one for secondary barriers, for restaurants to assess where they are in the local food chain and how they can more successfully source food locally. As there are few explicit connections between this research question and climate change, it is important to consider the implicit connections that motivate and justify this research. The question of whether or not greenhouse gas emissions are lower for locally sourced food is a topic of much debate, and while there are major developments for quantitatively determining a generalized answer, it is "currently impossible to state categorically whether or not local food systems emit fewer greenhouse gases than non-local food systems" (Edwards-Jones et al, 2008). Even so, numerous researchers have shown that "83 percent of emissions occur before food even leaves the farm gate" (Weber and Matthews, Garnett, cited in DeWeerdt, 2011); while this doesn't provide any information in terms of local vs. non-local, it is significant when viewed in light of the fact that local farmers tend to have much greater transparency and accountability in their agricultural practices. In other words, "a farmer who sells in the local food economy might be more likely to adopt or continue sustainable practices in order to meet…customer demand" (DeWeerdt, 2011), among other reasons such as environmental concern and desire to support the local economy (DeWeerdt, 2009). In identifying solutions to barriers to locally sourcing food, this research will enable restaurants to overcome these barriers and source their food locally, thereby supporting farmers and their ability to maintain sustainable practices.
Liu, Xiaobo
2015-07-01
The U.S. Environmental Protection Agency's (EPA) Motor Vehicle Emission Simulator (MOVES) is required by the EPA to replace Mobile 6 as an official on-road emission model. Incorporated with annual vehicle miles traveled (VMT) by Highways Performance Monitoring System (HPMS) vehicle class, MOVES allocates VMT from HPMS to MOVES source (vehicle) types and calculates emission burden by MOVES source type. However, the calculated running emission burden by MOVES source type may deviate from the actual emission burden because of MOVES source population, specifically the population fraction by MOVES source type within each HPMS vehicle class. The deviation is also the result of the use of the universal set of parameters, i.e., relative mileage accumulation rate (relativeMAR), packaged in the MOVES default database. This paper presents a novel approach by adjusting the relativeMAR to eliminate the impact of MOVES source population on running exhaust emission and to keep start and evaporative emissions unchanged for both MOVES2010b and MOVES2014. Results from MOVES runs using this approach indicated significant improvements in VMT distribution and emission burden estimation for each MOVES source type. Using this approach, the deviation of VMT by MOVES source type is reduced from 12% to less than 0.05% for MOVES2010b and from 50% to less than 0.2% for MOVES2014, except for MOVES source type 53. Source type 53 still shows about 30% deviation. The improvement of VMT distribution results in the elimination of emission burden deviation for each MOVES source type. For MOVES2010b, the deviation of emission burdens decreases from -12% for particulate matter less than 2.5 μm (PM2.5) and -9% for carbon monoxide (CO) to less than 0.002%. For MOVES2014, it drops from 80% for CO and 97% for PM2.5 to 0.006%. This approach is developed to more accurately estimate the total emission burdens using EPA's MOVES, both MOVES2010b and MOVES2014, by redistributing VMT by HPMS class to MOVES source type on the basis of a comprehensive traffic study of local link-by-link VMT broken down by MOVES source type.
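A toy sketch of the allocation step being adjusted: HPMS-class VMT is split among MOVES source types in proportion to population times relativeMAR, so scaling relativeMAR by the ratio of a locally observed VMT share to the default share reproduces the local distribution. The numbers below are illustrative, not MOVES defaults, and the actual MOVES tables involve additional dimensions (age, road type, year) not shown.

```python
# Sketch: rescaling relativeMAR so the VMT split matches a locally observed target.
import numpy as np

def allocate_vmt(hpms_vmt, population, relative_mar):
    weights = population * relative_mar
    return hpms_vmt * weights / weights.sum()

# One HPMS class split between two MOVES source types (toy values).
population = np.array([8.0e5, 2.0e5])
mar_default = np.array([1.00, 1.10])
default_share = allocate_vmt(1.0, population, mar_default)

target_share = np.array([0.82, 0.18])               # from a local traffic study
mar_adjusted = mar_default * target_share / default_share
print(allocate_vmt(1.0, population, mar_adjusted))   # reproduces the target split
```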
A generalized formulation for noise-based seismic velocity change measurements
NASA Astrophysics Data System (ADS)
Gómez-García, C.; Brenguier, F.; Boué, P.; Shapiro, N.; Droznin, D.; Droznina, S.; Senyukov, S.; Gordeev, E.
2017-12-01
The observation of continuous seismic velocity changes is a powerful tool for detecting seasonal variations in crustal structure, volcanic unrest, co- and post-seismic evolution of stress in fault areas or the effects of fluid injection. The standard approach for measuring such velocity changes relies on comparison of travel times in the coda of a set of seismic signals, usually noise-based cross-correlations retrieved at different dates, and a reference trace, usually an average over dates. A good stability in both space and time of the noise sources is then the main assumption for reliable measurements. Unfortunately, these conditions are often not fulfilled, as happens when ambient-noise sources are non-stationary, such as the emissions of low-frequency volcanic tremors. We propose a generalized formulation for retrieving continuous time series of noise-based seismic velocity changes without any arbitrary reference cross-correlation function. We set up a general framework for future applications of this technique by performing synthetic tests. In particular, we study the reliability of the retrieved velocity changes in the case of seasonal-type trends, transient effects (similar to those produced as a result of an earthquake or a volcanic eruption) and sudden velocity drops and recoveries as the effects of transient local source emissions. Finally, we apply this approach to a real dataset of noise cross-correlations. We choose the Klyuchevskoy volcanic group (Kamchatka) as a case study where the recorded wavefield is hampered by loss of data and dominated by strongly localized volcanic tremor sources. Despite this wavefield contamination, we retrieve clear seismic velocity drops associated with the eruptions of the Klyuchevskoy and Tolbachik volcanoes in 2010 and 2012, respectively.
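A hedged sketch of one reference-free formulation: relative velocity changes are measured between every pair of daily correlation functions by the stretching method and then inverted in a least-squares sense for a single dv/v time series, with the arbitrary datum removed by demeaning. This is a generic implementation of the idea, not necessarily the authors' exact algorithm, and the stretching grid is illustrative.

```python
# Sketch: pairwise stretching measurements inverted for a dv/v time series.
import numpy as np

def stretch_delay(trace_a, trace_b, t, eps_grid=np.linspace(-0.01, 0.01, 201)):
    """Relative stretch between two coda traces by grid-search correlation."""
    cc = [np.corrcoef(trace_a, np.interp(t * (1.0 + e), t, trace_b))[0, 1]
          for e in eps_grid]
    return eps_grid[int(np.argmax(cc))]

def invert_pairwise(traces, t):
    n = len(traces)
    pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
    d = np.array([stretch_delay(traces[i], traces[j], t) for i, j in pairs])
    G = np.zeros((len(pairs), n))
    for k, (i, j) in enumerate(pairs):
        G[k, i], G[k, j] = -1.0, 1.0               # measurement = dv/v(j) - dv/v(i)
    m, *_ = np.linalg.lstsq(G, d, rcond=None)
    return m - m.mean()                             # remove the arbitrary datum
```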
Bachiller, Alejandro; Romero, Sergio; Molina, Vicente; Alonso, Joan F; Mañanas, Miguel A; Poza, Jesús; Hornero, Roberto
2015-12-01
The present study investigates the neural substrates underlying cognitive processing in schizophrenia (Sz) patients. To this end, an auditory 3-stimulus oddball paradigm was used to identify P3a and P3b components, elicited by rare-distractor and rare-target tones, respectively. Event-related potentials (ERP) were recorded from 31 Sz patients and 38 healthy controls. The P3a and P3b brain-source generators were identified by time-averaging of low-resolution brain electromagnetic tomography (LORETA) current density images. In contrast with the commonly used fixed window of interest (WOI), we proposed to apply an adaptive WOI, which takes into account subjects' P300 latency variability. Our results showed different P3a and P3b source activation patterns in both groups. P3b sources included frontal, parietal and limbic lobes, whereas P3a response generators were localized over bilateral frontal and superior temporal regions. These areas have been related to the discrimination of auditory stimulus and to the inhibition (P3a) or the initiation (P3b) of motor response in a cognitive task. In addition, differences in source localization between Sz and control groups were observed. Sz patients showed lower P3b source activity in bilateral frontal structures and the cingulate. P3a generators were less widespread for Sz patients than for controls in right superior, medial and middle frontal gyrus. Our findings suggest that target and distractor processing involves distinct attentional subsystems, both being altered in Sz. Hence, the study of neuroelectric brain information can provide further insights to understand cognitive processes and underlying mechanisms in Sz. Copyright © 2015 Elsevier B.V. All rights reserved.
Facilitating Follow-up of LIGO–Virgo Events Using Rapid Sky Localization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Hsin-Yu; Holz, Daniel E.
We discuss an algorithm for accurate and very low-latency (<1 s) localization of gravitational-wave (GW) sources using only the relative times of arrival, relative phases, and relative signal-to-noise ratios for pairs of detectors. The algorithm is independent of distances and masses to leading order, and can be generalized to all discrete (as opposed to stochastic and continuous) sources detected by ground-based detector networks. Our approach is similar to that of BAYESTAR with a few modifications, which result in increased computational efficiency. For the LIGO two-detector configuration (Hanford+Livingston) operating in O1 we find a median 50% (90%) localization of 143 deg² (558 deg²) for binary neutron stars. We use our algorithm to explore the improvement in localization resulting from loud events, finding that the loudest out of the first 4 (or 10) events reduces the median sky-localization area by a factor of 1.9 (3.0) for the case of two GW detectors, and 2.2 (4.0) for three detectors. We also consider the case of multi-messenger joint detections in both the gravitational and the electromagnetic radiation, and show that joint localization can offer significant improvements (e.g., in the case of LIGO and Fermi/GBM joint detections). We show that a prior on the binary inclination, potentially arising from GRB observations, has a negligible effect on GW localization. Our algorithm is simple, fast, and accurate, and may be of particular utility in the development of multi-messenger astronomy.
45 CFR 2551.92 - What are project funding requirements?
Code of Federal Regulations, 2010 CFR
2010-10-01
... local funding sources during the first three years of operations; or (2) An economic downturn, the... sources of local funding support; or (3) The unexpected discontinuation of local support from one or more... local funding sources during the first three years of operations; (ii) An economic downturn, the...
45 CFR 2552.92 - What are project funding requirements?
Code of Federal Regulations, 2010 CFR
2010-10-01
... local funding sources during the first three years of operations; or (2) An economic downturn, the... sources of local funding support; or (3) The unexpected discontinuation of local support from one or more... the development of local funding sources during the first three years of operations; or (ii) An...
NASA Astrophysics Data System (ADS)
Trugman, Daniel T.; Shearer, Peter M.
2017-04-01
Earthquake source spectra contain fundamental information about the dynamics of earthquake rupture. However, the inherent tradeoffs in separating source and path effects, when combined with limitations in recorded signal bandwidth, make it challenging to obtain reliable source spectral estimates for large earthquake data sets. We present here a stable and statistically robust spectral decomposition method that iteratively partitions the observed waveform spectra into source, receiver, and path terms. Unlike previous methods of its kind, our new approach provides formal uncertainty estimates and does not assume self-similar scaling in earthquake source properties. Its computational efficiency allows us to examine large data sets (tens of thousands of earthquakes) that would be impractical to analyze using standard empirical Green's function-based approaches. We apply the spectral decomposition technique to P wave spectra from five areas of active contemporary seismicity in Southern California: the Yuha Desert, the San Jacinto Fault, and the Big Bear, Landers, and Hector Mine regions of the Mojave Desert. We show that the source spectra are generally consistent with an increase in median Brune-type stress drop with seismic moment but that this observed deviation from self-similar scaling is both model dependent and varies in strength from region to region. We also present evidence for significant variations in median stress drop and stress drop variability on regional and local length scales. These results both contribute to our current understanding of earthquake source physics and have practical implications for the next generation of ground motion prediction assessments.
Terrain clutter simulation using physics-based scattering model and digital terrain profile data
NASA Astrophysics Data System (ADS)
Park, James; Johnson, Joel T.; Ding, Kung-Hau; Kim, Kristopher; Tenbarge, Joseph
2015-05-01
Localization of a wireless capsule endoscope finds many clinical applications, from diagnostics to therapy. There are two potential approaches to electromagnetic-wave-based localization: a) signal propagation model based localization using a priori information about the person's dielectric channels, and b) the recently developed microwave imaging based localization, which uses no a priori information about the person's dielectric channels. In this paper, we study the second approach in terms of a variety of frequencies and signal-to-noise ratios for localization accuracy. To this end, we select a 2-D anatomically realistic numerical phantom for microwave imaging at different frequencies. The selected frequencies are 13.56 MHz, 431.5 MHz, 920 MHz, and 2380 MHz, which are typically considered for medical applications. Microwave imaging of a phantom provides an electromagnetic model with the electrical properties (relative permittivity and conductivity) of the internal parts of the body and can serve as a foundation for localization of an in-body RF source. Low-frequency imaging at 13.56 MHz provides a low-resolution image with high contrast in the dielectric properties. At high frequencies, however, the imaging algorithm is able to image only the outer boundaries of the tissues because of the low penetration depth, as higher frequency means higher attenuation. Furthermore, the recently developed localization method based on microwave imaging is used to estimate the localization accuracy at different frequencies and signal-to-noise ratios. Statistical evaluation of the localization error is performed using the cumulative distribution function (CDF). Based on our results, we conclude that the localization accuracy is minimally affected by the frequency or the noise. However, the choice of frequency becomes critical if the purpose of the method is to image the internal parts of the body for tumor and/or cancer detection.
Communal Cooperation in Sensor Networks for Situation Management
NASA Technical Reports Server (NTRS)
Jones, Kennie H.; Lodding, Kenneth N.; Olariu, Stephan; Wilson, Larry; Xin, Chunsheng
2006-01-01
Situation management is a rapidly evolving science where managed sources are processed as real-time streams of events and fused in a way that maximizes comprehension, thus enabling better decisions for action. Sensor networks provide a new technology that promises ubiquitous input and action throughout an environment, which can substantially improve the information available to the process. Here we describe a NASA program that requires improvements in sensor networks and situation management. We present an approach for massively deployed sensor networks that does not rely on centralized control but is founded on lessons learned from the way biological ecosystems are organized. In this approach, fully distributed data aggregation and integration can be performed in a scalable fashion where individual motes operate based on local information, making local decisions that achieve globally meaningful results. This exemplifies the robust, fault-tolerant infrastructure required for successful situation management systems.
NASA Astrophysics Data System (ADS)
Lenderink, Geert; Attema, Jisk
2015-08-01
Scenarios of future changes in small-scale precipitation extremes for the Netherlands are presented. These scenarios are based on a new approach whereby changes in precipitation extremes are set proportional to the change in water vapor amount near the surface, as measured by the 2 m dew point temperature. This simple scaling framework allows the integration of information derived from: (i) observations, (ii) a new, unprecedentedly large 16-member ensemble of simulations with the regional climate model RACMO2 driven by EC-Earth, and (iii) short-term integrations with the non-hydrostatic model Harmonie. Scaling constants are based on subjective weighting (expert judgement) of the three different information sources, also taking into account previously published work. In all scenarios local precipitation extremes increase with warming, yet with broad uncertainty ranges expressing incomplete knowledge of how convective clouds and the atmospheric mesoscale circulation will react to climate change.
An iterative method for the localization of a neutron source in a large box (container)
NASA Astrophysics Data System (ADS)
Dubinski, S.; Presler, O.; Alfassi, Z. B.
2007-12-01
The localization of an unknown neutron source in a bulky box was studied. This can be used for the inspection of cargo, to prevent the smuggling of neutron and α emitters. It is important to localize the source from the outside for safety reasons. Source localization is necessary in order to determine its activity. A previous study showed that, by using six detectors, three on each parallel face of the box (460 × 420 × 200 mm³), the location of the source can be found with an average distance of 4.73 cm between the real source position and the calculated one, and a maximal distance of about 9 cm. Accuracy was improved in this work by applying an iteration method based on four fixed detectors and successive repositioning of an external calibrating source. The initial position of the calibrating source is in the plane of detectors 1 and 2. This method finds the unknown source location with an average distance of 0.78 cm between the real source position and the calculated one, and a maximum distance of 3.66 cm for the same box. For larger boxes, localization without iterations requires an increase in the number of detectors, while localization with iterations requires only an increase in the number of iteration steps. In addition to source localization, two methods for determining the activity of the unknown source were also studied.
Localization of extended brain sources from EEG/MEG: the ExSo-MUSIC approach.
Birot, Gwénaël; Albera, Laurent; Wendling, Fabrice; Merlet, Isabelle
2011-05-01
We propose a new MUSIC-like method, called 2q-ExSo-MUSIC (q ≥ 1). This method is an extension of the 2q-MUSIC (q ≥ 1) approach for solving the EEG/MEG inverse problem when spatially extended neocortical sources ("ExSo") are considered. It introduces a novel ExSo-MUSIC principle. The novelty is two-fold: i) a parameterization of the spatial source distribution that leads to an appropriate metric in the context of distributed brain sources, and ii) an original, efficient and low-cost way of optimizing this metric. In 2q-ExSo-MUSIC, the possible use of higher-order statistics (q ≥ 2) offers better robustness with respect to Gaussian noise of unknown spatial coherence and to modeling errors. As a result, we reduce the penalizing effects of both the background cerebral activity, which can be seen as Gaussian and spatially correlated noise, and the modeling errors induced by the non-exact resolution of the forward problem. Computer results on simulated EEG signals obtained with physiologically relevant models of both the sources and the volume conductor show a highly increased performance of our 2q-ExSo-MUSIC method as compared to the classical 2q-MUSIC algorithms. Copyright © 2011 Elsevier Inc. All rights reserved.
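For readers unfamiliar with the family of methods being extended, the sketch below shows a conventional second-order (q = 1) MUSIC scan of candidate topographies against the estimated signal subspace; it is a generic baseline, not 2q-ExSo-MUSIC, and the lead-field inputs are placeholders.

    import numpy as np

    def music_scan(data, leadfields, n_sources):
        # data: (n_sensors, n_samples); leadfields: (n_candidates, n_sensors).
        # Returns a subspace-correlation score per candidate (larger = more source-like).
        R = data @ data.T / data.shape[1]           # sample covariance
        eigvals, eigvecs = np.linalg.eigh(R)        # eigenvalues in ascending order
        signal_space = eigvecs[:, -n_sources:]      # estimated signal subspace
        scores = []
        for a in leadfields:
            a = a / np.linalg.norm(a)               # normalized candidate topography
            scores.append(np.linalg.norm(signal_space.T @ a))
        return np.array(scores)

Candidates whose score approaches 1 lie almost entirely within the signal subspace and would be retained as putative source locations.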
Holmes, Charles B.; Sikazwe, Izukanji; Raelly, Roselyne; Freeman, Bethany; Wambulawae, Inonge; Silwizya, Geoffrey; Topp, Stephanie; Chilengi, Roma; Henostroza, German; Kapambwe, Sharon; Simbeye, Darius; Sibajene, Sheila; Chi, Harmony; Godfrey, Katy; Chi, Benjamin; Moore, Carolyn Bolton
2014-01-01
Multiple funding sources provide research and program implementation organizations a broader base of funding and facilitate synergy, but also entail challenges that include varying stakeholder expectations, unaligned grant cycles, and highly variable reporting requirements. Strong governance and strategic planning are essential to ensure alignment of goals and agendas. Systems to track budgets and outputs as well as procurement and human resources are required. A major goal is to transition leadership and operations to local ownership. This article details successful approaches used by the newly independent non-governmental organization, the Centre for Infectious Disease Research in Zambia (CIDRZ). PMID:24321983
Babcock, Hazen P
2018-01-29
This work explores the use of industrial grade CMOS cameras for single molecule localization microscopy (SMLM). We show that industrial grade CMOS cameras approach the performance of scientific grade CMOS cameras at a fraction of the cost. This makes it more economically feasible to construct high-performance imaging systems with multiple cameras that are capable of a diversity of applications. In particular we demonstrate the use of industrial CMOS cameras for biplane, multiplane and spectrally resolved SMLM. We also provide open-source software for simultaneous control of multiple CMOS cameras and for the reduction of the movies that are acquired to super-resolution images.
Quantum transport through disordered 1D wires: Conductance via localized and delocalized electrons
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gopar, Víctor A.
Coherent electronic transport through disordered systems, like quantum wires, is a topic of fundamental and practical interest. In particular, the exponential localization of electron wave functions due to the presence of disorder (Anderson localization) has been widely studied. In fact, Anderson localization is not a phenomenon exclusive to electrons; it has also been observed in microwave and acoustic experiments, photonic materials, cold atoms, etc. Nowadays, many properties of electronic transport in quantum wires have been successfully described within a scaling approach to Anderson localization. On the other hand, anomalous localization or delocalization is, in relation to the Anderson problem, a less studied phenomenon, although one can find signatures of anomalous localization in very different systems in nature. In the problem of electronic transport, a source of delocalization may come from symmetries present in the system and particular disorder configurations, like the so-called Lévy-type disorder. We have developed a theoretical model to describe the statistical properties of transport when electron wave functions are delocalized. In particular, we show that only two physical parameters determine the complete conductance distribution.
Electric Field Encephalography as a tool for functional brain research: a modeling study.
Petrov, Yury; Sridhar, Srinivas
2013-01-01
We introduce the notion of Electric Field Encephalography (EFEG) based on measuring electric fields of the brain and demonstrate, using computer modeling, that given the appropriate electric field sensors this technique may have significant advantages over the current EEG technique. Unlike EEG, EFEG can be used to measure brain activity in a contactless and reference-free manner at significant distances from the head surface. Principal component analysis using simulated cortical sources demonstrated that electric field sensors positioned 3 cm away from the scalp and characterized by the same signal-to-noise ratio as EEG sensors provided the same number of uncorrelated signals as scalp EEG. When positioned on the scalp, EFEG sensors provided 2-3 times more uncorrelated signals. This significant increase in the number of uncorrelated signals can be used for more accurate assessment of brain states for non-invasive brain-computer interfaces and neurofeedback applications. It also may lead to major improvements in source localization precision. Source localization simulations for the spherical and Boundary Element Method (BEM) head models demonstrated that the localization errors are reduced two-fold when using electric fields instead of electric potentials. We have identified several techniques that could be adapted for the measurement of the electric field vector required for EFEG and anticipate that this study will stimulate new experimental approaches to utilize this new tool for functional brain research.
Kernel temporal enhancement approach for LORETA source reconstruction using EEG data.
Torres-Valencia, Cristian A; Santamaria, M Claudia Joana; Alvarez, Mauricio A
2016-08-01
Reconstruction of brain sources from magnetoencephalography and electroencephalography (M/EEG) data is a well-known problem in the neuroengineering field. An inverse problem must be solved, and several methods have been proposed. Low Resolution Electromagnetic Tomography (LORETA) and its proposed variants, standardized LORETA (sLORETA) and standardized weighted LORETA (swLORETA), solve the inverse problem following a non-parametric approach, that is, by setting dipoles over the whole brain domain in order to estimate the dipole positions from the M/EEG data while assuming some spatial priors. Errors in the reconstruction of sources arise from the low spatial resolution of the LORETA framework and the influence of noise in the observed data. In this work, a kernel temporal enhancement (kTE) is proposed as a preprocessing stage of the data that, in combination with the swLORETA method, improves the source reconstruction. The results are quantified in terms of three dipole localization error metrics, and the swLORETA + kTE strategy obtained the best results across different signal-to-noise ratios (SNR) in random dipole simulations from synthetic EEG data.
Sound source localization identification accuracy: Envelope dependencies.
Yost, William A
2017-07-01
Sound source localization accuracy as measured in an identification procedure in a front azimuth sound field was studied for click trains, modulated noises, and a modulated tonal carrier. Sound source localization accuracy was determined as a function of the number of clicks in a 64 Hz click train and click rate for a 500 ms duration click train. The clicks were either broadband or high-pass filtered. Sound source localization accuracy was also measured for a single broadband filtered click and compared to a similar broadband filtered, short-duration noise. Sound source localization accuracy was determined as a function of sinusoidal amplitude modulation and the "transposed" process of modulation of filtered noises and a 4 kHz tone. Different rates (16 to 512 Hz) of modulation (including unmodulated conditions) were used. Providing modulation for filtered click stimuli, filtered noises, and the 4 kHz tone had, at most, a very small effect on sound source localization accuracy. These data suggest that amplitude modulation, while providing information about interaural time differences in headphone studies, does not have much influence on sound source localization accuracy in a sound field.
A comprehensive approach to reactive power scheduling in restructured power systems
NASA Astrophysics Data System (ADS)
Shukla, Meera
Financial constraints, regulatory pressure, and the need for more economical power transfers have increased the loading of interconnected transmission systems. As a consequence, power systems have been operated close to their maximum power transfer capability limits, making the system more vulnerable to voltage instability events. The problem of voltage collapse, characterized by a severe local voltage depression, is generally believed to be associated with inadequate VAr support at key buses. The goal of reactive power planning is to maintain a high level of voltage security through installation of properly sized and located reactive sources and their optimal scheduling. In the case of vertically operated power systems, the reactive requirement of the system is normally satisfied by using all of its reactive sources. But in different scenarios of restructured power systems, one may consider a fixed amount of reactive power exchange through tie lines. The reviewed literature suggests a need for optimal scheduling of reactive power generation for a fixed inter-area reactive power exchange. The present work proposed a novel approach for reactive power source placement and a novel approach for its scheduling. The VAr source placement technique was based on the property of system connectivity. This was followed by the development of an optimal reactive power dispatch formulation that facilitated a fixed inter-area tie-line reactive power exchange. This formulation used a Line Flow-Based (LFB) model of power flow analysis. The formulation determined the generation schedule for a fixed inter-area tie-line reactive power exchange. Different operating scenarios were studied to analyze the impact of the VAr management approach for vertically operated and restructured power systems. The system loadability, losses, generation, and the cost of generation were the performance measures used to study the impact of the VAr management strategy. The novel approach was demonstrated on the IEEE 30-bus system.
Three-Dimensional Photoactivated Localization Microscopy with Genetically Expressed Probes
Temprine, Kelsey; York, Andrew G.; Shroff, Hari
2017-01-01
Photoactivated localization microscopy (PALM) and related single-molecule imaging techniques enable biological image acquisition at ~20 nm lateral and ~50–100 nm axial resolution. Although such techniques were originally demonstrated on single imaging planes close to the coverslip surface, recent technical developments now enable the 3D imaging of whole fixed cells. We describe methods for converting a 2D PALM into a system capable of acquiring such 3D images, with a particular emphasis on instrumentation that is compatible with choosing relatively dim, genetically expressed photoactivatable fluorescent proteins (PA-FPs) as PALM probes. After reviewing the basics of 2D PALM, we detail astigmatic and multiphoton imaging approaches well suited to working with PA-FPs. We also discuss the use of open-source localization software appropriate for 3D PALM. PMID:25391803
DOE Office of Scientific and Technical Information (OSTI.GOV)
Neilson, James R.; McQueen, Tyrel M.
With the increased availability of high-intensity time-of-flight neutron and synchrotron X-ray scattering sources that can access wide ranges of momentum transfer, the pair distribution function method has become a standard analysis technique for studying disorder of local coordination spheres and at intermediate atomic separations. In some cases, rational modeling of the total scattering data (Bragg and diffuse) becomes intractable with least-squares approaches, necessitating reverse Monte Carlo simulations using large atomistic ensembles. However, the extraction of meaningful information from the resulting atomistic ensembles is challenging, especially at intermediate length scales. Representational analysis is used here to describe the displacements of atoms in reverse Monte Carlo ensembles from an ideal crystallographic structure in an approach analogous to tight-binding methods. Rewriting the displacements in terms of a local basis that is descriptive of the ideal crystallographic symmetry provides a robust approach to characterizing medium-range order (and disorder) and symmetry breaking in complex and disordered crystalline materials. Lastly, this method enables the extraction of statistically relevant displacement modes (orientation, amplitude and distribution) of the crystalline disorder and provides directly meaningful information in a locally symmetry-adapted basis set that is most descriptive of the crystal chemistry and physics.
Wardrop, N A; Jochem, W C; Bird, T J; Chamberlain, H R; Clarke, D; Kerr, D; Bengtsson, L; Juran, S; Seaman, V; Tatem, A J
2018-04-03
Population numbers at local levels are fundamental data for many applications, including the delivery and planning of services, election preparation, and response to disasters. In resource-poor settings, recent and reliable demographic data at subnational scales can often be lacking. National population and housing census data can be outdated, inaccurate, or missing key groups or areas, while registry data are generally lacking or incomplete. Moreover, at local scales accurate boundary data are often limited, and high rates of migration and urban growth make existing data quickly outdated. Here we review past and ongoing work aimed at producing spatially disaggregated local-scale population estimates, and discuss how new technologies are now enabling robust and cost-effective solutions. Recent advances in the availability of detailed satellite imagery, geopositioning tools for field surveys, statistical methods, and computational power are enabling the development and application of approaches that can estimate population distributions at fine spatial scales across entire countries in the absence of census data. We outline the potential of such approaches as well as their limitations, emphasizing the political and operational hurdles for acceptance and sustainable implementation of new approaches, and the continued importance of traditional sources of national statistical data. Copyright © 2018 the Author(s). Published by PNAS.
Karanth, Kota Ullas; Gopalaswamy, Arjun M.; Kumar, Narayanarao Samba; Vaidyanathan, Srinivas; Nichols, James D.; MacKenzie, Darryl I.
2011-01-01
1. Assessing spatial distributions of threatened large carnivores at landscape scales poses formidable challenges because of their rarity and elusiveness. As a consequence of logistical constraints, investigators typically rely on sign surveys. Most survey methods, however, do not explicitly address the central problem of imperfect detections of animal signs in the field, leading to underestimates of true habitat occupancy and distribution. 2. We assessed habitat occupancy for a tiger Panthera tigris metapopulation across a c. 38,000 km² landscape in India, employing a spatially replicated survey to explicitly address imperfect detections. Ecological predictions about tiger presence were confronted with sign detection data generated from occupancy sampling of 205 sites, each of 188 km². 3. A recent occupancy model that considers Markovian dependency among sign detections on spatial replicates performed better than the standard occupancy model (ΔAIC = 184.9). A formulation of this model that fitted the data best showed that density of ungulate prey and levels of human disturbance were key determinants of local tiger presence. Model averaging resulted in a replicate-level detection probability of 0.17 (0.17) for signs and a tiger habitat occupancy estimate of 0.665 (0.0857), or 14,076 (1814) km² of potential habitat out of 21,167 km². In contrast, a traditional presence-versus-absence approach underestimated occupancy by 47%. Maps of probabilities of local site occupancy clearly identified tiger source populations at higher densities and matched observed tiger density variations, suggesting their potential utility for population assessments at landscape scales. 4. Synthesis and applications. Landscape-scale sign surveys can efficiently assess large carnivore spatial distributions and elucidate the factors governing their local presence, provided ecological and observation processes are both explicitly modelled. Occupancy sampling using spatial replicates can be used to reliably and efficiently identify tiger population sources and help monitor metapopulations. Our results reinforce earlier findings that prey depletion and human disturbance are key drivers of local tiger extinctions and that tigers can persist even in human-dominated landscapes through effective protection of source populations. Our approach facilitates efficient targeting of tiger conservation interventions and, more generally, provides a basis for the reliable integration of large carnivore monitoring data between local and landscape scales.
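As a point of reference for the modelling described above, the following sketch fits a standard single-season occupancy model by maximum likelihood; it omits the Markovian dependency among spatial replicates and the prey/disturbance covariates used in the study, and the detection histories below are invented.

    import numpy as np
    from scipy.optimize import minimize
    from scipy.special import expit

    def neg_log_lik(params, detections, k):
        # Standard single-season occupancy likelihood (the binomial coefficient is
        # omitted because it does not affect the maximum-likelihood estimates).
        psi, p = expit(params)                      # map parameters into (0, 1)
        d = np.asarray(detections)
        lik = np.where(d > 0,
                       psi * p**d * (1 - p)**(k - d),
                       psi * (1 - p)**k + (1 - psi))
        return -np.sum(np.log(lik))

    detections = [0, 2, 0, 1, 3, 0, 0, 4]           # invented sign counts per site, k = 5 replicates
    fit = minimize(neg_log_lik, x0=[0.0, 0.0], args=(detections, 5))
    psi_hat, p_hat = expit(fit.x)
    print(f"occupancy ~ {psi_hat:.2f}, per-replicate detection ~ {p_hat:.2f}")

The naive "proportion of sites with any sign" estimate ignores the detection probability p and therefore underestimates occupancy, which is the bias the abstract quantifies.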
NASA Astrophysics Data System (ADS)
Yuan, Shihao; Fuji, Nobuaki; Singh, Satish; Borisov, Dmitry
2017-06-01
We present a methodology to invert seismic data for a localized area by combining a source-side wavefield injection and a receiver-side extrapolation method. Despite the high resolving power of seismic full waveform inversion, the computational cost of practical-scale elastic or viscoelastic waveform inversion remains a heavy burden. This burden can be much more severe for time-lapse surveys, which require real-time seismic imaging on a daily or weekly basis. Besides, changes in the structure during time-lapse surveys are likely to occur in a small area rather than the whole region of a seismic experiment, such as an oil and gas reservoir or CO2 injection wells. We thus propose an approach that allows effective and quantitative imaging of localized structural changes located far from both the source and receiver arrays. In our method, we perform both forward and back propagation only inside the target region. First, we look for the equivalent source expression enclosing the region of interest by using the wavefield injection method. Second, we extrapolate the wavefield from physical receivers located near the Earth's surface or on the ocean bottom to an array of virtual receivers in the subsurface by using a correlation-type representation theorem. In this study, we present various 2-D elastic numerical examples of the proposed method and quantitatively evaluate errors in the obtained models, in comparison to those of conventional full-model inversions. The results show that the proposed localized waveform inversion is not only efficient and robust but also accurate, even in the presence of errors in both initial models and observed data.
Tropospheric ozone using an emission tagging technique in the CAM-Chem and WRF-Chem models
NASA Astrophysics Data System (ADS)
Lupascu, A.; Coates, J.; Zhu, S.; Butler, T. M.
2017-12-01
Tropospheric ozone is a short-lived climate-forcing pollutant. High concentrations of ozone can affect human health (cardiorespiratory effects and increased mortality due to long-term exposure) and damage crops. Attributing ozone concentrations to the contributions from different sources would indicate the effects of locally emitted or transported precursors on ozone levels in specific regions. This information could be used as an important component of the design of emission reduction strategies by indicating which emission sources could be targeted for effective reductions, thus reducing the burden of ozone pollution. Using a "tagging" approach within the CAM-Chem (global) and WRF-Chem (regional) models, we can quantify the contribution of individual emissions of NOx and VOC precursors to air quality. When precursor emissions of NOx are tagged, we find that the largest contributors to ozone levels are anthropogenic sources, while for precursor emissions of VOCs, biogenic sources and methane account for more than 50% of ozone levels. Further, we have extended the NOx tagging method in order to investigate continental source-region contributions to ozone concentrations over various receptor regions across the globe, with a zoom over Europe. In general, summertime maximum ozone in most receptor regions is largely attributable to local emissions of anthropogenic NOx and biogenic VOC. During the rest of the year, especially during springtime, ozone in most receptor regions shows stronger influences from anthropogenic emissions of NOx and VOC in remote source regions.
Kuwada, Shigeyuki; Bishop, Brian; Kim, Duck O.
2012-01-01
The major functions of the auditory system are recognition (what is the sound) and localization (where is the sound). Although each of these has received considerable attention, rarely are they studied in combination. Furthermore, the stimuli used in the bulk of studies did not represent sound location in real environments and ignored the effects of reverberation. Another ignored dimension is the distance of a sound source. Finally, there is a scarcity of studies conducted in unanesthetized animals. We illustrate a set of efficient methods that overcome these shortcomings. We use the virtual auditory space method (VAS) to efficiently present sounds at different azimuths, different distances and in different environments. Additionally, this method allows for efficient switching between binaural and monaural stimulation and alteration of acoustic cues singly or in combination to elucidate neural mechanisms underlying localization and recognition. Such procedures cannot be performed with real sound field stimulation. Our research is designed to address the following questions: Are IC neurons specialized to process what and where auditory information? How does reverberation and distance of the sound source affect this processing? How do IC neurons represent sound source distance? Are neural mechanisms underlying envelope processing binaural or monaural? PMID:22754505
A stress ecology framework for comprehensive risk assessment of diffuse pollution.
van Straalen, Nico M; van Gestel, Cornelis A M
2008-12-01
Environmental pollution is traditionally classified as either localized or diffuse. Local pollution comes from a point source that emits a well-defined cocktail of chemicals, distributed in the environment in the form of a gradient around the source. Diffuse pollution comes from many sources, small and large, that cause an erratic distribution of chemicals, interacting with those from other sources into a complex mixture of low to moderate concentrations over a large area. There is no good method for ecological risk assessment of such types of pollution. We argue that effects of diffuse contamination in the field must be analysed in the wider framework of stress ecology. A multivariate approach can be applied to filter effects of contaminants from the many interacting factors at the ecosystem level. Four case studies are discussed: (1) functional and structural properties of terrestrial model ecosystems, (2) physiological profiles of microbial communities, (3) detritivores in reedfield litter, and (4) benthic invertebrates in canal sediment. In each of these cases the data were analysed by multivariate statistics and associations between ecological variables and the levels of contamination were established. We argue that the stress ecology framework is an appropriate assessment instrument for discriminating effects of pollution from other anthropogenic disturbances and naturally varying factors.
Sun, Miao; Tang, Yuquan; Yang, Shuang; Li, Jun; Sigrist, Markus W; Dong, Fengzhong
2016-06-06
We propose a method for localizing a fire source using an optical fiber distributed temperature sensor system. A section of two parallel optical fibers, employed as the sensing element, is installed near the ceiling of a closed room in which the fire source is located. By measuring the temperature of hot air flows, the problem of three-dimensional fire source localization is transformed to two dimensions. The source localization method is verified with experiments using burning alcohol as the fire source, and it is demonstrated that the method represents a robust and reliable technique for localizing a fire source, even for long sensing ranges.
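A highly simplified illustration of the two-parallel-fiber idea (not the authors' algorithm): the along-fiber position of each fiber's temperature peak gives one coordinate, and the relative peak amplitudes interpolate the position between the two fiber lines. The function and variable names are hypothetical.

    import numpy as np

    def locate_hot_spot(temp_a, temp_b, positions, y_a, y_b):
        # temp_a/temp_b: temperature traces along two parallel fibers mounted at
        # lateral positions y_a and y_b; positions: along-fiber coordinate (m).
        i_a, i_b = np.argmax(temp_a), np.argmax(temp_b)
        x = 0.5 * (positions[i_a] + positions[i_b])   # along-fiber estimate
        w_a, w_b = temp_a[i_a], temp_b[i_b]           # peak temperature rises
        y = (w_a * y_a + w_b * y_b) / (w_a + w_b)     # amplitude-weighted lateral estimate
        return x, y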
Soneja, Sutyajeet I; Tielsch, James M; Khatry, Subarna K; Curriero, Frank C; Breysse, Patrick N
2016-03-01
Black carbon (BC) is a major contributor to hydrological cycle change and glacial retreat within the Indo-Gangetic Plain (IGP) and surrounding region. However, significant variability exists in estimates of BC regional concentration. Existing inventories within the IGP suffer from limited representation of rural sources, reliance on idealized point-source estimates (e.g., utilization of emission factors or fuel-use estimates for cooking along with demographic information), and difficulty in distinguishing sources. Inventory development utilizes two approaches, termed top-down and bottom-up, which rely on various sources including transport models, emission factors, and remote sensing applications. Large discrepancies exist in BC source attribution throughout the IGP depending on the approach utilized. Cooking with biomass fuels, a major contributor to BC production, shows great variability in source apportionment. Areas of cookstove and biomass fuel-use research recognized as needing attention to improve emission inventory estimates include emission factors, particulate matter speciation, and better quantification of regional/economic sectors. However, limited attention has been given to understanding ambient small-scale spatial variation of BC between cooking and non-cooking periods in low-resource environments. Understanding the indoor-to-outdoor relationship of BC emissions due to cooking at a local level is a top priority for improving emission inventories, as many health and climate applications rely upon accurate emission inventories.
Model-Free Stochastic Localization of CBRN Releases
2013-01-01
Ioannis Ch. Paschalidis. We present a novel two-stage methodology for locating a Chemical, Biological, Radiological, or Nuclear (CBRN) source in an urban area using a network of sensors. In contrast to earlier work, our approach does not solve an inverse dispersion problem but relies on data obtained from a simulation of the CBRN dispersion to obtain probabilistic descriptors of sensor measurements under a variety of CBRN ...
Hybrid Air Quality Modeling Approach For Use in the Near ...
The Near-road EXposures to Urban air pollutant Study (NEXUS) investigated whether children with asthma living in close proximity to major roadways in Detroit, MI (particularly near roadways with high diesel traffic) have greater health impacts associated with exposure to air pollutants than those living farther away. A major challenge in such health and exposure studies is the lack of information regarding pollutant exposure characterization. Air quality modeling can provide spatially and temporally varying exposure estimates for examining relationships between traffic-related air pollutants and adverse health outcomes. This paper presents a hybrid air quality modeling approach and its application in NEXUS in order to provide spatially and temporally varying exposure estimates and identification of the mobile-source contribution to the total pollutant exposure. Model-based exposure metrics, associated with local variations of emissions and meteorology, were estimated using a combination of the AERMOD and R-LINE dispersion models, local emission source information from the National Emissions Inventory, detailed road network locations and traffic activity, and meteorological data from the Detroit City Airport. The regional background contribution was estimated using a combination of the Community Multiscale Air Quality (CMAQ) model and the Space/Time Ordinary Kriging (STOK) model. To capture the near-road pollutant gradients, refined "mini-grids" of model receptors ...
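At each receptor, the hybrid combination described above reduces to adding a local dispersion-model increment to a regional background estimate; the toy numbers below are placeholders, not NEXUS outputs.

    import numpy as np

    # Hypothetical hourly NOx (ug/m3) at three receptors: a near-road increment from
    # local dispersion modeling plus a regional background interpolated to the receptors.
    local_increment = np.array([12.4, 5.1, 1.8])      # stands in for R-LINE/AERMOD output
    regional_background = np.array([9.0, 9.3, 9.2])   # stands in for CMAQ + STOK estimate
    total_exposure = local_increment + regional_background
    mobile_source_share = local_increment / total_exposure
    print(total_exposure, mobile_source_share.round(2))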
Fiber Contraction Approaches for Improving CMC Proportional Limit
NASA Technical Reports Server (NTRS)
DiCarlo, James A.; Yun, Hee Mann
1997-01-01
The fact that the service life of ceramic matrix composites (CMC) decreases dramatically for stresses above the CMC proportional limit has triggered a variety of research activities to develop microstructural approaches that can significantly improve this limit. As discussed in a previous report, both local and global approaches exist for hindering the propagation of cracks through the CMC matrix, the physical source of the proportional limit. Local approaches include: (1) minimizing fiber diameter and matrix modulus; (2) maximizing fiber volume fraction, fiber modulus, and matrix toughness; and (3) optimizing fiber-matrix interfacial shear strength; all of which should reduce the stress concentration at the tip of cracks pre-existing or created in the matrix during CMC service. Global approaches, as with pre-stressed concrete, center on seeking mechanisms for utilizing the reinforcing fiber to subject the matrix to in-situ compressive stresses which will remain stable during CMC service. Demonstrated CMC examples of the viability of this residual stress approach are based on strain mismatches between the fiber and matrix in their free states, such as thermal expansion mismatch and creep mismatch. However, these particular mismatch approaches are application-limited in that the residual stresses from expansion mismatch are optimum only at low CMC service temperatures, and the residual stresses from creep mismatch are typically unidirectional and difficult to implement in complex-shaped CMC.
Exploring the Extreme Universe with the Fermi Gamma-Ray Space Telescope
NASA Technical Reports Server (NTRS)
Thompson, D. J.
2010-01-01
Because high-energy gamma rays are produced by powerful sources, the Fermi Gamma-ray Space Telescope provides a window on extreme conditions in the Universe. Some key observations of the constantly changing gamma-ray sky include: (1) Gamma-rays from pulsars appear to come from a region well above the surface of the neutron star; (2) Multiwavelength studies of blazars show that simple models of jet emission are not always adequate to explain what is seen; (3) Gamma-ray bursts can constrain models of quantum gravity; (4) Cosmic-ray electrons at energies approaching 1 TeV suggest a local source for some of these particles.
Variations in the fine-structure constant constraining gravity theories
NASA Astrophysics Data System (ADS)
Bezerra, V. B.; Cunha, M. S.; Muniz, C. R.; Tahim, M. O.; Vieira, H. S.
2016-08-01
In this paper, we investigate how the fine-structure constant, α, varies locally in the presence of a static and spherically symmetric gravitational source. The procedure consists of calculating the solution and the energy eigenvalues of a massive scalar field around that source, considering the weak-field regime. From this result, we obtain expressions for a spatially variable fine-structure constant by considering suitable modifications of the involved parameters, admitting some scenarios of semi-classical and quantum gravity. Constraints on free parameters of the approached theories are calculated from astrophysical observations of the emission spectra of a white dwarf. These constraints are finally compared with those obtained in the literature.
Beniczky, Sándor; Lantz, Göran; Rosenzweig, Ivana; Åkeson, Per; Pedersen, Birthe; Pinborg, Lars H; Ziebell, Morten; Jespersen, Bo; Fuglsang-Frederiksen, Anders
2013-10-01
Although precise identification of the seizure-onset zone is an essential element of presurgical evaluation, source localization of ictal electroencephalography (EEG) signals has received little attention. The aim of our study was to estimate the accuracy of source localization of rhythmic ictal EEG activity using a distributed source model. Source localization of rhythmic ictal scalp EEG activity was performed in 42 consecutive cases fulfilling inclusion criteria. The study was designed according to recommendations for studies on diagnostic accuracy (STARD). The initial ictal EEG signals were selected using a standardized method, based on frequency analysis and voltage distribution of the ictal activity. A distributed source model-local autoregressive average (LAURA)-was used for the source localization. Sensitivity, specificity, and measurement of agreement (kappa) were determined based on the reference standard-the consensus conclusion of the multidisciplinary epilepsy surgery team. Predictive values were calculated from the surgical outcome of the operated patients. To estimate the clinical value of the ictal source analysis, we compared the likelihood ratios of concordant and discordant results. Source localization was performed blinded to the clinical data, and before the surgical decision. Reference standard was available for 33 patients. The ictal source localization had a sensitivity of 70% and a specificity of 76%. The mean measurement of agreement (kappa) was 0.61, corresponding to substantial agreement (95% confidence interval (CI) 0.38-0.84). Twenty patients underwent resective surgery. The positive predictive value (PPV) for seizure freedom was 92% and the negative predictive value (NPV) was 43%. The likelihood ratio was nine times higher for the concordant results, as compared with the discordant ones. Source localization of rhythmic ictal activity using a distributed source model (LAURA) for the ictal EEG signals selected with a standardized method is feasible in clinical practice and has a good diagnostic accuracy. Our findings encourage clinical neurophysiologists assessing ictal EEGs to include this method in their armamentarium. Wiley Periodicals, Inc. © 2013 International League Against Epilepsy.
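The accuracy measures reported above follow directly from a 2 x 2 table of concordant/discordant localizations against the reference standard; the sketch below computes them for an invented table (the counts are not the study's data).

    def diagnostic_metrics(tp, fp, fn, tn):
        # Accuracy measures from a 2x2 table of localization result vs. reference standard.
        n = tp + fp + fn + tn
        sens = tp / (tp + fn)
        spec = tn / (tn + fp)
        ppv = tp / (tp + fp)
        npv = tn / (tn + fn)
        lr_pos = sens / (1 - spec)                    # positive likelihood ratio
        po = (tp + tn) / n                            # observed agreement
        pe = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n**2
        kappa = (po - pe) / (1 - pe)                  # chance-corrected agreement
        return dict(sensitivity=sens, specificity=spec, ppv=ppv, npv=npv,
                    positive_LR=lr_pos, kappa=kappa)

    print(diagnostic_metrics(tp=14, fp=4, fn=6, tn=13))   # invented counts for illustration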
Software Defined Cyberinfrastructure
DOE Office of Scientific and Technical Information (OSTI.GOV)
Foster, Ian; Blaiszik, Ben; Chard, Kyle
Within and across thousands of science labs, researchers and students struggle to manage data produced in experiments, simulations, and analyses. Largely manual research data lifecycle management processes mean that much time is wasted, research results are often irreproducible, and data sharing and reuse remain rare. In response, we propose a new approach to data lifecycle management in which researchers are empowered to define the actions to be performed at individual storage systems when data are created or modified: actions such as analysis, transformation, copying, and publication. We term this approach software-defined cyberinfrastructure because users can implement powerful data management policies by deploying rules to local storage systems, much as software-defined networking allows users to configure networks by deploying rules to switches. We argue that this approach can enable a new class of responsive distributed storage infrastructure that will accelerate research innovation by allowing any researcher to associate data workflows with data sources, whether local or remote, for such purposes as data ingest, characterization, indexing, and sharing. We report on early experiments with this approach in the context of experimental science, in which a simple if-trigger-then-action (IFTA) notation is used to define rules.
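A hypothetical sketch of what an if-trigger-then-action rule might look like when expressed in code; the rule structure, field names, and dispatcher are invented for illustration and are not the system's actual notation or API.

    import fnmatch

    # One rule: when a raw HDF5 file appears in an instrument directory, kick off
    # downstream handling (the print statement stands in for a real action).
    rules = [
        {
            "if": lambda event: event["type"] == "create",
            "trigger": lambda event: fnmatch.fnmatch(event["path"], "/instrument/raw/*.h5"),
            "then": lambda event: print(f"extract metadata and replicate {event['path']}"),
        },
    ]

    def dispatch(event):
        for rule in rules:
            if rule["if"](event) and rule["trigger"](event):
                rule["then"](event)

    dispatch({"type": "create", "path": "/instrument/raw/scan_0042.h5"})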
Asymptotic-preserving Lagrangian approach for modeling anisotropic transport in magnetized plasmas
NASA Astrophysics Data System (ADS)
Chacon, Luis; Del-Castillo-Negrete, Diego
2011-10-01
Modeling electron transport in magnetized plasmas is extremely challenging due to the extreme anisotropy introduced by the presence of the magnetic field (χ∥/χ⊥ ~ 10^10 in fusion plasmas). Recently, a novel Lagrangian method has been proposed to solve the local and non-local purely parallel transport equation in general 3D magnetic fields. The approach avoids numerical pollution (in fact, it respects transport barriers, i.e., flux surfaces, exactly by construction), is inherently positivity-preserving, and is scalable algorithmically (i.e., work per degree of freedom is grid-independent). In this poster, we discuss the extension of the Lagrangian approach to include perpendicular transport and sources. We present an asymptotic-preserving numerical formulation that ensures a consistent numerical discretization temporally and spatially for arbitrary χ∥/χ⊥ ratios. This is of importance because parallel and perpendicular transport terms in the transport equation may become comparable in regions of the plasma (e.g., at incipient islands), while remaining disparate elsewhere. We will demonstrate the potential of the approach with various challenging configurations, including the case of transport across a magnetic island in cylindrical geometry. D. del-Castillo-Negrete, L. Chacón, PRL 106, 195004 (2011); DPP11 invited talk by del-Castillo-Negrete.
An adaptive regularization parameter choice strategy for multispectral bioluminescence tomography
DOE Office of Scientific and Technical Information (OSTI.GOV)
Feng Jinchao; Qin Chenghu; Jia Kebin
2011-11-15
Purpose: Bioluminescence tomography (BLT) provides an effective tool for monitoring physiological and pathological activities in vivo. However, the measured data in bioluminescence imaging are corrupted by noise, so regularization methods are commonly used to find a regularized solution. Nevertheless, for the quality of the reconstructed bioluminescent source obtained by regularization methods, the choice of the regularization parameters is crucial, and to date their selection remains challenging. With regard to the above problems, the authors proposed a BLT reconstruction algorithm with an adaptive parameter choice rule. Methods: The proposed reconstruction algorithm uses a diffusion equation for modeling the bioluminescent photon transport. The diffusion equation is solved with a finite element method. Computed tomography (CT) images provide anatomical information regarding the geometry of the small animal and its internal organs. To reduce the ill-posedness of BLT, spectral information and the optimal permissible source region are employed. Then, the relationship between the unknown source distribution and the multiview, multispectral boundary measurements is established based on the finite element method and the optimal permissible source region. Since the measured data are noisy, the BLT reconstruction is formulated as an l2 data fidelity term plus a general regularization term. For choosing the regularization parameters for BLT, an efficient model function approach is proposed, which does not require knowledge of the noise level and only requires computation of the residual and regularized solution norms. With this knowledge, the model function is constructed to approximate the objective function, and the regularization parameter is updated iteratively. Results: First, the micro-CT based mouse phantom was used for simulation verification. Simulation experiments were used to illustrate why multispectral data were used rather than monochromatic data. Furthermore, the study conducted using an adaptive regularization parameter demonstrated our ability to accurately localize the bioluminescent source. With the adaptively estimated regularization parameter, the reconstructed center position of the source was (20.37, 31.05, 12.95) mm, and the distance to the real source was 0.63 mm. The results of the dual-source experiments further showed that our algorithm could localize the bioluminescent sources accurately. The authors then presented experimental evidence that the proposed algorithm is computationally more efficient than the heuristic method. The effectiveness of the new algorithm was also confirmed by comparing it with the L-curve method. Furthermore, various initial guesses for the regularization parameter were used to illustrate the convergence of our algorithm. Finally, an in vivo mouse experiment further illustrates the effectiveness of the proposed algorithm. Conclusions: Using numerical, physical phantom, and in vivo examples, we demonstrated that the bioluminescent sources can be reconstructed accurately with automatically chosen regularization parameters. The proposed algorithm exhibited superior performance, in terms of computational speed and localization error, compared with both the heuristic regularization parameter choice method and the L-curve method.
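To make the flavor of iterative regularization-parameter selection concrete, here is a generic sketch that alternates a Tikhonov solve with a fixed-point style update of the parameter from the residual and solution norms; this is not the authors' model-function rule, and the random toy system stands in for the finite element BLT operator.

    import numpy as np

    def tikhonov(A, b, lam):
        n = A.shape[1]
        return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

    def iterate_lambda(A, b, lam=1.0, n_iter=20):
        # Heuristic: solve with the current lambda, then reset lambda to the ratio
        # of residual norm to solution norm; no noise-level estimate is needed.
        for _ in range(n_iter):
            x = tikhonov(A, b, lam)
            lam = np.linalg.norm(A @ x - b) / max(np.linalg.norm(x), 1e-12)
        return x, lam

    rng = np.random.default_rng(0)
    A = rng.standard_normal((40, 60))                 # underdetermined toy system
    x_true = np.zeros(60); x_true[[5, 30]] = 1.0      # two "sources"
    b = A @ x_true + 0.05 * rng.standard_normal(40)   # noisy measurements
    x_rec, lam = iterate_lambda(A, b)
    print(f"selected lambda ~ {lam:.3g}")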
Low-cost, high-density sensor network for urban emission monitoring: BEACO2N
NASA Astrophysics Data System (ADS)
Kim, J.; Shusterman, A.; Lieschke, K.; Newman, C.; Cohen, R. C.
2017-12-01
In urban environments, air quality is spatially and temporally heterogeneous, as diverse emission sources create a high degree of variability even at the neighborhood scale. Conventional air quality monitoring relies on continuous measurements with limited spatial resolution or on passive sampling with high density and low temporal resolution. Either approach averages the air quality information over space or time and hinders our attempts to understand emissions, chemistry, and human exposure in the near field of emission sources. To better capture the true spatio-temporal heterogeneity of urban conditions, we have deployed a low-cost, high-density air quality monitoring network in the San Francisco Bay Area distributed at 2 km horizontal spacing. The BErkeley Atmospheric CO2 Observation Network (BEACO2N) consists of approximately 50 sensor nodes, measuring CO2, CO, NO, NO2, O3, and aerosol. Here we describe field-based calibration approaches that are consistent with the low-cost strategy of the monitoring network. Observations that allow inference of emission factors and identification of specific local emission sources will also be presented.
Mercury's Crustal Magnetic Field from MESSENGER Data
NASA Astrophysics Data System (ADS)
Plattner, A.; Johnson, C.
2017-12-01
We present a regional spherical-harmonic based crustal magnetic field model for Mercury between latitudes 45° and 70° N, derived from MESSENGER magnetic field data. In addition to contributions from the core dynamo, the bow shock, and the magnetotail, Mercury's magnetic field is also influenced by interactions with the solar wind. The resulting field-aligned currents generate magnetic fields that are typically an order of magnitude stronger at spacecraft altitude than the field from sources within Mercury's crust. These current sources lie within the satellite path, so the resulting magnetic field cannot be modeled using potential-field approaches. However, these fields are organized in the local-time frame and their spatial structure differs from that of the smaller-scale crustal field. We account for large-scale magnetic fields in the local-time reference frame by subtracting from the data a low-degree localized vector spherical-harmonic model, including curl components, fitted at satellite altitude. The residual data exhibit consistent signals across individual satellite tracks in the body-fixed reference frame, similar to those obtained via more rudimentary along-track filtering approaches. We fit a regional internal-source spherical-harmonic model to the night-time radial component of the residual data, allowing a maximum spherical-harmonic degree of L = 150. Due to the cross-track spacing of the satellite tracks, spherical-harmonic degrees beyond L = 90 are damped. The strongest signals in the resulting model are in the region around the Caloris Basin and over Suisei Planitia, as observed previously. Regularization imposed in the modeling allows the field to be downward continued to the surface, where the strongest fields are 30 nT. Furthermore, the regional power spectrum of the model shows a downward-dipping slope between spherical-harmonic degrees 40 and 80, hinting that the main component of the crustal field lies deep within the crust.
NASA Astrophysics Data System (ADS)
Landis, Matthew S.; Lewis, Charles W.; Stevens, Robert K.; Keeler, Gerald J.; Dvonch, J. Timothy; Tremblay, Raphael T.
During the fall of 1998, the US Environmental Protection Agency and the Florida Department of Environmental Protection sponsored a 7-day study at the Ft. McHenry tunnel in Baltimore, MD, with the objective of obtaining PM2.5 vehicle source profiles for use in atmospheric mercury source apportionment studies. PM2.5 emission profiles from gasoline and diesel powered vehicles were developed from analysis of trace elements, polycyclic aromatic hydrocarbons (PAH), and condensed aliphatic hydrocarbons. PM2.5 samples were collected using commercially available sampling systems and were extracted and analyzed using conventional, well-established methods. Both inorganic and organic profiles were sufficiently unique to mathematically discriminate the contributions from each source type using a chemical mass balance source apportionment approach. However, only the organic source profiles provided unique PAH tracers (e.g., fluoranthene, pyrene, and chrysene) for diesel combustion that could be used to identify source contributions generated using multivariate statistical receptor modeling approaches. In addition, the study found significant emission of gaseous elemental mercury (Hg⁰), divalent reactive gaseous mercury (RGM), and particulate mercury (Hg(p)) from gasoline but not from diesel powered motor vehicles. Fuel analysis supported the tunnel measurement results, showing that the total mercury content in all grades of gasoline (284 ± 108 ng L⁻¹) was substantially higher than the total mercury content in diesel fuel (62 ± 37 ng L⁻¹) collected contemporaneously at local Baltimore retailers.
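The chemical mass balance step mentioned above amounts to expressing the ambient species vector as a non-negative combination of source profiles; a bare-bones sketch with made-up gasoline/diesel profiles (not the study's measured profiles) follows.

    import numpy as np
    from scipy.optimize import nnls

    species = ["EC", "OC", "pyrene", "chrysene", "Fe"]
    profiles = np.array([        # columns: gasoline, diesel (made-up mass fractions of PM2.5)
        [0.05, 0.45],
        [0.40, 0.30],
        [0.002, 0.010],
        [0.003, 0.008],
        [0.010, 0.004],
    ])
    ambient = np.array([0.9, 1.6, 0.030, 0.028, 0.032])   # hypothetical ug/m3 at the receptor
    contributions, residual = nnls(profiles, ambient)      # non-negative source strengths
    print(dict(zip(["gasoline", "diesel"], contributions.round(2))))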
EEG source localization: Sensor density and head surface coverage.
Song, Jasmine; Davey, Colin; Poulsen, Catherine; Luu, Phan; Turovets, Sergei; Anderson, Erik; Li, Kai; Tucker, Don
2015-12-30
The accuracy of EEG source localization depends on a sufficient sampling of the surface potential field, an accurate conducting volume estimation (head model), and a suitable and well-understood inverse technique. The goal of the present study is to examine the effect of sampling density and coverage on the ability to accurately localize sources, using common linear inverse weight techniques, at different depths. Several inverse methods are examined, using a commonly adopted head conductivity model. Simulation studies were employed to examine the effect of spatial sampling of the potential field at the head surface, in terms of sensor density and coverage of the inferior and superior head regions. In addition, the effects of sensor density and coverage are investigated in the source localization of epileptiform EEG. Greater sensor density improves source localization accuracy. Moreover, across all sampling densities and inverse methods, adding samples on the inferior surface improves the accuracy of source estimates at all depths. More accurate source localization of EEG data can be achieved with high spatial sampling of the head surface electrodes. The most accurate source localization is obtained when the voltage surface is densely sampled over both the superior and inferior surfaces. Copyright © 2015 The Authors. Published by Elsevier B.V. All rights reserved.
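A minimal sketch of one common linear inverse weighting (an L2 minimum-norm operator) is given below; the lead-field matrix, its dimensions, and the regularization value are placeholders, not the head models or parameter choices used in the study.

```python
# Hedged sketch of a common linear inverse weighting (L2 minimum norm):
# given a lead-field matrix L (sensors x sources), the inverse operator is
#   W = L^T (L L^T + lambda^2 I)^-1,
# and source estimates are W @ v for a vector of sensor potentials v. Toy dimensions only.
import numpy as np

rng = np.random.default_rng(1)
n_sensors, n_sources = 128, 5000            # e.g. dense vs sparse montages could be compared
L = rng.normal(size=(n_sensors, n_sources)) # placeholder lead field from a head model
lam = 0.1                                   # regularization set by assumed noise level

gram = L @ L.T
W = L.T @ np.linalg.inv(gram + lam**2 * np.eye(n_sensors))

v = rng.normal(size=n_sensors)              # one sample of scalp potentials
j_hat = W @ v                               # minimum-norm source estimate
```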
Wang, Sheng H; Lobier, Muriel; Siebenhühner, Felix; Puoliväli, Tuomas; Palva, Satu; Palva, J Matias
2018-06-01
Inter-areal functional connectivity (FC), neuronal synchronization in particular, is thought to constitute a key systems-level mechanism for coordination of neuronal processing and communication between brain regions. Evidence to support this hypothesis has been gained largely using invasive electrophysiological approaches. In humans, neuronal activity can be non-invasively recorded only with magneto- and electroencephalography (MEG/EEG), which have been used to assess FC networks with high temporal resolution and whole-scalp coverage. However, even in source-reconstructed MEG/EEG data, signal mixing, or "source leakage", is a significant confounder for FC analyses and network localization. Signal mixing leads to two distinct kinds of false-positive observations: artificial interactions (AI) caused directly by mixing and spurious interactions (SI) arising indirectly from the spread of signals from true interacting sources to nearby false loci. To date, several interaction metrics have been developed to solve the AI problem, but the SI problem has remained largely intractable in MEG/EEG all-to-all source connectivity studies. Here, we advance a novel approach for correcting SIs in FC analyses using source-reconstructed MEG/EEG data. Our approach is to bundle observed FC connections into hyperedges by their adjacency in signal mixing. Using realistic simulations, we show here that bundling yields hyperedges with good separability of true positives and little loss in the true positive rate. Hyperedge bundling thus significantly decreases graph noise by minimizing the false-positive to true-positive ratio. Finally, we demonstrate the advantage of edge bundling in the visualization of large-scale cortical networks with real MEG data. We propose that hypergraphs yielded by bundling represent well the set of true cortical interactions that are detectable and dissociable in MEG/EEG connectivity analysis. Copyright © 2018 The Authors. Published by Elsevier Inc. All rights reserved.
Frontiers of X-ray research at the Advanced Photon Source
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dehmer, J.J.
1995-12-31
With providential timing, the Advanced Photon Source (APS) at Argonne National Laboratory has begun to produce x-rays during the centennial year of Wilhelm Röntgen's discovery of a "new kind of rays." When complete, this third-generation, 7-GeV positron storage ring will produce nearly one hundred intense x-ray beams, with a major emphasis on the laser-like (highly collimated, locally coherent) beams from undulator sources. This talk will provide an overview of (1) the important properties of the synchrotron radiation to be produced by the APS, (2) the major classes of experimental approaches that use x-rays, and (3) some speculation on the impacts of the APS on the materials, chemical, biological, and environmental sciences.
Non-stationary 13C-metabolic flux ratio analysis.
Hörl, Manuel; Schnidder, Julian; Sauer, Uwe; Zamboni, Nicola
2013-12-01
13C-metabolic flux analysis (13C-MFA) has become a key method for metabolic engineering and systems biology. In the most common methodology, fluxes are calculated by global isotopomer balancing and iterative fitting to stationary 13C-labeling data. This approach requires a closed carbon balance, a long-lasting metabolic steady state, and the detection of 13C-patterns in a large number of metabolites. These restrictions have mostly confined the application of 13C-MFA to the central carbon metabolism of well-studied model organisms grown in minimal media with a single carbon source. Here we introduce non-stationary 13C-metabolic flux ratio analysis as a novel method for 13C-MFA to allow estimating local, relative fluxes from ultra-short 13C-labeling experiments and without the need for global isotopomer balancing. The approach relies on the acquisition of non-stationary 13C-labeling data exclusively for metabolites in the proximity of a node of converging fluxes and a local parameter estimation with a system of ordinary differential equations. We developed a generalized workflow that takes into account reaction types and the availability of mass spectrometric data on molecular ions or fragments for data processing, modeling, parameter and error estimation. We demonstrated the approach by analyzing three key nodes of converging fluxes in the central metabolism of Bacillus subtilis. We obtained flux estimates that are in agreement with published results obtained from steady state experiments, but reduced the duration of the necessary 13C-labeling experiment to less than a minute. These results show that our strategy enables formal estimation of relative pathway fluxes on extremely short time scales, without requiring cellular carbon balancing. Hence this approach paves the way to targeted 13C-MFA in dynamic systems with multiple carbon sources and towards rich media. © 2013 Wiley Periodicals, Inc.
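To make the local-estimation idea concrete, the sketch below fits a single flux ratio at a node of two converging fluxes from a short, non-stationary labeling time course using one ordinary differential equation. Pool turnover, precursor enrichments, and the data are invented; the study's generalized workflow handles reaction types and mass-spectrometric fragment data that this toy omits.

```python
# Hedged sketch of local flux-ratio estimation from non-stationary labeling data:
# a pool C is fed by two converging fluxes from A (fully labeled after the 13C switch)
# and B (unlabeled). The enrichment x_C obeys
#   dx_C/dt = (v / C_pool) * (f * x_A + (1 - f) * x_B - x_C),
# and the ratio f = v_A / (v_A + v_B) is fitted to a sub-minute time course.
# Pool turnover and data below are invented for illustration.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import curve_fit

x_A, x_B = 1.0, 0.0        # enrichments of the two converging precursors (assumed)
turnover = 0.5             # v / C_pool in 1/s (assumed)

def xc_model(t, f):
    def rhs(_t, x):
        return turnover * (f * x_A + (1 - f) * x_B - x[0])
    sol = solve_ivp(rhs, (0.0, t[-1]), [0.0], t_eval=t)
    return sol.y[0]

t_obs = np.linspace(0, 30, 16)                        # seconds after the 13C switch
true_f = 0.7
x_obs = xc_model(t_obs, true_f) + np.random.default_rng(2).normal(0, 0.02, t_obs.size)

f_hat, f_cov = curve_fit(xc_model, t_obs, x_obs, p0=[0.5], bounds=(0.0, 1.0))
print(f"estimated flux ratio f = {f_hat[0]:.2f} +/- {np.sqrt(f_cov[0, 0]):.2f}")
```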
NASA Astrophysics Data System (ADS)
Soueid Ahmed, A.; Revil, A.
2018-04-01
Induced polarization (IP) of porous rocks can be associated with a secondary source current density, which is proportional to both the intrinsic chargeability and the primary (applied) current density. This gives the possibility of reformulating the time domain induced polarization (TDIP) problem as a time-dependent self-potential-type problem. This new approach implies a change of strategy regarding data acquisition and inversion, allowing major time savings for both. For inverting TDIP data, we first retrieve the electrical resistivity distribution. Then, we use this electrical resistivity distribution to reconstruct the primary current density during the injection/retrieval of the (primary) current between the current electrodes A and B. The time-lapse secondary source current density distribution is determined given the primary source current density and a distribution of chargeability (forward modelling step). The inverse problem is linear between the secondary voltages (measured at all the electrodes) and the computed secondary source current density. A kernel matrix relating the observed secondary voltage data to the source current density model is computed once (using the electrical conductivity distribution), and then used throughout the inversion process. This recovered source current density model is in turn used to estimate the time-dependent chargeability (normalized voltages) in each cell of the domain of interest. Assuming a Cole-Cole model for simplicity, we can reconstruct the 3-D distributions of the relaxation time τ and the Cole-Cole exponent c by fitting the intrinsic chargeability decay curve to a Cole-Cole relaxation model for each cell. Two simple cases are studied in detail to explain this new approach. In the first case, we estimate the Cole-Cole parameters as well as the source current density field from a synthetic TDIP data set. Our approach is successfully able to reveal the presence of the anomaly and to invert its Cole-Cole parameters. In the second case, we perform a laboratory sandbox experiment in which we mix a volume of burning coal and sand. The algorithm is able to localize the burning coal both in terms of electrical conductivity and chargeability.
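The linear step described above (secondary voltages related to cell-wise secondary source currents through a kernel computed once) can be sketched as a damped least-squares solve per IP time window, as below. The kernel and data are random placeholders; fitting the resulting per-cell decay curves to a Cole-Cole (or simpler) relaxation model would follow as described in the text.

```python
# Hedged sketch of the linear TDIP step: the secondary voltages d(t) measured at the
# electrodes relate to the cell-wise secondary source current density s(t) through a
# kernel K computed once from the conductivity model, so each time window can be
# inverted by damped least squares. K and d here are stand-in arrays.
import numpy as np

def invert_time_window(K, d, alpha=1e-3):
    """Solve min ||K s - d||^2 + alpha ||s||^2 for one IP time window."""
    n = K.shape[1]
    return np.linalg.solve(K.T @ K + alpha * np.eye(n), K.T @ d)

rng = np.random.default_rng(3)
n_data, n_cells, n_windows = 200, 900, 20
K = rng.normal(size=(n_data, n_cells))            # stand-in kernel (conductivity-dependent)
d = rng.normal(size=(n_data, n_windows))          # stand-in secondary-voltage decays

S = np.column_stack([invert_time_window(K, d[:, k]) for k in range(n_windows)])
# S[i, :] is then a per-cell decay curve to which a Cole-Cole (or simpler Debye)
# relaxation model can be fitted to recover chargeability and relaxation time.
```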
Fortified Anonymous Communication Protocol for Location Privacy in WSN: A Modular Approach
Abuzneid, Abdel-Shakour; Sobh, Tarek; Faezipour, Miad; Mahmood, Ausif; James, John
2015-01-01
A wireless sensor network (WSN) consists of many hosts called sensors. These sensors can sense a phenomenon (motion, temperature, humidity, average, max, min, etc.) and represent what they sense in the form of data. There are many applications for WSNs, including object tracking and monitoring, where in most cases these objects need protection. In these applications, data privacy itself might not be as important as the privacy of the source location. In addition to the source location privacy, sink location privacy should also be provided. Providing an efficient end-to-end privacy solution would be a challenging task due to the open nature of the WSN. The key schemes needed for end-to-end location privacy are anonymity, observability, capture likelihood, and safety period. We extend this work to allow for countermeasures against multi-local and global adversaries. We present a network model protected against a sophisticated threat model: passive/active and local/multi-local/global attacks. This work provides a solution for end-to-end anonymity and location privacy as well. We will introduce a framework called fortified anonymous communication (FAC) protocol for WSN. PMID:25763649
Autonomous Modelling of X-ray Spectra Using Robust Global Optimization Methods
NASA Astrophysics Data System (ADS)
Rogers, Adam; Safi-Harb, Samar; Fiege, Jason
2015-08-01
The standard approach to model fitting in X-ray astronomy is by means of local optimization methods. However, these local optimizers suffer from a number of problems, such as a tendency for the fit parameters to become trapped in local minima, and can require an involved process of detailed user intervention to guide them through the optimization process. In this work we introduce a general GUI-driven global optimization method for fitting models to X-ray data, written in MATLAB, which searches for optimal models with minimal user interaction. We directly interface with the commonly used XSPEC libraries to access the full complement of pre-existing spectral models that describe a wide range of physics appropriate for modelling astrophysical sources, including supernova remnants and compact objects. Our algorithm is powered by the Ferret genetic algorithm and Locust particle swarm optimizer from the Qubist Global Optimization Toolbox, which are robust at finding families of solutions and identifying degeneracies. This technique will be particularly instrumental for multi-parameter models and high-fidelity data. In this presentation, we provide details of the code and use our techniques to analyze X-ray data obtained from a variety of astrophysical sources.
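As a hedged stand-in for the global-optimization fitting described above (the actual work uses the Ferret and Locust optimizers from the Qubist toolbox with XSPEC models), the sketch below fits a toy absorbed power law to synthetic counts with SciPy's differential evolution, a related population-based global optimizer.

```python
# Hedged sketch of global-optimization spectral fitting in the spirit described above,
# using SciPy's differential evolution as a stand-in for the Ferret/Locust optimizers
# and a toy absorbed power law instead of the XSPEC model library.
import numpy as np
from scipy.optimize import differential_evolution

energies = np.linspace(0.5, 10.0, 200)  # keV

def absorbed_powerlaw(e, norm, gamma, nh):
    return norm * e ** (-gamma) * np.exp(-nh * e ** (-2.5))  # crude absorption proxy

rng = np.random.default_rng(4)
truth = (1.0, 1.8, 0.6)
counts = rng.poisson(200 * absorbed_powerlaw(energies, *truth))

def neg_log_like(params):
    model = 200 * absorbed_powerlaw(energies, *params)
    model = np.clip(model, 1e-12, None)
    return np.sum(model - counts * np.log(model))   # Poisson (Cash-like) statistic

bounds = [(0.1, 10.0), (0.5, 4.0), (0.0, 5.0)]
result = differential_evolution(neg_log_like, bounds, seed=4, tol=1e-8)
print("best-fit (norm, Gamma, nH-like):", np.round(result.x, 3))
```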
Subsistence Food Production Practices: An Approach to Food Security and Good Health.
Rankoana, Sejabaledi A
2017-10-05
Food security is a prerequisite for health. Availability and accessibility of food in rural areas is mainly achieved through subsistence production, in which community members use local practices to produce and preserve food. Subsistence food production ensures self-sufficiency and reduction of poverty and hunger. The main emphasis of the present study is on examining subsistence farming and collection of edible plant materials to fulfill dietary requirements, thereby ensuring food security and good health. Data collected from a purposive sample show that subsistence crops produced in the home-gardens and fields, and those collected from the wild, are sources of grain, vegetables and legumes. Sources of grain and legumes are produced in the home-gardens and fields, whereas vegetable sources are mostly collected from the wild and, to a lesser extent, from the home-gardens. These food sources have perceived health potential in child and maternal care within primary health care.
Source apportion of atmospheric particulate matter: a joint Eulerian/Lagrangian approach.
Riccio, A; Chianese, E; Agrillo, G; Esposito, C; Ferrara, L; Tirimberio, G
2014-12-01
PM2.5 samples were collected during an annual monitoring campaign (January 2012-January 2013) in the urban area of Naples, one of the major cities in Southern Italy. Samples were collected by means of a standard gravimetric sampler (Tecora Echo model) and characterized from a chemical point of view by ion chromatography. As a result, 143 samples together with their ionic composition have been collected. We extend traditional source apportionment techniques, usually based on multivariate factor analysis, interpreting the chemical analysis results within a Lagrangian framework. The Hybrid Single-Particle Lagrangian Integrated Trajectory Model (HYSPLIT) model was used, providing linkages to the source regions in the upwind areas. Results were analyzed in order to quantify the relative weight of different source types/areas. Model results suggested that PM concentrations are strongly affected not only by local emissions but also by transboundary emissions, especially from the Eastern and Northern European countries and African Saharan dust episodes.
Adaptive behaviors in multi-agent source localization using passive sensing.
Shaukat, Mansoor; Chitre, Mandar
2016-12-01
In this paper, the role of adaptive group cohesion in a cooperative multi-agent source localization problem is investigated. A distributed source localization algorithm is presented for a homogeneous team of simple agents. An agent uses a single sensor to sense the gradient and two sensors to sense its neighbors. The algorithm is a set of individualistic and social behaviors where the individualistic behavior is as simple as an agent keeping its previous heading and is not self-sufficient in localizing the source. Source localization is achieved as an emergent property through agent's adaptive interactions with the neighbors and the environment. Given a single agent is incapable of localizing the source, maintaining team connectivity at all times is crucial. Two simple temporal sampling behaviors, intensity-based-adaptation and connectivity-based-adaptation, ensure an efficient localization strategy with minimal agent breakaways. The agent behaviors are simultaneously optimized using a two phase evolutionary optimization process. The optimized behaviors are estimated with analytical models and the resulting collective behavior is validated against the agent's sensor and actuator noise, strong multi-path interference due to environment variability, initialization distance sensitivity and loss of source signal.
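A minimal sketch of the behavioral idea, far simpler than the optimized controllers in the study, is given below: each agent keeps its heading by default and periodically steers toward the agent reporting the highest sensed intensity, so the group drifts up the gradient while staying cohesive. All parameters are invented.

```python
# Hedged sketch of individualistic + social source-localization behaviors:
# agents keep their heading (individualistic) and occasionally steer toward the
# agent with the strongest measured intensity (social), climbing the gradient as a group.
import numpy as np

rng = np.random.default_rng(5)
SOURCE = np.array([0.0, 0.0])
intensity = lambda p: 1.0 / (1.0 + np.sum((p - SOURCE) ** 2, axis=-1))

n_agents, steps, speed = 12, 400, 0.05
pos = rng.uniform(5, 8, size=(n_agents, 2))
heading = rng.uniform(0, 2 * np.pi, n_agents)

for t in range(steps):
    if t % 10 == 0:                               # social update every few steps
        best = np.argmax(intensity(pos))          # agent sensing the strongest signal
        to_best = pos[best] - pos                 # vectors toward that agent
        social = np.arctan2(to_best[:, 1], to_best[:, 0])
        heading = np.where(np.arange(n_agents) == best, heading, social)
    heading += rng.normal(0, 0.05, n_agents)      # crude sensor/actuator noise
    pos += speed * np.column_stack([np.cos(heading), np.sin(heading)])

print("mean distance to source:", np.linalg.norm(pos - SOURCE, axis=1).mean())
```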
NASA Astrophysics Data System (ADS)
Zimnoch, M.; Jelen, D.; Galkowski, M.; Kuc, T.; Necki, J.; Chmura, L.; Gorczyca, Z.; Jasek, A.; Rozanski, K.
2012-04-01
The European continent, due to high population density and numerous sources of anthropogenic CO2 emissions, plays an important role in the global carbon budget. Nowadays, precise measurements of CO2 mixing ratios performed by both global and regional monitoring networks, combined with appropriate models of carbon cycle, allow quantification of the European input to the global atmospheric CO2 load. However, measurements of CO2 mixing ratios alone cannot provide the information necessary for the apportionment of fossil-fuel related and biogenic contributions to the total CO2 burden of the regional atmosphere. Additional information is required, for instance obtained through measurements of radiocarbon content in atmospheric carbon dioxide. Radiocarbon is a particularly useful tracer for detecting fossil carbon in the atmosphere on different spatial and temporal scales. Regular observations of atmospheric CO2 mixing ratios and their isotope compositions have been performed during the period of 2005-2009 at two sites located in central Europe (southern Poland). The sites, only ca. 100 km apart, represent two extreme environments with respect to the extent of anthropogenic pressure: (i) the city of Krakow, representing typical urban environment with numerous sources of anthropogenic CO2, and (ii) remote mountain site Kasprowy Wierch, relatively free of local influences. Regular, quasi-continuous measurements of CO2 mixing ratios have been performed at both sites. In addition, cumulative samples of atmospheric CO2 have been collected (weekly sampling regime for Krakow and monthly for Kasprowy Wierch) to obtain mean carbon isotope signature (14C/12C and 13C/12C ratios) of atmospheric CO2 at both sampling locations. Partitioning of the local atmospheric CO2 load at both locations has been performed using isotope- and mass balance approach. In Krakow, the average fossil-fuel related contribution to the local atmospheric CO2 load was equal to approximately 3.4%. The biogenic component turned out to be of the same magnitude. Both components revealed a distinct seasonality, with the fossil-fuel related component reaching maximum values during winter months and the biogenic component shifted in phase by ca. 6 months. Seasonality of fossil-fuel related CO2 load in the local atmosphere is linked with seasonality of local CO2 sources, mostly burning of fossil fuels for heating purposes. Positive values of biogenic component indicate prevalence of the local respiration and biomass burning processes over local photosynthesis. Summer maxima of biogenic CO2 component represent mostly local respiration activity. Direct measurements of soil CO2 fluxes in the Krakow region showed an approximately 10-fold increase of those fluxes during the summer months. Partitioning of the local CO2 budget for Kasprowy Wierch site revealed large differences in the derived components when compared to urban atmosphere of Krakow: the fossil-fuel related component was ca. 5 times lower whereas the biogenic component was negative in summer, pointing to the importance of photosynthetic sink associated with extensive forests in the neighborhood of the station. The isotope- and mass balance approach was also used to derive mean monthly 13C isotope signature of fossil-fuel related CO2 emissions in Krakow.
Although the derived δ13CO2 values revealed large variability, they are confined to the range of 13C isotope compositions reported for various sources of CO2 emissions in the city (burning of coal and oil, burning of methane gas, traffic).
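The isotope- and mass-balance partitioning can be sketched as a small linear system: a CO2 balance plus a Δ14C balance in which the fossil component carries Δ14C = -1000 permil. The sketch below uses assumed illustrative values, not the Krakow or Kasprowy Wierch observations.

```python
# Hedged sketch of the isotope mass-balance partitioning: the measured CO2 mixing ratio
# is split into background, fossil-fuel and biogenic components using a CO2 balance and
# a Delta14C balance (fossil CO2 carries Delta14C = -1000 permil).
# The numbers are illustrative, not the observations reported above.
import numpy as np

def partition_co2(c_meas, d14c_meas, c_bg, d14c_bg, d14c_bio):
    """Solve  c_ff + c_bio = c_meas - c_bg
              -1000*c_ff + d14c_bio*c_bio = c_meas*d14c_meas - c_bg*d14c_bg."""
    A = np.array([[1.0, 1.0],
                  [-1000.0, d14c_bio]])
    b = np.array([c_meas - c_bg,
                  c_meas * d14c_meas - c_bg * d14c_bg])
    c_ff, c_bio = np.linalg.solve(A, b)
    return c_ff, c_bio

# Example with assumed values (ppm and permil):
c_ff, c_bio = partition_co2(c_meas=410.0, d14c_meas=20.0,
                            c_bg=395.0, d14c_bg=40.0, d14c_bio=40.0)
print(f"fossil component: {c_ff:.1f} ppm, biogenic component: {c_bio:.1f} ppm")
```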
NASA Astrophysics Data System (ADS)
Denolle, M.; Dunham, E. M.; Prieto, G.; Beroza, G. C.
2013-05-01
There is no clearer example of the increase in hazard due to prolonged and amplified shaking in sedimentary basins than the case of Mexico City in the 1985 Michoacan earthquake. It is critically important to identify what other cities might be susceptible to similar basin amplification effects. Physics-based simulations in 3D crustal structure can be used to model and anticipate those effects, but they rely on our knowledge of the complexity of the medium. We propose a parallel approach to validate ground motion simulations using the ambient seismic field. We compute the Earth's impulse response by combining the ambient seismic field and coda waves, enforcing causality and symmetry constraints. We correct the surface impulse responses to account for the source depth, mechanism and duration using a 1D approximation of the local surface-wave excitation. We call the new responses virtual earthquakes. We validate the ground motion predicted from the virtual earthquakes against moderate earthquakes in southern California. We then combine temporary seismic stations on the southern San Andreas Fault and extend the point source approximation of the Virtual Earthquake Approach to model finite kinematic ruptures. We confirm the coupling between source directivity and amplification in downtown Los Angeles seen in simulations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
1994-07-01
The Conference on Alternatives for Pollution Control from Coal-Fired Emission Sources presented cost-effective approaches for pollution control of low emission sources (LES). It also identified policies and strategies for implementation of pollution control measures at the local level. Plzen, Czech Republic, was chosen as the conference site to show participants first hand the LES problems facing Eastern Europe today. Collectively, these Proceedings contain clear reports on: (a) methods for evaluating the cost effectiveness of alternative approaches to control pollution from small coal-fired boilers and furnaces; (b) cost-effective technologies for controlling pollution from coal-fired boilers and furnaces; (c) case studies of assessment of cost-effective pollution control measures for selected cities in Eastern Europe; and (d) approaches for actually implementing pollution control measures in cities in Eastern Europe. It is intended that the eastern/central European reader will find in these Proceedings useful measures that can be applied to control emissions and clean the air in his city or region. The conference was sponsored by the United States Agency for International Development (AID), the United States Department of Energy (DOE), and the Czech Ministry of Industry and Trade. Selected papers have been indexed separately for inclusion in the Energy Science and Technology Database.
Science on a Shoestring: Building Nursing Knowledge With Limited Funding.
Conn, Vicki S; Topp, Robert; Dunn, Susan L; Hopp, Lisa; Jadack, Rosemary; Jansen, Debra A; Jefferson, Urmeka T; Moch, Susan Diemert
2015-10-01
Building the science for nursing practice has never been more important. However, shrunken federal and state research budgets mean that investigators must find alternative sources of financial support and develop projects that are less costly to carry out. New investigators often build beginning programs of research with limited funding. This article provides an overview of some cost-effective research approaches and gives suggestions for finding other sources of funding. Examples of more cost-effective research approaches include adding complementary questions to existing funded research projects; conducting primary analysis of electronic patient records and social media content; conducting secondary analysis of data from completed studies; reviewing and synthesizing previously completed research; implementing community-based participatory research; participating in collaborative research efforts such as inter-campus team research, practice-based research networks (PBRNs), and involving undergraduate and doctoral students in research efforts. Instead of relying on funding from the National Institutes of Health (NIH) and other government agencies, nurse researchers may be able to find support for research from local sources such as businesses, organizations, or clinical agencies. Investigators will increasingly have to rely on these and other creative approaches to fund and implement their research programs if granting agency budgets do not significantly expand. © The Author(s) 2015.
A manual to identify sources of fluvial sediment
Gellis, Allen C.; Fitzpatrick, Faith A.; Schubauer-Berigan, Joseph
2016-01-01
Sediment is an important pollutant of concern that can degrade and alter aquatic habitat. A sediment budget is an accounting of the sources, storage, and export of sediment over a defined spatial and temporal scale. This manual focuses on field approaches to estimate a sediment budget. We also highlight the sediment fingerprinting approach to attribute sediment to different watershed sources. Determining the sources and sinks of sediment is important in developing strategies to reduce sediment loads to water bodies impaired by sediment. Therefore, this manual can be used when developing a sediment TMDL requiring identification of sediment sources. The manual takes the user through the seven necessary steps to construct a sediment budget: decision-making for watershed scale and time period of interest; familiarization with the watershed by conducting a literature review, compiling background information and maps relevant to study questions, and conducting a reconnaissance of the watershed; developing partnerships with landowners and jurisdictions; characterization of watershed geomorphic setting; development of a sediment budget design; data collection; interpretation and construction of the sediment budget; and generating products (maps, reports, and presentations) to communicate findings. Sediment budget construction begins with examining the question(s) being asked and whether a sediment budget is necessary to answer these question(s). If undertaking a sediment budget analysis is a viable option, the next step is to define the spatial scale of the watershed and the time scale needed to answer the question(s). Of course, we understand that monetary constraints play a big role in any decision. Early in the sediment budget development process, we suggest getting to know your watershed by conducting a reconnaissance and meeting with local stakeholders. The reconnaissance aids in understanding the geomorphic setting of the watershed and potential sources of sediment. Identifying the potential sediment sources early in the design of the sediment budget will help later in deciding which tools are necessary to monitor erosion and/or deposition at these sources. Tools can range from rapid inventories that estimate the sediment budget to more rigorous field monitoring that quantifies sediment erosion, deposition, and export. In either approach, data are gathered and erosion and deposition calculations are determined and compared to the sediment export with a description of the error uncertainty. Findings are presented to local stakeholders and management officials. Sediment fingerprinting is a technique that apportions the sources of fine-grained sediment in a watershed using tracers or fingerprints. Due to different geologic and anthropogenic histories, the chemical and physical properties of sediment in a watershed may vary and often represent a unique signature (or fingerprint) for each source within the watershed. Fluvial sediment samples (the target sediment) are also collected and exhibit a composite of the source properties that can be apportioned through various statistical techniques. Using an unmixing model and error analysis, the final apportioned sediment is determined.
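The unmixing step of sediment fingerprinting can be sketched as a constrained optimization: find non-negative source proportions summing to one that best reproduce the target sediment's tracer signature. The tracer values below are invented for illustration.

```python
# Hedged sketch of a sediment-fingerprinting unmixing step: find non-negative source
# proportions that sum to one and best reproduce the tracer signature of the target
# (fluvial) sediment sample. Tracer values are invented placeholders.
import numpy as np
from scipy.optimize import minimize

tracers = ["Cs-137", "total P", "delta13C"]
sources = {                      # mean tracer concentration of each source (assumed)
    "cropland":    np.array([2.0, 900.0, -26.0]),
    "streambank":  np.array([0.2, 400.0, -24.0]),
    "forest":      np.array([4.0, 600.0, -28.0]),
}
S = np.column_stack(list(sources.values()))
target = np.array([1.1, 620.0, -25.2])          # target fluvial sediment sample

def objective(p):
    # relative mixing-model residual, a common choice in fingerprinting studies
    return np.sum(((target - S @ p) / target) ** 2)

n = S.shape[1]
res = minimize(objective, x0=np.full(n, 1.0 / n),
               bounds=[(0.0, 1.0)] * n,
               constraints=[{"type": "eq", "fun": lambda p: p.sum() - 1.0}],
               method="SLSQP")
for name, frac in zip(sources, res.x):
    print(f"{name}: {frac:.2f}")
```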
González-Macías, C; Sánchez-Reyna, G; Salazar-Coria, L; Schifter, I
2014-01-01
During the last two decades, sediments collected in different sources of water bodies of the Tehuantepec Basin, located in the southeast of the Mexican Pacific Coast, showed that concentrations of heavy metals may pose a risk to the environment and human health. The extractable organic matter, geoaccumulation index, and enrichment factors were quantified for arsenic, cadmium, copper, chromium, nickel, lead, vanadium, zinc, and the fine-grained sediment fraction. The non-parametric SiZer method was applied to assess the statistical significance of the reconstructed metal variation over time. This inference method appears to be particularly natural and well suited to temperature and other environmental reconstructions. In this approach, a collection of smooths of the reconstructed metal concentrations is considered simultaneously, and inferences about the significance of the metal trends can be made with respect to time. Hence, the database represents a consolidated set of available and validated water and sediment data of an urban industrialized area, which is very useful as a case study site. The positive matrix factorization approach was used in the identification and source apportionment of the anthropogenic heavy metals in the sediments. Regionally, metals and organic matter are depleted relative to crustal abundance in a range of 45-55 %, while there is an inorganic enrichment from lithogenous/anthropogenic sources of around 40 %. Only extractable organic matter, Pb, As, and Cd can be related to non-crustal sources, suggesting that additional input cannot be explained by local runoff or erosion processes.
Stucki, S; Orozco-terWengel, P; Forester, B R; Duruz, S; Colli, L; Masembe, C; Negrini, R; Landguth, E; Jones, M R; Bruford, M W; Taberlet, P; Joost, S
2017-09-01
With the increasing availability of both molecular and topo-climatic data, the main challenges facing landscape genomics - that is, the combination of landscape ecology with population genomics - include processing large numbers of models and distinguishing between selection and demographic processes (e.g. population structure). Several methods address the latter, either by estimating a null model of population history or by simultaneously inferring environmental and demographic effects. Here we present samβada, an approach designed to study signatures of local adaptation, with special emphasis on high performance computing of large-scale genetic and environmental data sets. samβada identifies candidate loci using genotype-environment associations while also incorporating multivariate analyses to assess the effect of many environmental predictor variables. This enables the inclusion of explanatory variables representing population structure into the models to lower the occurrence of spurious genotype-environment associations. In addition, samβada calculates local indicators of spatial association for candidate loci to provide information on whether similar genotypes tend to cluster in space, which constitutes a useful indication of the possible kinship between individuals. To test the usefulness of this approach, we carried out a simulation study and analysed a data set from Ugandan cattle to detect signatures of local adaptation with samβada, bayenv, lfmm and an FST outlier method (FDIST approach in arlequin) and compared their results. samβada - an open source software for Windows, Linux and Mac OS X available at http://lasig.epfl.ch/sambada - outperforms other approaches and better suits whole-genome sequence data processing. © 2016 The Authors. Molecular Ecology Resources Published by John Wiley & Sons Ltd.
Localization of transient gravitational wave sources: beyond triangulation
NASA Astrophysics Data System (ADS)
Fairhurst, Stephen
2018-05-01
Rapid, accurate localization of gravitational wave transient events has proved critical to successful electromagnetic follow-up. In previous papers we have shown that localization estimates can be obtained through triangulation based on timing information at the detector sites. In practice, detailed parameter estimation routines use additional information and provide better localization than is possible based on timing information alone. In this paper, we extend the timing-based localization approximation to incorporate consistency of observed signals with two gravitational wave polarizations, and an astrophysically motivated distribution of sources. Both of these provide significant improvements to source localization, allowing many sources to be restricted to a single sky region, with an area 40% smaller than predicted by timing information alone. Furthermore, we show that the vast majority of sources will be reconstructed to be circularly polarized or, equivalently, indistinguishable from face-on.
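The timing-only baseline that this paper improves upon can be sketched as a grid search over sky directions scoring predicted against measured inter-site delays; the detector positions and timing uncertainty below are rough placeholders, not an actual network configuration.

```python
# Hedged sketch of timing-only triangulation (the baseline discussed above): scan sky
# directions, predict inter-site arrival-time differences from the detector geometry,
# and score them against measured delays. Positions and timing errors are placeholders.
import numpy as np

C = 299792458.0
detectors = {                        # approximate Earth-fixed positions [m] (assumed)
    "H": np.array([-2.16e6, -3.83e6,  4.60e6]),
    "L": np.array([-7.43e4, -5.50e6,  3.21e6]),
    "V": np.array([ 4.55e6,  8.43e5,  4.38e6]),
}
pos = np.array(list(detectors.values()))
sigma_t = 1e-4                       # per-detector timing uncertainty [s] (assumed)

def unit_vector(ra, dec):
    return np.array([np.cos(dec) * np.cos(ra), np.cos(dec) * np.sin(ra), np.sin(dec)])

def predicted_delays(ra, dec):
    n_hat = unit_vector(ra, dec)
    t = -pos @ n_hat / C             # arrival time at each site relative to geocenter
    return t[1:] - t[0]              # delays with respect to the first detector

rng = np.random.default_rng(6)
measured = predicted_delays(0.8, -0.3) + rng.normal(0, sigma_t, 2)   # injected source

ra_grid = rng.uniform(0, 2 * np.pi, 5000)
dec_grid = np.arcsin(rng.uniform(-1, 1, 5000))
chi2 = np.array([np.sum((predicted_delays(r, d) - measured) ** 2) / (2 * sigma_t ** 2)
                 for r, d in zip(ra_grid, dec_grid)])
best = np.argmin(chi2)
print(f"best-fit sky point: RA={ra_grid[best]:.2f} rad, Dec={dec_grid[best]:.2f} rad")
```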
Snodgrass, Jeffrey G; Lacy, Michael G; Upadhyay, Chakrapani
2017-08-01
We present a perspective to analyze mental health without either a) imposing Western illness categories or b) adopting local or "native" categories of mental distress. Our approach takes as axiomatic only that locals within any culture share a cognitive and verbal lexicon of salient positive and negative emotional experiences, which an appropriate and repeatable set of ethnographic procedures can elicit. Our approach is provisionally agnostic with respect to either Western or native nosological categories, and instead focuses on persons' relative frequency of experiencing emotions. Putting this perspective into practice in India, our ethnographic fieldwork (2006-2014) and survey analysis (N = 219) resulted in a 40-item Positive and Negative Affect Scale (PANAS), which we used to assess the mental well-being of Indigenous persons (the tribal Sahariya) in the Indian states of Rajasthan and Madhya Pradesh. Generated via standard cognitive anthropological procedures that can be replicated elsewhere, measures such as this possess features of psychiatric scales favored by leaders in global mental health initiatives. Though not capturing locally named distress syndromes, our scale is nonetheless sensitive to local emotional experiences, frames of meaning, and "idioms of distress." By sharing traits of both global and also locally-derived diagnoses, approaches like ours can help identify synergies between them. For example, employing data reduction techniques such as factor analysis-where diagnostic and screening categories emerge inductively ex post facto from emotional symptom clusters, rather than being deduced or assigned a priori by either global mental health experts or locals themselves-reveals hidden overlaps between local wellness idioms and global ones. Practically speaking, our perspective, which assesses both emotional frailty and also potential sources of emotional resilience and balance, while eschewing all named illness categories, can be deployed in mental health initiatives in ways that minimize stigma and increase both the acceptability and validity of assessment instruments. Copyright © 2017 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Huang, Wei
The passive ocean acoustic waveguide remote sensing (POAWRS) technology is capable of monitoring a large variety of underwater sound sources over instantaneous wide areas spanning continental-shelf scale regions. POAWRS uses a large-aperture densely-sampled coherent hydrophone array to significantly enhance the signal-to-noise ratio via beamforming, enabling detection of sound sources roughly two orders of magnitude more distant in range than that possible with a single hydrophone. The sound sources detected by POAWRS include ocean biology, geophysical processes, and man-made activities. POAWRS provides detection, bearing-time estimation, localization, and classification of underwater sound sources. The volume of underwater sounds detected by POAWRS is immense, typically exceeding a million unique signal detections per day, in the 10-4000 Hz frequency range, making it a tremendously challenging task to distinguish and categorize the various sound sources present in a given region. Here we develop various approaches for characterizing and clustering the signal detections for various subsets of data acquired using the POAWRS technology. The approaches include pitch tracking of the dominant signal detections, time-frequency feature extraction, clustering and categorization methods. These approaches are essential for automatic processing and enhancing the efficiency and accuracy of POAWRS data analysis. The results of the signal detection, clustering and classification analysis are required for further POAWRS processing, including localization and tracking of a large number of oceanic sound sources. Here the POAWRS detection, localization and clustering approaches are applied to analyze and elucidate the vocalization behavior of humpback, sperm and fin whales in the New England continental shelf and slope, including the Gulf of Maine, from data acquired using coherent hydrophone arrays. The POAWRS technology can also be applied for monitoring ocean vehicles. Here the approach is calibrated by application to known ships present in the Gulf of Maine and in the Norwegian Sea from their underwater sounds received using a coherent hydrophone array. The vocalization behavior of humpback whales was monitored over vast areas of the Gulf of Maine using the POAWRS technique over multiple diel cycles in Fall 2006. The humpback vocalizations, received at a rate of roughly 1800+/-1100 calls per day, comprised both song and non-song. The song vocalizations, composed of a highly structured and repeatable set of phrases, are characterized by inter-pulse intervals of 3.5 +/- 1.8 s. Songs were detected throughout the diel cycle, occurring roughly 40% during the day and 60% during the night. The humpback non-song vocalizations, dominated by shorter duration (≤3 s) downsweep and bow-shaped moans, as well as a small fraction of longer duration (˜5 s) cries, have significantly larger mean and more variable inter-pulse intervals of 14.2 +/- 11 s. The non-song vocalizations were detected at night with negligible detections during the day, implying they probably function as nighttime communication signals. The humpback song and non-song vocalizations are separately localized using the moving array triangulation and array invariant techniques. The humpback song and non-song moan calls are both consistently localized to a dense area on northeastern Georges Bank and a less dense region extending from Franklin Basin to the Great South Channel.
Humpback cries occur exclusively on northeastern Georges Bank and during nights with coincident dense Atlantic herring shoaling populations, implying the cries are feeding-related. Sperm whales in the New England continental shelf and slope were passively localized and classified from their vocalizations received using a single low-frequency (<2500 Hz) densely-sampled horizontal coherent hydrophone array deployed in Spring 2013 in the Gulf of Maine. Whale bearings were estimated using time-domain beamforming that provided high coherent array gain in sperm whale click signal-to-noise ratio. Whale ranges from the receiver array center were estimated using the moving array triangulation technique from a sequence of whale bearing measurements. Multiple concurrently vocalizing sperm whales, in the far-field of the horizontal receiver array, were distinguished and classified based on their horizontal spatial locations and the inter-pulse intervals of their vocalized click signals. We provide detailed analysis of over 15,000 fin whale 20 Hz vocalizations received on Oct 1-3, 2006 in the Gulf of Maine. These vocalizations are separated into 16 clusters following the clustering approaches. Seven of these types are prominent, each accounting for between 8% and 16%, and together comprising roughly 85% of all the analyzed vocalizations. The 7 prominent clusters are each more abundant during nighttime hours than daytime hours by a factor of roughly 2.5. The diel-spatial correlation of the 7 prominent clusters to the simultaneously observed densities of their fish prey, the Atlantic herring in the Gulf of Maine, is provided, which implies that the factor of roughly 2.5 increase in call rate during nighttime hours can be attributed to increased fish-feeding activities.
Packaged water: optimizing local processes for sustainable water delivery in developing nations
2011-01-01
With so much global attention and commitment towards making the Water and Sanitation targets of the Millennium Development Goals (MDGs) a reality, available figures seem to indicate the contrary, as they reveal a large disparity between what is expected and what currently obtains, especially in developing countries. As studies have shown that the standard industrialized-world model for delivery of safe drinking water technology may not be affordable in much of the developing world, packaged water is suggested as a low-cost, readily available alternative water provision that could help bridge the gap. Despite the established roles that this drinking water source plays in developing nations, its importance is, however, significantly underestimated, and the source is considered unimproved going by 'international standards'. Rather than simply disqualifying water from this source, focus should be on identifying means of improvement. The need for intervening global communities and developmental organizations to learn from and build on the local processes that already operate in the developing world is also emphasized. Drawing on packaged water case studies from some developing nations, the implications of a tenacious focus on imported policies, standards and regulatory approaches for drinking water access among residents of the developing world are also discussed. PMID:21801391
Zohar, I; Bookman, R; Levin, N; de Stigter, H; Teutsch, N
2014-12-02
Pollution history of Pb and other trace metals was reconstructed for the first time for the Eastern Mediterranean, from a small urban winter pond (Dora, Netanya), located at the densely populated coastal plain of Israel. An integrated approach including geochemical, sedimentological, and historical analyses was employed to study sediments from the center of the pond. Profiles of metal concentrations (Pb, Zn, V, Ni, Cu, Cr, Co, Cd, and Hg) and Pb isotopic composition denote two main eras of pre- and post-19th century. The deeper sediment is characterized by low concentrations and relatively constant 206Pb/207Pb (around 1.20), similar to natural Pb sources, with slight indications of ancient anthropogenic activity. The upper sediment displays an upward increase in trace metal concentrations, with the highest enrichment factor for Pb (18.4). Lead fluxes and isotopic composition point to national/regional petrol-Pb emissions as the major contributor to Pb contamination, overwhelming other potential local and transboundary sources. Traffic-related metals are correlated with Pb, emphasizing the polluting inputs of traffic. The Hg profile, however, implies global pollution rather than local sources.
Monitoring fossil fuel sources of methane in Australia
NASA Astrophysics Data System (ADS)
Loh, Zoe; Etheridge, David; Luhar, Ashok; Hibberd, Mark; Thatcher, Marcus; Noonan, Julie; Thornton, David; Spencer, Darren; Gregory, Rebecca; Jenkins, Charles; Zegelin, Steve; Leuning, Ray; Day, Stuart; Barrett, Damian
2017-04-01
CSIRO has been active in identifying and quantifying methane emissions from a range of fossil fuel sources in Australia over the past decade. We present here a history of the development of our work in this domain. While we have principally focused on optimising the use of long-term, fixed-location, high-precision monitoring, paired with both forward and inverse modelling techniques suitable for either local or regional scales, we have also incorporated mobile ground surveys and flux calculations from plumes in some contexts. We initially developed leak detection methodologies for geological carbon storage at a local scale using a Bayesian probabilistic approach coupled to a backward Lagrangian particle dispersion model (Luhar et al., JGR, 2014), and single point monitoring with sector analysis (Etheridge et al., in prep.). We have since expanded our modelling techniques to regional scales using both forward and inverse approaches to constrain methane emissions from coal mining and coal seam gas (CSG) production. The Surat Basin (Queensland, Australia) is a region of rapidly expanding CSG production, in which we have established a pair of carefully located, well-intercalibrated monitoring stations. These data sets provide an almost continuous record of (i) background air arriving at the Surat Basin, and (ii) the signal resulting from methane emissions within the Basin, i.e. total downwind methane concentration (comprising emissions including natural geological seeps, agricultural and biogenic sources and fugitive emissions from CSG production) minus background or upwind concentration. We will present our latest results on monitoring from the Surat Basin and their application to estimating methane emissions.
Directional Emission from Dielectric Leaky-Wave Nanoantennas
NASA Astrophysics Data System (ADS)
Peter, Manuel; Hildebrandt, Andre; Schlickriede, Christian; Gharib, Kimia; Zentgraf, Thomas; Förstner, Jens; Linden, Stefan
2017-07-01
An important source of innovation in nanophotonics is the idea to scale down known radio wave technologies to the optical regime. One thoroughly investigated example of this approach is metallic nanoantennas, which employ plasmonic resonances to couple localized emitters to selected far-field modes. While metals can be treated as perfect conductors in the microwave regime, their response becomes Drude-like at optical frequencies. Thus, plasmonic nanoantennas are inherently lossy. Moreover, their resonant nature requires precise control of the antenna geometry. A promising way to circumvent these problems is the use of broadband nanoantennas made from low-loss dielectric materials. Here, we report on highly directional emission from active dielectric leaky-wave nanoantennas made of hafnium dioxide. Colloidal semiconductor quantum dots deposited in the nanoantenna feed gap serve as a local light source. The emission patterns of active nanoantennas with different sizes are measured by Fourier imaging. We find for all antenna sizes a highly directional emission, underlining the broadband operation of our design.
NASA Astrophysics Data System (ADS)
Ataeva, G.; Gitterman, Y.; Shapira, A.
2017-01-01
This study analyzes and compares the P- and S-wave displacement spectra from local earthquakes and explosions of similar magnitudes. We propose a new approach to discrimination between low-magnitude shallow earthquakes and explosions by using ratios of P- to S-wave corner frequencies as a criterion. We have explored 2430 digital records of the Israeli Seismic Network (ISN) from 456 local events (226 earthquakes, 230 quarry blasts, and a few underwater explosions) of magnitudes Md = 1.4-3.4, which occurred at distances up to 250 km during the years 2001-2013. P-wave and S-wave displacement spectra were computed for all events following Brune's source model of earthquakes (1970, 1971) and applying the distance correction coefficients (Shapira and Hofstetter, Tectonophysics 217:217-226, 1993; Ataeva G, Shapira A, Hofstetter A, J Seismol 19:389-401, 2015). The corner frequencies and moment magnitudes were determined using multiple stations for each event, and then the comparative analysis was performed.
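A minimal sketch of the spectral step is given below: fit Brune's omega-square model, Ω(f) = Ω0 / (1 + (f/fc)^2), to P- and S-wave displacement spectra and form the P/S corner-frequency ratio used as the discriminant. The synthetic spectra are placeholders for real, distance-corrected spectra.

```python
# Hedged sketch: fit Brune's omega-square model to displacement spectra and compute
# the P/S corner-frequency ratio used as the discriminant. Synthetic data only.
import numpy as np
from scipy.optimize import curve_fit

def brune(f, omega0, fc):
    return omega0 / (1.0 + (f / fc) ** 2)

def corner_frequency(freqs, spectrum):
    p0 = [spectrum[0], freqs[len(freqs) // 2]]
    (omega0, fc), _ = curve_fit(brune, freqs, spectrum, p0=p0, maxfev=5000)
    return fc

rng = np.random.default_rng(9)
freqs = np.linspace(0.5, 40.0, 200)                       # Hz
spec_p = brune(freqs, 1.0, 12.0) * rng.lognormal(0, 0.1, freqs.size)
spec_s = brune(freqs, 3.0, 6.0) * rng.lognormal(0, 0.1, freqs.size)

ratio = corner_frequency(freqs, spec_p) / corner_frequency(freqs, spec_s)
print(f"P/S corner-frequency ratio: {ratio:.2f}")   # compared against a discrimination threshold
```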
The effect of brain lesions on sound localization in complex acoustic environments.
Zündorf, Ida C; Karnath, Hans-Otto; Lewald, Jörg
2014-05-01
Localizing sound sources of interest in cluttered acoustic environments--as in the 'cocktail-party' situation--is one of the most demanding challenges to the human auditory system in everyday life. In this study, stroke patients' ability to localize acoustic targets in a single-source and in a multi-source setup in the free sound field were directly compared. Subsequent voxel-based lesion-behaviour mapping analyses were computed to uncover the brain areas associated with a deficit in localization in the presence of multiple distracter sound sources rather than localization of individually presented sound sources. Analyses revealed a fundamental role of the right planum temporale in this task. The results from the left hemisphere were less straightforward, but suggested an involvement of inferior frontal and pre- and postcentral areas. These areas appear to be particularly involved in the spectrotemporal analyses crucial for effective segregation of multiple sound streams from various locations, beyond the currently known network for localization of isolated sound sources in otherwise silent surroundings.
Methodological challenges involved in compiling the Nahua pharmacopeia.
De Vos, Paula
2017-06-01
Recent work in the history of science has questioned the Eurocentric nature of the field and sought to include a more global approach that would serve to displace center-periphery models in favor of approaches that take seriously local knowledge production. Historians of Iberian colonial science have taken up this approach, which involves reliance on indigenous knowledge traditions of the Americas. These traditions present a number of challenges to modern researchers, including availability and reliability of source material, issues of translation and identification, and lack of systematization. This essay explores the challenges that emerged in the author's attempt to compile a pre-contact Nahua pharmacopeia, the reasons for these challenges, and the ways they may - or may not - be overcome.
Papadelis, Christos; Tamilia, Eleonora; Stufflebeam, Steven; Grant, Patricia E.; Madsen, Joseph R.; Pearl, Phillip L.; Tanaka, Naoaki
2016-01-01
Crucial to the success of epilepsy surgery is the availability of a robust biomarker that identifies the Epileptogenic Zone (EZ). High Frequency Oscillations (HFOs) have emerged as potential presurgical biomarkers for the identification of the EZ in addition to Interictal Epileptiform Discharges (IEDs) and ictal activity. Although they are promising to localize the EZ, they are not yet suited for the diagnosis or monitoring of epilepsy in clinical practice. Primary barriers remain: the lack of a formal and global definition for HFOs; the consequent heterogeneity of methodological approaches used for their study; and the practical difficulties to detect and localize them noninvasively from scalp recordings. Here, we present a methodology for the recording, detection, and localization of interictal HFOs from pediatric patients with refractory epilepsy. We report representative data of HFOs detected noninvasively from interictal scalp EEG and MEG from two children undergoing surgery. The underlying generators of HFOs were localized by solving the inverse problem and their localization was compared to the Seizure Onset Zone (SOZ) as this was defined by the epileptologists. For both patients, IEDs and HFOs were localized with source imaging at concordant locations. For one patient, intracranial EEG (iEEG) data were also available. For this patient, we found that the HFO localization was concordant between noninvasive and invasive methods. The comparison of iEEG with the results from scalp recordings served to validate these findings. To the best of our knowledge, this is the first study that presents the source localization of scalp HFOs from simultaneous EEG and MEG recordings comparing the results with invasive recordings. These findings suggest that HFOs can be reliably detected and localized noninvasively with scalp EEG and MEG. We conclude that the noninvasive localization of interictal HFOs could significantly improve the presurgical evaluation for pediatric patients with epilepsy. PMID:28060325
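For illustration, a generic interictal HFO detector of the kind commonly used (band-pass in the ripple band, Hilbert envelope, robust threshold, minimum duration) is sketched below; the band edges and thresholds are common choices, not the authors' exact pipeline.

```python
# Hedged sketch of a generic interictal HFO detector: band-pass the channel in the
# ripple band, compute the Hilbert envelope, and mark segments whose envelope exceeds
# a threshold for a minimum number of oscillation cycles. Parameters are common choices.
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def detect_hfos(x, fs, band=(80.0, 250.0), thresh_sd=3.0, min_cycles=4):
    sos = butter(4, band, btype="bandpass", fs=fs, output="sos")
    xf = sosfiltfilt(sos, x)
    env = np.abs(hilbert(xf))
    thresh = np.median(env) + thresh_sd * np.std(env)
    above = env > thresh
    min_len = int(min_cycles / band[0] * fs)       # at least a few oscillation cycles
    events, start = [], None
    for i, flag in enumerate(np.append(above, False)):
        if flag and start is None:
            start = i
        elif not flag and start is not None:
            if i - start >= min_len:
                events.append((start / fs, i / fs))
            start = None
    return events

fs = 2000.0
t = np.arange(0, 10, 1 / fs)
x = np.random.default_rng(10).normal(0, 1, t.size)
x[4000:4200] += 3 * np.sin(2 * np.pi * 120 * t[4000:4200])   # injected ripple burst
print(detect_hfos(x, fs))
```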
2006-12-01
subsystem that drives the active materials to achieve the desired shape changes. As opposed to fixed wing structures in which the aerodynamic and...structures and aerodynamics occur in conjunction with the active material and electronic subsystem interactions that involve transfer of energy from a source...which the aerodynamic and structure integration for the entire wing is the most important interaction mechanism, in the case of a morphing wing
Fisher, Rohan; Lassa, Jonatan
2017-04-18
Modelling travel time to services has become a common public health tool for planning service provision but the usefulness of these analyses is constrained by the availability of accurate input data and limitations inherent in the assumptions and parameterisation. This is particularly an issue in the developing world where access to basic data is limited and travel is often complex and multi-modal. Improving the accuracy and relevance in this context requires greater accessibility to, and flexibility in, travel time modelling tools to facilitate the incorporation of local knowledge and the rapid exploration of multiple travel scenarios. The aim of this work was to develop simple open source, adaptable, interactive travel time modelling tools to allow greater access to and participation in service access analysis. Described are three interconnected applications designed to reduce some of the barriers to the more wide-spread use of GIS analysis of service access and allow for complex spatial and temporal variations in service availability. These applications are an open source GIS tool-kit and two geo-simulation models. The development of these tools was guided by health service issues from a developing world context but they present a general approach to enabling greater access to and flexibility in health access modelling. The tools demonstrate a method that substantially simplifies the process for conducting travel time assessments and demonstrate a dynamic, interactive approach in an open source GIS format. In addition this paper provides examples from empirical experience where these tools have informed better policy and planning. Travel and health service access is complex and cannot be reduced to a few static modeled outputs. The approaches described in this paper use a unique set of tools to explore this complexity, promote discussion and build understanding with the goal of producing better planning outcomes. The accessible, flexible, interactive and responsive nature of the applications described has the potential to allow complex environmental social and political considerations to be incorporated and visualised. Through supporting evidence-based planning the innovative modelling practices described have the potential to help local health and emergency response planning in the developing world.
Place in Pacific Islands Climate Education
NASA Astrophysics Data System (ADS)
Barros, C.; Koh, M. W.
2015-12-01
Understanding place, including both the environment and its people, is essential to understanding our climate, climate change, and its impacts. For us to develop a sense of our place, we need to engage in multiple ways of learning: observation, experimentation, and opportunities to apply new knowledge (Orr, 1992). This approach allows us to access different sources of knowledge and then create local solutions for local issues. It is especially powerful when we rely on experts and elders in our own community along with information from the global community.The Pacific islands Climate Education Partnership (PCEP) is a collaboration of partners—school systems, nongovernmental organizations, and government agencies—working to support learning and teaching about climate in the Pacific. Since 2009, PCEP partners have been working together to develop and implement classroom resources, curriculum standards, and teacher professional learning opportunities in which learners approach climate change and its impacts first through the lens of their own place. Such an approach to putting place central to teaching and learning about climate requires partnership and opportunities for learners to explore solutions for and with their communities. In this presentation, we will share the work unfolding in the Republic of the Marshall Islands (RMI) as one example of PCEP's approach to place-based climate education. Three weeklong K-12 teacher professional learning workshops took place during June-July 2015 in Majuro, RMI on learning gardens, climate science, and project-based learning. Each workshop was co-taught with local partners and supports educators in teaching climate-related curriculum standards through tasks that can foster sense of place through observation, experimentation, and application of new knowledge. Additionally, we will also share PCEP's next steps in place-based climate education, specifically around emerging conversations about the importance of highlighting stories of place to generate local solutions for local issues, as well as further global awareness about climate change impacts in the Pacific.
Decomposing intraday dependence in currency markets: evidence from the AUD/USD spot market
NASA Astrophysics Data System (ADS)
Batten, Jonathan A.; Ellis, Craig A.; Hogan, Warren P.
2005-07-01
The local Hurst exponent, a measure employed to detect the presence of dependence in a time series, may also be used to investigate the source of intraday variation observed in the returns in foreign exchange markets. Given that changes in the local Hurst exponent may be due to either a time-varying range, or standard deviation, or both of these simultaneously, values for the range, standard deviation and local Hurst exponent are recorded and analyzed separately. To illustrate this approach, a high-frequency data set of the spot Australian dollar/US dollar provides evidence of the returns distribution across the 24-hour trading ‘day’, with time-varying dependence and volatility clearly aligning with the opening and closing of markets. This variation is attributed to the effects of liquidity and the price-discovery actions of dealers.
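A minimal sketch of a rolling, 'local' Hurst estimate based on classical rescaled-range (R/S) analysis is shown below; the window lengths and the synthetic return series are illustrative stand-ins for the AUD/USD data.

```python
# Hedged sketch of a rolling "local" Hurst estimate via rescaled-range (R/S) analysis.
# Window sizes and the synthetic return series are illustrative only.
import numpy as np

def rs_hurst(returns):
    """Classical R/S estimate of H for one window of returns."""
    n = len(returns)
    sizes = [s for s in (8, 16, 32, 64) if s <= n]
    log_s, log_rs = [], []
    for s in sizes:
        chunks = returns[: (n // s) * s].reshape(-1, s)
        dev = np.cumsum(chunks - chunks.mean(axis=1, keepdims=True), axis=1)
        R = dev.max(axis=1) - dev.min(axis=1)
        S = chunks.std(axis=1, ddof=1)
        rs = np.nanmean(R / np.where(S > 0, S, np.nan))
        log_s.append(np.log(s)); log_rs.append(np.log(rs))
    return np.polyfit(log_s, log_rs, 1)[0]        # slope = Hurst exponent

def local_hurst(returns, window=256, step=32):
    idx = range(0, len(returns) - window + 1, step)
    return np.array([rs_hurst(returns[i:i + window]) for i in idx])

rng = np.random.default_rng(11)
r = rng.normal(0, 1e-4, 5000)                     # stand-in for intraday AUD/USD returns
print(local_hurst(r)[:5])                         # ~0.5 expected for uncorrelated returns
```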
Emissions & Measurements - Black Carbon
Emissions and Measurement (EM) research activities performed within the National Risk Management Research Lab (NRMRL) of EPA's Office of Research and Development (ORD) support measurement and laboratory analysis approaches to accurately characterize source emissions and near-source concentrations of air pollutants. They also support integrated Agency research programs (e.g., source to health outcomes) and the development of databases and inventories that assist Federal, state, and local air quality managers and industry in implementing and complying with air pollution standards. EM research underway in NRMRL supports the Agency's efforts to accurately characterize, analyze, measure and manage sources of air pollution. This pamphlet focuses on the EM research that NRMRL researchers conduct related to black carbon (BC). Black carbon is a pollutant of concern to EPA due to its potential impact on human health and climate change, and there are extensive uncertainties in emissions of BC from stationary and mobile sources.
Distributed single source coding with side information
NASA Astrophysics Data System (ADS)
Vila-Forcen, Jose E.; Koval, Oleksiy; Voloshynovskiy, Sviatoslav V.
2004-01-01
In this paper we advocate an image compression technique within the scope of the distributed source coding framework. The novelty of the proposed approach is twofold: classical image compression is considered from the position of source coding with side information and, contrary to existing scenarios where side information is given explicitly, side information is created based on a deterministic approximation of local image features. We consider an image in the transform domain as a realization of a source with a bounded codebook of symbols where each symbol represents a particular edge shape. The codebook is image independent and plays the role of an auxiliary source. Due to the partial availability of side information at both encoder and decoder, we treat our problem as a modification of the Berger-Flynn-Gray problem and investigate a possible gain over the solutions when side information is either unavailable or available only at the decoder. Finally, we present a practical compression algorithm for passport photo images based on our concept that demonstrates superior performance in the very low bit rate regime.
Prediction of discretization error using the error transport equation
NASA Astrophysics Data System (ADS)
Celik, Ismail B.; Parsons, Don Roscoe
2017-06-01
This study focuses on an approach to quantify the discretization error associated with numerical solutions of partial differential equations by solving an error transport equation (ETE). The goal is to develop a method that can be used to adequately predict the discretization error using the numerical solution on only one grid/mesh. The primary problem associated with solving the ETE is the formulation of the error source term which is required for accurately predicting the transport of the error. In this study, a novel approach is considered which involves fitting the numerical solution with a series of locally smooth curves and then blending them together with a weighted spline approach. The result is a continuously differentiable analytic expression that can be used to determine the error source term. Once the source term has been developed, the ETE can easily be solved using the same solver that is used to obtain the original numerical solution. The new methodology is applied to the two-dimensional Navier-Stokes equations in the laminar flow regime. A simple unsteady flow case is also considered. The discretization error predictions based on the methodology presented in this study are in good agreement with the 'true error'. While in most cases the error predictions are not quite as accurate as those from Richardson extrapolation, the results are reasonable and only require one numerical grid. The current results indicate that there is much promise going forward with the newly developed error source term evaluation technique and the ETE.
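A minimal 1-D sketch of the idea follows: fit the discrete solution with a smooth curve, evaluate the residual of the governing equation as the error source term, and solve the error transport equation with the same discrete operator. The model equation, smoothing parameters and grid are illustrative assumptions, not the paper's Navier-Stokes setup or its weighted-spline blending.

import numpy as np
from scipy.interpolate import UnivariateSpline

# Model problem: u*phi' - nu*phi'' = f on [0,1], phi(0)=phi(1)=0 (illustrative choice).
N, u, nu = 41, 1.0, 0.05
x = np.linspace(0.0, 1.0, N)
h = x[1] - x[0]
f = np.ones(N)

def discrete_operator():
    """Central-difference matrix for u*d/dx - nu*d2/dx2 with Dirichlet BCs."""
    A = np.zeros((N, N))
    for i in range(1, N - 1):
        A[i, i - 1] = -u / (2 * h) - nu / h**2
        A[i, i]     = 2 * nu / h**2
        A[i, i + 1] =  u / (2 * h) - nu / h**2
    A[0, 0] = A[-1, -1] = 1.0
    return A

A = discrete_operator()
rhs = f.copy(); rhs[0] = rhs[-1] = 0.0
phi_h = np.linalg.solve(A, rhs)             # the single-grid numerical solution

# Fit the discrete solution with a smooth curve and evaluate the PDE residual:
s = UnivariateSpline(x, phi_h, k=4, s=1e-10)
residual = u * s.derivative(1)(x) - nu * s.derivative(2)(x) - f   # error source term

# Error transport equation: same operator, residual as the (negated) source.
rhs_e = -residual; rhs_e[0] = rhs_e[-1] = 0.0
err_estimate = np.linalg.solve(A, rhs_e)
print("max estimated discretization error:", np.abs(err_estimate).max())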
NASA Astrophysics Data System (ADS)
Shirzaei, M.; Walter, T. R.
2009-10-01
Modern geodetic techniques provide valuable and near real-time observations of volcanic activity. Characterizing the source of deformation based on these observations has become of major importance in related monitoring efforts. We investigate two random search approaches, simulated annealing (SA) and genetic algorithm (GA), and utilize them in an iterated manner. The iterated approach helps to prevent GA in general and SA in particular from getting trapped in local minima, and it also increases redundancy for exploring the search space. We apply a statistical competency test for estimating the confidence interval of the inversion source parameters, considering their internal interaction through the model, the effect of the model deficiency, and the observational error. Here, we present and test this new randomly iterated search and statistical competency (RISC) optimization method together with GA and SA for the modeling of data associated with volcanic deformations. Following synthetic and sensitivity tests, we apply the improved inversion techniques to two episodes of activity in the Campi Flegrei volcanic region in Italy, observed by the interferometric synthetic aperture radar technique. Inversion of these data allows derivation of deformation source parameters and their associated quality so that we can compare the two inversion methods. The RISC approach was found to be an efficient method in terms of computation time and search results and may be applied to other optimization problems in volcanic and tectonic environments.
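To convey the iterated-restart idea in isolation, here is a generic sketch of simulated annealing restarted from perturbations of the best model found so far. The Rastrigin test function stands in for the InSAR data misfit, and none of the tuning (cooling schedule, step size, restart count) reflects the RISC implementation or its statistical competency test.

import numpy as np

rng = np.random.default_rng(42)

def misfit(theta):
    """Stand-in objective with many local minima (Rastrigin); in practice this
    would be the misfit between observed InSAR displacements and a
    deformation-source forward model."""
    return 10 * len(theta) + np.sum(theta**2 - 10 * np.cos(2 * np.pi * theta))

def simulated_annealing(start, n_steps=5000, t0=5.0, step=0.5):
    theta, f = start.copy(), misfit(start)
    best_theta, best_f = theta.copy(), f
    for k in range(n_steps):
        temp = t0 * (1.0 - k / n_steps) + 1e-6          # linear cooling
        cand = theta + rng.normal(0, step, size=theta.size)
        fc = misfit(cand)
        if fc < f or rng.random() < np.exp(-(fc - f) / temp):
            theta, f = cand, fc
            if f < best_f:
                best_theta, best_f = theta.copy(), f
    return best_theta, best_f

def iterated_sa(n_restarts=10, dim=4):
    """Restart SA from perturbations of the best model found so far, to reduce
    the chance of getting trapped in a local minimum."""
    best_theta, best_f = rng.uniform(-5, 5, dim), np.inf
    for _ in range(n_restarts):
        start = (best_theta + rng.normal(0, 1.0, dim)
                 if np.isfinite(best_f) else rng.uniform(-5, 5, dim))
        theta, f = simulated_annealing(start)
        if f < best_f:
            best_theta, best_f = theta, f
    return best_theta, best_f

print(iterated_sa())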
Kleis, Sebastian; Rueckmann, Max; Schaeffer, Christian G
2017-04-15
In this Letter, we propose a novel implementation of continuous variable quantum key distribution that operates with a real local oscillator placed at the receiver site. In addition, pulsing of the continuous wave laser sources is not required, leading to an exceptionally practical and secure setup. It is suitable for arbitrary schemes based on modulated coherent states and heterodyne detection. The results shown include transmission experiments as well as an excess noise analysis applying a discrete 8-state phase modulation. Achievable key rates under collective attacks are estimated. The results demonstrate the high potential of the approach to achieve high secret key rates at relatively low effort and cost.
NASA Astrophysics Data System (ADS)
Eftekhari, T.; Berger, E.; Williams, P. K. G.; Blanchard, P. K.
2018-06-01
The discovery of a repeating fast radio burst (FRB) has led to the first precise localization, an association with a dwarf galaxy, and the identification of a coincident persistent radio source. However, further localizations are required to determine the nature of FRBs, the sources powering them, and the possibility of multiple populations. Here we investigate the use of associated persistent radio sources to establish FRB counterparts, taking into account the localization area and the source flux density. Due to the lower areal number density of radio sources compared to faint optical sources, robust associations can be achieved for less precise localizations as compared to direct optical host galaxy associations. For generally larger localizations that preclude robust associations, the number of candidate hosts can be reduced based on the ratio of radio-to-optical brightness. We find that confident associations with sources having a flux density of ∼0.01–1 mJy, comparable to the luminosity of the persistent source associated with FRB 121102 over the redshift range z ≈ 0.1–1, require FRB localizations of ≲20″. We demonstrate that even in the absence of a robust association, constraints can be placed on the luminosity of an associated radio source as a function of localization and dispersion measure (DM). For DM ≈ 1000 pc cm^-3, an upper limit comparable to the luminosity of the FRB 121102 persistent source can be placed if the localization is ≲10″. We apply our analysis to the case of the ASKAP FRB 170107, using optical and radio observations of the localization region. We identify two candidate hosts based on a radio-to-optical brightness ratio of ≳100. We find that if one of these is indeed associated with FRB 170107, the resulting radio luminosity (10^29 to 4 × 10^30 erg s^-1 Hz^-1, as constrained from the DM value) is comparable to the luminosity of the FRB 121102 persistent source.
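As a quick illustration of how "robust association" thresholds such as the ≲20″ figure arise, the snippet below evaluates the standard chance-coincidence probability P = 1 - exp(-pi R^2 sigma) for a localization radius R and an areal density sigma of unrelated persistent radio sources; the density used is an illustrative placeholder, not a number taken from the paper.

import numpy as np

def chance_coincidence(localization_radius_arcsec, density_per_sq_arcmin):
    """Probability that an unrelated persistent radio source falls inside the
    FRB localization region by chance: P = 1 - exp(-pi * R^2 * sigma)."""
    area_sq_arcmin = np.pi * (localization_radius_arcsec / 60.0) ** 2
    return 1.0 - np.exp(-area_sq_arcmin * density_per_sq_arcmin)

# Illustrative areal density of faint persistent radio sources (assumed value).
sigma = 0.05   # sources per square arcminute
for radius in (5, 10, 20, 60):
    print(radius, "arcsec ->", round(chance_coincidence(radius, sigma), 3))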
Near-Field Noise Source Localization in the Presence of Interference
NASA Astrophysics Data System (ADS)
Liang, Guolong; Han, Bo
In order to suppress the influence of interference sources on noise source localization in the near field, near-field broadband source localization in the presence of interference is studied. An oblique projection is constructed from the array measurements and the steering manifold of the interference sources, and is used to filter the interference signals out. The 2D-MUSIC algorithm is applied to the data at each frequency, and the results across frequencies are then averaged to localize the broadband noise sources. Simulations show that this method suppresses the interference sources effectively and is capable of locating a source that lies in the same direction as an interference source.
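The sketch below illustrates the general recipe of projection-based interference suppression followed by near-field MUSIC at a single frequency. For simplicity it nulls the known interference steering vector with an orthogonal blocking projection rather than the oblique projection described above, and the array geometry, frequency and source positions are invented for the demonstration; the wideband case would average the resulting pseudospectra over frequency bins.

import numpy as np

rng = np.random.default_rng(1)
c, f = 1500.0, 2000.0                         # sound speed (m/s) and analysis frequency (Hz), illustrative
lam = c / f
M = 16                                        # sensors in a half-wavelength-spaced line array
p = np.stack([(np.arange(M) - (M - 1) / 2) * lam / 2, np.zeros(M)], axis=1)

def steer(rng_m, bearing_deg):
    """Near-field (spherical-wavefront) steering vector for a source at a given
    range (m) and bearing (deg) from the array centre."""
    s = rng_m * np.array([np.sin(np.radians(bearing_deg)), np.cos(np.radians(bearing_deg))])
    d = np.linalg.norm(s - p, axis=1)
    return np.exp(-2j * np.pi * f * d / c)

# Simulated snapshots: one near-field source of interest plus a stronger interferer.
a_sig, a_int = steer(6.0, 10.0), steer(20.0, -30.0)
T = 400
X = (np.outer(a_sig, rng.normal(size=T)) + 5 * np.outer(a_int, rng.normal(size=T))
     + 0.1 * (rng.normal(size=(M, T)) + 1j * rng.normal(size=(M, T))))

# Null the (assumed known) interference steering vector before MUSIC.
B = a_int.reshape(-1, 1)
P_perp = np.eye(M) - B @ np.linalg.pinv(B)    # blocking projection
Xf = P_perp @ X

R = Xf @ Xf.conj().T / T
_, V = np.linalg.eigh(R)
En = V[:, :-1]                                # noise subspace (one source of interest assumed)

ranges = np.linspace(2, 15, 60)
bearings = np.linspace(-60, 60, 121)
spec = np.zeros((ranges.size, bearings.size))
for i, r_ in enumerate(ranges):
    for j, b_ in enumerate(bearings):
        a = P_perp @ steer(r_, b_)
        a /= np.linalg.norm(a) + 1e-12
        spec[i, j] = 1.0 / np.abs(a.conj() @ En @ En.conj().T @ a)

i, j = np.unravel_index(spec.argmax(), spec.shape)
print("estimated range (m), bearing (deg):", round(ranges[i], 2), round(bearings[j], 1))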
Do Local Contributions Affect the Efficacy of Public Primary Schools?
ERIC Educational Resources Information Center
Jimenez, Emmanuel; Paqueo, Vicente
1996-01-01
Uses cost, financial sources, and student achievement data from Philippine primary schools (financed primarily from central sources) to discover if financial decentralization leads to more efficient schools. Schools that rely more heavily on local sources (contributions from local school boards, municipal government, parent-teacher associations,…
SoundCompass: A Distributed MEMS Microphone Array-Based Sensor for Sound Source Localization
Tiete, Jelmer; Domínguez, Federico; da Silva, Bruno; Segers, Laurent; Steenhaut, Kris; Touhafi, Abdellah
2014-01-01
Sound source localization is a well-researched subject with applications ranging from localizing sniper fire in urban battlefields to cataloging wildlife in rural areas. One critical application is the localization of noise pollution sources in urban environments, due to an increasing body of evidence linking noise pollution to adverse effects on human health. Current noise mapping techniques often fail to accurately identify noise pollution sources, because they rely on the interpolation of a limited number of scattered sound sensors. Aiming to produce accurate noise pollution maps, we developed the SoundCompass, a low-cost sound sensor capable of measuring local noise levels and sound field directionality. Our first prototype is composed of a sensor array of 52 Microelectromechanical systems (MEMS) microphones, an inertial measuring unit and a low-power field-programmable gate array (FPGA). This article presents the SoundCompass’s hardware and firmware design together with a data fusion technique that exploits the sensing capabilities of the SoundCompass in a wireless sensor network to localize noise pollution sources. Live tests produced a sound source localization accuracy of a few centimeters in a 25 m^2 anechoic chamber, while simulation results accurately located up to five broadband sound sources in a 10,000 m^2 open field. PMID:24463431
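A minimal sketch of the kind of computation behind "sound field directionality" follows: frequency-domain delay-and-sum beamforming over a circular microphone layout. The array geometry, sample rate and simulated source are assumptions for illustration, not the SoundCompass's actual firmware pipeline.

import numpy as np

rng = np.random.default_rng(3)
c, fs = 343.0, 48000.0                        # speed of sound (m/s), sample rate (Hz)
M, radius = 52, 0.09                          # number of MEMS mics, array radius (m); layout is illustrative
ang = 2 * np.pi * np.arange(M) / M
mics = radius * np.stack([np.cos(ang), np.sin(ang)], axis=1)

def steered_power(frames, look_deg):
    """Frequency-domain delay-and-sum output power for one far-field look direction."""
    u = np.array([np.cos(np.radians(look_deg)), np.sin(np.radians(look_deg))])
    advances = mics @ u / c                   # signals arrive earlier at mics closer to the source
    F = np.fft.rfft(frames, axis=1)           # frames: (M, n_samples)
    freqs = np.fft.rfftfreq(frames.shape[1], 1 / fs)
    aligned = F * np.exp(-2j * np.pi * freqs * advances[:, None])
    return np.sum(np.abs(aligned.sum(axis=0)) ** 2)

# Simulate a broadband source arriving from 40 degrees.
n = 2048
src = rng.normal(size=n + 64)
u_true = np.array([np.cos(np.radians(40.0)), np.sin(np.radians(40.0))])
frames = np.stack([np.interp(np.arange(n) / fs + (mics[m] @ u_true) / c,
                             np.arange(n + 64) / fs, src)
                   for m in range(M)]) + 0.01 * rng.normal(size=(M, n))

powers = {d: steered_power(frames, d) for d in range(0, 360, 2)}
print("estimated bearing (deg):", max(powers, key=powers.get))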
A conceptual model of the automated credibility assessment of the volunteered geographic information
NASA Astrophysics Data System (ADS)
Idris, N. H.; Jackson, M. J.; Ishak, M. H. I.
2014-02-01
The use of Volunteered Geographic Information (VGI) in collecting, sharing and disseminating geospatially referenced information on the Web is increasingly common. The potential of this localized and collective information has been seen to complement the maintenance process of authoritative mapping data sources and to contribute to realizing the development of Digital Earth. The main barrier to the use of these data in supporting this bottom-up approach is the credibility (trust), completeness, accuracy, and quality of both the data input and the outputs generated. The only feasible approach to assessing these data at scale is to rely on an automated process. This paper describes a conceptual model of indicators (parameters) and practical approaches for automatically assessing the credibility of information contributed through VGI, including map mashups, Geo Web and crowd-sourced applications. There are two main components proposed to be assessed in the conceptual model - metadata and data. The metadata component comprises indicators for the hosting websites and the sources of data/information. The data component comprises indicators to assess absolute and relative data positioning, attribute, thematic, temporal and geometric correctness and consistency. This paper suggests approaches to assess these components. To assess the metadata component, automated text categorization using supervised machine learning is proposed. To assess correctness and consistency in the data component, we suggest a matching validation approach using current emerging technologies from Linked Data infrastructures and third-party review validation. This study contributes to the research domain that focuses on the credibility, trust and quality issues of data contributed by citizen web providers.
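To make the metadata assessment concrete, here is a minimal sketch of supervised text categorization applied to hosting-site descriptions, using a TF-IDF representation and logistic regression. The example texts, labels and "credibility score" interpretation are invented placeholders; the paper's actual indicators and training data would differ.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy labelled examples (hypothetical): text describing the hosting website,
# labelled as showing credible-metadata indicators or not.
texts = [
    "official municipal open data portal, updated weekly, contact address listed",
    "government survey department metadata, positional accuracy statement included",
    "anonymous blog post, no contact details, unreferenced map screenshot",
    "forum comment with broken links and no attribution for the coordinates",
]
labels = [1, 1, 0, 0]   # 1 = credible indicators, 0 = not

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)

new_site = ["volunteer mapping group wiki, licence and update history documented"]
print("credibility score:", model.predict_proba(new_site)[0, 1])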
TU-AB-BRC-05: Creation of a Monte Carlo TrueBeam Model by Reproducing Varian Phase Space Data
DOE Office of Scientific and Technical Information (OSTI.GOV)
O’Grady, K; Davis, S; Seuntjens, J
Purpose: To create a Varian TrueBeam 6 MV FFF Monte Carlo model using BEAMnrc/EGSnrc that accurately reproduces the Varian representative dataset, followed by tuning the model’s source parameters to accurately reproduce in-house measurements. Methods: A BEAMnrc TrueBeam model for 6 MV FFF has been created by modifying a validated 6 MV Varian CL21EX model. Geometric dimensions and materials were adjusted in a trial and error approach to match the fluence and spectra of TrueBeam phase spaces output by the Varian VirtuaLinac. Once the model’s phase space matched Varian’s counterpart using the default source parameters, it was validated to match 10 × 10 cm^2 Varian representative data obtained with the IBA CC13. The source parameters were then tuned to match in-house 5 × 5 cm^2 PTW microDiamond measurements. All dose to water simulations included detector models to include the effects of volume averaging and the non-water equivalence of the chamber materials, allowing for more accurate source parameter selection. Results: The Varian phase space spectra and fluence were matched with excellent agreement. The in-house model’s PDD agreement with CC13 TrueBeam representative data was within 0.9% local percent difference beyond the first 3 mm. Profile agreement at 10 cm depth was within 0.9% local percent difference and 1.3 mm distance-to-agreement in the central axis and penumbra regions, respectively. Once the source parameters were tuned, PDD agreement with microDiamond measurements was within 0.9% local percent difference beyond 2 mm. The microDiamond profile agreement at 10 cm depth was within 0.6% local percent difference and 0.4 mm distance-to-agreement in the central axis and penumbra regions, respectively. Conclusion: An accurate in-house Monte Carlo model of the Varian TrueBeam was achieved independently of the Varian phase space solution and was tuned to in-house measurements. KO acknowledges partial support by the CREATE Medical Physics Research Training Network grant of the Natural Sciences and Engineering Research Council (Grant number: 432290).
Source apportionment of Pb pollution in saltmarsh sediments from southwest England
NASA Astrophysics Data System (ADS)
Iurian, Andra-Rada; Millward, Geoffrey; Taylor, Alex; Marshall, William; Rodríguez, Javier; Gil Ibarguchi, José Ignacio; Blake, William H.
2017-04-01
The local availability of metal resources played a crucial role in Britain's development during the industrial revolution, but centuries of mining within Cornwall and Devon (UK) have left a legacy of contamination in river basin and estuary sediments. Improved knowledge of historical heavy metal sources, emissions and pathways will result in a better understanding of the contemporary pollution conditions and a better protection of the environment from legacy contaminants. Our study aims to trace historical sources of Pb pollution in the area of east Cornwall and west Devon, UK, using a multi-proxy approach for contaminants stored in saltmarsh sediment columns from 3 systems characterized by different contamination patterns. Source apportionment investigations included the determination of Pb concentration and Pb isotopic composition (204Pb, 206Pb, 207Pb, and 208Pb) for selected down-core sediment samples, and for local ore and parent rock materials. General trends in pollutant loading (e.g. Pb) could be identified, with maximum inputs occurring in the middle of the 19th century and decreasing towards the present day, while an increase in the catchment disturbance was apparent for the last decades. The isotopic ratios of Pb further indicate that sediments with higher Pb content have a less radiogenic signature, these particular inputs being derived from Pb mining and smelting sources in the catchment area. Acknowledgements: Andra-Rada Iurian acknowledges the support of a Marie Curie Fellowship (H2020-MSCA-IF-2014, Grant Agreement number: 658863) within the Horizon 2020.
NASA Technical Reports Server (NTRS)
Taramelli, A.; Pasqui, M.; Barbour, J.; Kirschbaum, D.; Bottai, L.; Busillo, C.; Calastrini, F.; Guarnieri, F.; Small, C.
2013-01-01
The aim of this research is to provide a detailed characterization of spatial patterns and temporal trends in the regional and local dust source areas within the desert of the Alashan Prefecture (Inner Mongolia, China). This problem was approached through multi-scale remote sensing analysis of vegetation changes. The primary requirements for this regional analysis are high spatial and spectral resolution data, accurate spectral calibration and good temporal resolution with a suitable temporal baseline. Landsat analysis and field validation, along with the low spatial resolution classifications from MODIS and AVHRR, are combined to provide a reliable characterization of the different potential dust-producing sources. The use of intra-annual and inter-annual Normalized Difference Vegetation Index (NDVI) trends to discriminate land cover for mapping potential dust sources with MODIS and AVHRR at the larger scale is enhanced by Landsat Spectral Mixing Analysis (SMA). The combined methodology determines the extent to which Landsat can distinguish important soil types in order to better understand how soil reflectance behaves at seasonal and inter-annual timescales. As a final result, mapping soil surface properties using SMA captures the responses of the different land and soil cover types previously identified by the NDVI trends. The results could be used in dust emission models, even though they do not reflect aggregate formation, soil stability or particle coatings, which are critical for accurately representing dust sources over different regional and local emitting areas.
How to Manual: How to Update and Enhance Your Local Source Water Protection Assessments
Describes opportunities for improving source water assessments performed under Section 1453 of the Safe Drinking Water Act. These include local delineations, potential contaminant source inventories, and susceptibility determinations.
Climate-Smart Seedlot Selection Tool: Reforestation and Restoration for the 21st Century
NASA Astrophysics Data System (ADS)
Stevenson-Molnar, N.; Howe, G.; St Clair, B.; Bachelet, D. M.; Ward, B. C.
2017-12-01
Local populations of trees are generally adapted to their local climates. Historically, this has meant that local seed zones based on geography and elevation have been used to guide restoration and reforestation. In the face of climate change, seeds from local sources will likely be subjected to climates significantly different from those to which they are currently adapted. The Seedlot Selection Tool (SST) offers a new approach for matching seed sources with planting sites based on future climate scenarios. The SST is a mapping program designed for forest managers and researchers. Users can use the tool to find seedlots for a given planting site, or to find potential planting sites for a given seedlot. Users select a location (seedlot or planting site), climate scenarios (a climate to which seeds are adapted, and a current or future climate scenario), climate variables, and transfer limits (the maximum climatic distance that is considered a suitable match). Transfer limits are provided by the user, or derived from the range of values within a geographically defined seed zone. The tool calculates scores across the landscape based on an area's similarity, in a multivariate climate space, to the input. Users can explore results on an interactive map, and export PDF and PowerPoint reports, including a map of the results along with the inputs used. Planned future improvements include support for non-forest use cases and the ability to download results as GeoTIFF data. The Seedlot Selection Tool and its source code are available online at https://seedlotselectiontool.org. It is co-developed by the United States Forest Service, Oregon State University, and the Conservation Biology Institute.
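A minimal sketch of the kind of climate-similarity scoring described above: scale the climatic difference between candidate locations and a target climate by user-supplied transfer limits and map it to a 0-1 match score. The scoring function and the climate values below are illustrative assumptions, not the SST's implementation.

import numpy as np

def climate_match(candidate_climates, target_climate, transfer_limits):
    """Similarity of candidate locations to a target climate.

    candidate_climates : (n_sites, n_vars) array of climate variables
    target_climate     : (n_vars,) climate of the seedlot (or planting site)
    transfer_limits    : (n_vars,) maximum acceptable climatic distance per variable
    Returns a score in [0, 1]; 1 = identical climate, 0 = at or beyond the limit.
    """
    z = (candidate_climates - target_climate) / transfer_limits   # scaled differences
    d = np.sqrt(np.mean(z ** 2, axis=1))                          # multivariate climatic distance
    return np.clip(1.0 - d, 0.0, 1.0)

# Illustrative values: mean annual temperature (deg C) and annual precipitation (mm).
sites = np.array([[8.5, 900.0], [10.2, 650.0], [6.9, 1400.0]])
seedlot_climate = np.array([9.0, 1000.0])
limits = np.array([2.0, 500.0])
print(climate_match(sites, seedlot_climate, limits))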
The United Nations development programme initiative for sustainable energy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hurry, S.
1997-12-01
Energy is central to current concerns about sustainable human development, affecting economic and social development; economic growth; the local, national, regional, and global environment; the global climate; a host of social concerns, including poverty, population, and health; the balance of payments; and the prospects for peace. Energy is not an end in itself, but rather the means to achieve the goals of sustainable human development. The energy systems of most developing countries are in serious crisis involving insufficient levels of energy services, environmental degradation, inequity, poor technical and financial performance, and capital scarcity. Approximately 2.5 billion people in the developing countries have little access to commercial energy supplies. Yet the global demand for energy continues to grow: total primary energy is projected to grow from 378 exajoules (EJ) per year in 1990 to 571 EJ in 2020, and 832 EJ in 2050. If this increase occurs using conventional approaches and energy sources, already serious local (e.g., indoor and urban air pollution), regional (e.g., acidification and land degradation), and global (e.g., climate change) environmental problems will be critically aggravated. There is likely to be inadequate capital available for the needed investments in conventional energy sources. Current approaches to energy are thus not sustainable and will, in fact, make energy a barrier to socio-economic development. What is needed now is a new approach in which energy becomes an instrument for sustainable development. The two major components of a sustainable energy strategy are (1) more efficient energy use, especially at the point of end-use, and (2) increased use of renewable sources of energy. The UNDP Initiative for Sustainable Energy (UNISE) is designed to harness opportunities in these areas to build upon UNDP's existing energy activities to help move the world toward a more sustainable energy strategy by helping program countries.
Xu, Peng; Tian, Yin; Lei, Xu; Hu, Xiao; Yao, Dezhong
2008-12-01
How to localize neural electric activity within the brain effectively and precisely from scalp electroencephalogram (EEG) recordings is a critical issue for current studies in clinical neurology and cognitive neuroscience. In this paper, based on the charge source model and an iterative re-weighting strategy, we propose a new maximum-neighbor-weight-based iterative sparse source imaging method, termed CMOSS (Charge source model based Maximum neighbOr weight Sparse Solution). Unlike the weight used in the focal underdetermined system solver (FOCUSS), where the weight for each point in the discrete solution space is updated independently at each iteration, the newly designed weight for each point at each iteration is determined by the source solution of the previous iteration at both the point and its neighbors. Using such a weight, the next iteration has a better chance of rectifying the local source location bias present in the previous iteration's solution. Simulation studies comparing CMOSS with FOCUSS and LORETA for various source configurations were conducted on a realistic 3-shell head model, and the results confirmed the validity of CMOSS for sparse EEG source localization. Finally, CMOSS was applied to localize sources elicited in a visual stimulation experiment, and the result was consistent with the source areas involved in visual processing reported in previous studies.
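The core idea of neighbor-informed re-weighting can be sketched in a few lines: each iteration solves a weighted minimum-norm problem in which a source's weight is the largest amplitude found in its neighborhood at the previous iteration. The lead field, 1-D neighborhood structure and regularization below are toy assumptions for illustration, not CMOSS itself.

import numpy as np

rng = np.random.default_rng(7)
n_sensors, n_sources = 32, 200
A = rng.normal(size=(n_sensors, n_sources))          # lead-field matrix (stand-in)
s_true = np.zeros(n_sources); s_true[[60, 61, 140]] = [1.0, 0.8, -1.2]
y = A @ s_true + 0.01 * rng.normal(size=n_sensors)

# Neighbourhood structure: here simply the adjacent indices of a 1-D source grid.
neighbours = [[j for j in (i - 1, i, i + 1) if 0 <= j < n_sources]
              for i in range(n_sources)]

def neighbour_weighted_imaging(A, y, n_iter=20, lam=1e-2):
    """Iteratively re-weighted minimum-norm estimate in which each source's
    weight is taken from the largest amplitude in its neighbourhood at the
    previous iteration (a simplified sketch of the neighbor-weight idea)."""
    m, n = A.shape
    s = np.ones(n)
    for _ in range(n_iter):
        w = np.array([np.max(np.abs(s[nb])) for nb in neighbours]) + 1e-12
        W = np.diag(w)
        # weighted minimum-norm solution: s = W A^T (A W A^T + lam I)^-1 y
        s = W @ A.T @ np.linalg.solve(A @ W @ A.T + lam * np.eye(m), y)
    return s

s_hat = neighbour_weighted_imaging(A, y)
print("largest reconstructed sources:", np.argsort(np.abs(s_hat))[-3:])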
Local public health agency funding: money begets money.
Bernet, Patrick Michael
2007-01-01
Local public health agencies are funded by federal, state, and local revenue sources. There is a common belief that increases from one source will be offset by decreases in others, as when a local agency might decide it must increase taxes in response to lowered federal or state funding. This study tests this belief through a cross-sectional study using data from Missouri local public health agencies, and finds, instead, that money begets money. Local agencies that receive more from federal and state sources also raise more at the local level. Given the particular effectiveness of local funding in improving agency performance, these findings, which show that nonlocal revenues are amplified at the local level, help make the case for higher public health funding from federal and state levels.
Multi-source energy harvester to power sensing hardware on rotating structures
NASA Astrophysics Data System (ADS)
Schlichting, Alexander; Ouellette, Scott; Carlson, Clinton; Farinholt, Kevin M.; Park, Gyuhae; Farrar, Charles R.
2010-04-01
The U.S. Department of Energy (DOE) proposes to meet 20% of the nation's energy needs through wind power by the year 2030. To accomplish this goal, the industry will need to produce larger (>100m diameter) turbines to increase efficiency and maximize energy production. It will be imperative to instrument the large composite structures with onboard sensing to provide structural health monitoring capabilities to understand the global response and integrity of these systems as they age. A critical component in the deployment of such a system will be a robust power source that can operate for the lifespan of the wind turbine. In this paper we consider the use of discrete, localized power sources that derive energy from the ambient (solar, thermal) or operational (kinetic) environment. This approach will rely on a multi-source configuration that scavenges energy from photovoltaic and piezoelectric transducers. Each harvester is first characterized individually in the laboratory and then they are combined through a multi-source power conditioner that is designed to combine the output of each harvester in series to power a small wireless sensor node that has active-sensing capabilities. The advantages/disadvantages of each approach are discussed, along with the proposed design for a field ready energy harvester that will be deployed on a small-scale 19.8m diameter wind turbine.
Contribution of regional-scale fire events to ozone and PM2.5 ...
Two specific fires from 2011 are tracked for local to regional scale contribution to ozone (O3) and fine particulate matter (PM2.5) using a freely available regulatory modeling system that includes the BlueSky wildland fire emissions tool, Sparse Matrix Operator Kernel Emissions (SMOKE) model, Weather Research and Forecasting (WRF) meteorological model, and Community Multiscale Air Quality (CMAQ) photochemical grid model. The modeling system was applied to track the contribution from a wildfire (Wallow) and a prescribed fire (Flint Hills) using both source sensitivity and source apportionment approaches. The model-estimated fire contributions to primary and secondary pollutants are comparable using source sensitivity (brute-force zero out) and source apportionment (Integrated Source Apportionment Method) approaches. Model-estimated O3 enhancement relative to CO is similar to values reported in the literature, indicating the modeling system captures the range of O3 inhibition possible near fires and O3 production both near the fire and downwind. O3 and peroxyacetyl nitrate (PAN) are formed in the fire plume and transported downwind along with highly reactive VOC species such as formaldehyde and acetaldehyde that are both emitted by the fire and rapidly produced in the fire plume by VOC oxidation reactions. PAN and aldehydes contribute to continued downwind O3 production. The transport and thermal decomposition of PAN to nitrogen oxides (NOX) enables O3 production in areas
NASA Astrophysics Data System (ADS)
Mackay, E.; Beven, K.; Brewer, P.; Haygarth, P. M.; Macklin, M.; Marshall, K.; Quinn, P.; Stutter, M.; Thomas, N.; Wilkinson, M.
2012-04-01
Public participation in the development of flood risk management and river basin management plans are explicit components of both the Water Framework and Floods Directives. At the local level, involving communities in land and water management has been found to (i) aid better environmental decision making, (ii) enhance social, economic and environmental benefits, and (iii) increase a sense of ownership. Facilitating the access and exchange of information on the local environment is an important part of this new approach to the land and water management process, which also includes local community stakeholders in decisions about the design and content of the information provided. As part of the Natural Environment Research Council's pilot Environment Virtual Observatory (EVO), the Local Level group are engaging with local community stakeholders in three different catchments in the UK (the rivers Eden, Tarland and Dyfi) to start the process of developing prototype visualisation tools to address the specific land and water management issues identified in each area. Through this local collaboration, we will provide novel visualisation tools through which to communicate complex catchment science outcomes and bring together different sources of environmental data in ways that better meet end-user needs as well as facilitate a far broader participatory approach in environmental decision making. The Local Landscape Visualisation Tools are being evolved iteratively during the project to reflect the needs, interests and capabilities of a wide range of stakeholders. The tools will use the latest concepts and technologies to communicate with and provide opportunities for the provision and exchange of information between the public, government agencies and scientists. This local toolkit will reside within a wider EVO platform that will include national datasets, models and state of the art cloud computer systems. As such, local stakeholder groups are assisting the EVO's development and participating in local decision making alongside policy makers, government agencies and scientists.
Perceptual interaction of local motion signals
Nitzany, Eyal I.; Loe, Maren E.; Palmer, Stephanie E.; Victor, Jonathan D.
2016-01-01
Motion signals are a rich source of information used in many everyday tasks, such as segregation of objects from background and navigation. Motion analysis by biological systems is generally considered to consist of two stages: extraction of local motion signals followed by spatial integration. Studies using synthetic stimuli show that there are many kinds and subtypes of local motion signals. When presented in isolation, these stimuli elicit behavioral and neurophysiological responses in a wide range of species, from insects to mammals. However, these mathematically-distinct varieties of local motion signals typically co-exist in natural scenes. This study focuses on interactions between two kinds of local motion signals: Fourier and glider. Fourier signals are typically associated with translation, while glider signals occur when an object approaches or recedes. Here, using a novel class of synthetic stimuli, we ask how distinct kinds of local motion signals interact and whether context influences sensitivity to Fourier motion. We report that local motion signals of different types interact at the perceptual level, and that this interaction can include subthreshold summation and, in some subjects, subtle context-dependent changes in sensitivity. We discuss the implications of these observations, and the factors that may underlie them. PMID:27902829
Investigating local sustainable environmental perspectives of Kenyan community members and teachers
NASA Astrophysics Data System (ADS)
Quigley, Cassie F.; Dogbey, James; Che, S. Megan; Hallo, Jeffrey
2015-09-01
Efforts to conserve and preserve the environment in developing or marginalized locales frequently involve a one-way transfer of knowledge and materials from a source in a more developed location. This situation often degenerates into a short-term donor project with little to no long-term impact on local or indigenous relationships with the environment. This research study with educators in Narok, Kenya investigates the current perspectives of local key stakeholders on the environment and sustainability, with the purpose of sharing these understandings among local groups to generate a locally constructed meaning of environmental conservation and sustainability. It is the researchers' aim that, through locally constructed meanings of environmental hazards and conservation, the Maasai community will empower themselves to transform their relationship with their environment and begin to construct and enact sustainable alternatives to destructive environmental practices. The approach used in this study is a qualitative study of representative stakeholders' environmental perspectives called photovoice. Two major themes emerged during the data analysis: How do we co-habit? and How do we modernize? The community demonstrated complex understandings, including how to navigate traditional practices, connections to a larger system, and positive ways in which humans influence the environment.
Bigdely-Shamlo, Nima; Mullen, Tim; Kreutz-Delgado, Kenneth; Makeig, Scott
2013-01-01
A crucial question for the analysis of multi-subject and/or multi-session electroencephalographic (EEG) data is how to combine information across multiple recordings from different subjects and/or sessions, each associated with its own set of source processes and scalp projections. Here we introduce a novel statistical method for characterizing the spatial consistency of EEG dynamics across a set of data records. Measure Projection Analysis (MPA) first finds voxels in a common template brain space at which a given dynamic measure is consistent across nearby source locations, then computes local-mean EEG measure values for this voxel subspace using a statistical model of source localization error and between-subject anatomical variation. Finally, clustering the mean measure voxel values in this locally consistent brain subspace finds brain spatial domains exhibiting distinguishable measure features and provides 3-D maps plus statistical significance estimates for each EEG measure of interest. Applied to sufficient high-quality data, the scalp projections of many maximally independent component (IC) processes contributing to recorded high-density EEG data closely match the projection of a single equivalent dipole located in or near brain cortex. We demonstrate the application of MPA to a multi-subject EEG study decomposed using independent component analysis (ICA), compare the results to k-means IC clustering in EEGLAB (sccn.ucsd.edu/eeglab), and use surrogate data to test MPA robustness. A Measure Projection Toolbox (MPT) plug-in for EEGLAB is available for download (sccn.ucsd.edu/wiki/MPT). Together, MPA and ICA allow use of EEG as a 3-D cortical imaging modality with near-cm scale spatial resolution. PMID:23370059
Detection of Citrus Trees from UAV DSMs
NASA Astrophysics Data System (ADS)
Ok, A. O.; Ozdarici-Ok, A.
2017-05-01
This paper presents an automated approach to detect citrus trees from digital surface models (DSMs) as a single data source. The DSMs in this study are generated from Unmanned Aerial Vehicles (UAVs), and the proposed approach first exploits the symmetric nature of citrus trees by computing orientation-based radial symmetry in an efficient way. The approach also takes into account local maxima (LM) information to verify the output of the radial symmetry. Our contributions in this study are twofold: (i) such an integrated approach (symmetry + LM) has not previously been tested to detect (citrus) trees in orchards, and (ii) the validity of such an integrated approach has not been demonstrated for a single input source such as a DSM. Experiments are performed on five test patches. The results reveal that our approach is capable of counting most of the citrus trees without manual intervention. Comparison to the state-of-the-art reveals that the proposed approach provides notable detection performance by providing the best balance between precision and recall measures.
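A heavily simplified sketch of the two ingredients named above follows: a gradient-orientation voting map as a radial-symmetry proxy, intersected with local maxima of the DSM to flag crown candidates. The synthetic dome-shaped "trees", radii and thresholds are illustrative assumptions and do not reproduce the paper's orientation-based radial symmetry algorithm.

import numpy as np
from scipy import ndimage

def detect_crowns(dsm, radii=(3, 4, 5, 6), grad_thresh=0.3, vote_thresh=None):
    """Gradient-orientation voting (a radial-symmetry proxy) combined with
    local maxima of the DSM to flag tree crown candidates."""
    gy, gx = np.gradient(dsm.astype(float))
    mag = np.hypot(gx, gy)
    votes = np.zeros_like(dsm, dtype=float)
    ys, xs = np.nonzero(mag > grad_thresh)
    for y, x in zip(ys, xs):
        # unit vector pointing "uphill"; crown centres collect votes from the
        # ring of pixels on their sloping sides
        ux, uy = gx[y, x] / mag[y, x], gy[y, x] / mag[y, x]
        for r in radii:
            vy, vx = int(round(y + r * uy)), int(round(x + r * ux))
            if 0 <= vy < dsm.shape[0] and 0 <= vx < dsm.shape[1]:
                votes[vy, vx] += 1.0
    votes = ndimage.gaussian_filter(votes, sigma=1.5)
    if vote_thresh is None:
        vote_thresh = votes.mean() + 2 * votes.std()
    # verification with local maxima of the surface itself
    local_max = (dsm == ndimage.maximum_filter(dsm, size=7))
    cand = (votes > vote_thresh) & ndimage.binary_dilation(local_max, iterations=2)
    labels, n = ndimage.label(cand)
    if n == 0:
        return []
    return ndimage.center_of_mass(cand, labels, range(1, n + 1))

# Synthetic DSM with two dome-shaped "trees" on flat ground.
yy, xx = np.mgrid[0:80, 0:80]
dsm = (4 * np.exp(-((yy - 25) ** 2 + (xx - 30) ** 2) / 30.0) +
       3 * np.exp(-((yy - 55) ** 2 + (xx - 60) ** 2) / 25.0))
print(detect_crowns(dsm))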
Detecting black bear source–sink dynamics using individual-based genetic graphs
Draheim, Hope M.; Moore, Jennifer A.; Etter, Dwayne; Winterstein, Scott R.; Scribner, Kim T.
2016-01-01
Source–sink dynamics affects population connectivity, spatial genetic structure and population viability for many species. We introduce a novel approach that uses individual-based genetic graphs to identify source–sink areas within a continuously distributed population of black bears (Ursus americanus) in the northern lower peninsula (NLP) of Michigan, USA. Black bear harvest samples (n = 569, from 2002, 2006 and 2010) were genotyped at 12 microsatellite loci and locations were compared across years to identify areas of consistent occupancy over time. We compared graph metrics estimated for a genetic model with metrics from 10 ecological models to identify ecological factors that were associated with sources and sinks. We identified 62 source nodes, 16 of which represent important source areas (net flux > 0.7) and 79 sink nodes. Source strength was significantly correlated with bear local harvest density (a proxy for bear density) and habitat suitability. Additionally, resampling simulations showed our approach is robust to potential sampling bias from uneven sample dispersion. Findings demonstrate black bears in the NLP exhibit asymmetric gene flow, and individual-based genetic graphs can characterize source–sink dynamics in continuously distributed species in the absence of discrete habitat patches. Our findings warrant consideration of undetected source–sink dynamics and their implications on harvest management of game species. PMID:27440668
Landscape genetic approaches to guide native plant restoration in the Mojave Desert
Shryock, Daniel F.; Havrilla, Caroline A.; DeFalco, Lesley; Esque, Todd C.; Custer, Nathan; Wood, Troy E.
2016-01-01
Restoring dryland ecosystems is a global challenge due to synergistic drivers of disturbance coupled with unpredictable environmental conditions. Dryland plant species have evolved complex life-history strategies to cope with fluctuating resources and climatic extremes. Although rarely quantified, local adaptation is likely widespread among these species and potentially influences restoration outcomes. The common practice of reintroducing propagules to restore dryland ecosystems, often across large spatial scales, compels evaluation of adaptive divergence within these species. Such evaluations are critical to understanding the consequences of large-scale manipulation of gene flow and to predicting success of restoration efforts. However, genetic information for species of interest can be difficult and expensive to obtain through traditional common garden experiments. Recent advances in landscape genetics offer marker-based approaches for identifying environmental drivers of adaptive genetic variability in non-model species, but tools are still needed to link these approaches with practical aspects of ecological restoration. Here, we combine spatially-explicit landscape genetics models with flexible visualization tools to demonstrate how cost-effective evaluations of adaptive genetic divergence can facilitate implementation of different seed sourcing strategies in ecological restoration. We apply these methods to Amplified Fragment Length Polymorphism (AFLP) markers genotyped in two Mojave Desert shrub species of high restoration importance: the long-lived, wind-pollinated gymnosperm Ephedra nevadensis, and the short-lived, insect-pollinated angiosperm Sphaeralcea ambigua. Mean annual temperature was identified as an important driver of adaptive genetic divergence for both species. Ephedra showed stronger adaptive divergence with respect to precipitation variability, while temperature variability and precipitation averages explained a larger fraction of adaptive divergence in Sphaeralcea. We describe multivariate statistical approaches for interpolating spatial patterns of adaptive divergence while accounting for potential bias due to neutral genetic structure. Through a spatial bootstrapping procedure, we also visualize patterns in the magnitude of model uncertainty. Finally, we introduce an interactive, distance-based mapping approach that explicitly links marker-based models of adaptive divergence with local or admixture seed sourcing strategies, promoting effective native plant restoration.
A Preprocessor for Modeling Nonpoint Sources in Fractured Media using MODFLOW and MT3D
NASA Astrophysics Data System (ADS)
Mun, Y.; Uchrin, C. G.
2002-05-01
There are a multitude of fractures in the geological structure of fractured media which act as conduits for subsurface fluid flow. The hydraulic properties of this flow are very heterogeneous even within a single unit, and this heterogeneity is highly localized, which makes modeling flow in fractured media difficult. There are two major approaches to simulating flow and transport in fractured media: the discrete fracture approach and the continuum approach. The discrete fracture approach requires precise characteristics such as fracture geometry, which are difficult to determine because the fractures are inaccessible. In the continuum approach, although simulated head distributions can be matched to well data, simulated concentration distributions are hard to match to well-sample concentration observations, because some aquifers are dominated by advective transport and others are likely to serve as reservoirs for immobile solutes. The MODFLOW preprocessor described in this paper has been developed and applied to the Cranberry Lake system in Northwestern New Jersey. Cranberry Lake has exhibited eutrophic characteristics for some time due to nonpoint sources including surface water runoff, leaching from local septic systems and direct deposition. It has been estimated that 70% of the nutrient loading to the lake flows through fractured media from septic systems. The preprocessor presented in this paper utilizes percolation theory, which is concerned with the existence of 'open paths'. The percolation thresholds of a body-centered cubic lattice (3D), a square lattice (2D) and several other percolation numbers are applied to make the model system represent the fractured media. The distribution of hydraulic head within the groundwater is simulated by MODFLOW and the advection-dispersion equation of nitrate transport is solved by MT3D. This study also simulates boron transport as an indicator.
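The percolation idea can be sketched compactly: flag a random fraction of lattice cells as fractured, keep only clusters that span the domain, and assign those cells a high hydraulic conductivity before handing the array to a flow model. The lattice, fracture fraction and conductivity values below are illustrative assumptions, not the preprocessor's Cranberry Lake parameterisation.

import numpy as np
from scipy import ndimage

rng = np.random.default_rng(11)

def fracture_conductivity_field(shape=(40, 40, 20), p_fracture=0.35,
                                k_matrix=1e-7, k_fracture=1e-3):
    """Assign MODFLOW-style hydraulic conductivities on a lattice: cells flagged
    as fractured get high K only if they belong to a cluster that spans the
    domain vertically (a site-percolation check)."""
    fractured = rng.random(shape) < p_fracture
    labels, n = ndimage.label(fractured)          # 6-connected clusters by default
    top, bottom = labels[:, :, 0], labels[:, :, -1]
    spanning = set(np.unique(top)) & set(np.unique(bottom)) - {0}
    k = np.full(shape, k_matrix)
    if spanning:
        k[np.isin(labels, list(spanning))] = k_fracture
    return k, bool(spanning)

# The site-percolation threshold of a simple cubic lattice is about 0.312, so a
# fracture fraction of 0.35 usually produces a spanning (percolating) cluster.
k_field, percolates = fracture_conductivity_field()
print("spanning fracture network present:", percolates)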
Sohrabpour, Abbas; Ye, Shuai; Worrell, Gregory A; Zhang, Wenbo; He, Bin
2016-12-01
Combined source-imaging techniques and directional connectivity analysis can provide useful information about the underlying brain networks in a noninvasive fashion. Source-imaging techniques have been used successfully to either determine the source of activity or to extract source time-courses for Granger causality analysis, previously. In this work, we utilize source-imaging algorithms to both find the network nodes [regions of interest (ROI)] and then extract the activation time series for further Granger causality analysis. The aim of this work is to find network nodes objectively from noninvasive electromagnetic signals, extract activation time-courses, and apply Granger analysis on the extracted series to study brain networks under realistic conditions. Source-imaging methods are used to identify network nodes and extract time-courses and then Granger causality analysis is applied to delineate the directional functional connectivity of underlying brain networks. Computer simulations studies where the underlying network (nodes and connectivity pattern) is known were performed; additionally, this approach has been evaluated in partial epilepsy patients to study epilepsy networks from interictal and ictal signals recorded by EEG and/or Magnetoencephalography (MEG). Localization errors of network nodes are less than 5 mm and normalized connectivity errors of ∼20% in estimating underlying brain networks in simulation studies. Additionally, two focal epilepsy patients were studied and the identified nodes driving the epileptic network were concordant with clinical findings from intracranial recordings or surgical resection. Our study indicates that combined source-imaging algorithms with Granger causality analysis can identify underlying networks precisely (both in terms of network nodes location and internodal connectivity). The combined source imaging and Granger analysis technique is an effective tool for studying normal or pathological brain conditions.
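A pairwise Granger test on two extracted region-of-interest time-courses can be sketched in a few lines: compare an autoregressive model of one series with a model that also includes lags of the other. The simulated series, model order and least-squares F-test below are illustrative; they are not the connectivity pipeline used in the study.

import numpy as np

def granger_f(x, y, order=5):
    """F statistic for 'x Granger-causes y': compare an AR model of y with a
    model that also includes lagged x (pairwise, least-squares version)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    T = len(y)
    Y = y[order:]
    lag = lambda z, k: z[order - k:T - k]
    Xr = np.column_stack([np.ones(T - order)] + [lag(y, k) for k in range(1, order + 1)])
    Xf = np.column_stack([Xr] + [lag(x, k) for k in range(1, order + 1)])
    rss = lambda X: np.sum((Y - X @ np.linalg.lstsq(X, Y, rcond=None)[0]) ** 2)
    rss_r, rss_f = rss(Xr), rss(Xf)
    df1, df2 = order, T - order - Xf.shape[1]
    return ((rss_r - rss_f) / df1) / (rss_f / df2)

# Two extracted ROI time-courses (simulated): node A drives node B with a lag.
rng = np.random.default_rng(5)
T = 2000
a = rng.normal(size=T)
b = np.zeros(T)
for t in range(2, T):
    b[t] = 0.6 * b[t - 1] + 0.5 * a[t - 2] + 0.1 * rng.normal()

print("F(A -> B):", round(granger_f(a, b), 1))
print("F(B -> A):", round(granger_f(b, a), 1))   # should be much smaller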
An Overview of Internet biosurveillance
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hartley, David M.; Nelson, Noele P.; Arthur, Ray
Internet biosurveillance utilizes unstructured data from diverse Web-based sources to provide early warning and situational awareness of public health threats. The scope of source coverage ranges from locally based media in the vernacular to international media in widely read languages. Internet biosurveillance is a timely modality available to government and public health officials, health care workers, and the public and private sector, serving as a real-time complementary approach to traditional indicator-based public health disease surveillance methods. Internet biosurveillance also supports the broader activity of epidemic intelligence. This review covers the current state of the field of Internet biosurveillance and provides a perspective on the future of the field.
Miniaci, M; Gliozzi, A S; Morvan, B; Krushynska, A; Bosia, F; Scalerandi, M; Pugno, N M
2017-05-26
The appearance of nonlinear effects in elastic wave propagation is one of the most reliable and sensitive indicators of the onset of material damage. However, these effects are usually very small and can be detected only using cumbersome digital signal processing techniques. Here, we propose and experimentally validate an alternative approach, using the filtering and focusing properties of phononic crystals to naturally select and reflect the higher harmonics generated by nonlinear effects, enabling the realization of time-reversal procedures for nonlinear elastic source detection. The proposed device demonstrates its potential as an efficient, compact, portable, passive apparatus for nonlinear elastic wave sensing and damage detection.
Surface Penetrating Radar Simulations for Europa
NASA Technical Reports Server (NTRS)
Markus, T.; Gogineni, S. P.; Green, J. L.; Fung, S. F.; Cooper, J. F.; Taylor, W. W. L.; Garcia, L.; Reinisch, B. W.; Song, P.; Benson, R. F.
2004-01-01
The space environment above the icy surface of Europa is a source of radio noise at the sounding frequencies from natural sources in the Jovian magnetosphere. The ionospheric and magnetospheric plasma environment of Europa affects propagation of transmitted and return signals between the spacecraft and the solid surface in a frequency-dependent manner. The ultimate resolution of the subsurface sounding measurements will be determined, in part, by a capability to mitigate these effects. We discuss an integrated multi-frequency approach to active radio sounding of the Europa ionospheric and local magnetospheric environments, based on operational experience from the Radio Plasma Imager (RPI) experiment on the IMAGE spacecraft in Earth orbit, in support of the subsurface measurement objectives.
Simultaneous head tissue conductivity and EEG source location estimation.
Akalin Acar, Zeynep; Acar, Can E; Makeig, Scott
2016-01-01
Accurate electroencephalographic (EEG) source localization requires an electrical head model incorporating accurate geometries and conductivity values for the major head tissues. While consistent conductivity values have been reported for scalp, brain, and cerebrospinal fluid, measured brain-to-skull conductivity ratio (BSCR) estimates have varied between 8 and 80, likely reflecting both inter-subject and measurement method differences. In simulations, mis-estimation of skull conductivity can produce source localization errors as large as 3 cm. Here, we describe an iterative gradient-based approach to Simultaneous tissue Conductivity And source Location Estimation (SCALE). The scalp projection maps used by SCALE are obtained from near-dipolar effective EEG sources found by adequate independent component analysis (ICA) decomposition of sufficient high-density EEG data. We applied SCALE to simulated scalp projections of 15 cm^2-scale cortical patch sources in an MR image-based electrical head model with simulated BSCR of 30. Initialized either with a BSCR of 80 or 20, SCALE estimated BSCR as 32.6. In Adaptive Mixture ICA (AMICA) decompositions of (45-min, 128-channel) EEG data from two young adults we identified sets of 13 independent components having near-dipolar scalp maps compatible with a single cortical source patch. Again initialized with either BSCR 80 or 25, SCALE gave BSCR estimates of 34 and 54 for the two subjects respectively. The ability to accurately estimate skull conductivity non-invasively from any well-recorded EEG data in combination with a stable and non-invasively acquired MR imaging-derived electrical head model could remove a critical barrier to using EEG as a sub-cm^2-scale accurate 3-D functional cortical imaging modality. Copyright © 2015 Elsevier Inc. All rights reserved.
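To convey the flavor of joint conductivity-and-location fitting, here is a toy sketch that jointly estimates a dipole's position and a single conductivity value from simulated sensor potentials, using the closed-form dipole potential in an unbounded homogeneous conductor as a stand-in forward model. The dipole moment is taken as known so that conductivity remains identifiable; none of this reproduces SCALE's realistic head model, ICA-derived scalp maps or BSCR parameterization.

import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(9)

# Sensor positions on a sphere of radius 9 cm (a stand-in for scalp electrodes).
n_sens = 64
v = rng.normal(size=(n_sens, 3)); v /= np.linalg.norm(v, axis=1, keepdims=True)
sensors = 0.09 * v

def potentials(loc, sigma, moment):
    """Dipole potential in an unbounded homogeneous conductor (toy forward model):
    V = p.(r - r0) / (4 pi sigma |r - r0|^3)."""
    d = sensors - loc
    r3 = np.linalg.norm(d, axis=1) ** 3
    return d @ moment / (4 * np.pi * sigma * r3)

# "Measured" data from a known source and conductivity (moment assumed known).
true_loc, true_sigma = np.array([0.02, 0.01, 0.04]), 0.33
moment = np.array([0.0, 0.0, 1e-8])
data = potentials(true_loc, true_sigma, moment) + 1e-8 * rng.normal(size=n_sens)

def residuals(theta):
    loc, sigma = theta[:3], theta[3]
    # scale residuals to order one for the optimizer
    return (potentials(loc, sigma, moment) - data) * 1e6

fit = least_squares(residuals, x0=np.array([0.01, 0.01, 0.03, 0.2]),
                    bounds=([-0.08, -0.08, -0.08, 0.05], [0.08, 0.08, 0.08, 1.0]))
print("estimated location (m):", np.round(fit.x[:3], 3))
print("estimated conductivity (S/m):", round(fit.x[3], 3))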
Whitham, Charlotte E. L.
2015-01-01
Accurate and spatially-appropriate ecosystem service valuations are vital for decision-makers and land managers. Many approaches for estimating ecosystem service value (ESV) exist, but their appropriateness under specific conditions or logistical limitations is not uniform. The most accurate techniques are therefore not always adopted. Six different assessment approaches were used to estimate ESV for a National Nature Reserve in southwest China, across different management zones. These approaches incorporated two different land-use land cover (LULC) maps and development of three economic valuation techniques, using globally or locally-derived data. The differences in ESV across management zones for the six approaches were largely influenced by the classifications of forest and farmland and how they corresponded with valuation coefficients. With realistic limits on access to time, data, skills and resources, and using acquired estimates from globally-relevant sources, the Buffer zone was estimated as the most valuable (2.494 million ± 1.371 million CNY yr-1 km-2) and the Non-protected zone as the least valuable (770,000 ± 4,600 CNY yr-1 km-2). However, for both LULC maps, when using the locally-based and more time and skill-intensive valuation approaches, this pattern was generally reversed. This paper provides a detailed practical example of how ESV can differ widely depending on the availability and appropriateness of LULC maps and valuation approaches used, highlighting pitfalls for the managers of protected areas. PMID:26086191
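In its simplest benefit-transfer form, the kind of valuation compared here reduces to summing land-cover areas weighted by per-class value coefficients. The sketch below is purely illustrative: the class names and coefficient values are invented, not the study's figures, which were drawn from globally or locally derived sources.

```python
# Illustrative only: per-class coefficients (CNY per km^2 per year) are made up.
value_coefficients = {"forest": 2.0e6, "farmland": 0.8e6, "water": 1.5e6}

def zone_esv(areas_km2, coefficients=value_coefficients):
    """Ecosystem service value of one management zone: sum of area x coefficient."""
    return sum(areas_km2[c] * coefficients[c] for c in areas_km2)

# Hypothetical areas for one zone, in km^2
buffer_zone = {"forest": 12.0, "farmland": 3.5, "water": 0.4}
print(zone_esv(buffer_zone))  # total CNY per year for the zone
```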
Atmospheric methane over Siberia: measurements from the 2014 YAK-AEROSIB aircraft campaign
NASA Astrophysics Data System (ADS)
Paris, Jean-Daniel; Pisso, Ignacio; Ancellet, Gérard; Law, Kathy; Arshinov, Mikhail Yu.; Belan, Boris D.; Nédélec, Philippe; Myhre, Cathrine Lund
2017-04-01
The YAK-AEROSIB program collects high-precision in-situ measurements of the vertical distributions of CO2, CH4, CO, O3, black carbon and ultrafine particles in the Siberian troposphere, as well as other parameters including aerosol lidar profiles, on a pan-Siberian aircraft transect. Recent efforts aim at better understanding the respective roles of CH4 emission processes in driving its large-scale atmospheric variability over the region. The October 2014 YAK-AEROSIB/MOCA campaign from Novosibirsk to Salekhard and over the Kara Sea and the Yamal Peninsula sampled air masses affected by local, regional and remote pollution. We analyse the contribution of local anthropogenic sources to measured CH4 enhancements, in relation to atmospheric mixing and transport conditions. Our analysis also attempts to detect a CH4 signal from methane sources in the Siberian shelf and the Arctic Ocean during low-level flight legs over the Kara Sea, using the airborne measurements and a Lagrangian model coupled to potential CH4 hydrate and geological sources. The measured CH4 concentrations do not contradict a potential source upstream of our measurements, but the interpretation is challenging due to a very low CH4 signal. The challenging question of the methane budget and its evolution in Siberia calls for new approaches. A new, more flexible generation of airborne measurements is now needed.
Demographic stability metrics for conservation prioritization of isolated populations.
Finn, Debra S; Bogan, Michael T; Lytle, David A
2009-10-01
Systems of geographically isolated habitat patches house species that occur naturally as small, disjunct populations. Many of these species are of conservation concern, particularly under the interacting influences of isolation and rapid global change. One potential conservation strategy is to prioritize the populations most likely to persist through change and act as sources for future recolonization of less stable localities. We propose an approach to classify long-term population stability (and, presumably, future persistence potential) with composite demographic metrics derived from standard population-genetic data. Stability metrics can be related to simple habitat measures for a straightforward method of classifying localities to inform conservation management. We tested these ideas in a system of isolated desert headwater streams with mitochondrial sequence data from 16 populations of a flightless aquatic insect. Populations exhibited a wide range of stability scores, which were significantly predicted by dry-season aquatic habitat size. This preliminary test suggests strong potential for our proposed method of classifying isolated populations according to persistence potential. The approach is complementary to existing methods for prioritizing local habitats according to diversity patterns and should be tested further in other systems and with additional loci to inform composite demographic stability scores.
47 CFR 11.18 - EAS Designations.
Code of Federal Regulations, 2011 CFR
2011-10-01
... Designations. (a) National Primary (NP) is a source of EAS Presidential messages. (b) Local Primary (LP) is a... as specified in its EAS Local Area Plan. If it is unable to carry out this function, other LP sources... broadcast stations in the Local Area. (c) State Primary (SP) is a source of EAS State messages. These...
47 CFR 11.18 - EAS Designations.
Code of Federal Regulations, 2010 CFR
2010-10-01
... Designations. (a) National Primary (NP) is a source of EAS Presidential messages. (b) Local Primary (LP) is a... as specified in its EAS Local Area Plan. If it is unable to carry out this function, other LP sources... broadcast stations in the Local Area. (c) State Primary (SP) is a source of EAS State messages. These...
Acoustic Emission Source Location Using a Distributed Feedback Fiber Laser Rosette
Huang, Wenzhu; Zhang, Wentao; Li, Fang
2013-01-01
This paper proposes an approach for acoustic emission (AE) source localization in a large marble stone using distributed feedback (DFB) fiber lasers. The aim of this study is to detect damage in structures such as those found in civil applications. The directional sensitivity of the DFB fiber laser is investigated by calculating a location coefficient using digital signal analysis: autocorrelation is used to extract the location coefficient from a periodic AE signal, and wavelet packet energy is calculated to obtain the location coefficient of a burst AE source. Normalization is applied to eliminate the influence of the distance and intensity of the AE source. A new location algorithm based on the location coefficient is then presented and tested to determine the location of an AE source using a Delta (Δ) DFB fiber laser rosette configuration. The advantages of the proposed algorithm over traditional methods based on fiber Bragg gratings (FBGs) include higher strain resolution for AE detection and the ability to take two different types of AE source into account for location. PMID:24141266
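A heavily hedged sketch of the flavour of processing described: extract a per-element "location coefficient" from either a periodic AE signal (via its autocorrelation) or a burst (via wavelet packet energies), then normalize across the rosette elements so that source distance and intensity cancel. The specific wavelet settings, the definition of the coefficients, and the use of PyWavelets are assumptions, not the paper's algorithm.

```python
import numpy as np
import pywt  # PyWavelets, assumed available

def periodic_coefficient(x):
    """Location coefficient of a periodic AE signal from its normalized autocorrelation peak."""
    x = x - x.mean()
    ac = np.correlate(x, x, mode="full")[len(x) - 1:]
    return ac[1:].max() / ac[0]

def burst_coefficient(x, wavelet="db4", level=3):
    """Location coefficient of a burst AE signal from wavelet packet energies
    (fraction of energy in the dominant packet; an illustrative choice)."""
    wp = pywt.WaveletPacket(x, wavelet=wavelet, maxlevel=level)
    energies = [np.sum(node.data ** 2) for node in wp.get_level(level)]
    return max(energies) / sum(energies)

def normalize(coeffs):
    """Remove source intensity/distance scaling across the rosette elements."""
    c = np.asarray(coeffs, dtype=float)
    return c / c.sum()
```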
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lin, C; Lin, H; Chuang, K
2016-06-15
Purpose: To monitor the activity distribution and needle position during and after implantation in operating rooms. Methods: Simulation studies were conducted to assess the feasibility of measuring the activity distribution and localizing seeds using the DuPECT system. The system consists of a LaBr3-based probe and planar detection heads, a collimation system, and a coincidence circuit. The two heads can be manipulated independently. Simplified Yb-169 brachytherapy seeds were used. A water-filled cylindrical phantom with a 40-mm diameter and 40-mm length was used to model a simplified prostate of an Asian male. Two simplified seeds were placed at a radial distance of 10 mm and a tangential distance of 10 mm from the center of the phantom. The probe head was arranged perpendicular to the planar head. Results for various imaging durations were analyzed, and the accuracy of the seed localization was assessed by calculating the centroid of the seed. Results: The reconstructed images indicate that the DuPECT system can measure the activity distribution and locate seeds dwelling in different positions intraoperatively. The calculated centroid was on average accurate to within the pixel size of 0.5 mm. The two sources were identified when the duration was longer than 15 s. The sensitivity measured in water was merely 0.07 cps/MBq. Conclusion: Preliminary results show that measurement of the activity distribution and seed localization are feasible intraoperatively using the DuPECT system, indicating that DuPECT has potential as an approach for dose distribution validation. The efficacy of activity distribution measurement and source localization using the DuPECT system will be evaluated in more realistic phantom studies (e.g., various attenuation materials and greater numbers of seeds) in future investigations.
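The seed-position accuracy check described (centroid of the reconstructed seed versus its known position) amounts to an intensity-weighted centroid over the reconstructed pixel grid. A minimal sketch follows; the array layout and pixel size argument are assumptions.

```python
import numpy as np

def seed_centroid(image, pixel_size_mm=0.5):
    """Intensity-weighted centroid of a reconstructed seed image (2-D array), in mm."""
    rows, cols = np.indices(image.shape)
    total = image.sum()
    r = (rows * image).sum() / total
    c = (cols * image).sum() / total
    return r * pixel_size_mm, c * pixel_size_mm

# Localization error against a known seed position (x_true, y_true), in mm:
# err = np.hypot(x_est - x_true, y_est - y_true)
```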
Mapping air quality zones for coastal urban centers.
Freeman, Brian; Gharabaghi, Bahram; Thé, Jesse; Munshed, Mohammad; Faisal, Shah; Abdullah, Meshal; Al Aseed, Athari
2017-05-01
This study presents a new method that incorporates modern air dispersion models allowing local terrain and land-sea breeze effects to be considered along with political and natural boundaries for more accurate mapping of air quality zones (AQZs) for coastal urban centers. This method uses local coastal wind patterns and key urban air pollution sources in each zone to more accurately calculate air pollutant concentration statistics. The new approach distributes virtual air pollution sources within each small grid cell of an area of interest and analyzes a puff dispersion model for a full year's worth of 1-hr prognostic weather data. The difference in wind patterns between coastal and inland areas creates significantly different skewness (S) and kurtosis (K) statistics for the annually averaged pollutant concentrations at ground-level receptor points for each grid cell. Plotting the S-K data highlights grouping of sources predominantly impacted by coastal winds versus inland winds. The application of the new method is demonstrated through a case study for the nation of Kuwait by developing new AQZs to support local air management programs. The zone boundaries established by the S-K method were validated by comparing the MM5 and WRF prognostic meteorological data used in the air dispersion modeling, a support vector machine classifier was trained to compare results with the graphical classification method, and final zones were compared with data collected from Earth observation satellites to confirm locations of high-exposure-risk areas. The resulting AQZs are more accurate and support efficient management strategies for air quality compliance targets affected by local coastal microclimates. A novel method to determine air quality zones in coastal urban areas is introduced using skewness (S) and kurtosis (K) statistics calculated from gridded concentration results of air dispersion models. The method identifies land-sea breeze effects that can be used to manage local air quality in areas of similar microclimates.
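A minimal sketch of the S-K classification step as described: from a year of hourly modelled concentrations at each grid cell's receptor, compute skewness and kurtosis and group cells in S-K space. The use of k-means for the grouping is an assumption made here for illustration; the study itself plots the S-K data and separately trains a support vector machine for comparison.

```python
import numpy as np
from scipy.stats import skew, kurtosis
from sklearn.cluster import KMeans

def sk_features(hourly_conc_by_cell):
    """One row per grid cell: (skewness, kurtosis) of a year of hourly concentrations."""
    return np.array([[skew(c), kurtosis(c)] for c in hourly_conc_by_cell])

def classify_zones(hourly_conc_by_cell, n_zones=2):
    """Group grid cells in S-K space, e.g. coastal-wind vs inland-wind dominated."""
    feats = sk_features(hourly_conc_by_cell)
    labels = KMeans(n_clusters=n_zones, n_init=10).fit_predict(feats)
    return feats, labels
```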
Application of Thin-Film Thermocouples to Localized Heat Transfer Measurements
NASA Technical Reports Server (NTRS)
Lepicovsky, J.; Bruckner, R. J.; Smith, F. A.
1995-01-01
The paper describes a proof-of-concept experiment on thin-film thermocouples used for localized heat transfer measurements applicable to experiments on hot parts of turbine engines. The paper has three main parts. The first part describes the thin-film sensors and manufacturing procedures. Attention is paid to connections between thin-film thermocouples and lead wires, which has been a source of problems in the past. The second part addresses the test arrangement and facility used for the heat transfer measurements modeling the conditions for upcoming warm turbine tests at NASA LeRC. The paper stresses the advantages of a modular approach to the test rig design. Finally, we present the results of bulk and local heat flow rate measurements, as well as overall heat transfer coefficients obtained from measurements in a narrow passage with an aspect ratio of 11.8. The comparison of bulk and local heat flow rates confirms applicability of thin-film thermocouples to upcoming warm turbine tests.
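As a reminder of the quantity being reported, the heat transfer coefficient follows from the measured heat flux and the wall-to-bulk temperature difference. The symbols below are generic, not the paper's notation.

```python
def heat_transfer_coefficient(q_w_per_m2, t_wall_c, t_bulk_c):
    """h = q / (T_wall - T_bulk), in W m^-2 K^-1, from a local heat flux measurement."""
    return q_w_per_m2 / (t_wall_c - t_bulk_c)
```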
The variance of the locally measured Hubble parameter explained with different estimators
DOE Office of Scientific and Technical Information (OSTI.GOV)
Odderskov, Io; Hannestad, Steen; Brandbyge, Jacob, E-mail: isho07@phys.au.dk, E-mail: sth@phys.au.dk, E-mail: jacobb@phys.au.dk
We study the expected variance of measurements of the Hubble constant, H0, as calculated in either linear perturbation theory or using non-linear velocity power spectra derived from N-body simulations. We compare the variance with that obtained by carrying out mock observations in the N-body simulations, and show that the estimator typically used for the local Hubble constant in studies based on perturbation theory is different from the one used in studies based on N-body simulations. The latter gives larger weight to distant sources, which explains why studies based on N-body simulations tend to obtain a smaller variance than that found from studies based on the power spectrum. Although both approaches result in a variance too small to explain the discrepancy between the value of H0 from CMB measurements and the value measured in the local universe, these considerations are important in light of the percent determination of the Hubble constant in the local universe.
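The point about estimator choice can be illustrated with two simple local-H0 estimators applied to the same mock catalogue of radial velocities and distances: an unweighted mean of v/d versus an estimator that weights distant sources more heavily. The distance-squared weighting below is an illustrative stand-in, not the specific estimator used in either class of study.

```python
import numpy as np

def h0_uniform(v, d):
    """Unweighted mean of v_i / d_i over the sources in a mock catalogue."""
    return np.mean(v / d)

def h0_distance_weighted(v, d):
    """Estimator giving larger weight to distant sources (illustrative d^2 weights)."""
    w = d ** 2
    return np.sum(w * v / d) / np.sum(w)

# The scatter of each estimator across many mock observers gives its variance, e.g.:
# h_samples = [h0_distance_weighted(v_i, d_i) for v_i, d_i in mock_catalogues]
# variance = np.var(h_samples)
```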
1998-01-01
to their large unit size and to experimental difficulties in determining geometries of carbon-based complex materials because of the weak X-ray ... qualitative relationship between the calculated local density of states and the experimental X-ray photoelectron spectra (XPS) and the Bremsstrahlung ... from interaction schemes and allows complete data sets from different sources (neutron or X-ray diffraction, chemical constraints) to be fitted. In
NASA Astrophysics Data System (ADS)
Zhai, Guang; Shirzaei, Manoochehr
2017-12-01
Geodetic observations of surface deformation associated with volcanic activities can be used to constrain volcanic source parameters and their kinematics. Simple analytical models, such as point and spherical sources, are widely used to model deformation data. The inherent nature of oversimplified model geometries makes them unable to explain fine details of surface deformation. Current nonparametric, geometry-free inversion approaches resolve the distributed volume change, assuming it varies smoothly in space, which may detect artificial volume change outside magmatic source regions. To obtain a physically meaningful representation of an irregular volcanic source, we devise a new sparsity-promoting modeling scheme assuming active magma bodies are well-localized melt accumulations, namely, outliers in the background crust. First, surface deformation data are inverted using a hybrid L1- and L2-norm regularization scheme to solve for sparse volume change distributions. Next, a boundary element method is implemented to solve for the displacement discontinuity distribution of the reservoir, which satisfies a uniform pressure boundary condition. The inversion approach is thoroughly validated using benchmark and synthetic tests, of which the results show that source dimension, depth, and shape can be recovered appropriately. We apply this modeling scheme to deformation observed at Kilauea summit for periods of uplift and subsidence leading to and following the 2007 Father's Day event. We find that the magmatic source geometries for these periods are statistically distinct, which may be an indicator that magma is released from isolated compartments due to large differential pressure leading to the rift intrusion.
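A hedged sketch of the first stage described, the hybrid L1/L2-regularized inversion for a sparse volume-change distribution, written here as an elastic-net problem on a linear forward operator G that maps voxel volume changes to surface displacements. The solver, weighting, and parameter values are illustrative assumptions, not the authors' implementation, and the boundary element stage is omitted.

```python
import numpy as np
from sklearn.linear_model import ElasticNet

def invert_sparse_volume_change(G, d, alpha=1e-3, l1_ratio=0.9):
    """Solve min ||d - G m||^2 + mixed L1 (sparsity) / L2 penalty on m.

    G : (n_obs, n_voxels) Green's function matrix (surface displacement per
        unit volume change in each voxel); d : observed surface displacements.
    Returns a sparse volume-change distribution m.
    """
    model = ElasticNet(alpha=alpha, l1_ratio=l1_ratio,
                       fit_intercept=False, max_iter=10000)
    model.fit(G, d)
    return model.coef_
```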
Harries, Megan; Bukovsky-Reyes, Santiago; Bruno, Thomas J
2016-01-15
This paper details the sampling methods used with the field portable porous layer open tubular cryoadsorption (PLOT-cryo) approach, described in Part I of this two-part series, applied to several analytes of interest. We conducted tests with coumarin and 2,4,6-trinitrotoluene (two solutes that were used in initial development of PLOT-cryo technology), naphthalene, aviation turbine kerosene, and diesel fuel, on a variety of matrices and test beds. We demonstrated that these analytes can be easily detected and reliably identified using the portable unit for analyte collection. By leveraging efficiency-boosting temperature control and the high flow rate multiple capillary wafer, very short collection times (as low as 3s) yielded accurate detection. For diesel fuel spiked on glass beads, we determined a method detection limit below 1 ppm. We observed greater variability among separate samples analyzed with the portable unit than previously documented in work using the laboratory-based PLOT-cryo technology. We identify three likely sources that may help explain the additional variation: the use of a compressed air source to generate suction, matrix geometry, and variability in the local vapor concentration around the sampling probe as solute depletion occurs both locally around the probe and in the test bed as a whole. This field-portable adaptation of the PLOT-cryo approach has numerous and diverse potential applications. Published by Elsevier B.V.
NASA Astrophysics Data System (ADS)
Galisteo-López, Juan F.
2017-02-01
Controlling the emission of a light source demands acting on its local photonic environment via the local density of states (LDOS). Approaches to exert such control on large-scale samples, commonly relying on self-assembly methods, usually lack precise positioning of the emitter within the material. Alternatively, expensive and time-consuming techniques can be used to produce samples of small dimensions where deterministic control of the emitter position can be achieved. In this work we present a full solution-process approach to fabricate photonic architectures containing nano-emitters whose position can be controlled with nanometer precision over square-millimeter regions. By a combination of spin and dip coating we fabricate one-dimensional (1D) nanoporous photonic crystals, whose potential in different fields such as photovoltaics or sensing has been previously reported, containing monolayers of luminescent polymeric nanospheres. We demonstrate how, by modifying the position of the emitters within the photonic crystal, their emission properties (photoluminescence intensity and angular distribution) can be deterministically modified. Further, the nano-emitters can be used as a probe to study the LDOS distribution within these systems with a spatial resolution of 25 nm (provided by the probe size), carrying out macroscopic measurements over square-millimeter regions. Routes to enhance light-matter interaction in this kind of system by combining it with metallic surfaces are finally discussed.
Puentedura, Emilio J; Flynn, Timothy
2016-07-01
Teaching people with chronic low back pain (CLBP) about the neurobiology and neurophysiology of their pain is referred to as pain neuroscience education (PNE). There is growing evidence that when PNE is provided to patients with chronic musculoskeletal pain, it can result in decreased pain, pain catastrophization, disability, and improved physical performance. Because the aim of PNE is to shift the patient's focus from the tissues in the low back as the source of their pain to the brain's interpretation of inputs, many clinicians could mistakenly believe that PNE should be a "hands-off," education-only approach. An argument can be made that by providing manual therapy or exercise to address local tissue pathology, the patient's focus could be brought back to the low back tissues as the source of their problem. In this narrative literature review, we present the case for a balanced approach that combines PNE with manual therapy and exercise by considering how manual therapy can also be incorporated for interventions with patients with CLBP. We propose that as well as producing local mechanical effects, providing manual therapy within a PNE context can be seen as meeting or perhaps enhancing patient expectations, and also refreshing or sharpening body schema maps within the brain. Ideally, all of this should lead to better outcomes in patients with CLBP.
Engraftment of enteric neural progenitor cells into the injured adult brain.
Belkind-Gerson, Jaime; Hotta, Ryo; Whalen, Michael; Nayyar, Naema; Nagy, Nandor; Cheng, Lily; Zuckerman, Aaron; Goldstein, Allan M; Dietrich, Jorg
2016-01-25
A major area of unmet need is the development of strategies to restore neuronal network systems and to recover brain function in patients with neurological disease. The use of cell-based therapies remains an attractive approach, but its application has been challenging due to the lack of suitable cell sources, ethical concerns, and immune-mediated tissue rejection. We propose an innovative approach that utilizes gut-derived neural tissue for cell-based therapies following focal or diffuse central nervous system injury. Enteric neuronal stem and progenitor cells, able to differentiate into neuronal and glial lineages, were isolated from the postnatal enteric nervous system and propagated in vitro. Gut-derived neural progenitors, genetically engineered to express fluorescent proteins, were transplanted into the injured brain of adult mice. Using different models of brain injury in combination with either local or systemic cell delivery, we show that transplanted enteric neuronal progenitor cells survive, proliferate, and differentiate into neuronal and glial lineages in vivo. Moreover, transplanted cells migrate extensively along neuronal pathways and appear to modulate the local microenvironment to stimulate endogenous neurogenesis. Our findings suggest that enteric nervous system derived cells represent a potential source for tissue regeneration in the central nervous system. Further studies are needed to validate these findings and to explore whether autologous gut-derived cell transplantation into the injured brain can result in functional neurologic recovery.
Harris, Claire; Allen, Kelly; Ramsey, Wayne; King, Richard; Green, Sally
2018-05-30
This is the final paper in a thematic series reporting a program of Sustainability in Health care by Allocating Resources Effectively (SHARE) in a local healthcare setting. The SHARE Program was established to explore a systematic, integrated, evidence-based organisation-wide approach to disinvestment in a large Australian health service network. This paper summarises the findings, discusses the contribution of the SHARE Program to the body of knowledge and understanding of disinvestment in the local healthcare setting, and considers implications for policy, practice and research. The SHARE program was conducted in three phases. Phase One was undertaken to understand concepts and practices related to disinvestment and the implications for a local health service and, based on this information, to identify potential settings and methods for decision-making about disinvestment. The aim of Phase Two was to implement and evaluate the proposed methods to determine which were sustainable, effective and appropriate in a local health service. A review of the current literature incorporating the SHARE findings was conducted in Phase Three to contribute to the understanding of systematic approaches to disinvestment in the local healthcare context. SHARE differed from many other published examples of disinvestment in several ways: by seeking to identify and implement disinvestment opportunities within organisational infrastructure rather than as standalone projects; considering disinvestment in the context of all resource allocation decisions rather than in isolation; including allocation of non-monetary resources as well as financial decisions; and focusing on effective use of limited resources to optimise healthcare outcomes. The SHARE findings provide a rich source of new information about local health service decision-making, in a level of detail not previously reported, to inform others in similar situations. Multiple innovations related to disinvestment were found to be acceptable and feasible in the local setting. Factors influencing decision-making, implementation processes and final outcomes were identified; and methods for further exploration, or avoidance, in attempting disinvestment in this context are proposed based on these findings. The settings, frameworks, models, methods and tools arising from the SHARE findings have potential to enhance health care and patient outcomes.
Mooney, John D; Holmes, John; Gavens, Lucy; de Vocht, Frank; Hickman, Matt; Lock, Karen; Brennan, Alan
2017-10-18
The considerable challenges associated with implementing national-level alcohol policies have encouraged a renewed focus on the prospects for local-level policies in the UK and elsewhere. We adopted a case study approach to identify the major characteristics and drivers of differences in the patterns of local alcohol policies and services in two contrasting local authority (LA) areas in England. Data were collected via thirteen semi-structured interviews with key informants (including public health, licensing and trading standards) and documentary analysis, including harm reduction strategies and statements of licensing policy. A two-stage thematic analysis was used to categorize all relevant statements into seven over-arching themes, by which document sources were then also analysed. Three of the seven over-arching themes (drink environment, treatment services and barriers and facilitators) provided the most explanatory detail informing the contrasting policy responses of the two LAs: LA1 pursued a risk-informed strategy via a specialist police team working proactively with problem premises and screening systematically to identify riskier drinking. LA2 adopted a more upstream regulatory approach around restrictions on availability with less emphasis on co-ordinated screening and treatment measures. New powers over alcohol policy for LAs in England can produce markedly different policies for reducing alcohol-related harm. These differences are rooted in economic, opportunistic, organisational and personnel factors particular to the LAs themselves and may lead to closely tailored solutions in some policy areas and poorer co-ordination and attention in others.
NASA Astrophysics Data System (ADS)
Fagerholm, Nora; Käyhkö, Niina; Van Eetvelde, Veerle
2013-09-01
In many developing countries, political documentation acknowledges the crucial elements of participation and spatiality for effective land use planning. However, operative approaches to spatial data inclusion and representation in participatory land management are often lacking. In this paper, we apply and develop an integrated landscape characterization approach to enhance spatial knowledge generation about the complex human-nature interactions in landscapes in the context of Zanzibar, Tanzania. We apply an integrated landscape conceptualization as a theoretical framework where the expert and local knowledge can meet in spatial context. The characterization is based on combining multiple data sources in GIS, and involves local communities and their local spatial knowledge since the beginning into the process. Focusing on the expected information needs for community forest management, our characterization integrates physical landscape features and retrospective landscape change data with place-specific community knowledge collected through participatory GIS techniques. The characterization is established in a map form consisting of four themes and their synthesis. The characterization maps are designed to support intuitive interpretation, express the inherently uncertain nature of the data, and accompanied by photographs to enhance communication. Visual interpretation of the characterization mediates information about the character of areas and places in the studied local landscape, depicting the role of forest resources as part of the landscape entity. We conclude that landscape characterization applied in GIS is a highly potential tool for participatory land and resource management, where spatial argumentation, stakeholder communication, and empowerment are critical issues.
Iwami, Michiyo; Ahmad, Raheelah; Castro-Sánchez, Enrique; Birgand, Gabriel; Johnson, Alan P; Holmes, Alison
2017-01-01
Objective: (1) To assess the extent to which current English national regulations/policies/guidelines and local hospital practices align with indicators suggested by a European review of effective strategies for infection prevention and control (IPC); (2) to examine the capacity of local hospitals to report on the indicators and current use of data to inform IPC management and practice. Design: A national and local-level analysis of the 27 indicators was conducted. At the national level, documentary review of regulations/policies/guidelines was conducted. At the local level data collection comprised: (a) review of documentary sources from 14 hospitals, to determine the capacity to report performance against these indicators; (b) qualitative interviews with 3 senior managers from 5 hospitals and direct observation of hospital wards to find out if these indicators are used to improve IPC management and practice. Setting: 2 acute English National Health Service (NHS) trusts and 1 NHS foundation trust (14 hospitals). Participants: 3 senior managers from 5 hospitals for qualitative interviews. Primary and secondary outcome measures: As primary outcome measures, a ‘Red-Amber-Green’ (RAG) rating was developed reflecting how well the indicators were included in national documents or their availability at the local organisational level. The current use of the indicators to inform IPC management and practice was also assessed. The main secondary outcome measure is any inconsistency between national and local RAG rating results. Results: National regulations/policies/guidelines largely cover the suggested European indicators. The ability of individual hospitals to report some of the indicators at ward level varies across staff groups, which may mask required improvements. A reactive use of staffing-related indicators was observed rather than the suggested prospective strategic approach for IPC management. Conclusions: For effective patient safety and infection prevention in English hospitals, routine and proactive approaches need to be developed. Our approach to evaluation can be extended to other country settings. PMID:28115331
NASA Astrophysics Data System (ADS)
Rittgers, J. B.; Revil, A.; Planes, T.; Mooney, M. A.; Koelewijn, A. R.
2015-02-01
New methods are required to combine the information contained in passive electrical and seismic signals to detect, localize and monitor hydromechanical disturbances in porous media. We propose a field experiment showing how passive seismic and electrical data can be combined to detect a preferential flow path associated with internal erosion in an earthen dam. Continuous passive seismic and electrical (self-potential) monitoring data were recorded during a 7-d full-scale levee (earthen embankment) failure test, conducted in Booneschans, Netherlands in 2012. Spatially coherent acoustic emission events and the development of a self-potential anomaly, associated with induced concentrated seepage and internal erosion phenomena, were identified and imaged near the downstream toe of the embankment, in an area that subsequently developed a series of concentrated water flows and sand boils, and where liquefaction of the embankment toe eventually developed. We present a new 4-D grid-search algorithm for acoustic emission localization in both time and space, and the application of the localization results to add spatially varying constraints to time-lapse 3-D modelling of self-potential data in terms of source current localization. Seismic signal localization results are utilized to build a set of time-invariant yet spatially varying model weights used for the inversion of the self-potential data. The combination of these two passive techniques yields results that are more consistent, in terms of focused groundwater flow, with visual observations on the embankment. This approach to geophysical monitoring of earthen embankments provides an improved means for early detection and imaging of the development of embankment defects associated with concentrated seepage and internal erosion phenomena. The same approach can be used to detect various types of hydromechanical disturbances at larger scales.
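A minimal sketch of a 4-D (x, y, z, origin-time) grid search of the kind described for acoustic emission localization: for each candidate source and origin time, predict arrival times at the sensors and keep the grid node minimizing the misfit to the picked onsets. The homogeneous velocity model and L2 misfit are simplifying assumptions.

```python
import numpy as np
from itertools import product

def locate_ae_event(onsets, sensors, grid_xyz, t0_grid, velocity):
    """Grid search over source position and origin time.

    onsets  : (n_sensors,) picked arrival times
    sensors : (n_sensors, 3) sensor coordinates
    grid_xyz: iterable of candidate (x, y, z) source positions
    t0_grid : iterable of candidate origin times
    """
    best, best_misfit = None, np.inf
    for (x, y, z), t0 in product(grid_xyz, t0_grid):
        dist = np.linalg.norm(sensors - np.array([x, y, z]), axis=1)
        predicted = t0 + dist / velocity
        misfit = np.sum((onsets - predicted) ** 2)
        if misfit < best_misfit:
            best, best_misfit = (x, y, z, t0), misfit
    return best, best_misfit
```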
Subband Approach to Bandlimited Crosstalk Cancellation System in Spatial Sound Reproduction
NASA Astrophysics Data System (ADS)
Bai, Mingsian R.; Lee, Chih-Chung
2006-12-01
A crosstalk cancellation system (CCS) plays a vital role in spatial sound reproduction using multichannel loudspeakers. However, the technique is still not widely used in practical applications because of its heavy computational load. To reduce this load, a bandlimited CCS based on a subband filtering approach is presented in this paper. A pseudo-quadrature mirror filter (QMF) bank is employed in the implementation of the CCS filters, which are bandlimited to 6 kHz, where human sound localization is most sensitive. In addition, a frequency-dependent regularization scheme is adopted in designing the CCS inverse filters. To justify the proposed system, subjective listening experiments were undertaken in an anechoic room. The experiments include two parts: the source localization test and the sound quality test. Analysis of variance (ANOVA) is applied to process the data and assess the statistical significance of the subjective experiments. The results indicate that the bandlimited CCS performed comparably well as the fullband CCS, whereas the computational load was reduced by approximately eighty percent.
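A hedged sketch of frequency-dependent regularized inversion as used for CCS filter design: at each frequency bin within the band limit, invert the acoustic transfer (plant) matrix C with a Tikhonov term beta(f). The matrix shapes and regularization schedule are assumptions, and the QMF bank implementation is omitted.

```python
import numpy as np

def ccs_inverse_filters(C_f, beta_f):
    """Regularized inverse of the plant matrix at each frequency bin.

    C_f    : (n_bins, n_ears, n_speakers) complex transfer matrices
    beta_f : (n_bins,) frequency-dependent regularization parameters
    Returns H_f : (n_bins, n_speakers, n_ears) crosstalk-cancellation filters.
    """
    n_bins, n_ears, n_spk = C_f.shape
    H_f = np.zeros((n_bins, n_spk, n_ears), dtype=complex)
    for k in range(n_bins):
        C = C_f[k]
        # H = C^H (C C^H + beta I)^-1  (Tikhonov-regularized pseudo-inverse)
        H_f[k] = C.conj().T @ np.linalg.inv(C @ C.conj().T + beta_f[k] * np.eye(n_ears))
    return H_f
```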
DOT National Transportation Integrated Search
2014-10-01
Several Virginia localities have used local funding and financing sources to build new roads or complete major street : improvement projects when state and/or federal funding was not available. Many others have combined local funding sources : with s...
NASA Astrophysics Data System (ADS)
Marble, J.; Carroll, K. C.; Brusseau, M. L.; Plaschke, M.; Brinker, F.
2013-12-01
Source zones located in relatively deep, low-permeability formations provide special challenges for remediation. Application of permeable reactive barriers, in-situ thermal, or electrokinetic methods would be expensive and generally impractical. In addition, the use of enhanced mass-removal approaches based on reagent injection (e.g., ISCO, enhanced-solubility reagents) is likely to be ineffective. One possible approach for such conditions is to create a persistent treatment zone for purposes of containment. This study examines the efficacy of this approach for containment and treatment of contaminants in a lower permeability zone using potassium permanganate (KMnO4) as the reactant. A localized 1,1-dichloroethene (DCE) source zone is present in a section of the Tucson International Airport Area (TIAA) Superfund Site. Characterization studies identified the source of DCE to be located in lower-permeability strata adjacent to the water table. Bench-scale studies were conducted using core material collected from boreholes drilled at the site to measure DCE concentrations and determine natural oxidant demand. The reactive zone was created by injecting ~1.7% KMnO4 solution into multiple wells screened within the lower-permeability unit. The site has been monitored for ~8 years to characterize the spatial distribution of DCE and permanganate. KMnO4 continues to persist at the site, demonstrating successful creation of a long-term reactive zone. Additionally, the footprint of the DCE contaminant plume in groundwater has decreased continuously with time. This project illustrates the application of ISCO as a reactive-treatment system for lower-permeability source zones, which appears to effectively mitigate persistent mass flux into groundwater.
Detailed Aggregate Resources Study, Dry Lake Valley, Nevada.
1981-05-29
LEDGE-ROCK SOURCES SUPPLIED COARSE AGGREGATES; LOCAL SAND SOURCES (GENERALLY COLLECTED WITHIN A FEW MILES OF CORRESPONDING LEDGE-ROCK SOURCES) SUPPLIED FINE ... CYLINDERS. DRYING SHRINKAGE ... COMPRESSIVE AND TENSILE ST...
Mills, Travis; Lalancette, Marc; Moses, Sandra N; Taylor, Margot J; Quraan, Maher A
2012-07-01
Magnetoencephalography provides precise information about the temporal dynamics of brain activation and is an ideal tool for investigating rapid cognitive processing. However, in many cognitive paradigms visual stimuli are used, which evoke strong brain responses (typically 40-100 nAm in V1) that may impede the detection of weaker activations of interest. This is particularly a concern when beamformer algorithms are used for source analysis, due to artefacts such as "leakage" of activation from the primary visual sources into other regions. We have previously shown (Quraan et al. 2011) that we can effectively reduce leakage patterns and detect weak hippocampal sources by subtracting the functional images derived from the experimental task and a control task with similar stimulus parameters. In this study we assess the performance of three different subtraction techniques. In the first technique we follow the same post-localization subtraction procedures as in our previous work. In the second and third techniques, we subtract the sensor data obtained from the experimental and control paradigms prior to source localization. Using simulated signals embedded in real data, we show that when beamformers are used, subtraction prior to source localization allows for the detection of weaker sources and higher localization accuracy. The improvement in localization accuracy exceeded 10 mm at low signal-to-noise ratios, and sources down to below 5 nAm were detected. We applied our techniques to empirical data acquired with two different paradigms designed to evoke hippocampal and frontal activations, and demonstrated our ability to detect robust activations in both regions with substantial improvements over image subtraction. We conclude that removal of the common-mode dominant sources through data subtraction prior to localization further improves the beamformer's ability to project the n-channel sensor-space data to reveal weak sources of interest and allows more accurate localization.
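A hedged sketch of subtracting control-condition sensor data before beamforming, using a generic LCMV (linearly constrained minimum variance) formulation. The actual pipeline (trial matching, vector vs scalar beamformer, covariance regularization strategy) is not specified here, and those choices below are assumptions.

```python
import numpy as np

def lcmv_power(data, leadfield, reg=0.05):
    """Source power at one location from an LCMV beamformer (scalar leadfield).

    data     : (n_channels, n_times) sensor time series
    leadfield: (n_channels,) forward field of a source at the location of interest
    """
    C = np.cov(data)                                          # sensor covariance
    C = C + reg * np.trace(C) / C.shape[0] * np.eye(C.shape[0])  # diagonal loading
    Ci = np.linalg.inv(C)
    l = leadfield
    w = Ci @ l / (l @ Ci @ l)                                 # beamformer weights
    return w @ C @ w

def subtracted_source_power(task_data, control_data, leadfield):
    """Subtract (matched) control sensor data from task data before localization."""
    diff = task_data - control_data
    return lcmv_power(diff, leadfield)
```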
Localization of incipient tip vortex cavitation using ray based matched field inversion method
NASA Astrophysics Data System (ADS)
Kim, Dongho; Seong, Woojae; Choo, Youngmin; Lee, Jeunghoon
2015-10-01
Cavitation of a marine propeller is one of the main contributors to broadband radiated ship noise. In this research, an algorithm for the source localization of incipient vortex cavitation is suggested. Incipient cavitation is modeled as a monopole-type source, and a matched-field inversion method is applied to find the source position by comparing the spatial correlation between measured and replicated pressure fields at the receiver array. The accuracy of source localization is improved by a broadband matched-field inversion technique that enhances correlation by incoherently averaging the correlations of individual frequencies. The suggested localization algorithm is verified with a known virtual source and a model test conducted in the Samsung ship model basin cavitation tunnel. It is found that the suggested algorithm enables efficient localization of incipient tip vortex cavitation from a few pressure measurements on the outer hull above the propeller and is practically applicable to the model-scale experiments typically performed in a cavitation tunnel at the early design stage.
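A minimal sketch of the broadband matched-field step described: at each candidate source position, correlate the measured pressure vector with a replica field at every frequency and incoherently average the normalized correlations. Replica generation (the ray model) is left as a placeholder function.

```python
import numpy as np

def bartlett_correlation(p_meas, p_replica):
    """Normalized correlation between measured and replica fields at one frequency."""
    num = np.abs(np.vdot(p_replica, p_meas)) ** 2
    return num / (np.vdot(p_replica, p_replica).real * np.vdot(p_meas, p_meas).real)

def broadband_ambiguity(p_meas_by_freq, replica_fn, candidates):
    """Incoherent average over frequencies of the correlation, per candidate position.

    p_meas_by_freq: dict {frequency: measured complex pressure vector}
    replica_fn(pos, f) -> replica pressure vector (placeholder for a ray model)
    """
    surface = []
    for pos in candidates:
        corr = [bartlett_correlation(p_f, replica_fn(pos, f))
                for f, p_f in p_meas_by_freq.items()]
        surface.append(np.mean(corr))
    return np.array(surface)   # argmax gives the estimated cavitation position
```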
HealthLit4Kids study protocol; crossing boundaries for positive health literacy outcomes.
Nash, Rose; Elmer, Shandell; Thomas, Katy; Osborne, Richard; MacIntyre, Kate; Shelley, Becky; Murray, Linda; Harpur, Siobhan; Webb, Diane
2018-06-05
Health attitudes and behaviours formed during childhood greatly influence adult health patterns. This paper describes the research and development protocol for a school-based health literacy program. The program, entitled HealthLit4Kids, provides teachers with resources and supports them to explore the concept of health literacy within their school community, through classroom activities and family and community engagement. HealthLit4Kids uses a sequential mixed-methods design involving convenience sampling and pre- and post-intervention measures from multiple sources. Data sources include individual teacher health literacy knowledge, skills and experience; health literacy responsiveness of the school environment (HeLLO Tas); focus groups (parents and teachers); teacher reflections; workshop data and evaluations; and children's health literacy artefacts and descriptions. The HealthLit4Kids protocol draws explicitly on the eight Ophelia principles: outcomes focused, equity driven, co-designed, needs-diagnostic, driven by local wisdom, sustainable, responsive, systematically applied. By acting at two levels, (1) the whole school community and (2) the individual classroom, the HealthLit4Kids program ensures a holistic approach to health literacy, raises awareness of its importance, and provides a deeper exploration of health literacy in the school environment. The school-wide health literacy assessment and resultant action plan generate the annual health literacy targets for each participating school. Health promotion cannot be meaningfully achieved in isolation from health literacy. Whilst health promotion activities are common in the school environment, health literacy is not a familiar concept. HealthLit4Kids recognizes that a one-size-fits-all approach seldom works to address health literacy. Long-term health outcomes are reliant on embedded, locally owned and co-designed programs which respond to local health and health literacy needs.
Awakening the BALROG: BAyesian Location Reconstruction Of GRBs
NASA Astrophysics Data System (ADS)
Burgess, J. Michael; Yu, Hoi-Fung; Greiner, Jochen; Mortlock, Daniel J.
2018-05-01
The accurate spatial location of gamma-ray bursts (GRBs) is crucial for both accurately characterizing their spectra and follow-up observations by other instruments. The Fermi Gamma-ray Burst Monitor (GBM) has the largest field of view for detecting GRBs as it views the entire unocculted sky, but as a non-imaging instrument it relies on the relative count rates observed in each of its 14 detectors to localize transients. Improving its ability to accurately locate GRBs and other transients is vital to the paradigm of multimessenger astronomy, including the electromagnetic follow-up of gravitational wave signals. Here we present the BAyesian Location Reconstruction Of GRBs (BALROG) method for localizing and characterizing GBM transients. Our approach eliminates the systematics of previous approaches by simultaneously fitting for the location and spectrum of a source. It also correctly incorporates the uncertainties in the location of a transient into the spectral parameters and produces reliable positional uncertainties for both well-localized sources and those for which the GBM data cannot effectively constrain the position. While computationally expensive, BALROG can be implemented to enable quick follow-up of all GBM transient signals. Also, we identify possible response problems that require attention and caution when using standard, public GBM detector response matrices. Finally, we examine the effects of including the uncertainty in location on the spectral parameters of GRB 080916C. We find that spectral parameters change and no extra components are required when these effects are included in contrast to when we use a fixed location. This finding has the potential to alter both the GRB spectral catalogues and the reported spectral composition of some well-known GRBs.
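A hedged sketch of the central idea, simultaneous fitting of location and spectrum: a log-posterior over sky position and spectral parameters in which the expected counts in each detector depend on both, via forward-folding through direction-dependent responses. The functions `response_fn` and `spectrum_fn` are placeholders, not the GBM response machinery, and the Poisson likelihood ignores background modelling.

```python
import numpy as np

def log_posterior(theta, observed_counts, response_fn, spectrum_fn, log_prior):
    """theta = (ra, dec, *spectral_params); Poisson log-likelihood summed over detectors."""
    ra, dec, *spec = theta
    lp = log_prior(theta)
    if not np.isfinite(lp):
        return -np.inf
    for det, counts in enumerate(observed_counts):
        # Expected counts: the photon spectrum folded through this detector's
        # response for a source at (ra, dec); location and spectrum are coupled here.
        expected = response_fn(det, ra, dec) @ spectrum_fn(spec)
        lp += np.sum(counts * np.log(expected) - expected)
    return lp

# theta would then be sampled (e.g. MCMC or nested sampling), so that positional
# uncertainty propagates into the spectral parameters and vice versa.
```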
Water-sanitation-hygiene mapping: an improved approach for data collection at local level.
Giné-Garriga, Ricard; de Palencia, Alejandro Jiménez-Fernández; Pérez-Foguet, Agustí
2013-10-01
Strategic planning and appropriate development and management of water and sanitation services are strongly supported by accurate and accessible data. If adequately exploited, these data might assist water managers with performance monitoring, benchmarking comparisons, policy progress evaluation, resources allocation, and decision making. A variety of tools and techniques are in place to collect such information. However, some methodological weaknesses arise when developing an instrument for routine data collection, particularly at local level: i) comparability problems due to heterogeneity of indicators, ii) poor reliability of collected data, iii) inadequate combination of different information sources, and iv) statistical validity of produced estimates when disaggregated into small geographic subareas. This study proposes an improved approach for water, sanitation and hygiene (WASH) data collection at decentralised level in low-income settings, as an attempt to overcome previous shortcomings. The ultimate aim is to provide local policymakers with strong evidence to inform their planning decisions. The survey design takes Water Point Mapping (WPM) as a starting point to record all available water sources at a particular location. This information is then linked to data produced by a household survey. Different survey instruments are implemented to collect reliable data by employing a variety of techniques, such as structured questionnaires, direct observation and water quality testing. The collected data is finally validated through simple statistical analysis, which in turn produces valuable outputs that might feed into the decision-making process. In order to demonstrate the applicability of the method, outcomes produced from three different case studies (Homa Bay District, Kenya; Kibondo District, Tanzania; and the Municipality of Manhiça, Mozambique) are presented. Copyright © 2013 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Elangasinghe, M. A.; Dirks, K. N.; Singhal, N.; Costello, S. B.; Longley, I.; Salmond, J. A.
2014-02-01
Air pollution from the transport sector has a marked effect on human health, so isolating the pollutant contribution from a roadway is important in understanding its impact on the local neighbourhood. This paper proposes a novel technique based on a semi-empirical air pollution model to quantify the impact of a roadway on the air quality of a local neighbourhood using ambient records from a single air pollution monitor. We demonstrate the proposed technique using a case study, in which we quantify the contribution from a major highway with respect to the local background concentration in Auckland, New Zealand. Comparing the diurnal variation of the model-separated background contribution with real measurements from a site upwind of the highway shows that the model estimates are reliable. Amongst all of the pollutants considered, the best estimations of the background were achieved for nitrogen oxides. Although the multi-pronged approach worked well for predominantly vehicle-related pollutants, it could not be used effectively to isolate emissions of PM10 due to the complex and less predictable influence of natural sources (such as marine aerosols). The proposed approach is useful in situations where ambient records from an upwind background station are not available (as required by other techniques) and is potentially transferable to situations such as intersections and arterial roads. Applying this technique to longer time series could help to understand the changes in pollutant concentrations from road and background sources for different emission scenarios, years or seasons. Modelling results also show the potential of such hybrid semi-empirical models to contribute to our understanding of the physical parameters determining air quality and to validate emissions inventory data.
NASA Astrophysics Data System (ADS)
Chen, L. A.; Doddridge, B. G.; Dickerson, R. R.
2001-12-01
As the primary field experiment for the Maryland Aerosol Research and CHaracterization (MARCH-Atlantic) study, chemically speciated PM2.5 has been sampled at Fort Meade (FME, 39.10° N 76.74° W) since July 1999. FME is suburban, located in the middle of the bustling Baltimore-Washington corridor, which is generally downwind of the highly industrialized Midwest. Due to this unique sampling location, the PM2.5 observed at FME is expected to derive from both local and regional sources, with relative contributions varying temporally. This variation, believed to be largely controlled by the meteorology, influences day-to-day or seasonal profiles of PM2.5 mass concentration and chemical composition. Air parcel back trajectories, which describe the path of air parcels traveling backward in time from the site (receptor), reflect changes in the synoptic meteorological conditions. In this paper, an ensemble back trajectory method is employed to study the meteorology associated with each high/low PM2.5 episode in different seasons. For every sampling day, the residence time of air parcels within the eastern US at a 1° x 1° x 500 m geographic resolution can be estimated in order to resolve areas likely dominating the production of various PM2.5 components. Local sources are found to be more dominant in winter than in summer. Factor analysis is based on a mass balance approach, providing useful insights on air pollution data. Here, a newly developed factor analysis model (UNMIX) is used to extract source profiles and contributions from the speciated PM2.5 data. Combining the model results with the ensemble back trajectory method improves the understanding of the source regions and helps partition the contributions from local or more distant areas. http://www.meto.umd.edu/~bruce/MARCH-Atl.html
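A minimal sketch of the ensemble residence-time calculation described: for each back-trajectory, accumulate the time its points spend in each 1° x 1° cell below 500 m, then sum over the ensemble for a sampling day. The trajectory data format and time step are assumptions.

```python
import numpy as np

def residence_time(trajectories, lat_edges, lon_edges, dt_hours=1.0, max_alt_m=500.0):
    """Hours spent by back-trajectory air parcels in each grid cell below max_alt_m.

    trajectories: iterable of (lat, lon, alt) arrays, one triple of arrays per trajectory
    lat_edges, lon_edges: 1-degree bin edges covering the eastern US
    """
    grid = np.zeros((len(lat_edges) - 1, len(lon_edges) - 1))
    for lat, lon, alt in trajectories:
        low = alt <= max_alt_m
        h, _, _ = np.histogram2d(lat[low], lon[low], bins=[lat_edges, lon_edges])
        grid += h * dt_hours
    return grid
```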
SU-E-T-366: Clinical Implementation of MR-Guided Vaginal Cylinder Brachytherapy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Owrangi, A; Jolly, S; Balter, J
2014-06-01
Purpose: To evaluate the accuracy of MR-based vaginal brachytherapy source localization using an in-house MR-visible marker versus the alignment of an applicator model to MR images. Methods: Three consecutive patients undergoing vaginal HDR brachytherapy with a plastic cylinder were scanned with both CT and MRI (including T1- and T2-weighted images). An MR-visible source localization marker, consisting of a sealed thin catheter filled with either water (for T2 contrast) or Gd-doped water (for T1 contrast), was assembled shortly before scanning. Clinically, the applicator channel was digitized on CT with an x-ray marker. To evaluate the efficacy of MR-based applicator reconstruction, each MR image volume was aligned locally to the CT images based on the region containing the cylinder. Applicator digitization was performed on the MR images using (1) the MR-visible marker and (2) alignment of an applicator surface model from Varian's Brachytherapy Planning software to the MRI images. Resulting source positions were compared with the original CT digitization. Results: Although the source path was visualized by the MR marker, the applicator tip proved difficult to identify due to challenges in achieving a watertight seal. This resulted in observed displacements of the catheter tip, at times >1 cm. Deviations between the central source positions identified via aligning the applicator surface model to MR and using the x-ray marker on CT ranged from 0.07 – 0.19 cm and 0.07 – 0.20 cm on T1-weighted and T2-weighted images, respectively. Conclusion: Based on the current study, aligning the applicator model to MRI provides a practical, current approach to perform MR-based brachytherapy planning. Further study is needed to produce catheters with reliably and reproducibly identifiable tips. Attempts are being made to improve catheter seals, as well as to increase the viscosity of the contrast material to decrease fluid mobility inside the catheter.
Early Growth of Black Walnut Trees From Twenty Seed Sources
Calvin F. Bey; John R. Toliver; Paul L. Roth
1971-01-01
Early results of a black walnut seed source study conducted in southern Illinois suggest that seed should be collected from local or south-of-local areas. Trees from southern sources grew faster and longer than trees from northern sources. Trees from southern sources flushed slightly earlier and held their leaves longer than trees from northern sources. For the...
Feasibility of Equivalent Dipole Models for Electroencephalogram-Based Brain Computer Interfaces.
Schimpf, Paul H
2017-09-15
This article examines the localization errors of equivalent dipolar sources inverted from the surface electroencephalogram in order to determine the feasibility of using their location as classification parameters for non-invasive brain computer interfaces. Inverse localization errors are examined for two head models: a model represented by four concentric spheres and a realistic model based on medical imagery. It is shown that the spherical model results in localization ambiguity such that a number of dipolar sources, with different azimuths and varying orientations, provide a near match to the electroencephalogram of the best equivalent source. No such ambiguity exists for the elevation of inverted sources, indicating that for spherical head models, only the elevation of inverted sources (and not the azimuth) can be expected to provide meaningful classification parameters for brain-computer interfaces. In a realistic head model, all three parameters of the inverted source location are found to be reliable, providing a more robust set of parameters. In both cases, the residual error hypersurfaces demonstrate local minima, indicating that a search for the best-matching sources should be global. Source localization error vs. signal-to-noise ratio is also demonstrated for both head models.
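A hedged sketch of the global search the article argues for: scan candidate dipole locations, solve the linear moment fit at each, and keep the location with the smallest residual rather than trusting a local optimizer that could be trapped by the local minima noted above. The lead-field function is a placeholder for a spherical or realistic head model.

```python
import numpy as np

def best_dipole(V, candidate_positions, leadfield_fn):
    """Global search for the equivalent dipole minimizing the EEG residual.

    V : (n_electrodes,) measured scalp potentials
    leadfield_fn(r) -> (n_electrodes, 3) lead field for a dipole at position r
    """
    best_r, best_m, best_res = None, None, np.inf
    for r in candidate_positions:
        L = leadfield_fn(r)
        m, *_ = np.linalg.lstsq(L, V, rcond=None)   # best moment at this location
        res = np.linalg.norm(V - L @ m)
        if res < best_res:
            best_r, best_m, best_res = r, m, res
    return best_r, best_m, best_res
```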
[Epidemiological health surveillance among the troops during combat operations in armed conflicts].
Mel'nichenko, P I
1997-08-01
In local wars and armed conflicts, the sanitary-epidemiological situation of the troops and the local population tends to worsen. The main tasks of the military medical service during deployment are preventive measures against infection of the troops from local sources with viral hepatitis A, bacterial dysentery, typhoid, cholera, etc. As a rule, combat actions result in the destruction of communal services, low quality of potable water, soil contamination, and deterioration of sanitary norms and standards. There is also a danger of reactivation of natural foci of infection due to large-scale defensive earthworks in the region of operations. The experience of the military medical service in Afghanistan and Chechnya shows that a comprehensive approach to preventive anti-epidemic measures is necessary, with emphasis on the most important actions against the infections that pose the greatest danger to the ground troops.
Zeitler, Daniel M; Dorman, Michael F; Natale, Sarah J; Loiselle, Louise; Yost, William A; Gifford, Rene H
2015-09-01
To assess improvements in sound source localization and speech understanding in complex listening environments after unilateral cochlear implantation for single-sided deafness (SSD). Nonrandomized, open, prospective case series. Tertiary referral center. Nine subjects with a unilateral cochlear implant (CI) for SSD (SSD-CI) were tested. Reference groups for the task of sound source localization included young (n = 45) and older (n = 12) normal-hearing (NH) subjects and 27 bilateral CI (BCI) subjects. Unilateral cochlear implantation. Sound source localization was tested with 13 loudspeakers in a 180-degree arc in front of the subject. Speech understanding was tested with the subject seated in an 8-loudspeaker sound system arrayed in a 360-degree pattern. Directionally appropriate noise, originally recorded in a restaurant, was played from each loudspeaker. Speech understanding in noise was tested using the AzBio sentence test, and sound source localization was quantified using root mean square error. All CI subjects showed poorer-than-normal sound source localization. SSD-CI subjects showed a bimodal distribution of scores: six subjects had scores near the mean of those obtained by BCI subjects, whereas three had scores just outside the 95th percentile of NH listeners. Speech understanding improved significantly in the restaurant environment when the signal was presented to the side of the CI. Cochlear implantation for SSD can offer improved speech understanding in complex listening environments and improved sound source localization in both children and adults. On tasks of sound source localization, SSD-CI patients typically perform as well as BCI patients and, in some cases, achieve scores at the upper boundary of normal performance.
Daigger, Glen T
2009-08-01
Population growth and improving standards of living, coupled with dramatically increased urbanization, are placing increased pressures on available water resources, necessitating new approaches to urban water management. The traditional linear "take, make, waste" approach to managing water is increasingly proving to be unsustainable, as it leads to water stress (insufficient water supplies), unsustainable resource (energy and chemicals) consumption, the dispersion of nutrients into the aquatic environment (especially phosphorus), and financially unstable utilities. Different approaches are needed to achieve economic, environmental, and social sustainability. Fortunately, a toolkit consisting of stormwater management/rainwater harvesting, water conservation, water reclamation and reuse, energy management, nutrient recovery, and source separation is available to allow more closed-loop urban water and resource management systems to be developed and implemented. Water conservation and water reclamation and reuse (multiple uses) are becoming commonplace in numerous water-short locations. Decentralization, enabled by new, high-performance treatment technologies and distributed stormwater management/rainwater harvesting, is furthering this transition. Likewise, traditional approaches to residuals management are evolving as higher levels of energy recovery are desired and nutrient recovery and reuse are to be enhanced. A variety of factors affect selection of the optimum approach for a particular urban area, including local hydrology, available water supplies, water demands, local energy and nutrient-management situations, existing infrastructure, and utility governance structure. A proper approach to economic analysis is critical to determine the most sustainable solutions. Stove-piping (i.e., separate management of drinking, storm, and waste water) within the urban water and resource management profession must be eliminated. Adoption of these new approaches to urban water and resource management can lead to more sustainable solutions, defined as financially stable, using locally sustainable water supplies, energy-neutral, providing responsible nutrient management, and with access to clean water and appropriate sanitation for all.
Kananura, Rornald Muhumuza; Ekirapa-Kiracho, Elizabeth; Paina, Ligia; Bumba, Ahmed; Mulekwa, Godfrey; Nakiganda-Busiku, Dinah; Oo, Htet Nay Lin; Kiwanuka, Suzanne Namusoke; George, Asha; Peters, David H
2017-12-28
The use of participatory monitoring and evaluation (M&E) approaches is important for guiding local decision-making, promoting the implementation of effective interventions and addressing emerging issues in the course of implementation. In this article, we explore how participatory M&E approaches helped to identify key design and implementation issues and how they influenced stakeholders' decision-making in eastern Uganda. The data for this paper is drawn from a retrospective reflection of various M&E approaches used in a maternal and newborn health project that was implemented in three districts in eastern Uganda. The methods included qualitative and quantitative M&E techniques such as key informant interviews, formal surveys and supportive supervision, as well as participatory approaches, notably participatory impact pathway analysis. At the design stage, the M&E approaches were useful for identifying key local problems and feasible local solutions and informing the activities that were subsequently implemented. During the implementation phase, the M&E approaches provided evidence that informed decision-making and helped identify emerging issues, such as weak implementation by some village health teams, health facility constraints such as poor use of standard guidelines, lack of placenta disposal pits, inadequate fuel for the ambulance at some facilities, and poor care for low birth weight infants. Sharing this information with key stakeholders prompted them to take appropriate actions. For example, the sub-county leadership constructed placenta disposal pits, the district health officer provided fuel for ambulances, and health workers received refresher training and mentorship on how to care for newborns. Diverse sources of information and perspectives can help researchers and decision-makers understand and adapt evidence to contexts for more effective interventions. Supporting districts to have crosscutting, routine information generating and sharing platforms that bring together stakeholders from different sectors is therefore crucial for the successful implementation of complex development interventions.
An approach to 3D model fusion in GIS systems and its application in a future ECDIS
NASA Astrophysics Data System (ADS)
Liu, Tao; Zhao, Depeng; Pan, Mingyang
2016-04-01
Three-dimensional (3D) computer graphics technology is widely used in various areas and is causing profound changes. As an information carrier, 3D models are becoming increasingly important. The use of 3D models greatly helps to improve cartographic expression and design. 3D models are more visually efficient, quicker and easier to understand, and they can express more detailed geographical information. However, it is hard to efficiently and precisely fuse 3D models in local systems. The purpose of this study is to propose an automatic and precise approach to fuse 3D models in geographic information systems (GIS). It is the basic premise for subsequent uses of 3D models in local systems, such as attribute searching, spatial analysis, and so on. The basic steps of our research are: (1) pose adjustment by principal component analysis (PCA); (2) silhouette extraction by simple mesh silhouette extraction and silhouette merger; (3) size adjustment; (4) position matching. Finally, we implement the above methods in our system, Automotive Intelligent Chart (AIC) 3D Electronic Chart Display and Information System (ECDIS). The fusion approach we propose is a general method, and each calculation step is carefully designed. This approach solves the problem of cross-platform model fusion. 3D models can be from any source: they may be stored in the local cache or retrieved from the Internet, and may be manually created by different tools or automatically generated by different programs. The system can be any kind of 3D GIS system.
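As an illustration of step (1) above, the sketch below aligns a model's principal axes with the coordinate axes before silhouette extraction and matching. The vertex cloud here is a random stand-in; the function name and the enforcement of a proper rotation are assumptions about a reasonable implementation, not the paper's code.

```python
import numpy as np

def pca_pose_adjust(vertices):
    """vertices: (n, 3) array. Returns vertices rotated into their principal-axis frame."""
    centered = vertices - vertices.mean(axis=0)
    cov = np.cov(centered.T)
    eigvals, eigvecs = np.linalg.eigh(cov)          # columns are principal directions
    order = np.argsort(eigvals)[::-1]               # largest variance first
    R = eigvecs[:, order]
    if np.linalg.det(R) < 0:                        # keep a proper rotation (no reflection)
        R[:, -1] *= -1
    return centered @ R

model = np.random.rand(500, 3) * np.array([10.0, 2.0, 1.0])   # elongated toy "model"
aligned = pca_pose_adjust(model)
print(aligned.var(axis=0))    # variances are now ordered along the x, y, z axes
```

After this normalization, silhouettes extracted from two models of the same object are comparable regardless of how each model was originally oriented.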
Citizen Science to Support Community-based Flood Early Warning and Resilience Building
NASA Astrophysics Data System (ADS)
Paul, J. D.; Buytaert, W.; Allen, S.; Ballesteros-Cánovas, J. A.; Bhusal, J.; Cieslik, K.; Clark, J.; Dewulf, A.; Dhital, M. R.; Hannah, D. M.; Liu, W.; Nayaval, J. L.; Schiller, A.; Smith, P. J.; Stoffel, M.; Supper, R.
2017-12-01
In Disaster Risk Management, an emerging shift has been noted from broad-scale, top-down assessments towards more participatory, community-based, bottom-up approaches. Combined with technologies for robust and low-cost sensor networks, a citizen science approach has recently emerged as a promising direction in the provision of extensive, real-time information for flood early warning systems. Here we present the framework and initial results of a major new international project, Landslide EVO, aimed at increasing local resilience against hydrologically induced disasters in western Nepal by exploiting participatory approaches to knowledge generation and risk governance. We identify three major technological developments that strongly support our approach to flood early warning and resilience building in Nepal. First, distributed sensor networks, participatory monitoring, and citizen science hold great promise in complementing official monitoring networks and remote sensing by generating site-specific information with local buy-in, especially in data-scarce regions. Secondly, the emergence of open source, cloud-based risk analysis platforms supports the construction of a modular, distributed, and potentially decentralised data processing workflow. Finally, linking data analysis platforms to social computer networks and ICT (e.g. mobile phones, tablets) allows tailored interfaces and people-centred decision- and policy-support systems to be built. Our proposition is that maximum impact is created if end-users are involved not only in data collection, but also over the entire project life-cycle, including the analysis and provision of results. In this context, citizen science complements more traditional knowledge generation practices, and also enhances multi-directional information provision, risk management, early-warning systems and local resilience building.
On the Vertical Distribution of Local and Remote Sources of Water for Precipitation
NASA Technical Reports Server (NTRS)
Bosilovich, Michael G.
2001-01-01
The vertical distribution of local and remote sources of water for precipitation and total column water over the United States are evaluated in a general circulation model simulation. The Goddard Earth Observing System (GEOS) general circulation model (GCM) includes passive constituent tracers to determine the geographical sources of the water in the column. Results show that the local percentage of precipitable water and local percentage of precipitation can be very different. The transport of water vapor from remote oceanic sources at mid and upper levels is important to the total water in the column over the central United States, while the access of locally evaporated water in convective precipitation processes is important to the local precipitation ratio. This result resembles the conceptual formulation of the convective parameterization. However, the formulations of simple models of precipitation recycling include the assumption that the ratio of the local water in the column is equal to the ratio of the local precipitation. The present results demonstrate the uncertainty in that assumption, as locally evaporated water is more concentrated near the surface.
Towards A Complete Census of Compton-thick AGN and N_H Distribution in the Local Universe
NASA Astrophysics Data System (ADS)
Annuar, A.; Gandhi, P.; Alexander, D.; Asmus, D.; Goulding, A.; Harrison, C.; Lansbury, G.
2014-07-01
Many studies have shown that Compton-thick AGNs (CTAGNs) provide an important contribution to the cosmic X-ray background spectrum (~25% at 20 keV). They are expected to dominate the Seyfert 2 population in the local universe, yet only ~20 bona fide CTAGNs are known. We present an updated census of the CTAGN population in the local universe using a volume-limited AGN sample complete to D = 15 Mpc. Intrinsic relations between 2-10 keV X-ray luminosity and mid-IR emission at 12 μm, [OIV]λ25.68 μm and [NeV]λ14.32 μm are investigated, and it is found that the emission at 12 μm has the tightest correlation with the X-ray luminosity. Candidates for CTAGN are then selected using this relation and by comparing their 12 μm luminosity with the observed X-ray luminosity. We also investigate the Compton-thick nature of these sources using the optical [OIII]λ5007 Å : X-ray diagnostic for comparison, and find that 35-50% of the sample are Compton-thick, of which 10-20% would be missed with the optical approach. Finally, we estimate the intrinsic N_H distribution of the AGN population in the local universe from this analysis, and show that up to 70% of the sources are heavily obscured (N_H > 10^{23} cm^{-2}), with ≥50% lying in the Compton-thick regime (N_H > 10^{24} cm^{-2}). This work provides a well-defined local benchmark for AGN obscuration studies.
Localization in finite vibroimpact chains: Discrete breathers and multibreathers.
Grinberg, Itay; Gendelman, Oleg V
2016-09-01
We explore the dynamics of strongly localized periodic solutions (discrete solitons or discrete breathers) in a finite one-dimensional chain of oscillators. Localization patterns with both single and multiple localization sites (breathers and multibreathers) are considered. The model involves a parabolic on-site potential with rigid constraints (the displacement domain of each particle is finite) and a linear nearest-neighbor coupling. When a particle approaches the constraint, it undergoes an inelastic impact according to Newton's impact model. The rigid nonideal impact constraints are the only source of nonlinearity and damping in the system. We demonstrate that this vibro-impact model allows derivation of exact analytic solutions for the breathers and multibreathers with an arbitrary set of localization sites, both in conservative and in forced-damped settings. Periodic boundary conditions are considered; exact solutions for other types of boundary conditions are also available. The local character of the nonlinearity permits explicit derivation of a monodromy matrix for the breather solutions. Consequently, the stability of the derived breather and multibreather solutions can be efficiently studied in the framework of simple methods of linear algebra, and with rather moderate computational effort. We find that the finiteness of the chain fragment and possible proximity of the localization sites strongly affect both the existence and the stability patterns of these localized solutions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
McNab, W; Ezzedine, S; Detwiler, R
2007-02-26
Industrial organic solvents such as trichloroethylene (TCE) and tetrachloroethylene (PCE) constitute a principal class of groundwater contaminants. Cleanup of groundwater plume source areas associated with these compounds is problematic, in part, because the compounds often exist in the subsurface as dense nonaqueous phase liquids (DNAPLs). Ganglia (or 'blobs') of DNAPL serve as persistent sources of contaminants that are difficult to locate and remediate (e.g. Fenwick and Blunt, 1998). Current understanding of the physical and chemical processes associated with dissolution of DNAPLs in the subsurface is incomplete and yet is critical for evaluating long-term behavior of contaminant migration, groundwater cleanup, and the efficacy of source area cleanup technologies. As such, a goal of this project has been to contribute to this critical understanding by investigating the multi-phase, multi-component physics of DNAPL dissolution using state-of-the-art experimental and computational techniques. Through this research, we have explored efficient and accurate conceptual and numerical models for source area contaminant transport that can be used to better inform the modeling of source area contaminants, including those at the LLNL Superfund sites, to re-evaluate existing remediation technologies, and to inspire or develop new remediation strategies. The problem of DNAPL dissolution in natural porous media must be viewed in the context of several scales (Khachikian and Harmon, 2000), including the microscopic level at which capillary forces, viscous forces, and gravity/buoyancy forces are manifested at the scale of individual pores (Wilson and Conrad, 1984; Chatzis et al., 1988), the mesoscale where dissolution rates are strongly influenced by the local hydrodynamics, and the field scale. Historically, the physico-chemical processes associated with DNAPL dissolution have been addressed through the use of lumped mass transfer coefficients which attempt to quantify the dissolution rate in response to local dissolved-phase concentrations distributed across the source area using a volume-averaging approach (Figure 1). The fundamental problem with the lumped mass transfer parameter is that its value is typically derived empirically through column-scale experiments that combine the effects of pore-scale flow, diffusion, and pore-scale geometry in a manner that does not provide a robust theoretical basis for upscaling. In our view, upscaling processes from the pore scale to the field scale requires new computational approaches (Held and Celia, 2001) that are directly linked to experimental studies of dissolution at the pore scale. As such, our investigation has been multi-pronged, combining theory, experiments, numerical modeling, new data analysis approaches, and a synthesis of previous studies (e.g. Glass et al, 2001; Keller et al., 2002) aimed at quantifying how the mechanisms controlling dissolution at the pore scale control the long-term dissolution of source areas at larger scales.
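The lumped mass-transfer description criticized above can be summarized by a first-order interphase exchange expression; the form below is a common textbook statement given here as an assumption, not the project's exact formulation:

\[
J_{\mathrm{diss}} \;=\; k_{La}\,\bigl(C_s - C\bigr),
\]

where \(J_{\mathrm{diss}}\) is the mass of contaminant dissolved from the DNAPL per unit volume and time, \(k_{La}\) is the lumped mass-transfer coefficient (with the interfacial area folded into the rate constant), \(C_s\) is the effective solubility of the DNAPL component, and \(C\) is the local aqueous-phase concentration. The upscaling difficulty described above is that \(k_{La}\) fitted from column experiments entangles pore-scale flow, diffusion, and geometry, so it cannot be transferred to the field scale on a firm theoretical basis.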
NASA Astrophysics Data System (ADS)
Cosandier-Rimélé, D.; Ramantani, G.; Zentner, J.; Schulze-Bonhage, A.; Dümpelmann, M.
2017-10-01
Objective. Electrical source localization (ESL) deriving from scalp EEG and, in recent years, from intracranial EEG (iEEG), is an established method in epilepsy surgery workup. We aimed to validate the distributed ESL derived from scalp EEG and iEEG, particularly regarding the spatial extent of the source, using a realistic epileptic spike activity simulator. Approach. ESL was applied to the averaged scalp EEG and iEEG spikes of two patients with drug-resistant structural epilepsy. The ESL results for both patients were used to outline the location and extent of epileptic cortical patches, which served as the basis for designing a spatiotemporal source model. EEG signals for both modalities were then generated for different anatomic locations and spatial extents. ESL was subsequently performed on simulated signals with sLORETA, a commonly used distributed algorithm. ESL accuracy was quantitatively assessed for iEEG and scalp EEG. Main results. The source volume was overestimated by sLORETA at both EEG scales, with the error increasing with source size, particularly for iEEG. For larger sources, ESL accuracy drastically decreased, and reconstruction volumes shifted to the center of the head for iEEG, while remaining stable for scalp EEG. Overall, the mislocalization of the reconstructed source was more pronounced for iEEG. Significance. We present a novel multiscale framework for the evaluation of distributed ESL, based on realistic multiscale EEG simulations. Our findings support that reconstruction results for scalp EEG are often more accurate than for iEEG, owing to the superior 3D coverage of the head. Particularly the iEEG-derived reconstruction results for larger, widespread generators should be treated with caution.
Self-Similar Spin Images for Point Cloud Matching
NASA Astrophysics Data System (ADS)
Pulido, Daniel
The rapid growth of Light Detection And Ranging (Lidar) technologies that collect, process, and disseminate 3D point clouds has allowed for increasingly accurate spatial modeling and analysis of the real world. Lidar sensors can generate massive 3D point clouds of a collection area that provide highly detailed spatial and radiometric information. However, a Lidar collection can be expensive and time consuming. Simultaneously, the growth of crowdsourced Web 2.0 data (e.g., Flickr, OpenStreetMap) has provided researchers with a wealth of freely available data sources that cover a variety of geographic areas. Crowdsourced data can be of varying quality and density. In addition, since it is typically not collected as part of a dedicated experiment but rather volunteered, when and where the data is collected is arbitrary. The integration of these two sources of geoinformation can provide researchers the ability to generate products and derive intelligence that mitigate their respective disadvantages and combine their advantages. Therefore, this research will address the problem of fusing two point clouds from potentially different sources. Specifically, we will consider two problems: scale matching and feature matching. Scale matching consists of computing feature metrics of each point cloud and analyzing their distributions to determine scale differences. Feature matching consists of defining local descriptors that are invariant to common dataset distortions (e.g., rotation and translation). Additionally, after matching the point clouds they can be registered and processed further (e.g., change detection). The objective of this research is to develop novel methods to fuse and enhance two point clouds from potentially disparate sources (e.g., Lidar and crowdsourced Web 2.0 datasets). The scope of this research is to investigate both scale and feature matching between two point clouds. The specific focus of this research will be developing a novel local descriptor based on the concept of self-similarity to aid in the scale and feature matching steps. An open problem in fusion is how best to extract features from two point clouds and then perform feature-based matching. The proposed approach for this matching step is the use of local self-similarity as an invariant measure to match features. In particular, the proposed approach is to combine the concept of local self-similarity with a well-known feature descriptor, Spin Images, and thereby define "Self-Similar Spin Images". This approach is then extended to the case of matching two point clouds in very different coordinate systems (e.g., a geo-referenced Lidar point cloud and a stereo-image-derived point cloud without geo-referencing). The use of Self-Similar Spin Images is again applied to address this problem by introducing a "Self-Similar Keyscale" that matches the spatial scales of two point clouds. Another open problem is how best to detect changes in content between two point clouds. A method is proposed to find changes between two point clouds by analyzing the order statistics of the nearest neighbors between the two clouds, thereby defining the "Nearest Neighbor Order Statistic" method. Note that the well-known Hausdorff distance is a special case, being just the maximum order statistic. Therefore, by studying the entire histogram of these nearest neighbors, a more robust method to detect points that are present in one cloud but not the other is expected. This approach is applied at multiple resolutions.
Therefore, changes detected at the coarsest level will yield large missing targets and at finer levels will yield smaller targets.
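A minimal sketch of the nearest-neighbor order-statistic idea described above is given below: compute the distance from every point of one cloud to its nearest neighbor in the other, sort those distances, and read off statistics of interest (the maximum recovers the directed Hausdorff distance). The data, threshold, and function names are illustrative assumptions, not the dissertation's implementation.

```python
import numpy as np
from scipy.spatial import cKDTree

def nn_order_statistics(cloud_a, cloud_b):
    """Sorted distances from each point of cloud_a to its nearest neighbor in cloud_b."""
    tree = cKDTree(cloud_b)
    d, _ = tree.query(cloud_a, k=1)
    return np.sort(d)

a = np.random.rand(1000, 3)
b = np.vstack([a[:900], np.random.rand(200, 3) + 2.0])   # cloud_b drops 100 points, adds an offset cluster

stats = nn_order_statistics(a, b)
hausdorff_like = stats[-1]                  # maximum order statistic (directed Hausdorff distance)
p95 = np.percentile(stats, 95)              # a more robust high-order statistic
changed = stats > 3 * np.median(stats)      # flag points of A with unusually distant neighbors in B
print(hausdorff_like, p95, changed.sum())
```

Studying the full sorted histogram rather than only the maximum makes the change flag less sensitive to a single outlying point, which is the motivation stated in the abstract.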
NASA Astrophysics Data System (ADS)
Cohen, J. B.; Lan, R.; Lin, C.; Ng, D. H. L.; Lim, A.
2017-12-01
A multi-instrument, inverse modeling approach is employed to identify and quantify large-scale global biomass-burning and urban aerosol emissions profiles. The approach uses MISR, MODIS, OMI and MOPITT, with data from 2006 to 2016, to generate spatial and temporal loads, as well as some information about composition. The method is able to identify regions impacted by stable urban sources, changing urban sources, intense fires, and linear combinations of these. Subsequent quantification yields a unified emissions field, leading to a less biased profile whose result does not require arbitrary scaling to match long-term means. Additionally, the result reasonably reproduces inter- and intra-annual variation. Both meso-scale (WRF-CHEM) and global (MIT-AERO, a multi-mode, multi-mixing-state aerosol model) models of aerosol transport, chemistry, and physics are used to generate the resulting 4D aerosol fields. Comparisons with CALIOP, AERONET, and surface chemical and aerosol networks provide unbiased confirmation, while column and vertical loadings provide additional feedback. There are three significant results. First, there is a reduction in sources over existing urban areas in East Asia. Second, there is an increase in sources over new urban areas in South, South East, and East Asia. Third, there is an increase in fire sources in South and South East Asia. There are other initial findings relevant to the global tropics, which have not been as deeply investigated. The results improve the model match with both the mean and the variation, which is essential if we hope to understand seasonal extremes. The results also quantify the impacts of both local and long-range sources. This is of extreme urgency, in particular in developing nations, where there are considerable contributions from long-range or otherwise unknown sources that impact hundreds of millions of people throughout Asia. It is hoped that the approach provided here can help us make critical decisions about total sources, as well as point out the many outstanding scientific and analytical issues that still need to be addressed.
Matriarchy, Buddhism, and food security in Sanephong, Thailand.
Sirisai, Solot; Chotiboriboon, Sinee; Sapsuwan, Charana; Tantivatanasathien, Praiwan; Setapun, Nuchjaree; Duangnosan, Prangtong; Thongkam, Nattapach; Chuangyanyong, Sasiwimon
2017-11-01
Sanephong is a matriarchal Karen community located in western Thailand. The community benefits greatly from the availability of local foods, such as cereals, tubers, wild vegetables, mushrooms, fruits, and animals. In the first phase of this project, 387 distinct local foods were identified, which were shown to be good sources of energy, protein, and vitamins. Despite the availability of a variety of nutritious local foods, the majority of households surveyed expressed concern over a decline in local foods due to changing socio-economic and environmental conditions. This study used a qualitative research approach to look at the dual influences of matriarchy and Buddhism on food security in the community. Through this approach, matriarchal values central to the community were adopted as a framework; these included care, consensus, collaboration, and cosmological respect. In Sanephong, women are central to life in the community, and matriarchal cultural practices reflect a nurturing spirit for both the earth and the family. The community practices Buddhism, which is very complementary to the matriarchal system. A type of gift economy within the Buddhist context, known as dhana, transfers food from the wealthy to the poor with no expectation of reciprocity. Consequently, matriarchy and Buddhism jointly promote food security in the community. Studies of matriarchal societies help society at large to understand the potential benefits of systems that contrast with the current patriarchal paradigm. © 2018 John Wiley & Sons Ltd.
Huang, Ming-Xiong; Nichols, Sharon; Baker, Dewleen G.; Robb, Ashley; Angeles, Annemarie; Yurgil, Kate A.; Drake, Angela; Levy, Michael; Song, Tao; McLay, Robert; Theilmann, Rebecca J.; Diwakar, Mithun; Risbrough, Victoria B.; Ji, Zhengwei; Huang, Charles W.; Chang, Douglas G.; Harrington, Deborah L.; Muzzatti, Laura; Canive, Jose M.; Christopher Edgar, J.; Chen, Yu-Han; Lee, Roland R.
2014-01-01
Traumatic brain injury (TBI) is a leading cause of sustained impairment in military and civilian populations. However, mild TBI (mTBI) can be difficult to detect using conventional MRI or CT. Injured brain tissues in mTBI patients generate abnormal slow-waves (1–4 Hz) that can be measured and localized by resting-state magnetoencephalography (MEG). In this study, we develop a voxel-based whole-brain MEG slow-wave imaging approach for detecting abnormality in patients with mTBI on a single-subject basis. A normative database of resting-state MEG source magnitude images (1–4 Hz) from 79 healthy control subjects was established for all brain voxels. The high-resolution MEG source magnitude images were obtained by our recent Fast-VESTAL method. In 84 mTBI patients with persistent post-concussive symptoms (36 from blasts, and 48 from non-blast causes), our method detected abnormalities at the positive detection rates of 84.5%, 86.1%, and 83.3% for the combined (blast-induced plus with non-blast causes), blast, and non-blast mTBI groups, respectively. We found that prefrontal, posterior parietal, inferior temporal, hippocampus, and cerebella areas were particularly vulnerable to head trauma. The result also showed that MEG slow-wave generation in prefrontal areas positively correlated with personality change, trouble concentrating, affective lability, and depression symptoms. Discussion is provided regarding the neuronal mechanisms of MEG slow-wave generation due to deafferentation caused by axonal injury and/or blockages/limitations of cholinergic transmission in TBI. This study provides an effective way for using MEG slow-wave source imaging to localize affected areas and supports MEG as a tool for assisting the diagnosis of mTBI. PMID:25009772
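One way to picture single-subject comparison against a voxel-wise normative database is sketched below: a per-voxel deviation score of the patient's source-magnitude image relative to the control distribution. The z-score form, threshold, and data here are assumptions for illustration only; the study's exact statistical thresholding is not reproduced.

```python
import numpy as np

def voxelwise_z(control_maps, subject_map):
    """control_maps: (n_controls, n_voxels) source-magnitude images; subject_map: (n_voxels,)."""
    mu = control_maps.mean(axis=0)
    sd = control_maps.std(axis=0, ddof=1)
    return (subject_map - mu) / sd

controls = np.random.lognormal(size=(79, 5000))   # placeholder for 79 control source images
patient = np.random.lognormal(size=5000)          # placeholder single-subject slow-wave image
z = voxelwise_z(controls, patient)
abnormal_voxels = np.flatnonzero(z > 3.0)         # illustrative abnormality threshold
print(len(abnormal_voxels))
```

In practice the source-magnitude distributions are skewed, so a transform toward normality (or a nonparametric percentile cutoff) would typically precede thresholding.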
Carbon footprint of organic beef meat from farm to fork: A case study of short supply chain.
Vitali, A; Grossi, G; Martino, G; Bernabucci, U; Nardone, A; Lacetera, N
2018-04-24
Sustainability of food systems is one of the big challenges for humankind in the coming years. Local food networks, especially organic ones, are growing worldwide, yet little information is available about their carbon footprint. This study aimed to assess the greenhouse gas (GHG) emissions associated with an organic local beef supply chain using a cradle-to-grave approach. The study pointed out an overall burden of 24.46 kg CO2 eq./kg of cooked meat. The breeding and fattening phase accounted for 86% of the total emissions and was the main hot spot throughout the whole chain. Enteric methane emission was the greatest source of GHG at the farm gate (47%). The consumption of meat at home was the second hot spot throughout the chain (9%), and the cooking process was the main source within this stage (72%). Retail and slaughtering activities accounted for 4.1% and 1.1% of the whole supply chain, respectively. The identification of GHG hot spots associated with organic beef meat produced and consumed in a local food network may stimulate debate on environmental issues among the actors involved in the network and direct them toward processes, choices and habits that are less carbon polluting. This article is protected by copyright. All rights reserved.
Influence of double stimulation on sound-localization behavior in barn owls.
Kettler, Lutz; Wagner, Hermann
2014-12-01
Barn owls do not immediately approach a source after they hear a sound, but wait for a second sound before they strike. This represents a gain in striking behavior by avoiding responses to random incidents. However, the first stimulus is also expected to change the threshold for perceiving the subsequent second sound, thus possibly introducing some costs. We mimicked this situation in a behavioral double-stimulus paradigm utilizing saccadic head turns of owls. The first stimulus served as an adapter, was presented in frontal space, and did not elicit a head turn. The second stimulus, emitted from a peripheral source, elicited the head turn. The time interval between both stimuli was varied. Data obtained with double stimulation were compared with data collected with a single stimulus from the same positions as the second stimulus in the double-stimulus paradigm. Sound-localization performance was quantified by the response latency, accuracy, and precision of the head turns. Response latency was increased with double stimuli, while accuracy and precision were decreased. The effect depended on the inter-stimulus interval. These results suggest that waiting for a second stimulus may indeed impose costs on sound localization by adaptation and this reduces the gain obtained by waiting for a second stimulus.
Towards an Empirically Based Parametric Explosion Spectral Model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ford, S R; Walter, W R; Ruppert, S
2009-08-31
Small underground nuclear explosions need to be confidently detected, identified, and characterized in regions of the world where they have never before been tested. The focus of our work is on the local and regional distances (< 2000 km) and phases (Pn, Pg, Sn, Lg) necessary to see small explosions. We are developing a parametric model of the nuclear explosion seismic source spectrum that is compatible with the earthquake-based geometrical spreading and attenuation models developed using the Magnitude Distance Amplitude Correction (MDAC) techniques (Walter and Taylor, 2002). The explosion parametric model will be particularly important in regions without any prior explosion data for calibration. The model is being developed using the available body of seismic data at local and regional distances for past nuclear explosions at foreign and domestic test sites. Parametric modeling is a simple and practical approach for widespread monitoring applications, prior to the capability to carry out fully deterministic modeling. The achievable goal of our parametric model development is to be able to predict observed local and regional distance seismic amplitudes for event identification and yield determination in regions with incomplete or no prior history of underground nuclear testing. The relationship between the parametric equations and the geologic and containment conditions will assist in our physical understanding of the nuclear explosion source.
Simulating the Heliosphere with Kinetic Hydrogen and Dynamic MHD Source Terms
Heerikhuisen, Jacob; Pogorelov, Nikolai; Zank, Gary
2013-04-01
The interaction between the ionized plasma of the solar wind (SW) emanating from the sun and the partially ionized plasma of the local interstellar medium (LISM) creates the heliosphere. The heliospheric interface is characterized by the tangential discontinuity known as the heliopause that separates the SW and LISM plasmas, and a termination shock on the SW side along with a possible bow shock on the LISM side. Neutral hydrogen of interstellar origin plays a critical role in shaping the heliospheric interface, since it freely traverses the heliopause. Charge exchange between H-atoms and plasma protons couples the ions and neutrals, but the mean free paths are large, resulting in non-equilibrated energetic ion and neutral components. In our model, source terms for the MHD equations are generated using a kinetic approach for hydrogen, and the key computational challenge is to resolve these sources with sufficient statistics. For steady-state simulations, statistics can accumulate over arbitrarily long time intervals. In this paper we discuss an approach for improving the statistics in time-dependent calculations, and present results from simulations of the heliosphere where the SW conditions at the inner boundary of the computation vary according to an idealized solar cycle.
An extended genotyping framework for Salmonella enterica serovar Typhi, the cause of human typhoid
Wong, Vanessa K.; Baker, Stephen; Connor, Thomas R.; Pickard, Derek; Page, Andrew J.; Dave, Jayshree; Murphy, Niamh; Holliman, Richard; Sefton, Armine; Millar, Michael; Dyson, Zoe A.; Dougan, Gordon; Holt, Kathryn E.; Parkhill, Julian; Feasey, Nicholas A.; Kingsley, Robert A.; Thomson, Nicholas R.; Keane, Jacqueline A.; Weill, François- Xavier; Le Hello, Simon; Hawkey, Jane; Edwards, David J.; Harris, Simon R.; Cain, Amy K.; Hadfield, James; Hart, Peter J.; Thieu, Nga Tran Vu; Klemm, Elizabeth J.; Breiman, Robert F.; Watson, Conall H.; Edmunds, W. John; Kariuki, Samuel; Gordon, Melita A.; Heyderman, Robert S.; Okoro, Chinyere; Jacobs, Jan; Lunguya, Octavie; Msefula, Chisomo; Chabalgoity, Jose A.; Kama, Mike; Jenkins, Kylie; Dutta, Shanta; Marks, Florian; Campos, Josefina; Thompson, Corinne; Obaro, Stephen; MacLennan, Calman A.; Dolecek, Christiane; Keddy, Karen H.; Smith, Anthony M.; Parry, Christopher M.; Karkey, Abhilasha; Dongol, Sabina; Basnyat, Buddha; Arjyal, Amit; Mulholland, E. Kim; Campbell, James I.; Dufour, Muriel; Bandaranayake, Don; Toleafoa, Take N.; Singh, Shalini Pravin; Hatta, Mochammad; Newton, Paul N.; Dance, David; Davong, Viengmon; Onsare, Robert S.; Isaia, Lupeoletalalelei; Thwaites, Guy; Wijedoru, Lalith; Crump, John A.; De Pinna, Elizabeth; Nair, Satheesh; Nilles, Eric J.; Thanh, Duy Pham; Turner, Paul; Soeng, Sona; Valcanis, Mary; Powling, Joan; Dimovski, Karolina; Hogg, Geoff; Farrar, Jeremy; Mather, Alison E.; Amos, Ben
2016-01-01
The population of Salmonella enterica serovar Typhi (S. Typhi), the causative agent of typhoid fever, exhibits limited DNA sequence variation, which complicates efforts to rationally discriminate individual isolates. Here we utilize data from whole-genome sequences (WGS) of nearly 2,000 isolates sourced from over 60 countries to generate a robust genotyping scheme that is phylogenetically informative and compatible with a range of assays. These data show that, with the exception of the rapidly disseminating H58 subclade (now designated genotype 4.3.1), the global S. Typhi population is highly structured and includes dozens of subclades that display geographical restriction. The genotyping approach presented here can be used to interrogate local S. Typhi populations and help identify recent introductions of S. Typhi into new or previously endemic locations, providing information on their likely geographical source. This approach can be used to classify clinical isolates and provides a universal framework for further experimental investigations. PMID:27703135
Interactive visualization and analysis of multimodal datasets for surgical applications.
Kirmizibayrak, Can; Yim, Yeny; Wakid, Mike; Hahn, James
2012-12-01
Surgeons use information from multiple sources when making surgical decisions. These include volumetric datasets (such as CT, PET, MRI, and their variants), 2D datasets (such as endoscopic videos), and vector-valued datasets (such as computer simulations). Presenting all the information to the user in an effective manner is a challenging problem. In this paper, we present a visualization approach that displays the information from various sources in a single coherent view. The system allows the user to explore and manipulate volumetric datasets, display analysis of dataset values in local regions, combine 2D and 3D imaging modalities and display results of vector-based computer simulations. Several interaction methods are discussed: in addition to traditional interfaces including mouse and trackers, gesture-based natural interaction methods are shown to control these visualizations with real-time performance. An example of a medical application (medialization laryngoplasty) is presented to demonstrate how the combination of different modalities can be used in a surgical setting with our approach.
Rapid tsunami models and earthquake source parameters: Far-field and local applications
Geist, E.L.
2005-01-01
Rapid tsunami models have recently been developed to forecast far-field tsunami amplitudes from initial earthquake information (magnitude and hypocenter). Earthquake source parameters that directly affect tsunami generation as used in rapid tsunami models are examined, with particular attention to local versus far-field application of those models. First, validity of the assumption that the focal mechanism and type of faulting for tsunamigenic earthquakes is similar in a given region can be evaluated by measuring the seismic consistency of past events. Second, the assumption that slip occurs uniformly over an area of rupture will most often underestimate the amplitude and leading-wave steepness of the local tsunami. Third, sometimes large magnitude earthquakes will exhibit a high degree of spatial heterogeneity such that tsunami sources will be composed of distinct sub-events that can cause constructive and destructive interference in the wavefield away from the source. Using a stochastic source model, it is demonstrated that local tsunami amplitudes vary by as much as a factor of two or more, depending on the local bathymetry. If other earthquake source parameters such as focal depth or shear modulus are varied in addition to the slip distribution patterns, even greater uncertainty in local tsunami amplitude is expected for earthquakes of similar magnitude. Because of the short amount of time available to issue local warnings and because of the high degree of uncertainty associated with local, model-based forecasts as suggested by this study, direct wave height observations and a strong public education and preparedness program are critical for those regions near suspected tsunami sources.
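The "stochastic source model" referred to above can be pictured as a random slip distribution with a power-law fall-off in wavenumber; the sketch below generates one such realization. The spectral exponent, grid sizes, and normalization are assumptions for illustration, not the paper's parameterization.

```python
import numpy as np

def stochastic_slip(nx=64, ny=32, spectral_decay=2.0, mean_slip=1.0, seed=0):
    """Random slip field with a k**(-spectral_decay) amplitude spectrum (quick-and-dirty version)."""
    rng = np.random.default_rng(seed)
    kx = np.fft.fftfreq(nx)[None, :]
    ky = np.fft.fftfreq(ny)[:, None]
    k = np.sqrt(kx**2 + ky**2)
    k[0, 0] = 1.0                                   # avoid division by zero at the DC term
    amp = k**(-spectral_decay)                      # power-law fall-off in wavenumber
    phase = np.exp(2j * np.pi * rng.random((ny, nx)))
    field = np.real(np.fft.ifft2(amp * phase))      # real part; Hermitian symmetry not enforced here
    field -= field.min()                            # keep slip non-negative
    return field * (mean_slip / field.mean())       # rescale to the target mean slip

slip = stochastic_slip()
print(slip.shape, slip.mean())
```

Feeding an ensemble of such realizations through a tsunami propagation code is what produces the factor-of-two spread in local amplitudes that the abstract reports.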
Joint Inversion of Earthquake Source Parameters with local and teleseismic body waves
NASA Astrophysics Data System (ADS)
Chen, W.; Ni, S.; Wang, Z.
2011-12-01
In the classical source parameter inversion algorithm of CAP (Cut and Paste method, by Zhao and Helmberger), waveform data at near distances (typically less than 500 km) are partitioned into Pnl and surface waves to account for uncertainties in the crustal models and the different amplitude weights of body and surface waves. The classical CAP algorithm has proven effective for resolving source parameters (focal mechanism, depth and moment) for earthquakes well recorded on a relatively dense seismic network. However, for regions covered with sparse stations, it is challenging to achieve precise source parameters. In such cases, a moderate earthquake of ~M6 is usually recorded on only one or two local stations with epicentral distances less than 500 km. Fortunately, an earthquake of ~M6 can be well recorded on global seismic networks. Since the ray paths for teleseismic and local body waves sample different portions of the focal sphere, combining teleseismic and local body wave data helps constrain source parameters better. Here we present a new CAP method (CAPjoint), which exploits both teleseismic body waveforms (P and SH waves) and local waveforms (Pnl, Rayleigh and Love waves) to determine source parameters. For an earthquake in Nevada that is well recorded by a dense local network (USArray stations), we compare the results from CAPjoint with those from the traditional CAP method involving only local waveforms, and use bootstrapping statistics to show that the results derived by CAPjoint are stable and reliable. Even with only one local station included in the joint inversion, the accuracy of source parameters such as moment and strike can be substantially improved.
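A sketch of the kind of weighted waveform misfit such a joint local-plus-teleseismic grid search might minimize is given below. The weighting, normalization, and toy traces are assumptions, not the CAPjoint definitions.

```python
import numpy as np

def waveform_misfit(obs, syn):
    """L2 misfit between an observed and a synthetic trace, normalized by the data norm."""
    return np.linalg.norm(obs - syn) / np.linalg.norm(obs)

def joint_misfit(local_pairs, tele_pairs, w_local=1.0, w_tele=1.0):
    """local_pairs / tele_pairs: lists of (observed, synthetic) trace pairs for one trial mechanism."""
    m_local = np.mean([waveform_misfit(o, s) for o, s in local_pairs]) if local_pairs else 0.0
    m_tele = np.mean([waveform_misfit(o, s) for o, s in tele_pairs]) if tele_pairs else 0.0
    return w_local * m_local + w_tele * m_tele

# Toy usage: one local Pnl trace and two teleseismic P traces for a candidate (strike, dip, rake, depth).
rng = np.random.default_rng(1)
obs = rng.standard_normal(200)
local = [(obs, obs * 0.9)]
tele = [(obs, obs * 1.1), (obs, np.roll(obs, 2))]
print(joint_misfit(local, tele))
```

In a full inversion this scalar would be evaluated over a grid (or random search) of focal mechanisms and depths, and the bootstrap mentioned above resamples the station set to gauge how stable the minimum is.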
Ellefsen, Kyle L; Settle, Brett; Parker, Ian; Smith, Ian F
2014-09-01
Local Ca(2+) transients such as puffs and sparks form the building blocks of cellular Ca(2+) signaling in numerous cell types. They have traditionally been studied by linescan confocal microscopy, but advances in TIRF microscopy together with improved electron-multiplied CCD (EMCCD) cameras now enable rapid (>500 frames s(-1)) imaging of subcellular Ca(2+) signals with high spatial resolution in two dimensions. This approach yields vastly more information (ca. 1 Gb min(-1)) than linescan imaging, rendering visual identification and analysis of the imaged local events both laborious and subject to user bias. Here we describe a routine to rapidly automate the identification and analysis of local Ca(2+) events. It features an intuitive graphical user interface and runs under Matlab and the open-source Python software. The underlying algorithm features spatial and temporal noise filtering to reliably detect even small events in the presence of noisy and fluctuating baselines; localizes sites of Ca(2+) release with sub-pixel resolution; facilitates user review and editing of data; and outputs time sequences of fluorescence ratio signals for identified event sites along with Excel-compatible tables listing the amplitudes and kinetics of events. Copyright © 2014 Elsevier Ltd. All rights reserved.
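The detection pipeline described above (noise filtering, thresholding, sub-pixel centroiding) can be sketched as follows; the filter widths, threshold, and function names are illustrative assumptions and not the published algorithm.

```python
import numpy as np
from scipy import ndimage

def detect_events(stack, sigma_xy=1.0, sigma_t=2.0, z_thresh=4.0):
    """stack: (t, y, x) fluorescence movie. Returns (t, y, x) centroids of candidate events."""
    smoothed = ndimage.gaussian_filter(stack.astype(float), sigma=(sigma_t, sigma_xy, sigma_xy))
    baseline = np.median(smoothed, axis=0)                                   # per-pixel baseline
    noise = np.median(np.abs(smoothed - baseline), axis=0) * 1.4826 + 1e-9   # robust sigma (MAD)
    z = (smoothed - baseline) / noise                                        # normalized deviation
    labels, n = ndimage.label(z > z_thresh)                                  # connected supra-threshold regions
    return ndimage.center_of_mass(z, labels, index=range(1, n + 1))          # sub-pixel centroids

movie = np.random.poisson(100, size=(200, 64, 64)).astype(float)
movie[80:90, 30:34, 30:34] += 120.0   # synthetic "puff"
print(detect_events(movie))
```

From each detected site, time courses and amplitude/kinetics tables would then be extracted, which is the output format the abstract describes.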
A robust spatial filtering technique for multisource localization and geoacoustic inversion.
Stotts, S A
2005-07-01
Geoacoustic inversion and source localization using beamformed data from a ship of opportunity has been demonstrated with a bottom-mounted array. An alternative approach, which lies within a class referred to as spatial filtering, transforms element level data into beam data, applies a bearing filter, and transforms back to element level data prior to performing inversions. Automation of this filtering approach is facilitated for broadband applications by restricting the inverse transform to the degrees of freedom of the array, i.e., the effective number of elements, for frequencies near or below the design frequency. A procedure is described for nonuniformly spaced elements that guarantees filter stability well above the design frequency. Monitoring energy conservation with respect to filter output confirms filter stability. Filter performance with both uniformly spaced and nonuniformly spaced array elements is discussed. Vertical (range and depth) and horizontal (range and bearing) ambiguity surfaces are constructed to examine filter performance. Examples that demonstrate this filtering technique with both synthetic data and real data are presented along with comparisons to inversion results using beamformed data. Examinations of cost functions calculated within a simulated annealing algorithm reveal the efficacy of the approach.
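The beam-space bearing-filtering idea described above can be sketched as: steer element data into beams, zero beams outside a bearing sector, and map back to element space with a pseudo-inverse truncated to the array's degrees of freedom. Array geometry, sector width, and function names below are illustrative assumptions, not the paper's processing chain.

```python
import numpy as np

def steering_matrix(n_elem, spacing, wavelength, bearings_deg):
    """Plane-wave steering vectors for a uniform line array (columns indexed by bearing)."""
    k = 2 * np.pi / wavelength
    x = np.arange(n_elem) * spacing
    theta = np.deg2rad(bearings_deg)
    return np.exp(1j * k * np.outer(x, np.sin(theta)))          # (n_elem, n_beams)

def bearing_filter(elem_data, A, keep_mask, rank):
    """elem_data: (n_elem, n_snapshots). Zero beams outside the sector, return element-level data."""
    beams = A.conj().T @ elem_data                              # element space -> beam space
    beams[~keep_mask, :] = 0.0                                  # bearing filter
    U, s, Vh = np.linalg.svd(A.conj().T, full_matrices=False)
    s_inv = np.where(np.arange(s.size) < rank, 1.0 / s, 0.0)    # truncate to the array's degrees of freedom
    back = (Vh.conj().T * s_inv) @ U.conj().T                   # truncated pseudo-inverse
    return back @ beams                                         # beam space -> element space

bearings = np.linspace(-90, 90, 181)
A = steering_matrix(n_elem=32, spacing=0.5, wavelength=1.0, bearings_deg=bearings)
x = np.random.randn(32, 64) + 1j * np.random.randn(32, 64)      # toy element-level snapshots
keep = np.abs(bearings - 30.0) < 10.0                           # pass a 20-degree sector about 30 degrees
y = bearing_filter(x, A, keep, rank=32)                         # rank ~ effective number of elements
print(y.shape)
```

Monitoring the energy of the output relative to the input, as the abstract suggests, gives a simple check that the truncated inverse transform remains stable above the design frequency.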
Prediction of a Densely Loaded Particle-Laden Jet using a Euler-Lagrange Dense Spray Model
NASA Astrophysics Data System (ADS)
Pakseresht, Pedram; Apte, Sourabh V.
2017-11-01
Modeling of a dense spray regime using an Euler-Lagrange discrete-element approach is challenging because of locally high volume loading. A subgrid cluster of droplets can lead to locally high void fractions for the disperse phase. Under these conditions, spatio-temporal changes in the carrier-phase volume fraction, which are commonly neglected in spray simulations with an Euler-Lagrange two-way coupling model, can become important. Accounting for the carrier-phase volume fraction variations leads to zero-Mach-number, variable-density governing equations. With pressure-based solvers, this gives rise to a source term in the pressure Poisson equation and a non-divergence-free velocity field, as sketched below. To test the validity and predictive capability of such an approach, a round jet laden with solid particles is investigated using Direct Numerical Simulation and compared with available experimental data for different loadings. Various volume fractions spanning from dilute to dense regimes are investigated with and without taking into account the volume displacement effects. The predictions of the two approaches are compared and analyzed to investigate the effectiveness of the dense spray model. Financial support was provided by the National Aeronautics and Space Administration (NASA).
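A minimal statement of the variable-density, zero-Mach-number bookkeeping referred to above, written in generic notation as an assumption rather than the authors' exact discretization: with carrier-phase volume fraction \(\theta_f\) and fluid velocity \(\mathbf{u}_f\), conservation of carrier-phase volume gives

\[
\frac{\partial \theta_f}{\partial t} + \nabla\cdot\bigl(\theta_f \mathbf{u}_f\bigr) = 0
\quad\Longrightarrow\quad
\nabla\cdot\mathbf{u}_f = -\frac{1}{\theta_f}\,\frac{D\theta_f}{Dt},
\]

so wherever particles displace fluid (\(\theta_f < 1\) and changing in time), the velocity field is no longer divergence-free and the right-hand side enters the pressure Poisson equation as an additional source term.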
Heading Estimation for Pedestrian Dead Reckoning Based on Robust Adaptive Kalman Filtering.
Wu, Dongjin; Xia, Linyuan; Geng, Jijun
2018-06-19
Pedestrian dead reckoning (PDR) using smartphone-embedded micro-electro-mechanical system (MEMS) sensors plays a key role in ubiquitous localization indoors and outdoors. However, as a relative localization method, it suffers from error accumulation, which prevents it from long-term independent running. Heading estimation error is one of the main sources of location error; therefore, in order to improve the location tracking performance of the PDR method in complex environments, an approach based on robust adaptive Kalman filtering (RAKF) for estimating accurate headings is proposed. In our approach, outputs from gyroscope, accelerometer, and magnetometer sensors are fused in a Kalman filtering (KF) solution in which heading measurements derived from accelerations and magnetic field data correct the states integrated from angular rates. In order to identify and control measurement outliers, a maximum likelihood-type estimator (M-estimator)-based model is used. Moreover, an adaptive factor is applied to resist the negative effects of state model disturbances. Extensive experiments under static and dynamic conditions were conducted in indoor environments. The experimental results demonstrate that the proposed approach provides more accurate heading estimates and supports more robust and dynamic adaptive location tracking, compared with methods based on conventional KF.
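The sketch below shows a minimal robust, adaptive one-state heading filter in the spirit described above: the gyroscope rate drives the prediction, the accelerometer/magnetometer heading is the measurement, a Huber-type weight down-weights outlying measurements, and an adaptive factor inflates the process noise when state disturbances are suspected. All tuning values and the class name are assumptions, not the paper's filter.

```python
import numpy as np

def wrap(a):
    """Wrap an angle to (-pi, pi]."""
    return (a + np.pi) % (2 * np.pi) - np.pi

class RobustHeadingKF:
    def __init__(self, q=1e-4, r=0.05, huber_k=1.5):
        self.x, self.P, self.q, self.r, self.k = 0.0, 1.0, q, r, huber_k

    def step(self, gyro_rate, dt, heading_meas, adaptive=1.0):
        # Predict: integrate the gyro rate; the adaptive factor inflates process noise if needed.
        self.x = wrap(self.x + gyro_rate * dt)
        self.P += self.q * adaptive
        # Robust update: a Huber-type weight shrinks the influence of outlying heading measurements.
        innov = wrap(heading_meas - self.x)
        s = np.sqrt(self.P + self.r)
        w = 1.0 if abs(innov) <= self.k * s else (self.k * s) / abs(innov)
        K = self.P / (self.P + self.r / w)
        self.x = wrap(self.x + K * innov)
        self.P *= (1.0 - K)
        return self.x

kf = RobustHeadingKF()
for t in range(100):
    meas = 0.5 + np.random.normal(0, 0.05) + (3.0 if t == 50 else 0.0)  # one gross magnetic outlier
    est = kf.step(gyro_rate=0.0, dt=0.01, heading_meas=meas)
print(est)
```

The single gross outlier injected at step 50 barely moves the estimate because its innovation is down-weighted, which is the behavior the M-estimator is there to provide.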
Systematic study of target localization for bioluminescence tomography guided radiation therapy
Yu, Jingjing; Zhang, Bin; Iordachita, Iulian I.; Reyes, Juvenal; Lu, Zhihao; Brock, Malcolm V.; Patterson, Michael S.; Wong, John W.
2016-01-01
Purpose: To overcome the limitation of CT/cone-beam CT (CBCT) in guiding radiation for soft tissue targets, the authors developed a spectrally resolved bioluminescence tomography (BLT) system for the small animal radiation research platform. The authors systematically assessed the performance of the BLT system in terms of target localization and the ability to resolve two neighboring sources in simulations, tissue-mimicking phantom, and in vivo environments. Methods: Multispectral measurements acquired in a single projection were used for the BLT reconstruction. The incomplete variables truncated conjugate gradient algorithm with an iterative permissible region shrinking strategy was employed as the optimization scheme to reconstruct source distributions. Simulation studies were conducted for single spherical sources with sizes from 0.5 to 3 mm radius at depth of 3–12 mm. The same configuration was also applied for the double source simulation with source separations varying from 3 to 9 mm. Experiments were performed in a standalone BLT/CBCT system. Two self-illuminated sources with 3 and 4.7 mm separations placed inside a tissue-mimicking phantom were chosen as the test cases. Live mice implanted with single-source at 6 and 9 mm depth, two sources at 3 and 5 mm separation at depth of 5 mm, or three sources in the abdomen were also used to illustrate the localization capability of the BLT system for multiple targets in vivo. Results: For simulation study, approximate 1 mm accuracy can be achieved at localizing center of mass (CoM) for single-source and grouped CoM for double source cases. For the case of 1.5 mm radius source, a common tumor size used in preclinical study, their simulation shows that for all the source separations considered, except for the 3 mm separation at 9 and 12 mm depth, the two neighboring sources can be resolved at depths from 3 to 12 mm. Phantom experiments illustrated that 2D bioluminescence imaging failed to distinguish two sources, but BLT can provide 3D source localization with approximately 1 mm accuracy. The in vivo results are encouraging that 1 and 1.7 mm accuracy can be attained for the single-source case at 6 and 9 mm depth, respectively. For the 2 sources in vivo study, both sources can be distinguished at 3 and 5 mm separations, and approximately 1 mm localization accuracy can also be achieved. Conclusions: This study demonstrated that their multispectral BLT/CBCT system could be potentially applied to localize and resolve multiple sources at wide range of source sizes, depths, and separations. The average accuracy of localizing CoM for single-source and grouped CoM for double sources is approximately 1 mm except deep-seated target. The information provided in this study can be instructive to devise treatment margins for BLT-guided irradiation. These results also suggest that the 3D BLT system could guide radiation for the situation with multiple targets, such as metastatic tumor models. PMID:27147371
Staff - David L. LePain | Alaska Division of Geological & Geophysical Surveys
Fossil fuel and geothermal energy sources for local use in Alaska: Summary of available information, Alaska Division of Geological & Geophysical Surveys.
Dynamic Spatial Hearing by Human and Robot Listeners
NASA Astrophysics Data System (ADS)
Zhong, Xuan
This study consisted of several related projects on dynamic spatial hearing by both human and robot listeners. The first experiment investigated the maximum number of sound sources that human listeners could localize at the same time. Speech stimuli were presented simultaneously from different loudspeakers at multiple time intervals. The maximum number of perceived sound sources was close to four. The second experiment asked whether the amplitude modulation of multiple static sound sources could lead to the perception of auditory motion. On the horizontal and vertical planes, four independent noise sound sources with 60° spacing were amplitude modulated with consecutively larger phase delays. At lower modulation rates, motion could be perceived by human listeners in both cases. The third experiment asked whether several sources at static positions could serve as "acoustic landmarks" to improve the localization of other sources. Four continuous speech sound sources were placed on the horizontal plane with 90° spacing and served as the landmarks. The task was to localize a noise that was played for only three seconds while the listener was passively rotated in a chair in the middle of the loudspeaker array. The human listeners were better able to localize the sound sources with landmarks than without. The remaining experiments used an acoustic manikin in an attempt to fuse binaural recordings and motion data to localize sound sources. A dummy head with recording devices was mounted on top of a rotating chair and motion data were collected. The fourth experiment showed that an Extended Kalman Filter could be used to localize sound sources in a recursive manner. The fifth experiment demonstrated the use of a fitting method for separating multiple sound sources.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Jiangjiang; Li, Weixuan; Zeng, Lingzao
Surrogate models are commonly used in Bayesian approaches such as Markov Chain Monte Carlo (MCMC) to avoid repetitive CPU-demanding model evaluations. However, the approximation error of a surrogate may lead to biased estimations of the posterior distribution. This bias can be corrected by constructing a very accurate surrogate or implementing MCMC in a two-stage manner. Since two-stage MCMC requires extra original model evaluations, the computational cost is still high. If the information from measurements is incorporated, a locally accurate approximation of the original model can be adaptively constructed at low computational cost. Based on this idea, we propose a Gaussian process (GP) surrogate-based Bayesian experimental design and parameter estimation approach for groundwater contaminant source identification problems. A major advantage of the GP surrogate is that it provides a convenient estimate of the approximation error, which can be incorporated in the Bayesian formula to avoid over-confident estimation of the posterior distribution. The proposed approach is tested with a numerical case study. Without sacrificing estimation accuracy, the new approach achieves about a 200-fold speed-up compared to our previous work using two-stage MCMC.
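The key idea above, folding the surrogate's predictive variance into the likelihood so the posterior is not over-confident, can be sketched as follows. The kernel, priors, toy forward model, and use of scikit-learn are illustrative assumptions, not the paper's setup.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)
forward = lambda theta: np.sin(3 * theta) + 0.5 * theta          # stand-in for the groundwater model

# Fit a GP surrogate to a small design of expensive model runs.
theta_train = rng.uniform(-2, 2, size=(20, 1))
gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), normalize_y=True)
gp.fit(theta_train, forward(theta_train[:, 0]))

obs, sigma_obs = forward(np.array([0.7]))[0], 0.1                # one synthetic measurement

def log_likelihood(theta):
    mu, sd = gp.predict(np.atleast_2d(theta), return_std=True)
    var = sigma_obs**2 + sd[0]**2          # measurement noise + surrogate approximation error
    return -0.5 * ((obs - mu[0])**2 / var + np.log(2 * np.pi * var))

print(log_likelihood(0.7), log_likelihood(1.5))
```

This log-likelihood would be called inside an MCMC sampler; where the GP is uncertain, the inflated variance flattens the likelihood instead of letting the surrogate's error masquerade as information.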
An integrated approach to identify the origin of PM10 exceedances.
Amodio, M; Andriani, E; de Gennaro, G; Demarinis Loiotile, A; Di Gilio, A; Placentino, M C
2012-09-01
This study aimed to develop an integrated approach for the characterization of particulate matter (PM) pollution events in the South of Italy. PM(10) and PM(2.5) daily samples were collected from June to November 2008 at an urban background site located in Bari (Puglia Region, South of Italy). Meteorological data, particle size distributions and atmospheric dispersion conditions were also monitored in order to provide information concerning the different features of PM sources. The collected data suggested four indicators for characterizing different PM(10) exceedances. The PM(2.5)/PM(10) ratio, natural radioactivity, aerosol maps with back-trajectory analysis, and particle size distributions were considered in order to evaluate the contribution of local anthropogenic sources and to determine the different origins of intrusive air masses coming from long-range transport, such as African dust outbreaks and aerosol particles from Central and Eastern Europe. The obtained results were confirmed by applying principal component analysis to the particle number concentration dataset and by the chemical characterization of the samples (PM(10) and PM(2.5)). The integrated approach to PM study suggested in this paper can be useful in supporting air quality managers in the development of cost-effective control strategies and the application of more suitable risk management approaches.
NASA Astrophysics Data System (ADS)
Li, K. Betty; Goovaerts, Pierre; Abriola, Linda M.
2007-06-01
Contaminant mass discharge across a control plane downstream of a dense nonaqueous phase liquid (DNAPL) source zone has great potential to serve as a metric for the assessment of the effectiveness of source zone treatment technologies and for the development of risk-based source-plume remediation strategies. However, too often the uncertainty of mass discharge estimated in the field is not accounted for in the analysis. In this paper, a geostatistical approach is proposed to estimate mass discharge and to quantify its associated uncertainty using multilevel transect measurements of contaminant concentration (C) and hydraulic conductivity (K). The approach adapts the p-field simulation algorithm to propagate and upscale the uncertainty of mass discharge from the local uncertainty models of C and K. Application of this methodology to numerically simulated transects shows that, with a regular sampling pattern, geostatistics can provide an accurate model of uncertainty for the transects that are associated with low levels of source mass removal (i.e., transects that have a large percentage of contaminated area). For high levels of mass removal (i.e., transects with a few hot spots and large areas of near-zero concentration), a total sampling area equivalent to 6˜7% of the transect is required to achieve accurate uncertainty modeling. A comparison of the results for different measurement supports indicates that samples taken with longer screen lengths may lead to less accurate models of mass discharge uncertainty. The quantification of mass discharge uncertainty, in the form of a probability distribution, will facilitate risk assessment associated with various remediation strategies.
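The sketch below illustrates only the general idea of propagating local uncertainty in concentration (C) and hydraulic conductivity (K) into an uncertainty model for transect mass discharge. It uses plain Monte Carlo with independent cells, not the correlated p-field simulation the paper adapts; the grid dimensions, hydraulic gradient, and lognormal parameters are all hypothetical.

```python
# Simplified Monte Carlo propagation of per-cell C and K uncertainty to mass discharge.
import numpy as np

rng = np.random.default_rng(42)
ny, nz = 20, 10                 # transect discretized into 20 x 10 cells
cell_area = 0.25                # m^2 per cell
gradient = 0.005                # hydraulic gradient (dimensionless)

# Local uncertainty models: per-cell lognormal means (log space) and spreads
logC_mu, logC_sd = rng.normal(-2.0, 1.0, (ny, nz)), 0.5   # C in kg/m^3
logK_mu, logK_sd = rng.normal(-9.0, 0.8, (ny, nz)), 0.3   # K in m/s

def one_realization():
    C = np.exp(rng.normal(logC_mu, logC_sd))
    K = np.exp(rng.normal(logK_mu, logK_sd))
    return np.sum(C * K * gradient * cell_area)   # kg/s across the transect

discharge = np.array([one_realization() for _ in range(2000)])
print("mass discharge: median %.3e kg/s, 90%% interval [%.3e, %.3e]"
      % (np.median(discharge), *np.percentile(discharge, [5, 95])))
```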
NASA Astrophysics Data System (ADS)
Cetina-Heredia, Paulina; van Sebille, Erik; Matear, Richard J.; Roughan, Moninya
2018-02-01
The Great Australian Bight (GAB), a coastal sea bordered by the Pacific, Southern, and Indian Oceans, sustains one of the largest fisheries in Australia but the geographical origin of nutrients that maintain its productivity is not fully known. We use 12 years of modeled data from a coupled hydrodynamic and biogeochemical model and an Eulerian-Lagrangian approach to quantify nitrate supply to the GAB and the region between the GAB and the Subantarctic Australian Front (GAB-SAFn), identify phytoplankton growth within the GAB, and ascertain the source of nitrate that fuels it. We find that nitrate concentrations have a decorrelation timescale of ˜60 days; since most of the water from surrounding oceans takes longer than 60 days to reach the GAB, 23% and 75% of nitrate used by phytoplankton to grow are sourced within the GAB and from the GAB-SAFn, respectively. Thus, most of the nitrate is recycled locally. Although nitrate concentrations and fluxes into the GAB are greater below 100 m than above, 79% of the nitrate fueling phytoplankton growth is sourced from above 100 m. Our findings suggest that topographical uplift and stratification erosion are key mechanisms delivering nutrients from below the nutricline into the euphotic zone and triggering large phytoplankton growth. We find annual and semiannual periodicities in phytoplankton growth, peaking in the austral spring and autumn when the mixed layer deepens leading to a subsurface maximum of phytoplankton growth. This study highlights the importance of examining phytoplankton growth at depth and the utility of Lagrangian approaches.
NASA Astrophysics Data System (ADS)
Yuan, Zibing; Yadav, Varun; Turner, Jay R.; Louie, Peter K. K.; Lau, Alexis Kai Hon
2013-09-01
Despite extensive emission control measures targeting motor vehicles and to a lesser extent other sources, annual-average PM10 mass concentrations in Hong Kong have remained relatively constant for the past several years and for some air quality metrics, such as the frequency of poor visibility days, conditions have degraded. The underlying drivers for these long-term trends were examined by performing source apportionment on eleven years (1998-2008) of data for seven monitoring sites in the Hong Kong PM10 chemical speciation network. Nine factors were resolved using Positive Matrix Factorization. These factors were assigned to emission source categories that were classified as local (operationally defined as within the Hong Kong Special Administrative Region) or non-local based on temporal and spatial patterns in the source contribution estimates. This data-driven analysis provides strong evidence that local controls on motor vehicle emissions have been effective in reducing motor vehicle-related ambient PM10 burdens with annual-average contributions at neighborhood- and larger-scale monitoring stations decreasing by ˜6 μg m-3 over the eleven year period. However, this improvement has been offset by an increase in annual-average contributions from non-local contributions, especially secondary sulfate and nitrate, of ˜8 μg m-3 over the same time period. As a result, non-local source contributions to urban-scale PM10 have increased from 58% in 1998 to 70% in 2008. Most of the motor vehicle-related decrease and non-local source driven increase occurred over the period 1998-2004 with more modest changes thereafter. Non-local contributions increased most dramatically for secondary sulfate and secondary nitrate factors and thus combustion-related control strategies, including but not limited to power plants, are needed for sources located in the Pearl River Delta and more distant regions to improve air quality conditions in Hong Kong. PMF-resolved source contribution estimates were also used to examine differential contributions of emission source categories during high PM episodes compared to study-average behavior. While contributions from all source categories increased to some extent on high PM days, the increases were disproportionately high for the non-local sources. Thus, controls on emission sources located outside the Hong Kong Special Administrative Region will be needed to effectively decrease the frequency and severity of high PM episodes.
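To make the factor-analytic step concrete, here is a minimal sketch of source apportionment in the spirit of Positive Matrix Factorization, approximated with scikit-learn's non-negative matrix factorization (NMF does not weight by measurement uncertainty as PMF does). The speciated data matrix, number of factors, and noise level are synthetic placeholders, not the Hong Kong network data.

```python
# PMF-style decomposition of a speciated PM matrix into profiles and contributions.
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
n_samples, n_species, n_factors = 300, 12, 3

# Synthetic daily speciation data = contributions (G) x profiles (F) + noise
G_true = rng.gamma(2.0, 1.0, (n_samples, n_factors))
F_true = rng.dirichlet(np.ones(n_species), n_factors)
X = G_true @ F_true + rng.normal(0, 0.01, (n_samples, n_species)).clip(0)

model = NMF(n_components=n_factors, init="nndsvda", max_iter=1000, random_state=0)
G = model.fit_transform(X)          # daily source contribution estimates
F = model.components_               # source profiles (species signatures)

# Average mass contribution of each resolved factor
totals = G * F.sum(axis=1)
print("mean factor contributions:", totals.mean(axis=0))
```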
Infrared imaging for tumor detection using antibodies conjugated magnetic nanoparticles
NASA Astrophysics Data System (ADS)
Levy, Arie; Gannot, Israel
2008-04-01
Thermography is a well-known approach for cost-effective early detection of tumors. However, even now, more than five decades after its introduction, it is not considered a primary tool for early cancer detection, mainly because of its poor performance compared to other techniques. This work offers a new thermographic approach for tumor detection that is based on the use of antibody-conjugated magnetic nanoparticles ("MNP") as a tumor-specific marker. We name this method "Thermal Beacon Thermography" ("TBT"), and it has the potential to provide considerable advantages over the conventional thermographic approach. The TBT approach is based on the fact that MNPs produce heat when subjected to an alternating magnetic field ("AMF"). Once these particles are injected into the patient's blood stream, they specifically accumulate at the tumor site, providing a local heat source at the tumor that can be activated and deactivated by external control. This heat source can be used as a "thermal beacon" in order to detect and locate the tumor by detecting temperature changes at the skin surface using an IR camera and comparing them to a set of pre-calculated numerical predictions. Experiments were conducted using an in vitro tissue model together with an industrial inductive heating system and an IR camera. The results show that this approach can specifically detect a small tumor phantom (D = 1.5 mm) embedded below the surface of the tissue phantom.
Using Network Theory to Understand Seismic Noise in Dense Arrays
NASA Astrophysics Data System (ADS)
Riahi, N.; Gerstoft, P.
2015-12-01
Dense seismic arrays offer an opportunity to study anthropogenic seismic noise sources with unprecedented detail. Man-made sources are typically high in frequency and low in intensity, and they propagate as surface waves. As a result, attenuation restricts their measurable footprint to a small subset of sensors. Medium heterogeneities can further introduce wave-front perturbations that limit processing based on travel time. We demonstrate a non-parametric technique that can reliably identify very local events within the array as a function of frequency and time without using travel times. The approach estimates the non-zero support of the array covariance matrix and then uses network analysis tools to identify clusters of sensors that are sensing a common source. We verify the method on simulated data and then apply it to the Long Beach (CA) geophone array. The method exposes a helicopter traversing the array, oil production facilities with different characteristics, and the fact that noise sources near roads tend to be around 10-20 Hz.
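A minimal sketch of the covariance-support idea, not the authors' code: estimate the array covariance, keep only strong inter-sensor coherences, and search the resulting graph for connected groups of sensors that see a common local source. The sensor geometry, synthetic source, attenuation length, and threshold below are all assumed for illustration.

```python
# Thresholded covariance support + graph components to find sensor clusters.
import numpy as np
import networkx as nx

rng = np.random.default_rng(3)
n_sensors, n_snapshots = 40, 200
xy = rng.uniform(0, 1000, (n_sensors, 2))           # sensor positions (m)
src = np.array([300.0, 400.0])                      # hidden local source

# Synthetic narrowband snapshots: source seen only by nearby sensors + noise
dist = np.linalg.norm(xy - src, axis=1)
gain = np.exp(-dist / 200.0)                        # strong attenuation with distance
sig = rng.normal(size=n_snapshots)
data = np.outer(sig, gain) + 0.2 * rng.normal(size=(n_snapshots, n_sensors))

C = np.corrcoef(data.T)                             # sample coherence proxy
A = (np.abs(C) > 0.4) & ~np.eye(n_sensors, dtype=bool)

G = nx.from_numpy_array(A.astype(int))
clusters = [c for c in nx.connected_components(G) if len(c) > 2]
print("sensor clusters sensing a common source:", clusters)
```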
NASA Technical Reports Server (NTRS)
Mayr, H. G.; Harris, I.; Herrero, F. A.; Varosi, F.
1984-01-01
A transfer function approach is taken in constructing a spectral model of the acoustic-gravity wave response in a multiconstituent thermosphere. The model is then applied to describing the thermospheric response to various sources around the globe. Zonal spherical harmonics serve to model the horizontal variations in propagating waves which, when integrated with respect to height, generate a transfer function for a vertical source distribution in the thermosphere. Four wave components are characterized as resonance phenomena and are associated with magnetic activity and ionospheric disturbances. The waves are either trapped or propagate, the latter becoming significant when possessing frequencies above 3 cycles/day. The energy input is distributed by thermospheric winds. The disturbances decay slowly, mainly due to heat conduction and diffusion. Gravity waves appear abruptly and are connected to a sudden switching on or off of a source. Turning off a source coincides with a reversal of the local atmospheric circulation.
Spatiotemporal source tuning filter bank for multiclass EEG based brain computer interfaces.
Acharya, Soumyadipta; Mollazadeh, Moshen; Murari, Kartikeya; Thakor, Nitish
2006-01-01
Noninvasive brain-computer interfaces (BCI) allow people to communicate by modulating features of their electroencephalogram (EEG). Spatiotemporal filtering plays a vital role in multi-class, EEG-based BCI. In this study, we used a novel combination of principal component analysis, independent component analysis and dipole source localization to design a spatiotemporal multiple source tuning (SPAMSORT) filter bank, each channel of which was tuned to the activity of an underlying dipole source. Changes in the event-related spectral perturbation (ERSP) were measured and used to train a linear support vector machine to classify between four classes of motor imagery tasks (left hand, right hand, foot and tongue) for one subject. ERSP values were significantly (p<0.01) different across tasks and better (p<0.01) than conventional spatial filtering methods (large Laplacian and common average reference). Classification resulted in an average accuracy of 82.5%. This approach could lead to promising BCI applications such as control of a prosthesis with multiple degrees of freedom.
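A minimal sketch of the processing chain only (PCA, then ICA, then band-power features, then a linear SVM), omitting the dipole source localization step and using random data in place of real EEG. The epoch shapes, the 8-13 Hz band choice, and the class labels are hypothetical, so accuracy here is expected to be near chance.

```python
# PCA -> ICA -> band-power features -> linear SVM (toy data, no dipole fitting).
import numpy as np
from sklearn.decomposition import PCA, FastICA
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(7)
n_trials, n_channels, n_times, fs = 120, 32, 512, 256
X = rng.normal(size=(n_trials, n_channels, n_times))   # stand-in EEG epochs
y = rng.integers(0, 4, n_trials)                        # 4 imagery classes

# Spatial decomposition learned on concatenated trials
flat = X.transpose(0, 2, 1).reshape(-1, n_channels)     # (trials*times, channels)
pca = PCA(n_components=16).fit(flat)
ica = FastICA(n_components=16, random_state=0).fit(pca.transform(flat))

def band_power(epoch):
    comps = ica.transform(pca.transform(epoch.T))        # (times, components)
    spec = np.abs(np.fft.rfft(comps, axis=0)) ** 2
    freqs = np.fft.rfftfreq(n_times, 1.0 / fs)
    mu = (freqs >= 8) & (freqs <= 13)                    # mu band (8-13 Hz)
    return np.log(spec[mu].mean(axis=0))

features = np.array([band_power(ep) for ep in X])
scores = cross_val_score(SVC(kernel="linear"), features, y, cv=5)
print("cross-validated accuracy (chance ~0.25):", scores.mean())
```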
Crilley, Leigh R; Qadir, Raeed M; Ayoko, Godwin A; Schnelle-Kreis, Jürgen; Abbaszade, Gülcin; Orasche, Jürgen; Zimmermann, Ralf; Morawska, Lidia
2014-08-01
Children are particularly susceptible to air pollution, and schools are examples of urban microenvironments that can account for a large portion of children's exposure to airborne particles. This paper therefore aimed to determine the sources of primary airborne particles that children are exposed to at school by analyzing selected organic molecular markers at 11 urban schools in Brisbane, Australia. Positive matrix factorization analysis identified four sources at the schools: vehicle emissions, biomass burning, meat cooking and plant wax emissions, accounting for 45%, 29%, 16% and 7% of the organic carbon, respectively. Biomass burning peaked in winter due to prescribed burning of bushland around Brisbane. Overall, the results indicated that both local (traffic) and regional (biomass burning) sources of primary organic aerosols influence the levels of ambient particles that children are exposed to at the schools. These results have implications for potential control strategies for mitigating exposure at schools. Copyright © 2014 Elsevier Ltd. All rights reserved.
Min, Zaw; Moser, Stephen A.; Pappas, Peter G.
2012-01-01
Human protothecal infection is uncommon and can present as localized or systemic disease. Disseminated Prototheca algaemia tends to occur in immunocompromised patients (solid organ transplants, hematological malignancies) with high mortality. Diagnosis can be missed or delayed due to unusual clinical presentation and/or under-recognition of the characteristic microscopic features of Prototheca species. A combined approach that includes removal of the source of infection and intravenous amphotericin B provides the best chance of cure. PMID:24432207
The magnetic low of central Europe: analysis and interpretation by a multi scale approach.
NASA Astrophysics Data System (ADS)
Milano, Maurizio; Fedi, Maurizio
2016-04-01
The objective of this work is an interpretation of the European magnetic low (EML), the main magnetic anomaly characterizing the magnetic field of central Europe at high altitude. It extends from eastern France to Poland and is located above the main geological boundary of Europe, the Trans-European Suture Zone (TESZ), which separates the western, thinner Paleozoic platform from the eastern, thicker Precambrian platform. In particular, the EML has a relative magnetic high north-east of it, showing a reverse dipolar behavior that many authors have tried to interpret in the past, including through high-altitude satellite exploration. We used an aeromagnetic dataset and employed a level-to-level upward continuation from 1 km up to 200 km, following a multiscale approach through which the anomalies generated by sources placed at different depths can be discriminated. Low-altitude magnetic maps show a complex pattern of high-frequency anomalies up to an altitude of 50 km; then, increasing the altitude up to 200 km, the field simplifies gradually. In order to interpret the anomalies we generated maps of the total gradient (|T|) of the field at each upward-continued altitude, exploiting its ability to localize in a very simple way the edges of the sources and their horizontal positions without requiring a priori information about source parameters. From the total gradient maps at low altitude we obtained information about the position of shallow and localized sources producing patterns of small anomalies. In central Europe, most of them have a reverse dipolar behavior, probably related to metasedimentary rocks in the upper crust containing pyrrhotite with a strong remanent component. At higher altitudes the total gradient maps have been useful for giving a more complex explanation of the EML, taking into consideration the results obtained in previous studies. The maps at 150-200 km show that the maximum amplitude of |T| is localized exactly along the TESZ in the NW-SE direction. A simple contact model was therefore constructed to demonstrate that the main source generating the EML is the complex fault system of the TESZ. However, the |T| maxima are positioned not only along the suture zone but also in central Europe, showing that contributions to the EML also derive from sources placed in the Paleozoic platform with a reverse dipolar aspect. From these results it appears that the contributions responsible for the nature of this anomaly are linked first to the presence of the TESZ, which puts in contact two different platforms with different thicknesses, and also to the presence of bodies with a strong remanent component, which characterize part of the central European crust.
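For readers unfamiliar with the operation, level-to-level upward continuation of a gridded potential field amounts to multiplying its 2-D spectrum by exp(-|k|h), where h is the continuation height. The sketch below illustrates this on a synthetic anomaly; the grid spacing, heights, and anomaly shape are placeholders, not the aeromagnetic dataset used in the study.

```python
# Upward continuation of a gridded anomaly in the wavenumber domain.
import numpy as np

def upward_continue(field, dx, dy, h):
    ny, nx = field.shape
    kx = 2 * np.pi * np.fft.fftfreq(nx, d=dx)
    ky = 2 * np.pi * np.fft.fftfreq(ny, d=dy)
    KX, KY = np.meshgrid(kx, ky)
    k = np.sqrt(KX ** 2 + KY ** 2)
    return np.real(np.fft.ifft2(np.fft.fft2(field) * np.exp(-k * h)))

# Synthetic anomaly on a 1 km grid, continued upward by 50 and 200 km
x = np.arange(-256, 256) * 1000.0
X, Y = np.meshgrid(x, x)
anomaly = 100.0 * np.exp(-((X - 2e4) ** 2 + Y ** 2) / (2 * (1.5e4) ** 2))
for h in (50e3, 200e3):
    up = upward_continue(anomaly, 1000.0, 1000.0, h)
    print(f"peak amplitude at +{h/1e3:.0f} km: {up.max():.1f} nT")
```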
Fuselets: an agent based architecture for fusion of heterogeneous information and data
NASA Astrophysics Data System (ADS)
Beyerer, Jürgen; Heizmann, Michael; Sander, Jennifer
2006-04-01
A new architecture for fusing information and data from heterogeneous sources is proposed. The approach takes criminalistics as a model. In analogy to the work of detectives, who investigate crimes, software agents are initiated that pursue clues and try to consolidate or dismiss hypotheses. Like their human counterparts, they can consult expert agents if questions beyond their competence arise. Within the context of a certain task, region, and time interval, specialized operations are applied to each relevant information source, e.g. IMINT, SIGINT, ACINT,..., HUMINT, databases etc., in order to establish hit lists of first clues. Each clue is described by its pertaining facts, uncertainties, and dependencies in the form of a local degree-of-belief (DoB) distribution in a Bayesian sense. For each clue an agent is initiated which cooperates with other agents and experts. Expert agents help to make use of different information sources. Consultations with experts capable of accessing certain information sources result in changes to the DoB of the pertaining clue. Depending on how concentrated their DoB distributions are, clues are abandoned or pursued further to formulate task-specific hypotheses. Communication between the agents serves to find out whether different clues belong to the same cause and thus can be combined. At the end of the investigation process, the different hypotheses are evaluated by a jury and a final report is created that constitutes the fusion result. The proposed approach avoids calculating global DoB distributions by adopting a local Bayesian approximation and thus substantially reduces the complexity of the exact problem. Different information sources are transformed into DoB distributions using the maximum entropy paradigm, with known facts treated as constraints. Nominal, ordinal and cardinal quantities can be treated equally within this framework. The architecture is scalable by tailoring the number of agents according to the available computer resources, the priority of tasks, and the maximum duration of the fusion process. Furthermore, the architecture allows cooperative work of human and automated agents and experts, as long as not all subtasks can be accomplished automatically.
Estimate of main local sources to ambient ultrafine particle number concentrations in an urban area
NASA Astrophysics Data System (ADS)
Rahman, Md Mahmudur; Mazaheri, Mandana; Clifford, Sam; Morawska, Lidia
2017-09-01
Quantifying and apportioning the contribution of a range of sources to ultrafine particles (UFPs, D < 100 nm) is a challenge due to the complex nature of urban environments. Although vehicular emissions have long been considered one of the major sources of ultrafine particles in urban areas, the contribution of other major urban sources is not yet fully understood. This paper aims to determine and quantify the contribution of local ground traffic, nucleated particle (NP) formation and distant non-traffic (e.g. airport, oil refineries, and seaport) sources to the total ambient particle number concentration (PNC) in a busy, inner-city area of Brisbane, Australia, using Bayesian statistical modelling and other exploratory tools. The Bayesian model was trained on PNC data from days on which NP formation was known not to have occurred, hourly traffic counts, solar radiation data, and a smooth daily trend. The model was applied to apportion and quantify the contribution of NP formation and of local traffic and non-traffic sources to UFPs. The data analysis incorporated long-term measured time series of total PNC (D ≥ 6 nm), particle number size distributions (PSD, D = 8 to 400 nm), PM2.5, PM10, NOx, CO, meteorological parameters and traffic counts at a stationary monitoring site. The developed Bayesian model showed reliable predictive performance in quantifying the contribution of NP formation events to UFPs (up to 4 × 10^4 particles cm^-3), with significant day-to-day variability. The model identified potential NP formation and non-formation days based on the PNC data and quantified the source contributions to UFPs. Exploratory statistical analyses show that the total mean PNC during the middle of the day was up to 32% higher than during the peak morning and evening traffic periods, and this was associated with NP formation events. The majority of UFPs measured during the peak traffic and NP formation periods were between 30-100 nm and smaller than 30 nm, respectively. To date, this is the first application of a Bayesian model to apportion the contributions of different sources to UFPs, and therefore the importance of this study lies not only in its modelling outcomes but in demonstrating the applicability and advantages of this statistical approach to air pollution studies.
NASA Astrophysics Data System (ADS)
Hu, Jin; Tian, Jie; Pan, Xiaohong; Liu, Jiangang
2007-03-01
The purpose of this paper is to compare EEG source localization and fMRI during emotional processing. 108 pictures for EEG (categorized as positive, negative and neutral) and 72 pictures for fMRI were presented to 24 healthy, right-handed subjects. The fMRI data were analyzed using statistical parametric mapping with SPM2. LORETA was applied to grand-averaged ERP data to localize intracranial sources. Statistical analysis was implemented to compare the spatiotemporal activation of fMRI and EEG. The fMRI results are in accordance with the EEG source localization to some extent, although some mismatch in localization between the two methods was also observed. In the future we intend to apply the method of simultaneous EEG and fMRI recording to our study.
Locating arbitrarily time-dependent sound sources in three dimensional space in real time.
Wu, Sean F; Zhu, Na
2010-08-01
This paper presents a method for locating arbitrarily time-dependent acoustic sources in a free field in real time using only four microphones. The method is capable of handling a wide variety of acoustic signals, including broadband, narrowband, impulsive, and continuous sound over the entire audible frequency range, produced by multiple sources in three-dimensional (3D) space. Locations of acoustic sources are given in Cartesian coordinates. The underlying principle of the method is a hybrid approach that consists of modeling acoustic radiation from a point source in a free field, triangulation, and de-noising to enhance the signal-to-noise ratio (SNR). Numerical simulations are conducted to study the impacts of SNR, microphone spacing, source distance and frequency on the spatial resolution and accuracy of source localization. Based on these results, a simple device is fabricated that consists of four microphones mounted on three mutually orthogonal axes at an optimal distance, a four-channel signal conditioner, and a camera. Experiments are conducted in different environments to assess its effectiveness in locating sources that produce arbitrarily time-dependent acoustic signals, regardless of whether a sound source is stationary or moves in space, even to positions behind the measurement microphones. Practical limitations of the method are discussed.
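A minimal sketch of the triangulation step only, not the authors' full hybrid algorithm: given time differences of arrival (TDOA) at four microphones placed on three mutually orthogonal axes, the source position is found by nonlinear least squares. The array geometry, sound speed, and timing jitter are assumed for illustration.

```python
# 3-D source localization from TDOAs at a four-microphone orthogonal-axis array.
import numpy as np
from scipy.optimize import least_squares

c = 343.0                                        # speed of sound, m/s
mics = np.array([[0, 0, 0], [0.5, 0, 0], [0, 0.5, 0], [0, 0, 0.5]])  # m

def tdoa(src, mics):
    d = np.linalg.norm(mics - src, axis=1)
    return (d[1:] - d[0]) / c                    # delays relative to mic 0

true_src = np.array([2.0, -1.5, 1.0])
rng = np.random.default_rng(0)
measured = tdoa(true_src, mics) + rng.normal(0, 5e-6, 3)   # ~5 us timing jitter

sol = least_squares(lambda p: tdoa(p, mics) - measured, x0=np.ones(3))
print("estimated source position (m):", sol.x)
```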
NASA Astrophysics Data System (ADS)
Sekihara, Kensuke; Adachi, Yoshiaki; Kubota, Hiroshi K.; Cai, Chang; Nagarajan, Srikantan S.
2018-06-01
Objective. Magnetoencephalography (MEG) has a well-recognized weakness at detecting deeper brain activities. This paper proposes a novel algorithm for selective detection of deep sources by suppressing interference signals from superficial sources in MEG measurements. Approach. The proposed algorithm combines the beamspace preprocessing method with the dual signal space projection (DSSP) interference suppression method. A prerequisite of the proposed algorithm is prior knowledge of the location of the deep sources. The proposed algorithm first derives the basis vectors that span a local region just covering the locations of the deep sources. It then estimates the time-domain signal subspace of the superficial sources by using the projector composed of these basis vectors. Signals from the deep sources are extracted by projecting the row space of the data matrix onto the direction orthogonal to the signal subspace of the superficial sources. Main results. Compared with the previously proposed beamspace signal space separation (SSS) method, the proposed algorithm is capable of suppressing much stronger interference from superficial sources. This capability is demonstrated in our computer simulation as well as experiments using phantom data. Significance. The proposed bDSSP algorithm can be a powerful tool in studies of physiological functions of midbrain and deep brain structures.
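The sketch below illustrates only the generic projection idea behind this family of interference-suppression methods, not the full bDSSP algorithm: estimate the temporal subspace occupied by superficial sources from the part of the data explained by their lead fields, then project the data matrix away from that temporal subspace. The lead fields, sensor count, and waveforms are synthetic placeholders.

```python
# Temporal-subspace projection to suppress superficial-source interference (toy data).
import numpy as np

rng = np.random.default_rng(5)
n_sensors, n_times = 64, 1000
t = np.arange(n_times) / 1000.0

L_deep = rng.normal(size=(n_sensors, 1)) * 0.2     # weak deep-source lead field
L_sup = rng.normal(size=(n_sensors, 3))            # strong superficial lead fields
s_deep = np.sin(2 * np.pi * 7 * t)[None, :]
s_sup = rng.normal(size=(3, n_times))
B = L_deep @ s_deep + L_sup @ s_sup + 0.05 * rng.normal(size=(n_sensors, n_times))

# Temporal subspace of the superficial sources (spatial projection, then SVD)
P_sup = L_sup @ np.linalg.pinv(L_sup)              # projector onto superficial lead fields
_, _, Vt = np.linalg.svd(P_sup @ B, full_matrices=False)
V_int = Vt[:3].T                                   # dominant interference time courses

B_clean = B - B @ V_int @ V_int.T                  # project rows away from them

corr = np.corrcoef((L_deep.T @ B_clean).ravel(), s_deep.ravel())[0, 1]
print("correlation of recovered deep signal with truth: %.2f" % corr)
```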
Alternative transportation funding sources available to Virginia localities.
DOT National Transportation Integrated Search
2006-01-01
In 2003, the Virginia Department of Transportation developed a list of alternative transportation funding sources available to localities in Virginia. Alternative funding sources are defined as those that are not included in the annual interstate, pr...
The proton and helium anomalies in the light of the Myriad model
NASA Astrophysics Data System (ADS)
Salati, Pierre; Génolini, Yoann; Serpico, Pasquale; Taillet, Richard
2017-03-01
A hardening of the proton and helium fluxes is observed above a few hundred GeV/nuc. The distribution of local sources of primary cosmic rays has been suggested as a potential solution to this puzzling behavior. Some authors even claim that a single source is responsible for the observed anomalies. But how probable are these explanations? To answer that question, our current description of cosmic-ray Galactic propagation needs to be replaced by the Myriad model. In the former approach, sources of protons and helium nuclei are treated as a jelly continuously spread over space and time. A more accurate description is provided by the Myriad model, where sources are considered as point-like events. This leads to a probabilistic derivation of the fluxes of primary species, and opens the possibility that larger-than-average values may be observed at the Earth. For a long time, though, a major obstacle has been the infinite variance associated with the probability distribution function that the fluxes follow. Several suggestions have been made to cure this problem, but none is entirely satisfactory. We go a step further here and solve the infinite variance problem of the Myriad model by making use of the generalized central limit theorem. We find that primary fluxes are distributed according to a stable law with a heavy tail, well known to financial analysts. The probability that the proton and helium anomalies are sourced by local SNRs can then be calculated. The p-values associated with the CREAM measurements turn out to be small, unless somewhat unrealistic propagation parameters are assumed.
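To make the statistical point concrete: by the generalized central limit theorem, sums of heavy-tailed single-source contributions converge to an alpha-stable law rather than a Gaussian, so large upward flux fluctuations are far more likely than Gaussian intuition suggests. The sketch below compares tail probabilities; the tail index and scale are illustrative, not the paper's propagation parameters.

```python
# Heavy tail of an alpha-stable law versus a Gaussian (illustrative parameters).
import numpy as np
from scipy.stats import levy_stable, norm

alpha = 1.5                      # tail index (assumed, not the paper's value)

# Draws from a totally skewed alpha-stable law versus a standard Gaussian
stable_draws = levy_stable.rvs(alpha, beta=1.0, loc=0.0, scale=1.0,
                               size=100000, random_state=0)
gauss_draws = norm.rvs(size=100000, random_state=0)

thr = 10.0
print("P(flux deviation > %.0f):" % thr)
print("  stable law :", np.mean(stable_draws > thr))
print("  Gaussian   :", np.mean(gauss_draws > thr))
```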
Localization of sound sources in a room with one microphone
NASA Astrophysics Data System (ADS)
Peić Tukuljac, Helena; Lissek, Hervé; Vandergheynst, Pierre
2017-08-01
Estimation of the location of sound sources is usually done using microphone arrays. Such settings provide an environment where we know the differences between the received signals at different microphones in terms of phase or attenuation, which enables localization of the sound sources. In our solution we exploit the properties of the room transfer function in order to localize a sound source inside a room with only one microphone. The shape of the room and the position of the microphone are assumed to be known. Design guidelines and limitations of the sensing matrix are given. The implementation is based on sparsity in terms of the voxels of the room that are occupied by a source. What is especially interesting about our solution is that we provide localization of the sound sources not only in the horizontal plane, but in terms of full 3D coordinates inside the room.
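A minimal sketch of the sparse-recovery idea: each candidate voxel has a known room transfer function (here replaced by random dictionary columns), the single-channel observation is a sparse combination of a few of them, and a sparse solver such as orthogonal matching pursuit recovers which voxels are occupied. A real implementation would simulate the transfer functions from the known room geometry; the dictionary, voxel indices, and noise level below are assumptions.

```python
# Single-channel sparse source localization via orthogonal matching pursuit.
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(2)
n_freq_bins, n_voxels = 400, 1000
D = rng.normal(size=(n_freq_bins, n_voxels))      # stand-in RTF dictionary
D /= np.linalg.norm(D, axis=0)

active = [137, 692]                               # voxels occupied by sources
x_true = np.zeros(n_voxels)
x_true[active] = [1.0, 0.7]
y = D @ x_true + 0.01 * rng.normal(size=n_freq_bins)   # single-mic observation

omp = OrthogonalMatchingPursuit(n_nonzero_coefs=2).fit(D, y)
print("recovered voxel indices:", np.nonzero(omp.coef_)[0])
```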
NASA Astrophysics Data System (ADS)
Lai, Dakun; Sun, Jian; Li, Yigang; He, Bin
2013-06-01
As radio frequency (RF) catheter ablation becomes increasingly prevalent in the management of ventricular arrhythmia in patients, an accurate and rapid determination of the arrhythmogenic site is of important clinical interest. The aim of this study was to test the hypothesis that the ventricular endocardial current density distribution inversely reconstructed from body surface potential maps (BSPMs) can localize the regions critical for maintenance of a ventricular ectopic activity. Patients with isolated and monomorphic premature ventricular contractions (PVCs) were investigated by noninvasive BSPMs and subsequent invasive catheter mapping and ablation. Equivalent current density (CD) reconstruction (CDR) during symptomatic PVCs was obtained on the endocardial ventricular surface in six patients (four men, two women, ages 23-77), and the origin of the spontaneous ectopic activity was localized at the location of the maximum CD value. Compared with the last (successful) ablation site (LAS), the mean and standard deviation of the localization error of the CDR approach were 13.8 and 1.3 mm, respectively. In comparison, the distance between the LASs and the estimated locations of an equivalent single moving dipole in the heart was 25.5 ± 5.5 mm. The obtained CD distribution of activated sources extending from the catheter ablation site also showed high consistency with the invasively recorded electroanatomical maps. The noninvasively reconstructed endocardial CD distribution is suitable for predicting a region of interest containing or close to the arrhythmia source, which may have the potential to guide RF catheter ablation.
ERIC Educational Resources Information Center
Busch, Douglas M.
2012-01-01
As school district revenues are reduced by state allocating agencies, local school district administrators and school boards frequently evaluate alternative sources of possible revenue. One emerging source of revenue that many school districts explore is a local education foundation. Local education foundations are 501(c)(3) nonprofit…
NASA Astrophysics Data System (ADS)
Liu, Dan; Shi, Tielin; Xi, Shuang; Lai, Wuxing; Liu, Shiyuan; Li, Xiaoping; Tang, Zirong
2012-09-01
The evolution of silica nanostructure morphology induced by a local Si vapor source concentration gradient has been investigated through a carefully designed set of experiments. Silica nanostructures or their assemblies with different morphologies are obtained on a photoresist-derived three-dimensional carbon microelectrode array. At a temperature of 1,000°C, rope-, feather-, and octopus-like nanowire assemblies can be obtained along the Si vapor source concentration gradient flow. At 950°C, by contrast, string-like assemblies, bamboo-like nanostructures with large joints, and hollow structures with smaller sizes can be obtained along the Si vapor source concentration gradient flow. Both vapor-liquid-solid and vapor-quasiliquid-solid growth mechanisms have been applied to explain the diverse morphologies, which involve branching, connecting, and batch growth behaviors. The present approach offers a potential method for the precise design and controlled synthesis of nanostructures with different features.
Development of a statewide trauma registry using multiple linked sources of data.
Clark, D. E.
1993-01-01
In order to develop a cost-effective method of injury surveillance and trauma system evaluation in a rural state, computer programs were written linking records from two major hospital trauma registries, a statewide trauma tracking study, hospital discharge abstracts, death certificates, and ambulance run reports. A general-purpose database management system, programming language, and operating system were used. Data from 1991 appeared to be successfully linked using only indirect identifying information. Familiarity with local geography and the idiosyncrasies of each data source was helpful in programming for effective matching of records. For each individual case identified in this way, data from all available sources were then merged and imported into a standard database format. This inexpensive, population-based approach, which maintains flexibility for end-users with some database training, may be adaptable for other regions. There is a need for further improvement and simplification of the record-linkage process for this and similar purposes. PMID:8130556
NASA Astrophysics Data System (ADS)
Dowling, David R.; Sabra, Karim G.
2015-01-01
Acoustic waves carry information about their source and collect information about their environment as they propagate. This article reviews how these information-carrying and -collecting features of acoustic waves that travel through fluids can be exploited for remote sensing. In nearly all cases, modern acoustic remote sensing involves array-recorded sounds and array signal processing to recover multidimensional results. The application realm for acoustic remote sensing spans an impressive range of signal frequencies (10-2 to 107 Hz) and distances (10-2 to 107 m) and involves biomedical ultrasound imaging, nondestructive evaluation, oil and gas exploration, military systems, and Nuclear Test Ban Treaty monitoring. In the past two decades, approaches have been developed to robustly localize remote sources; remove noise and multipath distortion from recorded signals; and determine the acoustic characteristics of the environment through which the sound waves have traveled, even when the recorded sounds originate from uncooperative sources or are merely ambient noise.
O’Connor, Richard J.; Cummings, K. Michael; Rees, Vaughan W.; Connolly, Gregory N.; Norton, Kaila J.; Sweanor, David; Parascandola, Mark; Hatsukami, Dorothy K.; Shields, Peter G.
2015-01-01
Tobacco products are widely sold and marketed, yet integrated data systems for identifying, tracking, and characterizing products are lacking. Tobacco manufacturers have recently developed potential reduced-exposure products (PREPs) with implied or explicit health claims. Currently, a systematic approach for identifying, defining, and evaluating PREPs sold at the local, state or national level in the US has not been developed. Identifying, characterizing, and monitoring new tobacco products could be greatly enhanced with a responsive surveillance system. This paper critically reviews available surveillance data sources for identifying and tracking tobacco products, including PREPs, evaluating the strengths and weaknesses of potential data sources in light of their reliability and validity. Absent regulations mandating disclosure of product-specific information, it is likely that public health officials will need to rely on a variety of imperfect data sources to help identify, characterize, and monitor tobacco products, including PREPs. PMID:19959680
Near- Source, Seismo-Acoustic Signals Accompanying a NASCAR Race at the Texas Motor Speedway
NASA Astrophysics Data System (ADS)
Stump, B. W.; Hayward, C.; Underwood, R.; Howard, J. E.; MacPhail, M. D.; Golden, P.; Endress, A.
2014-12-01
Near-source, seismo-acoustic observations provide a unique opportunity to characterize urban sources, remotely sense human activities including vehicular traffic, and monitor large engineering structures. Energy coupled separately into the solid earth and the atmosphere provides constraints not only on the location of these sources but also on the physics of the generating process. The conditions and distances at which these observations can be made depend not only on local geological conditions but also on atmospheric conditions at the time of the observations. In order to address this range of topics, an empirical seismo-acoustic study was undertaken in and around the Texas Motor Speedway in the Dallas-Ft. Worth area during the first week of April 2014, when a range of activities associated with a series of NASCAR races occurred. Nine seismic sensors were deployed around the 1.5-mile track to document the direct-coupled seismic energy from the passage of cars and other vehicles on the track. Six infrasound sensors were deployed on a rooftop in a rectangular array configuration designed to provide high-frequency beamforming for acoustic signals. Finally, a five-element infrasound array was deployed outside the track in order to characterize how the signals propagate away from the sources in the near-source region. Signals recovered from within the track were able to track and characterize the motion of a variety of vehicles during the race weekend, including individual racecars. Seismic data sampled at 1000 sps documented strong Doppler effects as the cars approached and moved away from individual sensors. There were faint seismic signals that arrived at seismic velocity, but the majority of seismic signals were generated by local acoustic-to-seismic coupling, as supported by the acoustic observations. Actual seismic ground motions were small, as demonstrated by the dominance of regional seismic signals from a magnitude 4.0 earthquake that arrived at the local seismometers as the race began. The infrasound arrays recorded a variety of atmosphere-only processes, including substantial helicopter traffic, although the array outside the track did not capture the details of the race as a result of the rapid attenuation of high-frequency signals.
NASA Astrophysics Data System (ADS)
Huang, M.; Bowman, K. W.; Carmichael, G. R.; Lee, M.; Park, R.; Henze, D. K.; Chai, T.; Flemming, J.; Lin, M.; Weinheimer, A. J.; Wisthaler, A.; Jaffe, D. A.
2014-12-01
Near-surface ozone in the western US can be sensitive to transported background pollutants from the free troposphere over the eastern Pacific, as well as to various local emission sources. Accurately estimating ozone source contributions in this region has strong policy relevance as air quality standards become more stringent. Here we improve modeled contributions from local and non-local sources to western US ozone based on the HTAP2 (Task Force on Hemispheric Transport of Air Pollution) multi-model experiment, along with multi-scale chemical data assimilation. We simulate western US air quality using the STEM regional model on a 12 km horizontal resolution grid during the NASA ARCTAS field campaign period in June 2008. The STEM simulations use time-varying boundary conditions downscaled from global GEOS-Chem model simulations. The standard GEOS-Chem simulation overall underpredicted ozone at 1-5 km over the eastern Pacific, resulting in underestimated contributions from transported background pollutants to surface ozone inland. These negative biases can be reduced by using the output from several global models that support the HTAP2 experiment, all of which ran with the HTAP2 harmonized emission inventory and also calculated the contributions from east Asian anthropogenic emissions. We demonstrate that the biases in the GEOS-Chem boundary conditions can be more efficiently reduced by assimilating satellite ozone profiles from the Tropospheric Emission Spectrometer (TES) instrument using the three-dimensional variational (3D-Var) approach. Based upon these TES-constrained GEOS-Chem boundary conditions, we then update regional nitrogen dioxide and isoprene emissions in STEM through four-dimensional variational (4D-Var) assimilation of Ozone Monitoring Instrument (OMI) nitrogen dioxide columns and NASA DC-8 aircraft isoprene measurements. The 4D-Var assimilation spatially redistributed the emissions of nitrogen oxides and isoprene from various US sources and, in the meantime, updated the modeled ozone and its US source contributions. Compared with available independent measurements (e.g., ozone observed on the DC-8 aircraft, and at EPA and Mt. Bachelor monitoring stations) during this period, the modeled ozone fields after the multi-scale assimilation show overall improvement.
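For orientation, a 3D-Var analysis step minimizes a cost function that penalizes departure from the model background and misfit to the observations, each weighted by its error covariance. The toy sketch below shows that step on a small state vector; the state size, covariances, and observation operator are placeholders, not the GEOS-Chem/TES configuration used in the study.

```python
# Toy 3D-Var analysis: J(x) = 0.5*(x-xb)'B^-1(x-xb) + 0.5*(y-Hx)'R^-1(y-Hx).
import numpy as np
from scipy.optimize import minimize

n_state, n_obs = 50, 10
rng = np.random.default_rng(0)

x_truth = np.linspace(20, 60, n_state)             # "true" ozone profile (ppb)
x_b = x_truth + rng.normal(0, 5, n_state)          # noisy model background
H = np.zeros((n_obs, n_state))                     # obs operator: layer averages
for i in range(n_obs):
    H[i, i * 5:(i + 1) * 5] = 0.2
y = H @ x_truth + rng.normal(0, 1, n_obs)          # retrieved observations

B_inv = np.eye(n_state) / 5.0 ** 2                 # background error covariance
R_inv = np.eye(n_obs) / 1.0 ** 2                   # observation error covariance

def cost(x):
    db, do = x - x_b, y - H @ x
    return 0.5 * (db @ B_inv @ db + do @ R_inv @ do)

x_a = minimize(cost, x_b, method="L-BFGS-B").x     # analysis state
print("background RMSE: %.2f, analysis RMSE: %.2f"
      % (np.sqrt(np.mean((x_b - x_truth) ** 2)),
         np.sqrt(np.mean((x_a - x_truth) ** 2))))
```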
Sound source localization and segregation with internally coupled ears: the treefrog model
Christensen-Dalsgaard, Jakob
2016-01-01
Acoustic signaling plays key roles in mediating many of the reproductive and social behaviors of anurans (frogs and toads). Moreover, acoustic signaling often occurs at night, in structurally complex habitats, such as densely vegetated ponds, and in dense breeding choruses characterized by high levels of background noise and acoustic clutter. Fundamental to anuran behavior is the ability of the auditory system to determine accurately the location from where sounds originate in space (sound source localization) and to assign specific sounds in the complex acoustic milieu of a chorus to their correct sources (sound source segregation). Here, we review anatomical, biophysical, neurophysiological, and behavioral studies aimed at identifying how the internally coupled ears of frogs contribute to sound source localization and segregation. Our review focuses on treefrogs in the genus Hyla, as they are the most thoroughly studied frogs in terms of sound source localization and segregation. They also represent promising model systems for future work aimed at understanding better how internally coupled ears contribute to sound source localization and segregation. We conclude our review by enumerating directions for future research on these animals that will require the collaborative efforts of biologists, physicists, and roboticists. PMID:27730384
Cool-Flame Burning and Oscillations of Envelope Diffusion Flames in Microgravity
NASA Astrophysics Data System (ADS)
Takahashi, Fumiaki; Katta, Viswanath R.; Hicks, Michael C.
2018-05-01
Two-stage combustion, local extinction, and flame-edge oscillations have been observed in single-droplet combustion tests conducted on the International Space Station. To understand such dynamic behavior of initially enveloped diffusion flames in microgravity, two-dimensional (axisymmetric) computation is performed for a gaseous n-heptane flame using a time-dependent code with a detailed reaction mechanism (127 species and 1130 reactions), diffusive transport, and a simple radiation model (for CO2, H2O, CO, CH4, and soot). The calculated combustion characteristics vary profoundly with a slight movement of the air surrounding a fuel source. In a near-quiescent environment (≤ 2 mm/s), with a sufficiently large fuel injection velocity (1 cm/s), extinction of a growing spherical diffusion flame due to radiative heat losses is predicted at a flame temperature of ≈ 1200 K. The radiative extinction is typically followed by a transition to the "cool flame" burning regime (due to the negative temperature coefficient in the low-temperature chemistry) with a reaction zone (at ≈ 700 K) in close proximity to the fuel source. By contrast, if there is a slight relative velocity (≈ 3 mm/s) between the fuel source and the air, a local extinction of the envelope diffusion flame is predicted downstream at ≈ 1200 K, followed by periodic flame-edge oscillations. At higher relative velocities (4 to 10 mm/s), the locally extinguished flame becomes steady. The present 2D computational approach can help in further understanding the non-premixed "cool flame" structure and flame-flow interactions in microgravity environments.
Multicompare tests of the performance of different metaheuristics in EEG dipole source localization.
Escalona-Vargas, Diana Irazú; Lopez-Arevalo, Ivan; Gutiérrez, David
2014-01-01
We study the use of nonparametric multicompare statistical tests on the performance of simulated annealing (SA), genetic algorithm (GA), particle swarm optimization (PSO), and differential evolution (DE) when used for electroencephalographic (EEG) source localization. Such a task can be posed as an optimization problem for which the referred metaheuristic methods are well suited. Hence, we evaluate the localization performance in terms of the metaheuristics' operational parameters and for a fixed number of evaluations of the objective function. In this way, we are able to link the efficiency of the metaheuristics with a common measure of computational cost. Our results did not show significant differences in the metaheuristics' performance for the case of single source localization. In the case of localizing two correlated sources, we found that PSO (ring and tree topologies) and DE performed the worst, and thus they should not be considered in large-scale EEG source localization problems. Overall, the multicompare tests allowed us to demonstrate the small effect that the selection of a particular metaheuristic and the variations in their operational parameters have on this optimization problem.
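A minimal sketch of posing source localization as an optimization problem and solving it with one of the metaheuristics compared in the study (differential evolution, via SciPy). The forward model here is a toy inverse-square field from a point source inside a unit sphere of sensors, not a realistic EEG head model; the bounds, population size, and noise level are assumptions.

```python
# Point-source localization by minimizing data misfit with differential evolution.
import numpy as np
from scipy.optimize import differential_evolution

rng = np.random.default_rng(4)
sensors = rng.uniform(-1, 1, (32, 3))
sensors /= np.linalg.norm(sensors, axis=1, keepdims=True)   # on a unit "scalp"

def forward(src):                        # toy point-source potential at the sensors
    return 1.0 / np.linalg.norm(sensors - src, axis=1) ** 2

true_src = np.array([0.3, -0.2, 0.5])
data = forward(true_src) + 0.01 * rng.normal(size=32)

objective = lambda p: np.sum((forward(p) - data) ** 2)
bounds = [(-0.9, 0.9)] * 3
result = differential_evolution(objective, bounds, maxiter=100, popsize=20,
                                seed=1, tol=1e-8)
print("estimated source:", result.x,
      " error:", np.linalg.norm(result.x - true_src))
```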
Lepper, Paul A; D'Spain, Gerald L
2007-08-01
The performance of traditional techniques of passive localization in ocean acoustics such as time-of-arrival (phase differences) and amplitude ratios measured by multiple receivers may be degraded when the receivers are placed on an underwater vehicle due to effects of scattering. However, knowledge of the interference pattern caused by scattering provides a potential enhancement to traditional source localization techniques. Results based on a study using data from a multi-element receiving array mounted on the inner shroud of an autonomous underwater vehicle show that scattering causes the localization ambiguities (side lobes) to decrease in overall level and to move closer to the true source location, thereby improving localization performance, for signals in the frequency band 2-8 kHz. These measurements are compared with numerical modeling results from a two-dimensional time domain finite difference scheme for scattering from two fluid-loaded cylindrical shells. Measured and numerically modeled results are presented for multiple source aspect angles and frequencies. Matched field processing techniques quantify the source localization capabilities for both measurements and numerical modeling output.
Characterization and Local Emission Sources for Ammonia in an Urban Environment.
Galán Madruga, D; Fernández Patier, R; Sintes Puertas, M A; Romero García, M D; Cristóbal López, A
2018-04-01
Ammonia levels were evaluated in the urban environment of Madrid City, Spain. A total of 110 samplers were distributed throughout the city. Vehicle traffic density, garbage containers and sewers were identified as local emission sources of ammonia. The average ammonia concentrations were 4.66 ± 2.14 µg/m3 (0.39-11.23 µg/m3 range) in the winter and 5.30 ± 1.81 µg/m3 (2.33-11.08 µg/m3 range) in the summer. Spatial and seasonal variations of ammonia levels were evaluated. Hotspots were located in the south and center of Madrid City in both winter and summer seasons, with lower ammonia concentrations located in the north (winter) and in the west and east (summer). The number of representative points that were needed to establish a reliable air quality monitoring network for ammonia was determined using a combined clustering and kriging approach. The results indicated that 40 samplers were sufficient to provide a reliable estimate for Madrid City.
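The sketch below illustrates only the clustering half of the combined clustering-and-kriging idea: group a dense passive-sampler campaign by location and measured concentration, then keep one representative sampler per cluster. All sampler coordinates and concentrations are synthetic, and the cluster count mirrors the 40 representative sites reported above only for illustration.

```python
# k-means selection of representative monitoring sites from a dense campaign.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(6)
n_samplers, n_keep = 110, 40
xy = rng.uniform(0, 20, (n_samplers, 2))               # km across the city
nh3 = (4.7 + 2.0 * np.exp(-((xy[:, 0] - 12) ** 2 + (xy[:, 1] - 6) ** 2) / 20)
       + rng.normal(0, 0.5, n_samplers))               # synthetic NH3, ug/m3

features = np.column_stack([xy, nh3])
km = KMeans(n_clusters=n_keep, n_init=10, random_state=0).fit(features)

# Keep the sampler closest to each cluster centre as the "representative" site
keep = [int(np.argmin(np.linalg.norm(features - c, axis=1)))
        for c in km.cluster_centers_]
print("representative sampler indices:", sorted(set(keep)))
```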
Development of morphogen gradient: The role of dimension and discreteness
DOE Office of Scientific and Technical Information (OSTI.GOV)
Teimouri, Hamid; Kolomeisky, Anatoly B.
2014-02-28
The fundamental processes of biological development are governed by multiple signaling molecules that create non-uniform concentration profiles known as morphogen gradients. It is widely believed that the establishment of morphogen gradients is the result of complex processes that involve diffusion and degradation of locally produced signaling molecules. We developed a multi-dimensional discrete-state stochastic approach for investigating the corresponding reaction-diffusion models. It provides a full analytical description of stationary profiles and of important dynamic properties such as local accumulation times, variances, and mean first-passage times. The role of discreteness in the development of morphogen gradients is analyzed by comparison with available continuum descriptions. It is found that the continuum models' prediction of multiple time scales near the source region in two-dimensional and three-dimensional systems is not supported by our analysis. Using ideas that view the degradation process as an effective potential, the effect of dimensionality on the establishment of morphogen gradients is also discussed. In addition, we investigated how these reaction-diffusion processes are modified when the size of the source region is changed.
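For reference, the continuum picture being compared against is a reaction-diffusion model with localized production and linear degradation, whose one-dimensional steady state is the exponential gradient C(x) = C0 exp(-x/lambda) with lambda = sqrt(D/k). The sketch below recovers this decay length numerically; the parameter values are illustrative only.

```python
# 1-D reaction-diffusion morphogen gradient: explicit time-stepping to steady state.
import numpy as np

D, k, flux, L, nx = 1.0, 0.5, 1.0, 10.0, 200      # diffusion, degradation, source flux
dx = L / nx
dt = 0.2 * dx ** 2 / D                            # stable explicit time step
c = np.zeros(nx)

for _ in range(100000):                           # march to steady state
    lap = np.zeros(nx)
    lap[1:-1] = (c[2:] - 2 * c[1:-1] + c[:-2]) / dx ** 2
    lap[0] = (c[1] - c[0]) / dx ** 2 + flux / (D * dx)   # injection at x = 0
    lap[-1] = (c[-2] - c[-1]) / dx ** 2                  # no-flux far boundary
    c += dt * (D * lap - k * c)

lam = np.sqrt(D / k)
x = (np.arange(nx) + 0.5) * dx
fitted = -1.0 / np.polyfit(x[:100], np.log(c[:100]), 1)[0]
print("fitted decay length: %.2f, analytic sqrt(D/k): %.2f" % (fitted, lam))
```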
Handheld real-time volumetric 3-D gamma-ray imaging
NASA Astrophysics Data System (ADS)
Haefner, Andrew; Barnowski, Ross; Luke, Paul; Amman, Mark; Vetter, Kai
2017-06-01
This paper presents the concept of real-time fusion of gamma-ray imaging and visual scene data for a hand-held mobile Compton imaging system in 3-D. The ability to obtain and integrate both gamma-ray and scene data from a mobile platform enables improved capabilities in the localization and mapping of radioactive materials. This not only enhances the ability to localize these materials, but it also provides important contextual information of the scene which once acquired can be reviewed and further analyzed subsequently. To demonstrate these concepts, the high-efficiency multimode imager (HEMI) is used in a hand-portable implementation in combination with a Microsoft Kinect sensor. This sensor, in conjunction with open-source software, provides the ability to create a 3-D model of the scene and to track the position and orientation of HEMI in real-time. By combining the gamma-ray data and visual data, accurate 3-D maps of gamma-ray sources are produced in real-time. This approach is extended to map the location of radioactive materials within objects with unknown geometry.
Agarwal, Rajat Kumar; Sedai, Amit; Dhimal, Sunil; Ankita, Kumari; Clemente, Luigi; Siddique, Sulman; Yaqub, Naila; Khalid, Sadaf; Itrat, Fatima; Khan, Anwar; Gilani, Sarah Khan; Marwah, Priya; Soni, Rajpreet; Missiry, Mohamed El; Hussain, Mohamed Hamed; Uderzo, Cornelio; Faulkner, Lawrence
2014-01-01
Jagriti Innovations developed a collaboration tool in partnership with the Cure2Children Foundation that has been used by health professionals in Italy, Pakistan, and India for the collaborative management of patients undergoing bone marrow transplantation (BMT) for thalassemia major since August 2008. This online open-access database covers data recording, analyzing, and reporting besides enabling knowledge exchange, telemedicine, capacity building, and quality assurance. As of February 2014, over 2400 patients have been registered and 112 BMTs have been performed with outcomes comparable to international standards, but at a fraction of the cost. This approach avoids medical emigration and contributes to local healthcare strengthening and competitiveness. This paper presents the experience and clinical outcomes associated with the use of this platform built using open-source tools and focusing on a locally pertinent tertiary care procedure-BMT. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.
Terrazas, Enrique; Hamill, Timothy R.; Wang, Ye; Channing Rodgers, R. P.
2007-01-01
The Department of Laboratory Medicine at the University of California, San Francisco (UCSF) has been split into widely separated facilities, leading to much time being spent traveling between facilities for meetings. We installed an open-source AccessGrid multi-media-conferencing system using (largely) consumer-grade equipment, connecting 6 sites at 5 separate facilities. The system was accepted rapidly and enthusiastically, and was inexpensive compared to alternative approaches. Security was addressed by aspects of the AG software and by local network administrative practices. The chief obstacles to deployment arose from security restrictions imposed by multiple independent network administration regimes, requiring a drastically reduced list of network ports employed by AG components. PMID:18693930
Geometric charges in theories of elasticity and plasticity
NASA Astrophysics Data System (ADS)
Moshe, Michael
The mechanics of many natural systems is governed by localized sources of stresses. Examples include ''plastic events'' that occur in amorphous solids under external stress, defects formation in crystalline material, and force-dipoles applied by cells adhered to an elastic substrate. Recent developments in a geometric formulation of elasticity theory paved the way for a unifying mathematical description of such singular sources of stress, as ''elastic charges''. In this talk I will review basic results in this emerging field, focusing on the geometry and mechanics of elastic charges in two-dimensional solid bodies. I will demonstrate the applicability of this new approach in three different problems: failure of an amorphous solid under load, mechanics of Kirigami, and wrinkle patterns in geometrically-incompatible elastic sheets.