Sample records for localized source problems

  1. Localization from near-source quasi-static electromagnetic fields

    NASA Astrophysics Data System (ADS)

    Mosher, J. C.

    1993-09-01

    A wide range of research has been published on the problem of estimating the parameters of electromagnetic and acoustical sources from signals measured at an array of sensors. In the quasi-static electromagnetic cases examined here, the signal variation from a point source is relatively slow with respect to the signal propagation and the spacing of the array of sensors. As such, the location of the point sources can only be determined from the spatial diversity of the received signal across the array. The inverse source localization problem is complicated by unknown model order and strong local minima. The nonlinear optimization problem is posed for solving for the parameters of the quasi-static source model. The transient nature of the sources can be exploited to allow subspace approaches to separate out the signal portion of the spatial correlation matrix. Decomposition techniques are examined for improved processing, and an adaptation of MUltiple SIgnal Characterization (MUSIC) is presented for solving the source localization problem. Recent results on calculating the Cramer-Rao error lower bounds are extended to the multidimensional problem here. This thesis focuses on the problem of source localization in magnetoencephalography (MEG), with a secondary application to thunderstorm source localization. Comparisons are also made between MEG and its electrical equivalent, electroencephalography (EEG). The error lower bounds are examined in detail for several MEG and EEG configurations, as well as for localizing thunderstorm cells over Cape Canaveral and Kennedy Space Center. Time-eigenspectrum is introduced as a parsing technique for improving the performance of the optimization problem.
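    The subspace idea behind the MUSIC adaptation described above can be sketched in a toy one-dimensional setting. Everything below (the sensor line, the quadratic-falloff "lead field", the noise level, and all positions) is invented for illustration and is not taken from the thesis:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 1-D geometry: 12 sensors on a line, one quasi-static point source.
sensors = np.linspace(0.0, 1.0, 12)          # sensor x-positions (m)
true_src = 0.37                              # true source position (m)
depth = 0.2                                  # source depth below the array (m)

def gain(p):
    """Toy quasi-static lead field: amplitude falls off with distance squared."""
    d = np.hypot(sensors - p, depth)
    g = 1.0 / d**2
    return g / np.linalg.norm(g)

# Simulate snapshots: slowly varying source amplitude plus sensor noise.
T = 200
amp = rng.standard_normal(T)
X = np.outer(gain(true_src), amp) + 0.05 * rng.standard_normal((12, T))

# Spatial sample covariance and its eigendecomposition.
R = X @ X.T / T
w, V = np.linalg.eigh(R)                     # eigenvalues in ascending order
En = V[:, :-1]                               # noise subspace (one source assumed)

# MUSIC pseudo-spectrum over a grid of candidate positions:
# large where the candidate gain vector is orthogonal to the noise subspace.
grid = np.linspace(0.0, 1.0, 401)
P = np.array([1.0 / np.linalg.norm(En.T @ gain(p))**2 for p in grid])
est = grid[np.argmax(P)]
print(f"estimated source position: {est:.3f} m")
```

    The scan replaces the multidimensional nonlinear fit with a one-parameter search, which is the practical appeal of MUSIC-style methods noted in the abstract.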

  2. Localization from near-source quasi-static electromagnetic fields

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mosher, John Compton

    1993-09-01

    A wide range of research has been published on the problem of estimating the parameters of electromagnetic and acoustical sources from signals measured at an array of sensors. In the quasi-static electromagnetic cases examined here, the signal variation from a point source is relatively slow with respect to the signal propagation and the spacing of the array of sensors. As such, the location of the point sources can only be determined from the spatial diversity of the received signal across the array. The inverse source localization problem is complicated by unknown model order and strong local minima. The nonlinear optimization problem is posed for solving for the parameters of the quasi-static source model. The transient nature of the sources can be exploited to allow subspace approaches to separate out the signal portion of the spatial correlation matrix. Decomposition techniques are examined for improved processing, and an adaptation of MUltiple SIgnal Characterization (MUSIC) is presented for solving the source localization problem. Recent results on calculating the Cramer-Rao error lower bounds are extended to the multidimensional problem here. This thesis focuses on the problem of source localization in magnetoencephalography (MEG), with a secondary application to thunderstorm source localization. Comparisons are also made between MEG and its electrical equivalent, electroencephalography (EEG). The error lower bounds are examined in detail for several MEG and EEG configurations, as well as for localizing thunderstorm cells over Cape Canaveral and Kennedy Space Center. Time-eigenspectrum is introduced as a parsing technique for improving the performance of the optimization problem.

  3. Efficient Convex Optimization for Energy-Based Acoustic Sensor Self-Localization and Source Localization in Sensor Networks.

    PubMed

    Yan, Yongsheng; Wang, Haiyan; Shen, Xiaohong; Leng, Bing; Li, Shuangquan

    2018-05-21

    Energy readings are an efficient and attractive measure for collaborative acoustic source localization in practical applications, owing to their savings in both energy and computational capability. Maximum-likelihood problems are derived by fusing acoustic energy readings transmitted from local sensors. Aiming to efficiently solve the nonconvex objective of the optimization problem, we present an approximate estimator of the original problem. Then, a direct norm relaxation and a semidefinite relaxation, respectively, are utilized to derive second-order cone programs, semidefinite programs, or mixtures of the two for both sensor self-localization and source localization. Furthermore, by taking colored energy-reading noise into account, several minimax optimization problems are formulated, which are also relaxed via the direct norm relaxation and the semidefinite relaxation, respectively, into convex optimization problems. A performance comparison with existing acoustic energy-based source localization methods is given, and the results show the validity of the proposed methods.
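    The energy-decay measurement model underlying such methods can be sketched as follows. A plain grid search stands in for the paper's convex relaxations, and all sensor positions, the decay exponent, and the noise level are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical setup: 8 acoustic sensors on the border of a 10 m x 10 m area,
# one source; received energy decays as S / d^2.
sensors = np.array([[0, 0], [10, 0], [0, 10], [10, 10],
                    [5, 0], [0, 5], [10, 5], [5, 10]], dtype=float)
src = np.array([4.0, 6.0])                   # true source position (m)
S = 100.0                                    # true source energy

d = np.linalg.norm(sensors - src, axis=1)
y = S / d**2 + 0.1 * rng.standard_normal(8)  # noisy energy readings

# Maximum-likelihood-style grid search: for each candidate location the
# best-fitting energy S has a closed form, leaving only a 2-D search.
gx, gy = np.meshgrid(np.linspace(0, 10, 201), np.linspace(0, 10, 201))
best, best_err = None, np.inf
for px, py in zip(gx.ravel(), gy.ravel()):
    g = 1.0 / np.maximum(np.linalg.norm(sensors - (px, py), axis=1), 1e-3)**2
    s_hat = (g @ y) / (g @ g)                # least-squares energy estimate
    err = np.sum((y - s_hat * g)**2)
    if err < best_err:
        best, best_err = np.array([px, py]), err
print("estimated source:", best)
```

    The relaxations in the paper exist precisely to avoid this kind of exhaustive search while keeping the estimate close to the maximum-likelihood one.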

  4. Efficient Convex Optimization for Energy-Based Acoustic Sensor Self-Localization and Source Localization in Sensor Networks

    PubMed Central

    Yan, Yongsheng; Wang, Haiyan; Shen, Xiaohong; Leng, Bing; Li, Shuangquan

    2018-01-01

    Energy readings are an efficient and attractive measure for collaborative acoustic source localization in practical applications, owing to their savings in both energy and computational capability. Maximum-likelihood problems are derived by fusing acoustic energy readings transmitted from local sensors. Aiming to efficiently solve the nonconvex objective of the optimization problem, we present an approximate estimator of the original problem. Then, a direct norm relaxation and a semidefinite relaxation, respectively, are utilized to derive second-order cone programs, semidefinite programs, or mixtures of the two for both sensor self-localization and source localization. Furthermore, by taking colored energy-reading noise into account, several minimax optimization problems are formulated, which are also relaxed via the direct norm relaxation and the semidefinite relaxation, respectively, into convex optimization problems. A performance comparison with existing acoustic energy-based source localization methods is given, and the results show the validity of the proposed methods. PMID:29883410

  5. Source localization in an ocean waveguide using supervised machine learning.

    PubMed

    Niu, Haiqiang; Reeves, Emma; Gerstoft, Peter

    2017-09-01

    Source localization in ocean acoustics is posed as a machine learning problem in which data-driven methods learn source ranges directly from observed acoustic data. The pressure received by a vertical linear array is preprocessed by constructing a normalized sample covariance matrix and used as the input for three machine learning methods: feed-forward neural networks (FNN), support vector machines (SVM), and random forests (RF). The range estimation problem is solved both as a classification problem and as a regression problem by these three machine learning algorithms. The results of range estimation for the Noise09 experiment are compared for FNN, SVM, RF, and conventional matched-field processing and demonstrate the potential of machine learning for underwater source localization.
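    The preprocessing step named in the abstract, a normalized sample covariance matrix used as the machine-learning input, can be sketched as below. The array size and snapshot count are made up, and the FNN/SVM/RF training itself is not reproduced:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy pressure data on a 16-element vertical array, 50 complex snapshots
# (e.g. FFT bins of received pressure).
X = rng.standard_normal((16, 50)) + 1j * rng.standard_normal((16, 50))

# Normalize each snapshot to unit norm so that only relative amplitude
# and phase across the array remain (source level is removed).
X = X / np.linalg.norm(X, axis=0, keepdims=True)

# Sample covariance matrix, averaged over snapshots and trace-normalized.
C = (X @ X.conj().T) / X.shape[1]
C = C / np.trace(C).real

# Vectorize the upper triangle (real and imaginary parts) as the input
# feature vector for a classifier or regressor.
iu = np.triu_indices(16)
features = np.concatenate([C[iu].real, C[iu].imag])
print("feature vector length:", features.shape[0])
```

    Any of the three learners mentioned in the abstract could then map such feature vectors to range classes (classification) or to range values (regression).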

  6. Multicompare tests of the performance of different metaheuristics in EEG dipole source localization.

    PubMed

    Escalona-Vargas, Diana Irazú; Lopez-Arevalo, Ivan; Gutiérrez, David

    2014-01-01

    We study the use of nonparametric multicompare statistical tests on the performance of simulated annealing (SA), genetic algorithm (GA), particle swarm optimization (PSO), and differential evolution (DE) when used for electroencephalographic (EEG) source localization. Such a task can be posed as an optimization problem for which the referred metaheuristic methods are well suited. Hence, we evaluate the localization performance in terms of the metaheuristics' operational parameters and for a fixed number of evaluations of the objective function. In this way, we are able to link the efficiency of the metaheuristics with a common measure of computational cost. Our results did not show significant differences in the metaheuristics' performance for the case of single-source localization. In the case of localizing two correlated sources, we found that PSO (ring and tree topologies) and DE performed the worst, and thus they should not be considered in large-scale EEG source localization problems. Overall, the multicompare tests allowed us to demonstrate the little effect that the selection of a particular metaheuristic and the variations in its operational parameters have on this optimization problem.
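    As a minimal illustration of one of the compared metaheuristics, here is plain simulated annealing with a fixed evaluation budget, the kind of budget the study uses to compare methods fairly. The two-dimensional test objective is a stand-in; the real EEG cost would be a dipole-fit residual:

```python
import math
import random

random.seed(3)

# Stand-in objective with local minima; a real EEG cost would be the
# residual of fitting a dipole's forward field to the measured data.
def cost(x, y):
    return (x - 1.0)**2 + (y + 0.5)**2 + 0.3 * math.sin(5 * x) * math.sin(5 * y)

state = (0.0, 0.0)
e = cost(*state)
best = (state, e)
temp = 1.0
for k in range(5000):                      # fixed number of objective evaluations
    cand = (state[0] + random.gauss(0, 0.3), state[1] + random.gauss(0, 0.3))
    ec = cost(*cand)
    # Accept improvements always; accept uphill moves with Boltzmann probability.
    if ec < e or random.random() < math.exp(-(ec - e) / temp):
        state, e = cand, ec
        if e < best[1]:
            best = (state, e)
    temp *= 0.999                          # geometric cooling schedule
print("best point:", best[0], "cost:", round(best[1], 4))
```

    Fixing the number of `cost` evaluations (here 5000) is what makes runtimes comparable across SA, GA, PSO, and DE.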

  7. A Subspace Pursuit–based Iterative Greedy Hierarchical Solution to the Neuromagnetic Inverse Problem

    PubMed Central

    Babadi, Behtash; Obregon-Henao, Gabriel; Lamus, Camilo; Hämäläinen, Matti S.; Brown, Emery N.; Purdon, Patrick L.

    2013-01-01

    Magnetoencephalography (MEG) is an important non-invasive method for studying activity within the human brain. Source localization methods can be used to estimate spatiotemporal activity from MEG measurements with high temporal resolution, but the spatial resolution of these estimates is poor due to the ill-posed nature of the MEG inverse problem. Recent developments in source localization methodology have emphasized temporal as well as spatial constraints to improve source localization accuracy, but these methods can be computationally intense. Solutions emphasizing spatial sparsity hold tremendous promise, since the underlying neurophysiological processes generating MEG signals are often sparse in nature, whether in the form of focal sources, or distributed sources representing large-scale functional networks. Recent developments in the theory of compressed sensing (CS) provide a rigorous framework to estimate signals with sparse structure. In particular, a class of CS algorithms referred to as greedy pursuit algorithms can provide both high recovery accuracy and low computational complexity. Greedy pursuit algorithms are difficult to apply directly to the MEG inverse problem because of the high-dimensional structure of the MEG source space and the high spatial correlation in MEG measurements. In this paper, we develop a novel greedy pursuit algorithm for sparse MEG source localization that overcomes these fundamental problems. This algorithm, which we refer to as the Subspace Pursuit-based Iterative Greedy Hierarchical (SPIGH) inverse solution, exhibits very low computational complexity while achieving very high localization accuracy. We evaluate the performance of the proposed algorithm using comprehensive simulations, as well as the analysis of human MEG data during spontaneous brain activity and somatosensory stimuli. These studies reveal substantial performance gains provided by the SPIGH algorithm in terms of computational complexity, localization accuracy, and robustness. PMID:24055554

  8. Pollution source localization in an urban water supply network based on dynamic water demand.

    PubMed

    Yan, Xuesong; Zhu, Zhixin; Li, Tian

    2017-10-27

    Urban water supply networks are susceptible to intentional or accidental chemical and biological pollution, which poses a threat to the health of consumers. In recent years, drinking-water pollution incidents have occurred frequently, seriously endangering social stability and security. Real-time monitoring of water quality can be effectively implemented by placing sensors in the water supply network. However, locating the source of pollution from the detection data obtained by water quality sensors is a challenging problem. The difficulty lies in the limited number of sensors, the large number of water supply network nodes, and the dynamic user demand for water, which make the pollution source localization problem an uncertain, large-scale, and dynamic optimization problem. In this paper, we mainly study the dynamics of the pollution source localization problem. Previous studies of pollution source localization assume that hydraulic inputs (e.g., the water demand of consumers) are known. However, because of the inherent variability of urban water demand, the problem is essentially a dynamic problem of fluctuating consumer water demand. In this paper, the water demand is considered to be stochastic in nature and is described using a Gaussian model or an autoregressive model. On this basis, an optimization algorithm based on these two dynamic water demand models is proposed to locate the pollution source. The objective of the proposed algorithm is to find the locations and concentrations of pollution sources that minimize the discrepancy between the simulated and detected sensor values. Simulation experiments were conducted using two urban water supply networks of different sizes, and the experimental results were compared with those of the standard genetic algorithm.

  9. Distributed least-squares estimation of a remote chemical source via convex combination in wireless sensor networks.

    PubMed

    Cao, Meng-Li; Meng, Qing-Hao; Zeng, Ming; Sun, Biao; Li, Wei; Ding, Cheng-Jun

    2014-06-27

    This paper investigates the problem of locating a continuous chemical source using the concentration measurements provided by a wireless sensor network (WSN). Such a problem exists in various applications: eliminating explosives or drugs, detecting the leakage of noxious chemicals, etc. The limited power and bandwidth of WSNs have motivated collaborative in-network processing which is the focus of this paper. We propose a novel distributed least-squares estimation (DLSE) method to solve the chemical source localization (CSL) problem using a WSN. The DLSE method is realized by iteratively conducting convex combination of the locally estimated chemical source locations in a distributed manner. Performance assessments of our method are conducted using both simulations and real experiments. In the experiments, we propose a fitting method to identify both the release rate and the eddy diffusivity. The results show that the proposed DLSE method can overcome the negative interference of local minima and saddle points of the objective function, which would hinder the convergence of local search methods, especially in the case of locating a remote chemical source.
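    The convex-combination idea can be sketched with a simple consensus iteration: each node starts from a noisy local estimate of the source position and repeatedly averages with its ring neighbors. This is only the combination step, not the paper's full DLSE update, and the network size, topology, and noise level are invented:

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical WSN: 10 nodes on a communication ring, each holding a noisy
# local estimate of the chemical source position (m).
true_src = np.array([3.0, 7.0])
est = true_src + rng.standard_normal((10, 2))

# Iterative convex combination with ring neighbors. Each update is a convex
# combination of a node's own estimate and its two neighbors' estimates.
alpha = 0.5
for _ in range(100):
    left, right = np.roll(est, 1, axis=0), np.roll(est, -1, axis=0)
    est = (1 - alpha) * est + alpha * 0.5 * (left + right)

consensus = est.mean(axis=0)
spread = np.max(np.linalg.norm(est - consensus, axis=1))
print("consensus estimate:", consensus.round(3), "max spread:", round(spread, 6))
```

    Because the averaging weights are symmetric and sum to one, the network mean is preserved while disagreement between nodes decays geometrically, so every node converges to the same fused estimate.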

  10. Local Responses to Global Problems: A Key to Meeting Basic Human Needs. Worldwatch Paper 17.

    ERIC Educational Resources Information Center

    Stokes, Bruce

    The booklet maintains that the key to meeting basic human needs is the participation of individuals and communities in local problem solving. Some of the most important achievements in providing food, upgrading housing, improving human health, and tapping new energy sources come through local self-help projects. Proponents of local efforts at…

  11. Inverse Source Data-Processing Strategies for Radio-Frequency Localization in Indoor Environments.

    PubMed

    Gennarelli, Gianluca; Al Khatib, Obada; Soldovieri, Francesco

    2017-10-27

    Indoor positioning of mobile devices plays a key role in many aspects of our daily life. These include real-time people tracking and monitoring, activity recognition, emergency detection, navigation, and numerous location-based services. Although many wireless technologies and data-processing algorithms have been developed in recent years, indoor positioning is still a subject of intensive research. This paper deals with active radio-frequency (RF) source localization in indoor scenarios. The localization task is carried out at the physical layer by receiving sensor arrays deployed on the border of the surveillance region to record the signal emitted by the source. The localization problem is formulated as an imaging one by taking advantage of the inverse source approach. Different measurement configurations and data-processing/fusion strategies are examined to investigate their effectiveness in terms of localization accuracy under both line-of-sight (LOS) and non-line-of-sight (NLOS) conditions. Numerical results based on full-wave synthetic data are reported to support the analysis.
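    The "localization as imaging" idea can be sketched with generic back-propagation: correlate the recorded field with the radiation kernel of every image pixel and take the peak. This is not the paper's inverse-source algorithm; the border geometry, wavelength, and toy kernel below are all invented:

```python
import numpy as np

rng = np.random.default_rng(11)

# 2-D free-space toy: 16 receivers on the border of a 4 m x 4 m region
# record the field radiated by one RF source inside it.
k = 2 * np.pi / 3.0                        # wavenumber for a 3 m wavelength
rx = np.array([[x, 0.0] for x in np.linspace(0, 4, 4)] +
              [[x, 4.0] for x in np.linspace(0, 4, 4)] +
              [[0.0, y] for y in np.linspace(1, 3, 4)] +
              [[4.0, y] for y in np.linspace(1, 3, 4)])
src = np.array([2.6, 1.4])

def kernel(a, b):
    d = np.linalg.norm(a - b)
    return np.exp(1j * k * d) / np.sqrt(d)  # toy 2-D radiation kernel

m = np.array([kernel(r, src) for r in rx])
m += 0.05 * (rng.standard_normal(16) + 1j * rng.standard_normal(16))

# Back-propagation image: coherent correlation of the data with each
# pixel's kernel; the phases align only at the true source position.
xs = np.linspace(0.2, 3.8, 73)
ys = np.linspace(0.2, 3.8, 73)
img = np.zeros((73, 73))
for i, yv in enumerate(ys):
    for j, xv in enumerate(xs):
        g = np.array([kernel(r, np.array([xv, yv])) for r in rx])
        img[i, j] = np.abs(np.vdot(g, m))
iy, jx = np.unravel_index(np.argmax(img), img.shape)
print("estimated source:", xs[jx], ys[iy])
```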

  12. Inverse Source Data-Processing Strategies for Radio-Frequency Localization in Indoor Environments

    PubMed Central

    Gennarelli, Gianluca; Al Khatib, Obada; Soldovieri, Francesco

    2017-01-01

    Indoor positioning of mobile devices plays a key role in many aspects of our daily life. These include real-time people tracking and monitoring, activity recognition, emergency detection, navigation, and numerous location-based services. Although many wireless technologies and data-processing algorithms have been developed in recent years, indoor positioning is still a subject of intensive research. This paper deals with active radio-frequency (RF) source localization in indoor scenarios. The localization task is carried out at the physical layer by receiving sensor arrays deployed on the border of the surveillance region to record the signal emitted by the source. The localization problem is formulated as an imaging one by taking advantage of the inverse source approach. Different measurement configurations and data-processing/fusion strategies are examined to investigate their effectiveness in terms of localization accuracy under both line-of-sight (LOS) and non-line-of-sight (NLOS) conditions. Numerical results based on full-wave synthetic data are reported to support the analysis. PMID:29077071

  13. Fire Source Localization Based on Distributed Temperature Sensing by a Dual-Line Optical Fiber System.

    PubMed

    Sun, Miao; Tang, Yuquan; Yang, Shuang; Li, Jun; Sigrist, Markus W; Dong, Fengzhong

    2016-06-06

    We propose a method for localizing a fire source using an optical fiber distributed temperature sensor system. A section of two parallel optical fibers employed as the sensing element is installed near the ceiling of a closed room in which the fire source is located. By measuring the temperature of hot air flows, the problem of three-dimensional fire source localization is reduced to two dimensions. The source localization method is verified in experiments using burning alcohol as the fire source, and it is demonstrated that the method is a robust and reliable technique for localizing a fire source, even over long sensing ranges.
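    The dual-line geometry can be sketched numerically: the along-fiber coordinate comes from where the temperature rise peaks, and the lateral coordinate from comparing the two fibers' peak amplitudes. The Gaussian hot-air model, fiber positions, and all constants below are invented for illustration:

```python
import numpy as np

# Hypothetical geometry: two parallel fibers along x at lateral positions
# y1, y2 (m) near the ceiling; the hot-air temperature rise over the source
# is modeled as a Gaussian bump with an assumed known width sigma.
xs = np.linspace(0.0, 10.0, 501)
y1, y2, sigma = 1.0, 3.0, 1.5
src_x, src_y = 6.2, 1.8

def profile(y_fiber):
    lateral = np.hypot(xs - src_x, y_fiber - src_y)
    return 20.0 + 40.0 * np.exp(-lateral**2 / (2 * sigma**2))  # degrees C

T1, T2 = profile(y1), profile(y2)

# Along-fiber coordinate: centroid of the temperature rise on fiber 1.
w = T1 - T1.min()
x_hat = (xs * w).sum() / w.sum()

# Lateral coordinate: the peak-amplitude ratio inverts the Gaussian model,
# ln(A1/A2) = ((y2 - y)^2 - (y1 - y)^2) / (2 sigma^2), which is linear in y.
A1, A2 = T1.max() - 20.0, T2.max() - 20.0
y_hat = (y2**2 - y1**2 - 2 * sigma**2 * np.log(A1 / A2)) / (2 * (y2 - y1))
print(f"estimated source: x = {x_hat:.2f} m, y = {y_hat:.2f} m")
```

    This is how two one-dimensional temperature traces can resolve a two-dimensional source position, which is the dimensional reduction the abstract describes.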

  14. Spatio-temporal Reconstruction of Neural Sources Using Indirect Dominant Mode Rejection.

    PubMed

    Jafadideh, Alireza Talesh; Asl, Babak Mohammadzadeh

    2018-04-27

    Adaptive minimum-variance-based beamformers (MVB) have been successfully applied to magnetoencephalogram (MEG) and electroencephalogram (EEG) data to localize brain activities. However, the performance of these beamformers degrades in situations where correlated or interfering sources exist. To overcome this problem, we propose applying the indirect dominant mode rejection (iDMR) beamformer to brain source localization. By modifying the measurement covariance matrix, this method makes the MVB applicable to source localization in the presence of correlated and interfering sources. Numerical results on both EEG and MEG data demonstrate that the presented approach accurately reconstructs the time courses of active sources and localizes those sources with high spatial resolution. In addition, results on real AEF data show the good performance of iDMR in empirical situations. Hence, iDMR can be reliably used for brain source localization, especially when there are correlated and interfering sources.
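    The baseline that iDMR builds on, a minimum-variance beamformer power scan over candidate source locations, can be sketched as follows. The lead fields here are random stand-ins, and the iDMR covariance modification itself is not reproduced:

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy linear lead fields: 20 sensors, 50 candidate source locations.
n_sens, n_src = 20, 50
L = rng.standard_normal((n_sens, n_src))
L /= np.linalg.norm(L, axis=0)            # unit-norm lead-field columns

# One active source (index 17) plus sensor noise, 500 time samples.
active = 17
s = rng.standard_normal(500)              # source time course
Y = np.outer(L[:, active], s) + 0.1 * rng.standard_normal((n_sens, 500))

R = Y @ Y.T / 500                         # measurement covariance
Rinv = np.linalg.inv(R + 1e-6 * np.eye(n_sens))

# Minimum-variance beamformer output power at candidate i:
# P(i) = 1 / (l_i^T R^-1 l_i). iDMR would first modify R to reject
# correlated/interfering modes before this scan.
P = 1.0 / np.einsum('ij,jk,ki->i', L.T, Rinv, L)
est = int(np.argmax(P))
print("localized source index:", est)
```

    The scan peaks where the lead field matches the measured covariance structure; correlated sources break this assumption, which is the failure mode the abstract addresses.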

  15. Effect of conductor geometry on source localization: Implications for epilepsy studies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schlitt, H.; Heller, L.; Best, E.

    1994-07-01

    We shall discuss the effects of conductor geometry on source localization for applications in epilepsy studies. The most popular conductor model for clinical MEG studies is a homogeneous sphere. However, several studies have indicated that a sphere is a poor model for the head when the sources are deep, as is the case for epileptic foci in the mesial temporal lobe. We believe that replacing the spherical model with a more realistic one in the inverse fitting procedure will improve the accuracy of localizing epileptic sources. In order to include a realistic head model in the inverse problem, we must first solve the forward problem for the realistic conductor geometry. We create a conductor geometry model from MR images, and then solve the forward problem via a boundary integral equation for the electric potential due to a specified primary source. Once the electric potential is known, the magnetic field can be calculated directly. The most time-intensive part of the problem is generating the conductor model; fortunately, this needs to be done only once for each patient. It takes little time to change the primary current and calculate a new magnetic field for use in the inverse fitting procedure. We present the results of a series of computer simulations in which we investigate the localization accuracy due to replacing the spherical model with the realistic head model in the inverse fitting procedure. The data to be fit consist of a computer-generated magnetic field due to a known current dipole in a realistic head model, with added noise. We compare the localization errors when this field is fit using a spherical model to the fit using a realistic head model. Using a spherical model is comparable to what is usually done when localizing epileptic sources in humans, where the conductor model used in the inverse fitting procedure does not correspond to the actual head.

  16. Localization of synchronous cortical neural sources.

    PubMed

    Zerouali, Younes; Herry, Christophe L; Jemel, Boutheina; Lina, Jean-Marc

    2013-03-01

    Neural synchronization is a key mechanism to a wide variety of brain functions, such as cognition, perception, or memory. High temporal resolution achieved by EEG recordings allows the study of the dynamical properties of synchronous patterns of activity at a very fine temporal scale but with very low spatial resolution. Spatial resolution can be improved by retrieving the neural sources of the EEG signal, thus solving the so-called inverse problem. Although many methods have been proposed to solve the inverse problem and localize brain activity, few of them target the synchronous brain regions. In this paper, we propose a novel algorithm aimed at localizing specifically synchronous brain regions and reconstructing the time course of their activity. Using multivariate wavelet ridge analysis, we extract signals capturing the synchronous events buried in the EEG and then solve the inverse problem on these signals. Using simulated data, we compare the source reconstruction accuracy achieved by our method to a standard source reconstruction approach. We show that the proposed method performs better across a wide range of noise levels and source configurations. In addition, we applied our method to a real dataset and successfully identified cortical areas involved in the functional network underlying visual face perception. We conclude that the proposed approach allows an accurate localization of synchronous brain regions and a robust estimation of their activity.

  17. Rio Grande Basin and the modern world: Understanding scale and context

    Treesearch

    Joseph A. Tainter

    1999-01-01

    Environmental problems are social issues, embedded in economic and political contexts at the local, regional, national, and global levels. Placing environmental issues on the scale from local to global clarifies conflicts between the level at which problems originate and the level at which they must be addressed. Local issues today often originate in sources distant in...

  18. Atmospheric inverse modeling via sparse reconstruction

    NASA Astrophysics Data System (ADS)

    Hase, Nils; Miller, Scot M.; Maaß, Peter; Notholt, Justus; Palm, Mathias; Warneke, Thorsten

    2017-10-01

    Many applications in atmospheric science involve ill-posed inverse problems. A crucial component of many inverse problems is the proper formulation of a priori knowledge about the unknown parameters. In most cases, this knowledge is expressed as a Gaussian prior. This formulation often performs well at capturing smoothed, large-scale processes but is often ill equipped to capture localized structures like large point sources or localized hot spots. Over the last decade, scientists from a diverse array of applied mathematics and engineering fields have developed sparse reconstruction techniques to identify localized structures. In this study, we present a new regularization approach for ill-posed inverse problems in atmospheric science. It is based on Tikhonov regularization with sparsity constraint and allows bounds on the parameters. We enforce sparsity using a dictionary representation system. We analyze its performance in an atmospheric inverse modeling scenario by estimating anthropogenic US methane (CH4) emissions from simulated atmospheric measurements. Different measures indicate that our sparse reconstruction approach is better able to capture large point sources or localized hot spots than other methods commonly used in atmospheric inversions. It captures the overall signal equally well but adds details on the grid scale. This feature can be of value for any inverse problem with point or spatially discrete sources. We show an example for source estimation of synthetic methane emissions from the Barnett shale formation.

  19. Time domain localization technique with sparsity constraint for imaging acoustic sources

    NASA Astrophysics Data System (ADS)

    Padois, Thomas; Doutres, Olivier; Sgard, Franck; Berry, Alain

    2017-09-01

    This paper addresses a time-domain source localization technique for broadband acoustic sources. The objective is to accurately and quickly detect the position and amplitude of noise sources in workplaces in order to propose adequate noise control options and prevent workers' hearing loss or safety risks. First, the generalized cross-correlation associated with a spherical microphone array is used to generate an initial noise source map. Then a linear inverse problem is defined to improve this initial map. Commonly, the linear inverse problem is solved with an l2-regularization. In this study, two sparsity constraints are used to solve the inverse problem: orthogonal matching pursuit and the truncated Newton interior-point method. Synthetic data are used to highlight the performance of the technique. High-resolution imaging is achieved for various acoustic source configurations. Moreover, the amplitudes of the acoustic sources are correctly estimated. A comparison of computation times shows that the technique is compatible with quasi real-time generation of noise source maps. Finally, the technique is tested with real data.
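    Orthogonal matching pursuit, the first of the two sparsity constraints named above, can be sketched generically: greedily select the dictionary atom most correlated with the residual, then re-fit all selected amplitudes by least squares. The dictionary, source positions, and amplitudes below are stand-ins, not the paper's propagation model:

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy imaging problem: dictionary of 40 candidate source positions,
# 30 "microphone" measurements, 2 true sources.
D = rng.standard_normal((30, 40))
D /= np.linalg.norm(D, axis=0)            # unit-norm atoms
x_true = np.zeros(40)
x_true[[9, 41 % 40]] = 0.0                # placeholder, set below
x_true[9], x_true[27] = 3.0, 2.0
y = D @ x_true + 0.01 * rng.standard_normal(30)

# Orthogonal matching pursuit for a known/assumed number of sources.
support, r = [], y.copy()
for _ in range(2):
    support.append(int(np.argmax(np.abs(D.T @ r))))   # most-correlated atom
    amps, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
    r = y - D[:, support] @ amps                      # orthogonalized residual

print("recovered positions:", sorted(support), "amplitudes:", amps.round(2))
```

    The least-squares re-fit at each step is what distinguishes OMP from plain matching pursuit and is why amplitudes come out unbiased when the right atoms are selected.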

  20. An evaluation of talker localization based on direction of arrival estimation and statistical sound source identification

    NASA Astrophysics Data System (ADS)

    Nishiura, Takanobu; Nakamura, Satoshi

    2002-11-01

    It is very important to capture distant-talking speech with high quality for a hands-free speech interface. A microphone array is an ideal candidate for this purpose. However, this approach requires localizing the target talker. Conventional talker localization algorithms in multiple-sound-source environments not only have difficulty localizing the multiple sound sources accurately, but also have difficulty localizing the target talker among known multiple sound source positions. To cope with these problems, we propose a new talker localization algorithm consisting of two algorithms. One is a DOA (direction of arrival) estimation algorithm for multiple sound source localization based on the CSP (cross-power spectrum phase) coefficient addition method. The other is a statistical sound source identification algorithm based on a GMM (Gaussian mixture model) for localizing the target talker position among the localized multiple sound sources. In this paper, we particularly focus on the talker localization performance of the combination of these two algorithms with a microphone array. We conducted evaluation experiments in real noisy reverberant environments. As a result, we confirmed that multiple sound signals can be accurately identified as ''speech'' or ''non-speech'' by the proposed algorithm. [Work supported by ATR and MEXT of Japan.]
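    The CSP (cross-power spectrum phase) coefficient at the heart of the DOA step is the same quantity as GCC-PHAT: whiten the cross-spectrum of two microphones and pick the peak lag. The sampling rate, delay, and signals below are synthetic:

```python
import numpy as np

rng = np.random.default_rng(8)

# Two microphones at fs = 16 kHz; the same "speech-like" noise burst
# arrives at mic 2 delayed by 23 samples.
fs, N, delay = 16000, 4096, 23
s = rng.standard_normal(N)
x1 = s + 0.05 * rng.standard_normal(N)
x2 = np.roll(s, delay) + 0.05 * rng.standard_normal(N)

# CSP / GCC-PHAT: whiten the cross-spectrum (keep phase only), inverse FFT,
# and pick the peak lag of the resulting cross-correlation.
X1, X2 = np.fft.rfft(x1), np.fft.rfft(x2)
csp = X2 * np.conj(X1)
csp /= np.abs(csp) + 1e-12                 # phase transform (PHAT) weighting
cc = np.fft.irfft(csp, n=N)
lag = int(np.argmax(cc))
if lag > N // 2:                           # map to a signed lag
    lag -= N
print("estimated delay (samples):", lag)
```

    The DOA then follows from the lag via the microphone spacing and the speed of sound; summing CSP coefficients across microphone pairs is what the abstract's "coefficient addition" refers to.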

  1. [EEG source localization using LORETA (low resolution electromagnetic tomography)].

    PubMed

    Puskás, Szilvia

    2011-03-30

    Electroencephalography (EEG) has excellent temporal resolution, but its spatial resolution is poor. Different source localization methods exist to solve the so-called inverse problem, thus increasing the accuracy of spatial localization. This paper provides an overview of the history of source localization, and the main categories of techniques are discussed. LORETA (low resolution electromagnetic tomography) is introduced in detail: technical information is discussed, and the localization properties of the LORETA method are compared to other inverse solutions. Validation of the method with different imaging techniques is also discussed. This paper reviews several publications using LORETA, both in healthy persons and in persons with different neurological and psychiatric diseases. Finally, possible future applications are discussed.
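    LORETA belongs to the family of linear regularized inverses; its core can be illustrated with the simpler minimum-norm estimate below (LORETA additionally applies a spatial Laplacian weighting, omitted here). The lead field is a random stand-in:

```python
import numpy as np

rng = np.random.default_rng(9)

# Toy lead field: 32 electrodes, 200 candidate source locations.
L = rng.standard_normal((32, 200))
j_true = np.zeros(200)
j_true[120] = 1.0                          # single active source
v = L @ j_true + 0.01 * rng.standard_normal(32)

# Regularized minimum-norm inverse: j = L^T (L L^T + lam*I)^-1 v.
# LORETA replaces the implicit identity weighting on j with a
# discrete spatial Laplacian to favor smooth (low-resolution) maps.
lam = 1e-2
G = L @ L.T + lam * np.eye(32)
j_hat = L.T @ np.linalg.solve(G, v)

peak = int(np.argmax(np.abs(j_hat)))
print("peak source index:", peak)
```

    Solving the small 32-by-32 system instead of anything of size 200 is what makes such linear inverses cheap: the underdetermined problem is regularized, not inverted directly.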

  2. Subspace-based analysis of the ERT inverse problem

    NASA Astrophysics Data System (ADS)

    Ben Hadj Miled, Mohamed Khames; Miller, Eric L.

    2004-05-01

    In a previous work, we proposed a source-type formulation to the electrical resistance tomography (ERT) problem. Specifically, we showed that inhomogeneities in the medium can be viewed as secondary sources embedded in the homogeneous background medium and located at positions associated with variation in electrical conductivity. Assuming a piecewise constant conductivity distribution, the support of equivalent sources is equal to the boundary of the inhomogeneity. The estimation of the anomaly shape takes the form of an inverse source-type problem. In this paper, we explore the use of subspace methods to localize the secondary equivalent sources associated with discontinuities in the conductivity distribution. Our first alternative is the multiple signal classification (MUSIC) algorithm which is commonly used in the localization of multiple sources. The idea is to project a finite collection of plausible pole (or dipole) sources onto an estimated signal subspace and select those with largest correlations. In ERT, secondary sources are excited simultaneously but in different ways, i.e. with distinct amplitude patterns, depending on the locations and amplitudes of primary sources. If the number of receivers is "large enough", different source configurations can lead to a set of observation vectors that span the data subspace. However, since sources that are spatially close to each other have highly correlated signatures, separation of such signals becomes very difficult in the presence of noise. To overcome this problem we consider iterative MUSIC algorithms like R-MUSIC and RAP-MUSIC. These recursive algorithms pose a computational burden as they require multiple large combinatorial searches. Results obtained with these algorithms using simulated data of different conductivity patterns are presented.

  3. Multi-Sensor Integration to Map Odor Distribution for the Detection of Chemical Sources.

    PubMed

    Gao, Xiang; Acar, Levent

    2016-07-04

    This paper addresses the problem of mapping the odor distribution produced by a chemical source using multi-sensor integration and a reasoning-system design. Odor localization is the problem of finding the source of an odor or other volatile chemical. Most localization methods require a mobile vehicle to follow an odor plume along its entire path, which is time consuming and may be especially difficult in a cluttered environment. To address both of these challenges, this paper proposes a novel algorithm that combines data from odor and anemometer sensors, and fuses sensor data gathered at different positions. First, a multi-sensor integration method, together with the path of the airflow, is used to map the pattern of odor particle movement. Then, more sensors are introduced at specific regions to determine the probable location of the odor source. Finally, the results of an odor source location simulation and a real experiment are presented.

  4. Adaptive behaviors in multi-agent source localization using passive sensing.

    PubMed

    Shaukat, Mansoor; Chitre, Mandar

    2016-12-01

    In this paper, the role of adaptive group cohesion in a cooperative multi-agent source localization problem is investigated. A distributed source localization algorithm is presented for a homogeneous team of simple agents. Each agent uses a single sensor to sense the gradient and two sensors to sense its neighbors. The algorithm is a set of individualistic and social behaviors, where the individualistic behavior is as simple as an agent keeping its previous heading and is not by itself sufficient to localize the source. Source localization is achieved as an emergent property through each agent's adaptive interactions with its neighbors and the environment. Given that a single agent is incapable of localizing the source, maintaining team connectivity at all times is crucial. Two simple temporal sampling behaviors, intensity-based adaptation and connectivity-based adaptation, ensure an efficient localization strategy with minimal agent breakaways. The agent behaviors are simultaneously optimized using a two-phase evolutionary optimization process. The optimized behaviors are estimated with analytical models, and the resulting collective behavior is validated against the agents' sensor and actuator noise, strong multi-path interference due to environment variability, sensitivity to the initialization distance, and loss of the source signal.

  5. The Serial Perplex.

    ERIC Educational Resources Information Center

    Blackwell, Maree Macon; Chopra, Pearl

    The problems associated with the acquisition of periodicals from various sources and different systems used by two University of Alabama libraries for the acquisition, controlling, and recording of serials are described in this report. Sources identified and discussed include local sources and suppliers, direct subscriptions placed with…

  6. Pollution of water sources due to poor waste management--the case of Dar-es-Salaam.

    PubMed

    Makule, D E

    2000-01-01

    Pollution of water sources for the city of Dar-es-Salaam originates from haphazard disposal of solid wastes, discharge of untreated or inadequately treated wastewater into water sources, a lack of standard sanitary facilities, and poor hygienic practices. Contaminated water used for human consumption can lead to serious health problems, e.g. cholera, typhoid and skin diseases, which in turn reduce working hours and manpower. This has a direct effect on production output, which can lead to a deterioration of local community welfare. Having recognised this problem, the Government of Tanzania stipulated, in its water policy of 1991, the need for protection of water sources. In achieving this goal, proper waste management was singled out as being of vital importance. Due to economic hardships, however, budget allocation by the central Government could not cover the costs needed for proper handling of waste. This left Tanzania with no alternative other than heavy reliance on donor and bilateral organisations for financial support of its programmes. Nevertheless, these sources of funds proved to be unreliable for many different reasons. To deal with these problems, the Government currently emphasises involving local communities and NGOs, the formation of stakeholder funds and organisations, and involvement of the private sector. Other efforts include the intensification of education programmes to create more awareness among local communities of the need to protect water sources. Although still in its infancy, the system is showing some signs of improvement.

  7. Particle swarm optimization and its application in MEG source localization using single time sliced data

    NASA Astrophysics Data System (ADS)

    Lin, Juan; Liu, Chenglian; Guo, Yongning

    2014-10-01

    The estimation of neural active sources from magnetoencephalography (MEG) data is a critical issue for both clinical neurology and brain function research. A widely accepted source-modeling technique for MEG involves calculating a set of equivalent current dipoles (ECDs). Source depth in the brain is one of the difficulties in MEG source localization. Particle swarm optimization (PSO) is widely used to solve various optimization problems. In this paper we discuss its ability and robustness in finding the global optimum at different depths in the brain when using a single equivalent current dipole (sECD) model and single time sliced data. The results show that PSO is an effective global optimizer for MEG source localization when a single dipole is placed at different depths.
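A minimal PSO loop illustrates the optimizer the abstract relies on. The quadratic misfit below is only a stand-in for the single-dipole forward-model error (the real sECD cost requires an MEG forward solver); the swarm parameters are conventional defaults, not values from the paper.

```python
import random

def pso(cost, dim, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=1):
    """Minimal particle swarm optimizer: returns (best_position, best_cost)."""
    rng = random.Random(seed)
    lo, hi = bounds
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_cost = [cost(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_cost[i])
    gbest, gbest_cost = pbest[g][:], pbest_cost[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                # Inertia plus attraction toward personal and global bests.
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            c = cost(pos[i])
            if c < pbest_cost[i]:
                pbest[i], pbest_cost[i] = pos[i][:], c
                if c < gbest_cost:
                    gbest, gbest_cost = pos[i][:], c
    return gbest, gbest_cost

# Stand-in misfit: squared distance to a hypothetical dipole position.
true_pos = (1.2, -0.8, 6.5)
cost = lambda p: sum((pi - ti) ** 2 for pi, ti in zip(p, true_pos))
best, best_cost = pso(cost, dim=3, bounds=(-10.0, 10.0))
```

Because PSO only ever evaluates the cost, swapping the stand-in for a genuine dipole-fit residual changes nothing in the loop itself, which is why the method adapts easily to different source depths.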

  8. Determination of the Geometric Form of a Plane of a Tectonic Gap as the Inverse Ill-posed Problem of Mathematical Physics

    NASA Astrophysics Data System (ADS)

    Sirota, Dmitry; Ivanov, Vadim

    2017-11-01

    Mining operations affect the stability of natural and technogenic massifs and give rise to sources of differential mechanical stress. These sources generate a quasistationary electric field with a Newtonian potential. The paper reviews a method for determining the shape and size of a flat source of a field with this kind of potential. This problem arises in many areas of mining: geological exploration of mineral resources and ore deposits, control of underground mining, determining the source of coal self-heating, localization of the sources of rock cracks, and other applied problems of practical physics. The problem is inverse and ill-posed, and is solved by converting it to a Fredholm-Uryson integral equation of the first kind, which is in turn solved by A. N. Tikhonov's regularization method.
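The stabilizing effect of Tikhonov's method can be shown on a deliberately tiny ill-conditioned system, a 2×2 stand-in for the discretized first-kind integral equation (the matrix and noise values here are invented for the illustration):

```python
def solve2(a, b):
    """Solve a 2x2 linear system by Cramer's rule."""
    det = a[0][0] * a[1][1] - a[0][1] * a[1][0]
    return [(b[0] * a[1][1] - a[0][1] * b[1]) / det,
            (a[0][0] * b[1] - a[1][0] * b[0]) / det]

# Nearly singular system: the exact solution of A x = [2, 2.0001] is x = [1, 1].
A = [[1.0, 1.0], [1.0, 1.0001]]
b_noisy = [2.0, 2.0002]  # tiny perturbation of the right-hand side

# Naive solve: the perturbation is amplified enormously.
x_naive = solve2(A, b_noisy)  # roughly [0, 2]

# Tikhonov: solve (A^T A + lam*I) x = A^T b instead of A x = b.
lam = 1e-3
AtA = [[sum(A[k][i] * A[k][j] for k in range(2)) for j in range(2)] for i in range(2)]
AtA[0][0] += lam
AtA[1][1] += lam
Atb = [sum(A[k][i] * b_noisy[k] for k in range(2)) for i in range(2)]
x_reg = solve2(AtA, Atb)  # close to [1, 1] despite the noise
```

The damping term `lam` suppresses the near-null-space direction that the noise excites; choosing it is the usual bias-variance trade-off of regularization.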

  9. Impact localization in dispersive waveguides based on energy-attenuation of waves with the traveled distance

    NASA Astrophysics Data System (ADS)

    Alajlouni, Sa'ed; Albakri, Mohammad; Tarazaga, Pablo

    2018-05-01

    An algorithm is introduced to solve the general multilateration (source localization) problem in a dispersive waveguide. The algorithm is designed with the intention of localizing impact forces in a dispersive floor, and can potentially be used to localize and track occupants in a building using vibration sensors connected to the lower surface of the walking floor. The lower the wave frequencies generated by the impact force, the more accurate the localization is expected to be. An impact force acting on a floor, generates a seismic wave that gets distorted as it travels away from the source. This distortion is noticeable even over relatively short traveled distances, and is mainly caused by the dispersion phenomenon among other reasons, therefore using conventional localization/multilateration methods will produce localization error values that are highly variable and occasionally large. The proposed localization approach is based on the fact that the wave's energy, calculated over some time window, decays exponentially as the wave travels away from the source. Although localization methods that assume exponential decay exist in the literature (in the field of wireless communications), these methods have only been considered for wave propagation in non-dispersive media, in addition to the limiting assumption required by these methods that the source must not coincide with a sensor location. As a result, these methods cannot be applied to the indoor localization problem in their current form. We show how our proposed method is different from the other methods, and that it overcomes the source-sensor location coincidence limitation. Theoretical analysis and experimental data will be used to motivate and justify the pursuit of the proposed approach for localization in a dispersive medium. 
Additionally, hammer impacts on an instrumented floor section inside an operational building, as well as finite element model simulations, are used to evaluate the performance of the algorithm. It is shown that the algorithm produces promising results providing a foundation for further future development and optimization.
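The stated model can be sketched directly: windowed energy E_i = A·exp(-α·d_i) at sensor i, with both the amplitude A and the attenuation α unknown. For each candidate location, ln E_i is regressed linearly on the candidate-to-sensor distances, and the true source gives a perfect fit. This grid-search sketch is illustrative only, not the authors' algorithm; sensor layout and constants are invented.

```python
import math

def logfit_residual(dists, log_e):
    """Least-squares fit log_e ~ a - alpha*d; return the sum of squared residuals."""
    n = len(dists)
    md = sum(dists) / n
    me = sum(log_e) / n
    sxx = sum((d - md) ** 2 for d in dists)
    sxy = sum((d - md) * (e - me) for d, e in zip(dists, log_e))
    slope = sxy / sxx
    return sum((e - (me + slope * (d - md))) ** 2 for d, e in zip(dists, log_e))

def locate(sensors, energies, step=0.5, extent=10.0):
    """Grid search for the candidate whose distances best explain the log-energies."""
    log_e = [math.log(e) for e in energies]
    best, best_cost = None, float("inf")
    k = int(extent / step) + 1
    for i in range(k):
        for j in range(k):
            cand = (i * step, j * step)
            cost = logfit_residual([math.dist(s, cand) for s in sensors], log_e)
            if cost < best_cost:
                best, best_cost = cand, cost
    return best

# Synthetic impact at (3.5, 4.5) with A = 2.0 and alpha = 0.8, both unknown to locate().
sensors = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0), (10.0, 10.0), (5.0, 1.0)]
src = (3.5, 4.5)
energies = [2.0 * math.exp(-0.8 * math.dist(s, src)) for s in sensors]
est = locate(sensors, energies)  # lands on the true grid point
```

Note that the candidate grid may include sensor positions without any special handling, which mirrors the paper's point that the source need not avoid sensor locations.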

  10. Matched field localization based on CS-MUSIC algorithm

    NASA Astrophysics Data System (ADS)

    Guo, Shuangle; Tang, Ruichun; Peng, Linhui; Ji, Xiaopeng

    2016-04-01

    The problems caused by too few or too many snapshots and by coherent sources in underwater acoustic positioning are considered. A matched field localization algorithm based on CS-MUSIC (Compressive Sensing Multiple Signal Classification) is proposed, built on a sparse mathematical model of underwater positioning. The signal matrix is calculated through the SVD (Singular Value Decomposition) of the observation matrix. The observation matrix in the sparse mathematical model is replaced by the signal matrix, yielding a new, more compact sparse model in which both the scale of the localization problem and the noise level are reduced; this new sparse model is then solved by the CS-MUSIC algorithm, a combination of the CS (Compressive Sensing) and MUSIC (Multiple Signal Classification) methods. The algorithm proposed in this paper can effectively overcome the difficulties caused by correlated sources and a shortage of snapshots, and, as shown in the paper, it can also reduce the time complexity and noise level of the localization problem by using the SVD of the observation matrix when the number of snapshots is large.
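The SVD step can be illustrated for a single source: the dominant left singular vector of the snapshot matrix carries the source's spatial signature, while the noise-only directions are discarded. To stay dependency-free, this sketch uses plain power iteration on Y·Yᵀ instead of a full SVD; the signature, time course, and perturbation are all invented toy values.

```python
import math

def matvec_yyt(Y, v):
    """Compute (Y Y^T) v without forming Y Y^T explicitly."""
    yt_v = [sum(Y[i][t] * v[i] for i in range(len(Y))) for t in range(len(Y[0]))]
    return [sum(Y[i][t] * yt_v[t] for t in range(len(Y[0]))) for i in range(len(Y))]

def dominant_left_singular(Y, iters=100):
    """Power iteration for the dominant left singular vector of Y."""
    v = [1.0] * len(Y)
    for _ in range(iters):
        v = matvec_yyt(Y, v)
        n = math.sqrt(sum(x * x for x in v))
        v = [x / n for x in v]
    return v

# Toy snapshot matrix: one source with spatial signature g and a sinusoidal
# time course, plus a small deterministic perturbation standing in for noise.
m, T = 6, 50
g = [1.0, 0.8, 0.5, 0.3, 0.2, 0.1]
gn = math.sqrt(sum(x * x for x in g))
g_unit = [x / gn for x in g]
Y = [[g[i] * math.sin(0.3 * t) + 0.01 * math.sin(7.0 * t + i)
      for t in range(T)] for i in range(m)]

u = dominant_left_singular(Y)
corr = abs(sum(ui * gi for ui, gi in zip(u, g_unit)))  # close to 1
```

With k sources one would keep the k dominant singular vectors, which is exactly the "signal matrix" that replaces the much wider observation matrix in the paper's model.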

  11. High-Resolution Source Parameter and Site Characteristics Using Near-Field Recordings - Decoding the Trade-off Problems Between Site and Source

    NASA Astrophysics Data System (ADS)

    Chen, X.; Abercrombie, R. E.; Pennington, C.

    2017-12-01

    Recorded seismic waveforms include contributions from earthquake source properties and propagation effects, leading to long-standing trade-off problems between site/path effects and source effects. With near-field recordings, the path effect is relatively small, so the trade-off problem can be simplified to one between source and site effects (commonly referred to as the "kappa value"). This problem is especially significant for small earthquakes, whose corner frequencies lie within ranges similar to the kappa values, so direct spectrum fitting often leads to systematic biases with corner frequency and magnitude. In response to the significantly increased seismicity rate in Oklahoma, several local networks have been deployed following major earthquakes: the Prague, Pawnee and Fairview earthquakes. Each network provides dense observations within 20 km of the fault zone, recording tens of thousands of aftershocks from M1 to M3. Using near-field recordings in the Prague area, we apply a stacking approach to separate path/site and source effects. The resulting source parameters are consistent with parameters derived from ground motion and spectral ratio methods in other studies; they exhibit spatial coherence within the fault zone for different fault patches. We apply these source parameter constraints in an analysis of kappa values for stations within 20 km of the fault zone. The resulting kappa values show significantly reduced variability compared to those from direct spectral fitting without constraints on the source spectrum, and they are not biased by earthquake magnitudes. With these improvements, we plan to apply the stacking analysis to other local arrays to analyze source properties and site characteristics. For selected individual earthquakes, we will also use individual-pair empirical Green's function (EGF) analysis to validate the source parameter estimates.

  12. Regularized two-step brain activity reconstruction from spatiotemporal EEG data

    NASA Astrophysics Data System (ADS)

    Alecu, Teodor I.; Voloshynovskiy, Sviatoslav; Pun, Thierry

    2004-10-01

    We aim to use EEG source localization in the framework of a Brain Computer Interface project. We propose here a new reconstruction procedure targeting source (or, equivalently, mental task) differentiation. EEG data can be thought of as a collection of time-continuous streams from sparse locations. The measured electric potential at one electrode is the result of the superposition of synchronized synaptic activity from sources throughout the brain volume. Consequently, the EEG inverse problem is a highly underdetermined (and ill-posed) problem. Moreover, each source contribution is linear with respect to its amplitude but non-linear with respect to its localization and orientation. In order to overcome these drawbacks we propose a novel two-step inversion procedure. The solution is based on a two-scale division of the solution space. The first step uses a coarse discretization and has the sole purpose of globally identifying the active regions, via a sparse approximation algorithm. The second step is applied only to the retained regions and makes use of a fine discretization of the space, aiming at detailing the brain activity. The local configuration of sources is recovered using an iterative stochastic estimator with adaptive joint minimum-energy and directional-consistency constraints.

  13. Education Finance Reform. Voices for Illinois Children Special Report.

    ERIC Educational Resources Information Center

    Nagle, Ami; Kim, Robert

    This special report reviews problems in Illinois' education funding system and discusses potential solutions to these problems. The report notes that the fundamental problem with the current education finance system is an over-reliance on local property taxes. Although property taxes are a relatively stable and lucrative revenue source,…

  14. Community-Scale Air Toxics Ambient Monitoring Grant - Closed Announcement FY 2015

    EPA Pesticide Factsheets

    Grant to fund projects designed to assist state, local and tribal communities in identifying air toxics sources, characterizing the degree and extent of local-scale air toxics problems, tracking progress of air toxics reduction activities, etc.

  15. Dipole source localization of event-related brain activity indicative of an early visual selective attention deficit in ADHD children.

    PubMed

    Jonkman, L M; Kenemans, J L; Kemner, C; Verbaten, M N; van Engeland, H

    2004-07-01

    This study was aimed at investigating whether attention-deficit hyperactivity disorder (ADHD) children suffer from specific early selective attention deficits in the visual modality with the aid of event-related brain potentials (ERPs). Furthermore, brain source localization was applied to identify brain areas underlying possible deficits in selective visual processing in ADHD children. A two-channel visual color selection task was administered to 18 ADHD and 18 control subjects in the age range of 7-13 years and ERP activity was derived from 30 electrodes. ADHD children exhibited lower perceptual sensitivity scores resulting in poorer target selection. The ERP data suggested an early selective-attention deficit as manifested in smaller frontal positive activity (frontal selection positivity; FSP) in ADHD children around 200 ms whereas later occipital and fronto-central negative activity (OSN and N2b; 200-400 ms latency) appeared to be unaffected. Source localization explained the FSP by posterior-medial equivalent dipoles in control subjects, which may reflect the contribution of numerous surrounding areas. ADHD children have problems with selective visual processing that might be caused by a specific early filtering deficit (absent FSP) occurring around 200 ms. The neural sources underlying these problems have to be further identified. Source localization also suggested abnormalities in the 200-400 ms time range, pertaining to the distribution of attention-modulated activity in lateral frontal areas.

  16. Marine Debris: An Opportunity for Marine Education.

    ERIC Educational Resources Information Center

    Heneman, Burr

    1989-01-01

    Provides awareness information on the sources, types, and serious effects on wildlife of nondegradable materials which are causing local as well as global problems. Discusses current ways the problem is being attacked and educational implications for environmental programs. (RT)

  17. Unstructured Adaptive Meshes: Bad for Your Memory?

    NASA Technical Reports Server (NTRS)

    Biswas, Rupak; Feng, Hui-Yu; VanderWijngaart, Rob

    2003-01-01

    This viewgraph presentation explores the need for a NASA Advanced Supercomputing (NAS) parallel benchmark for problems with irregular dynamic memory access. Such a benchmark is important and necessary because: 1) Problems with localized error sources benefit from adaptive nonuniform meshes; 2) Certain machines perform poorly on such problems; 3) Parallel implementation may provide further performance improvement but is difficult. Examples of problems that use irregular dynamic memory access include: 1) Heat transfer problems; 2) Heat source terms; 3) Spectral element methods; 4) Basis functions; 5) Elemental discrete equations; 6) Global discrete equations. The nonconforming mesh and mortar element method are covered in greater detail in this presentation.

  18. Brain source localization: A new method based on MUltiple SIgnal Classification algorithm and spatial sparsity of the field signal for electroencephalogram measurements

    NASA Astrophysics Data System (ADS)

    Vergallo, P.; Lay-Ekuakille, A.

    2013-08-01

    Brain activity can be recorded by means of EEG (electroencephalogram) electrodes placed on the scalp of the patient. The EEG reflects the activity of groups of neurons located in the head, and a fundamental problem in neurophysiology is the identification of the sources responsible for brain activity; this is especially important when a seizure occurs and must be identified. The studies conducted to formalize the relationship between the electromagnetic activity in the head and the recording of the generated external field make it possible to characterize patterns of brain activity. The inverse problem, in which the underlying sources must be determined from the field sampled at the different electrodes, is more difficult because it may not have a unique solution, or because the search for the solution is hampered by a low spatial resolution that may not allow activities involving sources close to each other to be distinguished. Thus, sources of interest may be obscured or go undetected, and well-known source localization methods such as MUSIC (MUltiple SIgnal Classification) can fail. Many advanced source localization techniques achieve better resolution by exploiting sparsity: if the number of sources is small, then the neural power as a function of location is sparse. In this work a solution based on the spatial sparsity of the field signal is presented and analyzed to improve the MUSIC method. For this purpose, it is necessary to provide a priori information about the sparsity of the signal. The problem is formulated and solved using a regularization method such as Tikhonov's, which calculates a solution that is the best compromise between two cost functions to be minimized, one related to the fit to the data and the other concerning maintenance of the sparsity of the signal. First, the method is tested on simulated EEG signals obtained by solving the forward problem. For the model considered for the head and brain sources, the result obtained shows a significant improvement over the classical MUSIC method, with a small margin of uncertainty about the exact location of the sources. In fact, the spatial sparsity constraints on the field signal concentrate power in the directions of the active sources, and consequently it is possible to calculate the position of the sources within the considered volume conductor. The method is then also tested on real EEG data. The result is in accordance with the clinical report, although improvements are necessary to obtain more accurate estimates of the positions of the sources.

  19. Design of UAV-Embedded Microphone Array System for Sound Source Localization in Outdoor Environments †

    PubMed Central

    Hoshiba, Kotaro; Washizaki, Kai; Wakabayashi, Mizuho; Ishiki, Takahiro; Bando, Yoshiaki; Gabriel, Daniel; Nakadai, Kazuhiro; Okuno, Hiroshi G.

    2017-01-01

    In search and rescue activities, unmanned aerial vehicles (UAV) should exploit sound information to compensate for poor visual information. This paper describes the design and implementation of a UAV-embedded microphone array system for sound source localization in outdoor environments. Four critical development problems included water-resistance of the microphone array, efficiency in assembling, reliability of wireless communication, and sufficiency of visualization tools for operators. To solve these problems, we developed a spherical microphone array system (SMAS) consisting of a microphone array, a stable wireless network communication system, and intuitive visualization tools. The performance of SMAS was evaluated with simulated data and a demonstration in the field. Results confirmed that the SMAS provides highly accurate localization, water resistance, prompt assembly, stable wireless communication, and intuitive information for observers and operators. PMID:29099790

  20. Design of UAV-Embedded Microphone Array System for Sound Source Localization in Outdoor Environments.

    PubMed

    Hoshiba, Kotaro; Washizaki, Kai; Wakabayashi, Mizuho; Ishiki, Takahiro; Kumon, Makoto; Bando, Yoshiaki; Gabriel, Daniel; Nakadai, Kazuhiro; Okuno, Hiroshi G

    2017-11-03

    In search and rescue activities, unmanned aerial vehicles (UAV) should exploit sound information to compensate for poor visual information. This paper describes the design and implementation of a UAV-embedded microphone array system for sound source localization in outdoor environments. Four critical development problems included water-resistance of the microphone array, efficiency in assembling, reliability of wireless communication, and sufficiency of visualization tools for operators. To solve these problems, we developed a spherical microphone array system (SMAS) consisting of a microphone array, a stable wireless network communication system, and intuitive visualization tools. The performance of SMAS was evaluated with simulated data and a demonstration in the field. Results confirmed that the SMAS provides highly accurate localization, water resistance, prompt assembly, stable wireless communication, and intuitive information for observers and operators.

  1. Target detection and localization in shallow water: an experimental demonstration of the acoustic barrier problem at the laboratory scale.

    PubMed

    Marandet, Christian; Roux, Philippe; Nicolas, Barbara; Mars, Jérôme

    2011-01-01

    This study demonstrates experimentally, at the laboratory scale, the detection and localization of a wavelength-sized target in a shallow ultrasonic waveguide between two source-receiver arrays at 3 MHz. In the framework of the acoustic barrier problem, at 1/1000 scale, the waveguide represents a 1.1-km-long, 52-m-deep ocean acoustic channel in the kilohertz frequency range. The two coplanar arrays record in the time domain the transfer matrix of the waveguide between each pair of source-receiver transducers. Invoking the reciprocity principle, a time-domain double-beamforming algorithm is performed simultaneously on the source and receiver arrays. This array processing projects the multiply reverberated acoustic echoes onto an equivalent set of eigenrays, which are defined by their launch and arrival angles. For detection, the intensity of each eigenray is compared without and with a target in the waveguide. Localization is performed through tomographic inversion of the acoustic impedance of the target, using all of the eigenrays extracted from double beamforming. The use of the diffraction-based sensitivity kernel for each eigenray provides both the localization and the signature of the target. Experimental results are shown in the presence of surface waves, and methodological issues for detection and localization are discussed.

  2. Efficient electromagnetic source imaging with adaptive standardized LORETA/FOCUSS.

    PubMed

    Schimpf, Paul H; Liu, Hesheng; Ramon, Ceon; Haueisen, Jens

    2005-05-01

    Functional brain imaging and source localization based on the scalp's potential field require a solution to an ill-posed inverse problem with many solutions. This makes it necessary to incorporate a priori knowledge in order to select a particular solution. A computational challenge for some subject-specific head models is that many inverse algorithms require a comprehensive sampling of the candidate source space at the desired resolution. In this study, we present an algorithm that can accurately reconstruct details of localized source activity from a sparse sampling of the candidate source space. Forward computations are minimized through an adaptive procedure that increases source resolution as the spatial extent is reduced. With this algorithm, we were able to compute inverses using only 6% to 11% of the full resolution lead-field, with a localization accuracy that was not significantly different than an exhaustive search through a fully-sampled source space. The technique is, therefore, applicable for use with anatomically-realistic, subject-specific forward models for applications with spatially concentrated source activity.
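The coarse-to-fine idea can be sketched generically: scan a coarse candidate grid, keep the best cell, then rescan a shrunken region at finer spacing, so only a small fraction of the full-resolution candidate space is ever evaluated. The quadratic cost below is only a stand-in for the actual forward/inverse computation, and the 1-D search is a deliberate simplification of a volumetric source space.

```python
def refine_search(cost, lo, hi, levels=4, pts=9):
    """Repeatedly scan a 1-D grid of `pts` points, then zoom into the best cell.

    Each level shrinks the interval around the current best estimate, so the
    total number of cost evaluations is levels*pts instead of one dense scan."""
    evals = 0
    best = None
    for _ in range(levels):
        step = (hi - lo) / (pts - 1)
        scores = []
        for i in range(pts):
            x = lo + i * step
            scores.append((cost(x), x))
            evals += 1
        _, best = min(scores)
        lo, hi = best - step, best + step  # zoom into the winning cell
    return best, evals

# Stand-in cost with a minimum at the "true source" location 3.37.
cost = lambda x: (x - 3.37) ** 2
best, evals = refine_search(cost, 0.0, 10.0)
```

Here 36 evaluations reach a resolution that a single dense scan of the interval would need several hundred points to match, which mirrors the paper's 6% to 11% lead-field sampling figure in spirit.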

  3. Clean the Air and Breathe Easier.

    ERIC Educational Resources Information Center

    Guevin, John

    1997-01-01

    Failure to prevent indoor air quality problems or act promptly can result in increased chances for long- or short-term health problems for staff and students, reduced productivity, faster plant deterioration, and strained school-community relations. Basic pollution control measures include source management, local exhausts, ventilation, exposure…

  4. Source apportionment analysis of air pollutants using CMAQ/BFM for national air quality management policy over Republic of Korea.

    NASA Astrophysics Data System (ADS)

    Moon, N.; Kim, S.; Seo, J.; Lee, Y. J.

    2017-12-01

    Recently, the Korean government has been focusing on solving air pollution problems such as fine particulate matter and ozone. Korea has a high population density and concentrated industrial complexes in its limited land area. For better air quality management, it is important to understand the source-contribution relation for each target pollutant. An air quality analysis representing the mutual contributions among local regions makes it possible to understand the actual state of a region's air quality in association with its neighboring regions. Against this background, the source apportionment of PM10, PM2.5, O3, NO2 and SO2 was analyzed over Korea using WRF and CMAQ/BFM, with the BFM applied to the mobile, area and point sources in each local government. The contribution from neighboring regions showed a different pattern for each pollutant. For primary pollutants such as NO2 and SO2, the local source contribution is dominant; for secondary pollutants, especially O3, the contribution from neighboring regions is higher than that from the source region itself. The local source contribution to PM10 was 20-25%, and the contribution rate to O3 varies considerably with the meteorological conditions from year to year. From this study, we estimated the conversion rate between sources (NOx, VOC, SO2, NH3, PMC, PM2.5, CO) and concentrations (PM10, PM2.5, O3, NO2, SO2) by regional group over Korea. The results can contribute to the decision-making process for important national planning related to large-scale industrial developments and energy supply policies (e.g., operation of coal-fired power plants and diesel cars) and emission control plans, where many controversies and concerns are currently concentrated among local governments in Korea. With this kind of approach, various environmental and social problems related to air quality can also be identified early, so that a sustainable and environmentally sound plan can be established by providing data infrastructures to be utilized by central government agencies, local governments, and even the private sector.

  5. Localization of MEG human brain responses to retinotopic visual stimuli with contrasting source reconstruction approaches

    PubMed Central

    Cicmil, Nela; Bridge, Holly; Parker, Andrew J.; Woolrich, Mark W.; Krug, Kristine

    2014-01-01

    Magnetoencephalography (MEG) allows the physiological recording of human brain activity at high temporal resolution. However, spatial localization of the source of the MEG signal is an ill-posed problem as the signal alone cannot constrain a unique solution and additional prior assumptions must be enforced. An adequate source reconstruction method for investigating the human visual system should place the sources of early visual activity in known locations in the occipital cortex. We localized sources of retinotopic MEG signals from the human brain with contrasting reconstruction approaches (minimum norm, multiple sparse priors, and beamformer) and compared these to the visual retinotopic map obtained with fMRI in the same individuals. When reconstructing brain responses to visual stimuli that differed by angular position, we found reliable localization to the appropriate retinotopic visual field quadrant by a minimum norm approach and by beamforming. Retinotopic map eccentricity in accordance with the fMRI map could not consistently be localized using an annular stimulus with any reconstruction method, but confining eccentricity stimuli to one visual field quadrant resulted in significant improvement with the minimum norm. These results inform the application of source analysis approaches for future MEG studies of the visual system, and indicate some current limits on localization accuracy of MEG signals. PMID:24904268

  6. Public Records as a Source of Employment Information.

    ERIC Educational Resources Information Center

    Cohen, Harvey S.

    1978-01-01

    Although public records are an ethical source of employee screening information, restrictions placed on employers by the many state, local, and federal regulations and laws require new ways of getting this information. Particularly important is verifying dates and criminal records. Some problems and solutions are given. (MF)

  7. Estimation of source location and ground impedance using a hybrid multiple signal classification and Levenberg-Marquardt approach

    NASA Astrophysics Data System (ADS)

    Tam, Kai-Chung; Lau, Siu-Kit; Tang, Shiu-Keung

    2016-07-01

    A microphone array signal processing method for locating a stationary point source over a locally reactive ground and for estimating the ground impedance is examined in detail in the present study. A non-linear least squares approach using the Levenberg-Marquardt method is proposed to overcome the problem of unknown ground impedance. The multiple signal classification method (MUSIC) is used to give the initial estimate of the source location, while the technique of forward-backward spatial smoothing is adopted as a pre-processor for the source localization to minimize the effects of source coherence. The accuracy and robustness of the proposed signal processing method are examined. Results show that source localization in the horizontal direction by MUSIC is satisfactory. However, source coherence drastically reduces the accuracy of the source height estimate. The further application of the Levenberg-Marquardt method, with the results from MUSIC as initial inputs, significantly improves the accuracy of the source height estimation. The proposed method provides effective and robust estimation of the ground surface impedance.
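The two-stage idea, a coarse subspace estimate followed by damped nonlinear least squares, can be sketched for plain range-based localization (the acoustic ground-impedance model itself is beyond a short example). The Levenberg-Marquardt step below uses a numerical Jacobian, a fixed damping factor for simplicity, and a hand-picked "coarse" starting point standing in for the MUSIC estimate.

```python
import math

def residuals(p, sensors, meas):
    """Range residuals for candidate position p (stand-in for the full model)."""
    return [math.dist(s, p) - d for s, d in zip(sensors, meas)]

def levenberg_marquardt(p, sensors, meas, iters=50, lam=1e-3, h=1e-6):
    """Damped Gauss-Newton iterations with a forward-difference Jacobian."""
    p = list(p)
    for _ in range(iters):
        r = residuals(p, sensors, meas)
        # Numerical Jacobian, one column per parameter (J[k] = dr/dp_k).
        J = []
        for k in range(len(p)):
            q = list(p)
            q[k] += h
            rq = residuals(q, sensors, meas)
            J.append([(a - b) / h for a, b in zip(rq, r)])
        # Normal equations with damping: (J^T J + lam*I) delta = -J^T r.
        a00 = sum(x * x for x in J[0]) + lam
        a11 = sum(x * x for x in J[1]) + lam
        a01 = sum(x * y for x, y in zip(J[0], J[1]))
        b0 = -sum(x * ri for x, ri in zip(J[0], r))
        b1 = -sum(y * ri for y, ri in zip(J[1], r))
        det = a00 * a11 - a01 * a01
        p[0] += (b0 * a11 - a01 * b1) / det
        p[1] += (a00 * b1 - a01 * b0) / det
    return p

sensors = [(0.0, 0.0), (8.0, 0.0), (0.0, 8.0), (8.0, 8.0)]
src = (3.0, 5.0)
meas = [math.dist(s, src) for s in sensors]  # noise-free ranges

coarse = (4.0, 4.0)          # stand-in for the MUSIC initial estimate
fine = levenberg_marquardt(coarse, sensors, meas)
```

The point of the hybrid is visible in the structure: the subspace stage only has to land inside the basin of attraction, after which the damped least-squares stage supplies the precision.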

  8. Augmented Lagrange Programming Neural Network for Localization Using Time-Difference-of-Arrival Measurements.

    PubMed

    Han, Zifa; Leung, Chi Sing; So, Hing Cheung; Constantinides, Anthony George

    2017-08-15

    A commonly used measurement model for locating a mobile source is time-difference-of-arrival (TDOA). As each TDOA measurement defines a hyperbola, it is not straightforward to compute the mobile source position due to the nonlinear relationship in the measurements. This brief exploits the Lagrange programming neural network (LPNN), which provides a general framework to solve nonlinear constrained optimization problems, for the TDOA-based localization. The local stability of the proposed LPNN solution is also analyzed. Simulation results are included to evaluate the localization accuracy of the LPNN scheme by comparing with the state-of-the-art methods and the optimality benchmark of Cramér-Rao lower bound.
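The hyperbolic TDOA model also admits a classical linearized least-squares baseline, often used to initialize or benchmark nonlinear solvers such as the LPNN. The sketch below (with invented anchor positions and noiseless measurements, for clarity) squares the range-difference equations to obtain a system linear in the source position and the reference range:

```python
import numpy as np

# Anchors m_i with reference m_0; d[i] = ||x - m_i|| - ||x - m_0||.
anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0],
                    [10.0, 10.0], [5.0, -3.0]])
true_x = np.array([3.0, 4.0])
ranges = np.linalg.norm(anchors - true_x, axis=1)
d = ranges[1:] - ranges[0]               # TDOA range differences

# Squaring ||x - m_i|| = ||x - m_0|| + d_i and subtracting the reference
# equation gives a system linear in the unknowns [x, r0]:
#   2 (m_i - m_0) . x + 2 d_i r0 = ||m_i||^2 - ||m_0||^2 - d_i^2
m0, mi = anchors[0], anchors[1:]
A = np.hstack([2.0 * (mi - m0), 2.0 * d[:, None]])
b = np.sum(mi**2, axis=1) - np.sum(m0**2) - d**2
sol, *_ = np.linalg.lstsq(A, b, rcond=None)
x_hat, r0_hat = sol[:2], sol[2]

print(x_hat)  # ~ [3., 4.]
```

With noisy measurements this closed-form estimate degrades, which is precisely why iterative schemes such as the LPNN of this brief are of interest.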

  9. Ambiguity resolving based on cosine property of phase differences for 3D source localization with uniform circular array

    NASA Astrophysics Data System (ADS)

    Chen, Xin; Wang, Shuhong; Liu, Zhen; Wei, Xizhang

    2017-07-01

Localization of a source whose half-wavelength is smaller than the array aperture suffers from a serious phase ambiguity problem, which also affects recently proposed phase-based algorithms. In this paper, by using the centro-symmetry of a fixed uniform circular array (UCA) with an even number of sensors, the source's angles and range can be decoupled, and a novel ambiguity-resolving approach is presented for phase-based algorithms of 3-D source localization (azimuth angle, elevation angle, and range). In the proposed method, by using the cosine property of unambiguous phase differences, ambiguity searching and actual-value matching are first employed to obtain the actual phase differences and the corresponding source angles. Then, the unambiguous angles are used to estimate the source's range with a one-dimensional multiple signal classification (1-D MUSIC) estimator. Finally, simulation experiments investigate the influence of search step size and SNR on the performance of ambiguity resolution and demonstrate the satisfactory estimation performance of the proposed method.
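For readers unfamiliar with the MUSIC estimator invoked above, here is a generic narrowband MUSIC direction finder on a uniform linear array. The UCA phase-difference machinery of the paper is not reproduced; array size, spacing, and noise level are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(1)
M, d = 8, 0.5                       # sensors, spacing in wavelengths
theta_true = 25.0                   # degrees
n_snap = 200

def steering(theta_deg):
    k = 2 * np.pi * d * np.sin(np.deg2rad(theta_deg))
    return np.exp(1j * k * np.arange(M))

a = steering(theta_true)
s = rng.standard_normal(n_snap) + 1j * rng.standard_normal(n_snap)
X = np.outer(a, s) + 0.1 * (rng.standard_normal((M, n_snap))
                            + 1j * rng.standard_normal((M, n_snap)))
R = X @ X.conj().T / n_snap         # sample spatial covariance
eigval, eigvec = np.linalg.eigh(R)  # eigenvalues in ascending order
En = eigvec[:, :-1]                 # noise subspace (one source assumed)

angles = np.linspace(-90, 90, 1801)
spectrum = [1.0 / np.linalg.norm(En.conj().T @ steering(t)) ** 2
            for t in angles]
theta_hat = angles[int(np.argmax(spectrum))]
print(theta_hat)  # ~ 25 degrees
```

The pseudospectrum peaks where the steering vector is orthogonal to the noise subspace, which is the defining idea of MUSIC regardless of array geometry.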

  10. Discussion of Source Reconstruction Models Using 3D MCG Data

    NASA Astrophysics Data System (ADS)

    Melis, Massimo De; Uchikawa, Yoshinori

In this study we performed source reconstruction of magnetocardiographic (MCG) signals generated by human heart activity to localize the site of origin of the heart activation. The localizations were performed in a four-compartment model of the human volume conductor. The analyses were conducted on normal subjects and on a subject affected by Wolff-Parkinson-White syndrome. Different models of the source activation were used to evaluate whether a general model of the current source can be applied in the study of the cardiac inverse problem. The data analyses were repeated using normal and vector-component MCG data. The results show that a distributed source model achieves the best accuracy in the source reconstructions, and that 3D MCG data make it possible to detect smaller differences between the source models.

  11. Summary of hydrologic conditions of the Louisville area of Kentucky

    USGS Publications Warehouse

    Bell, Edwin Allen

    1966-01-01

Water problems and their solutions have been associated with the growth and development of the Louisville area for more than a century. Many hydrologic data that aided water users in the past can be applied to present water problems and will be helpful for solving many similar problems in the future. Most of the water problems of Louisville, a water-rich area, concern management and are associated with the distribution of supplies, the quality of water, drainage, and waste disposal. The local hydrologic system at Louisville is dominated by the Ohio River and the glacial-outwash deposits beneath its flood plain. The water-bearing limestones in the uplands are secondary sources of water. The average flow of the Ohio River at Louisville, 73 billion gallons per day, and the potential availability of 370 million gallons per day of ground water suitable for industrial cooling purposes minimize the chance of acute water shortage in the area. Under current development, use of water averages about 211 million gallons per day, excluding about 392 million gallons of Ohio River water circulated daily through steam-power plants and returned directly to the river. Optimum use and control of the water resources will be dependent on solving several water problems. The principal sources of water are in the Ohio River bottom land, whereas the new and potential centers of use are in the uplands. Either water must be piped to these new centers from the present sources or new supplies must be developed. Available data on streamflow and ground water are adequate to plan for the development of small local supplies. Since the completion of floodwalls and levees in 1953, widespread damage from flooding is a thing of the past in the Louisville area. Some local flooding of unprotected areas and of lowlands along tributary streams still takes place.
The analyses of streamflow data are useful in planning for protection of these areas, but additional streamflow records and flood-area mapping are needed to best solve the problem. Droughts are a problem only to users of small water supplies in the uplands, where additional water either can be imported or developed locally. Pollution and undesirable chemical quality of water for some uses are the most serious drawbacks to the optimum development of the water resources in Louisville and Jefferson County. Available chemical analyses of ground water are useful for determining its suitability for various uses, but additional data are needed to guide management decisions. Sources of contamination should be inventoried and water samples analyzed periodically to monitor changes in quality.

  12. Flow of Funds Modeling for Localized Financial Markets: An Application of Spatial Price and Allocation Activity Analysis Models.

    DTIC Science & Technology

    1981-01-01

on modeling the managerial aspects of the firm. The second has been the application to economic theory led by ...individual portfolio optimization problems which were embedded in a larger global optimization problem. In the global problem, portfolios were linked by market ...demand quantities or be given by linear demand relationships. As in the source markets, the model

  13. Integration and Optimization of Alternative Sources of Energy in a Remote Region

    NASA Astrophysics Data System (ADS)

    Berberi, Pellumb; Inodnorjani, Spiro; Aleti, Riza

    2010-01-01

In a remote coastal region, the supply of energy from the national grid is insufficient for sustainable development. Integration and optimization of local alternative renewable energy sources is one possible solution to the problem. In this paper we study the energy potential of local renewable energy sources (water, solar, wind, and biomass). A bottom-up energy system optimization model is proposed to support planning policies for promoting the use of renewable energy sources. Software based on multi-factor, multi-constraint analysis for optimizing energy flow is proposed, which provides detailed information on the exploitation of each energy source, power and heat generation, GHG emissions, and end-use sectors. Economic analysis shows that with existing technologies both stand-alone and regional facilities may be feasible. Improving specific legislation will foster investments from central or local governments as well as from individuals, private companies, or small families. The study was carried out in the framework of the FP6 project "Integrated Renewable Energy System."

  14. The performance of the spatiotemporal Kalman filter and LORETA in seizure onset localization.

    PubMed

    Hamid, Laith; Sarabi, Masoud; Japaridze, Natia; Wiegand, Gert; Heute, Ulrich; Stephani, Ulrich; Galka, Andreas; Siniatchkin, Michael

    2015-08-01

    The assumption of spatial-smoothness is often used to solve the bioelectric inverse problem during electroencephalographic (EEG) source imaging, e.g., in low resolution electromagnetic tomography (LORETA). Since the EEG data show a temporal structure, the combination of the temporal-smoothness and the spatial-smoothness constraints may improve the solution of the EEG inverse problem. This study investigates the performance of the spatiotemporal Kalman filter (STKF) method, which is based on spatial and temporal smoothness, in the localization of a focal seizure's onset and compares its results to those of LORETA. The main finding of the study was that the STKF with an autoregressive model of order two significantly outperformed LORETA in the accuracy and consistency of the localization, provided that the source space consists of a whole-brain volumetric grid. In the future, these promising results will be confirmed using data from more patients and performing statistical analyses on the results. Furthermore, the effects of the temporal smoothness constraint will be studied using different types of focal seizures.
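The temporal-smoothness idea behind the STKF can be illustrated in miniature with a single "source" time course: a Kalman filter whose state follows an autoregressive model of order two (here a lightly damped resonator) smooths a noisy oscillation. This toy omits the spatial coupling of the STKF entirely, and all constants are invented.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 400
truth = np.sin(2 * np.pi * 0.01 * np.arange(n))
y = truth + 0.5 * rng.standard_normal(n)   # noisy observations

# AR(2) resonator near the signal frequency, in companion form:
#   x_t = 2*rho*cos(w) x_{t-1} - rho^2 x_{t-2}
w, rho = 2 * np.pi * 0.01, 0.999
F = np.array([[2 * rho * np.cos(w), -rho**2], [1.0, 0.0]])
H = np.array([[1.0, 0.0]])
Q = np.diag([1e-4, 0.0])                   # small process noise
Rn = np.array([[0.25]])                    # measurement noise variance

x, P = np.zeros((2, 1)), np.eye(2)
est = np.empty(n)
for t in range(n):
    x, P = F @ x, F @ P @ F.T + Q                  # predict
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + Rn)  # Kalman gain
    x = x + K @ (y[t] - H @ x)                     # update
    P = (np.eye(2) - K @ H) @ P
    est[t] = x[0, 0]

print(np.mean((est - truth) ** 2) < np.mean((y - truth) ** 2))  # True
```

The AR(2) dynamics encode the expectation that the time course is locally oscillatory, which is the one-dimensional analogue of the temporal-smoothness constraint discussed above.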

  15. Method of monaural localization of the acoustic source direction from the standpoint of the active perception theory

    NASA Astrophysics Data System (ADS)

    Gai, V. E.; Polyakov, I. V.; Krasheninnikov, M. S.; Koshurina, A. A.; Dorofeev, R. A.

    2017-01-01

    Currently, the scientific and educational center of the “Transport” of NNSTU performs work on the creation of the universal rescue vehicle. This vehicle is a robot, and intended to reduce the number of human victims in accidents on offshore oil platforms. An actual problem is the development of a method for determining the location of a person overboard in low visibility conditions, when a traditional vision is not efficient. One of the most important sensory robot systems is the acoustic sensor system, because it is omnidirectional and does not require finding of an acoustic source in visibility scope. Features of the acoustic sensor robot system can complement the capabilities of the video sensor in the solution of the problem of localization of a person or some event in the environment. This paper describes the method of determination of the direction of the acoustic source using just one microphone. The proposed method is based on the active perception theory.

  16. Simultaneous source and attenuation reconstruction in SPECT using ballistic and single scattering data

    NASA Astrophysics Data System (ADS)

    Courdurier, M.; Monard, F.; Osses, A.; Romero, F.

    2015-09-01

In medical single-photon emission computed tomography (SPECT) imaging, we seek to simultaneously obtain the internal radioactive sources and the attenuation map using not only ballistic measurements but also first-order scattering measurements and assuming a very specific scattering regime. The problem is modeled using the radiative transfer equation by means of an explicit non-linear operator that gives the ballistic and scattering measurements as a function of the radioactive source and attenuation distributions. First, by differentiating this non-linear operator we obtain a linearized inverse problem. Then, under regularity hypotheses for the source distribution and attenuation map and considering small attenuations, we rigorously prove that the linear operator is invertible and we compute its inverse explicitly. This allows proof of local uniqueness for the non-linear inverse problem. Finally, using the previous inversion result for the linear operator, we propose a new type of iterative algorithm for simultaneous source and attenuation recovery for SPECT based on the Neumann series and a Newton-Raphson algorithm.
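The Neumann-series pattern mentioned at the end is generic: if an operator is a small perturbation of the identity, A = I - K with ||K|| < 1, then A^{-1} b can be accumulated as the sum of K^n b. A minimal matrix sketch (a random 5x5 example, unrelated to the SPECT operator itself):

```python
import numpy as np

rng = np.random.default_rng(3)
K = 0.2 * rng.standard_normal((5, 5)) / np.sqrt(5)  # spectral radius << 1
A = np.eye(5) - K
b = rng.standard_normal(5)

x = np.zeros(5)
term = b.copy()
for _ in range(60):      # x accumulates b + K b + K^2 b + ...
    x += term
    term = K @ term

print(np.allclose(x, np.linalg.solve(A, b)))  # True
```

Each iteration costs only one application of K, which is why Neumann-series schemes are attractive when the operator is expensive to invert directly.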

  17. Effectiveness of focused source generation methods with consideration of interaural time and level difference.

    PubMed

    Zheng, Jianwen; Lu, Jing; Chen, Kai

    2013-07-01

    Several methods have been proposed for the generation of the focused source, usually a virtual monopole source positioned in between the loudspeaker array and the listener. The problem of pre-echoes of the common analytical methods has been noticed, and the most concise method to cope with this problem is the angular weight method. In this paper, the interaural time and level difference, which are well related to the localization cues of human auditory systems, will be used to further investigate the effectiveness of the focused source generation methods. It is demonstrated that the combination of angular weight method and the numerical pressure matching method has comparatively better performance in a given reconstructed area.

  18. How to Gather Information on Community Needs and Funding Sources. Resources for Rural Development Series: Handbook No. 1.

    ERIC Educational Resources Information Center

    Cohen, John M.; Marshall, Terry

    One of a series designed to aid community leaders, cooperative extension agents, local government officials, and others in their efforts to gain external resources needed to support local efforts in rural development, this handbook addresses three basic problem areas: gathering information on rural development needs of a community; locating…

  19. Inverse Electrocardiographic Source Localization of Ischemia: An Optimization Framework and Finite Element Solution

    PubMed Central

    Wang, Dafang; Kirby, Robert M.; MacLeod, Rob S.; Johnson, Chris R.

    2013-01-01

    With the goal of non-invasively localizing cardiac ischemic disease using body-surface potential recordings, we attempted to reconstruct the transmembrane potential (TMP) throughout the myocardium with the bidomain heart model. The task is an inverse source problem governed by partial differential equations (PDE). Our main contribution is solving the inverse problem within a PDE-constrained optimization framework that enables various physically-based constraints in both equality and inequality forms. We formulated the optimality conditions rigorously in the continuum before deriving finite element discretization, thereby making the optimization independent of discretization choice. Such a formulation was derived for the L2-norm Tikhonov regularization and the total variation minimization. The subsequent numerical optimization was fulfilled by a primal-dual interior-point method tailored to our problem’s specific structure. Our simulations used realistic, fiber-included heart models consisting of up to 18,000 nodes, much finer than any inverse models previously reported. With synthetic ischemia data we localized ischemic regions with roughly a 10% false-negative rate or a 20% false-positive rate under conditions up to 5% input noise. With ischemia data measured from animal experiments, we reconstructed TMPs with roughly 0.9 correlation with the ground truth. While precisely estimating the TMP in general cases remains an open problem, our study shows the feasibility of reconstructing TMP during the ST interval as a means of ischemia localization. PMID:23913980

  20. Multiscale Spatial Modeling of Human Exposure from Local Sources to Global Intake.

    PubMed

    Wannaz, Cedric; Fantke, Peter; Jolliet, Olivier

    2018-01-16

    Exposure studies, used in human health risk and impact assessments of chemicals, are largely performed locally or regionally. It is usually not known how global impacts resulting from exposure to point source emissions compare to local impacts. To address this problem, we introduce Pangea, an innovative multiscale, spatial multimedia fate and exposure assessment model. We study local to global population exposure associated with emissions from 126 point sources matching locations of waste-to-energy plants across France. Results for three chemicals with distinct physicochemical properties are expressed as the evolution of the population intake fraction through inhalation and ingestion as a function of the distance from sources. For substances with atmospheric half-lives longer than a week, less than 20% of the global population intake through inhalation (median of 126 emission scenarios) can occur within a 100 km radius from the source. This suggests that, by neglecting distant low-level exposure, local assessments might only account for fractions of global cumulative intakes. We also study ∼10 000 emission locations covering France more densely to determine per chemical and exposure route which locations minimize global intakes. Maps of global intake fractions associated with each emission location show clear patterns associated with population and agriculture production densities.

  1. Quantum Theory of Three-Dimensional Superresolution Using Rotating-PSF Imagery

    NASA Astrophysics Data System (ADS)

    Prasad, S.; Yu, Z.

The inverse of the quantum Fisher information (QFI) matrix (and extensions thereof) provides the ultimate lower bound on the variance of any unbiased estimate of a parameter from statistical data, whether of intrinsically quantum mechanical or classical character. We calculate the QFI for Poisson-shot-noise-limited imagery using the rotating PSF that can localize and resolve point sources fully in all three dimensions. We also propose an experimental approach based on the use of computer-generated holograms and projective measurements to realize the QFI-limited variance for the problem of super-resolving a closely spaced pair of point sources at a highly reduced photon cost. The paper presents a preliminary analysis of the quantum-limited three-dimensional (3D) pair optical super-resolution (OSR) problem with potential applications to astronomical imaging and 3D space-debris localization.

  2. Conventional and reciprocal approaches to the inverse dipole localization problem for N(20)-P (20) somatosensory evoked potentials.

    PubMed

    Finke, Stefan; Gulrajani, Ramesh M; Gotman, Jean; Savard, Pierre

    2013-01-01

The non-invasive localization of the primary sensory hand area can be achieved by solving the inverse problem of electroencephalography (EEG) for N(20)-P(20) somatosensory evoked potentials (SEPs). This study compares two different mathematical approaches for the computation of transfer matrices used to solve the EEG inverse problem. Forward transfer matrices relating dipole sources to scalp potentials are determined via conventional and reciprocal approaches using individual, realistically shaped head models. The reciprocal approach entails calculating the electric field at the dipole position when scalp electrodes are reciprocally energized with unit current; scalp potentials are obtained from the scalar product of this electric field and the dipole moment. Median nerve stimulation is performed on three healthy subjects and single-dipole inverse solutions for the N(20)-P(20) SEPs are then obtained by simplex minimization and validated against the primary sensory hand area identified on magnetic resonance images. Solutions are presented for different time points, filtering strategies, boundary-element method discretizations, and skull conductivity values. Both approaches produce similarly small position errors for the N(20)-P(20) SEP. Position error for single-dipole inverse solutions is inherently robust to inaccuracies in forward transfer matrices but dependent on the overlapping activity of other neural sources. Significantly smaller time and storage requirements are the principal advantages of the reciprocal approach. Reduced computational requirements and similar dipole position accuracy support the use of reciprocal approaches over conventional approaches for N(20)-P(20) SEP source localization.

  3. Contaminant point source localization error estimates as functions of data quantity and model quality

    NASA Astrophysics Data System (ADS)

    Hansen, Scott K.; Vesselinov, Velimir V.

    2016-10-01

    We develop empirically-grounded error envelopes for localization of a point contamination release event in the saturated zone of a previously uncharacterized heterogeneous aquifer into which a number of plume-intercepting wells have been drilled. We assume that flow direction in the aquifer is known exactly and velocity is known to within a factor of two of our best guess from well observations prior to source identification. Other aquifer and source parameters must be estimated by interpretation of well breakthrough data via the advection-dispersion equation. We employ high performance computing to generate numerous random realizations of aquifer parameters and well locations, simulate well breakthrough data, and then employ unsupervised machine optimization techniques to estimate the most likely spatial (or space-time) location of the source. Tabulating the accuracy of these estimates from the multiple realizations, we relate the size of 90% and 95% confidence envelopes to the data quantity (number of wells) and model quality (fidelity of ADE interpretation model to actual concentrations in a heterogeneous aquifer with channelized flow). We find that for purely spatial localization of the contaminant source, increased data quantities can make up for reduced model quality. For space-time localization, we find similar qualitative behavior, but significantly degraded spatial localization reliability and less improvement from extra data collection. Since the space-time source localization problem is much more challenging, we also tried a multiple-initial-guess optimization strategy. This greatly enhanced performance, but gains from additional data collection remained limited.
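The tabulation step described above, many random realizations reduced to confidence-envelope radii, can be sketched as follows. The flow-and-transport inversion is replaced here by a stand-in noisy estimator, so the numbers are purely illustrative of the bookkeeping, not of aquifer physics.

```python
import numpy as np

rng = np.random.default_rng(4)
n_real = 2000
true_src = np.array([50.0, 20.0])

# Surrogate for "estimate source location in realization i": the real study
# would run a transport simulation and an optimization here.
errors = np.empty(n_real)
for i in range(n_real):
    est = true_src + 5.0 * rng.standard_normal(2)
    errors[i] = np.linalg.norm(est - true_src)

# Radii of the 90% and 95% localization-error confidence envelopes.
env90, env95 = np.quantile(errors, [0.90, 0.95])
print(env90 < env95)  # True
```

Repeating this tabulation for each (number of wells, model fidelity) pair yields the error envelopes as functions of data quantity and model quality.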

  4. Understanding enabling capacities for managing the 'wicked problem' of nonpoint source water pollution in catchments: a conceptual framework.

    PubMed

    Patterson, James J; Smith, Carl; Bellamy, Jennifer

    2013-10-15

    Nonpoint source (NPS) water pollution in catchments is a 'wicked' problem that threatens water quality, water security, ecosystem health and biodiversity, and thus the provision of ecosystem services that support human livelihoods and wellbeing from local to global scales. However, it is a difficult problem to manage because water catchments are linked human and natural systems that are complex, dynamic, multi-actor, and multi-scalar in nature. This in turn raises questions about understanding and influencing change across multiple levels of planning, decision-making and action. A key challenge in practice is enabling implementation of local management action, which can be influenced by a range of factors across multiple levels. This paper reviews and synthesises important 'enabling' capacities that can influence implementation of local management action, and develops a conceptual framework for understanding and analysing these in practice. Important enabling capacities identified include: history and contingency; institutional arrangements; collaboration; engagement; vision and strategy; knowledge building and brokerage; resourcing; entrepreneurship and leadership; and reflection and adaptation. Furthermore, local action is embedded within multi-scalar contexts and therefore, is highly contextual. The findings highlight the need for: (1) a systemic and integrative perspective for understanding and influencing change for managing the wicked problem of NPS water pollution; and (2) 'enabling' social and institutional arenas that support emergent and adaptive management structures, processes and innovations for addressing NPS water pollution in practice. These findings also have wider relevance to other 'wicked' natural resource management issues facing similar implementation challenges. Copyright © 2013 Elsevier Ltd. All rights reserved.

  5. A Markov model for blind image separation by a mean-field EM algorithm.

    PubMed

    Tonazzini, Anna; Bedini, Luigi; Salerno, Emanuele

    2006-02-01

This paper deals with blind separation of images from noisy linear mixtures with unknown coefficients, formulated as a Bayesian estimation problem. This is a flexible framework, where any kind of prior knowledge about the source images and the mixing matrix can be accounted for. In particular, we describe local correlation within the individual images through the use of Markov random field (MRF) image models. These are naturally suited to express the joint pdf of the sources in a factorized form, so that the statistical independence requirements of most independent component analysis approaches to blind source separation are retained. Our model also includes edge variables to preserve intensity discontinuities. MRF models have been proved to be very efficient in many visual reconstruction problems, such as blind image restoration, and allow separation and edge detection to be performed simultaneously. We propose an expectation-maximization algorithm with the mean field approximation to derive a procedure for estimating the mixing matrix, the sources, and their edge maps. We tested this procedure on both synthetic and real images, in the fully blind case (i.e., no prior information on mixing is exploited) and found that a source model accounting for local autocorrelation is able to increase robustness against noise, even space-variant noise. Furthermore, when the model closely fits the source characteristics, independence is no longer a strict requirement, and cross-correlated sources can be separated, as well.

  6. Changing perceptions of protected area benefits and problems around Kibale National Park, Uganda.

    PubMed

    MacKenzie, Catrina A; Salerno, Jonathan; Hartter, Joel; Chapman, Colin A; Reyna, Rafael; Tumusiime, David Mwesigye; Drake, Michael

    2017-09-15

    Local residents' changing perceptions of benefits and problems from living next to a protected area in western Uganda are assessed by comparing household survey data from 2006, 2009, and 2012. Findings are contextualized and supported by long-term data sources for tourism, protected area-based employment, tourism revenue sharing, resource access agreements, and problem animal abundance. We found decreasing perceived benefit and increasing perceived problems associated with the protected area over time, with both trends dominated by increased human-wildlife conflict due to recovering elephant numbers. Proportions of households claiming benefit from specific conservation strategies were increasing, but not enough to offset crop raiding. Ecosystem services mitigated perceptions of problems. As human and animal populations rise, wildlife authorities in Sub-Saharan Africa will be challenged to balance perceptions and adapt policies to ensure the continued existence of protected areas. Understanding the dynamic nature of local people's perceptions provides a tool to adapt protected area management plans, prioritize conservation resources, and engage local communities to support protected areas. Copyright © 2017 Elsevier Ltd. All rights reserved.

  7. Modeling Source Water TOC Using Hydroclimate Variables and Local Polynomial Regression.

    PubMed

    Samson, Carleigh C; Rajagopalan, Balaji; Summers, R Scott

    2016-04-19

To control disinfection byproduct (DBP) formation in drinking water, an understanding of the source water total organic carbon (TOC) concentration variability can be critical. Previously, TOC concentrations in water treatment plant source waters have been modeled using streamflow data. However, the lack of streamflow data or unimpaired flow scenarios makes it difficult to model TOC. In addition, TOC variability under climate change further exacerbates the problem. Here we propose a modeling approach based on local polynomial regression that uses climate (e.g., temperature) and land surface (e.g., soil moisture) variables as predictors of TOC concentration, obviating the need for streamflow. The local polynomial approach has the ability to capture non-Gaussian and nonlinear features that might be present in the relationships. The utility of the methodology is demonstrated using source water quality and climate data in three case study locations with surface source waters including river and reservoir sources. The models show good predictive skill in general at these locations, with lower skills at locations with the most anthropogenic influences in their streams. Source water TOC predictive models can provide water treatment utilities important information for making treatment decisions for DBP regulation compliance under future climate scenarios.
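Local polynomial regression itself is straightforward to sketch. Below is a generic local linear (degree-1) fit with a Gaussian kernel on synthetic data; the predictor, bandwidth, and data are arbitrary stand-ins, not the paper's hydroclimate variables.

```python
import numpy as np

rng = np.random.default_rng(5)
x = np.sort(rng.uniform(0, 10, 200))
y = np.sin(x) + 0.2 * rng.standard_normal(200)

def local_linear(x0, h=0.5):
    """Weighted least-squares line fit centered at x0; returns f(x0)."""
    w = np.exp(-0.5 * ((x - x0) / h) ** 2)      # Gaussian kernel weights
    X = np.vstack([np.ones_like(x), x - x0]).T  # local design matrix
    W = np.diag(w)
    beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
    return beta[0]                              # intercept = fitted value

print(local_linear(5.0))  # close to np.sin(5.0), about -0.96
```

Because each prediction refits a line in a kernel-weighted neighborhood, the estimator adapts to nonlinear and non-Gaussian structure, which is the property the abstract highlights.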

  8. Computationally efficient method for localizing the spiral rotor source using synthetic intracardiac electrograms during atrial fibrillation.

    PubMed

    Shariat, M H; Gazor, S; Redfearn, D

    2015-08-01

    Atrial fibrillation (AF), the most common sustained cardiac arrhythmia, is an extremely costly public health problem. Catheter-based ablation is a common minimally invasive procedure to treat AF. Contemporary mapping methods are highly dependent on the accuracy of anatomic localization of rotor sources within the atria. In this paper, using simulated atrial intracardiac electrograms (IEGMs) during AF, we propose a computationally efficient method for localizing the tip of the electrical rotor with an Archimedean/arithmetic spiral wavefront. The proposed method deploys the locations of electrodes of a catheter and their IEGMs activation times to estimate the unknown parameters of the spiral wavefront including its tip location. The proposed method is able to localize the spiral as soon as the wave hits three electrodes of the catheter. Our simulation results show that the method can efficiently localize the spiral wavefront that rotates either clockwise or counterclockwise.

  9. Hearing in three dimensions: Sound localization

    NASA Technical Reports Server (NTRS)

    Wightman, Frederic L.; Kistler, Doris J.

    1990-01-01

The ability to localize a source of sound in space is a fundamental component of the three-dimensional character of audio. For over a century scientists have been trying to understand the physical and psychological processes and physiological mechanisms that subserve sound localization. This research has shown that important information about sound source position is provided by interaural differences in time of arrival, interaural differences in intensity, and direction-dependent filtering provided by the pinnae. Progress has been slow, primarily because experiments on localization are technically demanding. Control of stimulus parameters and quantification of the subjective experience are quite difficult problems. Recent advances, such as the ability to simulate a three-dimensional sound field over headphones, seem to offer potential for rapid progress. Research using the new techniques has already produced new information. It now seems that interaural time differences are a much more salient and dominant localization cue than previously believed.
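The interaural-time-difference cue discussed above is commonly estimated by cross-correlating the two ear (microphone) signals and locating the correlation peak. A minimal sketch with an artificial delay (signal, sample rate, and delay are invented for the example):

```python
import numpy as np

rng = np.random.default_rng(6)
fs = 44100                          # sample rate, Hz
sig = rng.standard_normal(4096)     # broadband "source" signal
delay = 13                          # samples; about 0.29 ms at 44.1 kHz
left = sig
right = np.roll(sig, delay)         # delayed copy reaching the far ear

# Full cross-correlation; index (len-1) corresponds to zero lag.
corr = np.correlate(right, left, mode="full")
lag = int(np.argmax(corr)) - (len(left) - 1)
print(lag)  # 13, the imposed interaural delay in samples
```

Converting the lag to seconds (`lag / fs`) and combining it with a head model yields an azimuth estimate, which is how the time-of-arrival cue is exploited computationally.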

  10. Integrating local research watersheds into hydrologic education: Lessons from the Dry Creek Experimental Watershed

    NASA Astrophysics Data System (ADS)

    McNamara, J. P.; Aishlin, P. S.; Flores, A. N.; Benner, S. G.; Marshall, H. P.; Pierce, J. L.

    2014-12-01

While a proliferation of instrumented research watersheds and new data sharing technologies has transformed hydrologic research in recent decades, similar advances have not been realized in hydrologic education. Long-standing problems in hydrologic education include discontinuity of hydrologic topics from introductory to advanced courses, inconsistency of content across academic departments, and difficulties in developing laboratory and homework assignments utilizing large time series and spatial data sets. Hydrologic problems are typically not amenable to "back-of-the-chapter" examples. Local, long-term research watersheds offer solutions to these problems. Here, we describe our integration of research and monitoring programs in the Dry Creek Experimental Watershed into undergraduate and graduate hydrology programs at Boise State University. We developed a suite of watershed-based exercises for courses and curricula, using real, tangible datasets from the watershed to teach concepts not amenable to traditional textbook and lecture methods. Aggregating exercises throughout a course or degree allows concepts to be scaffolded, with progressively more advanced material introduced as students advance. The need for exercises of this type is growing as traditional lecture-based classes (passive learning from a local authoritative source) are being replaced with active learning courses that integrate many sources of information through situational factors.

  11. Bayesian multiple-source localization in an uncertain ocean environment.

    PubMed

    Dosso, Stan E; Wilmut, Michael J

    2011-06-01

    This paper considers simultaneous localization of multiple acoustic sources when properties of the ocean environment (water column and seabed) are poorly known. A Bayesian formulation is developed in which the environmental parameters, noise statistics, and locations and complex strengths (amplitudes and phases) of multiple sources are considered to be unknown random variables constrained by acoustic data and prior information. Two approaches are considered for estimating source parameters. Focalization maximizes the posterior probability density (PPD) over all parameters using adaptive hybrid optimization. Marginalization integrates the PPD using efficient Markov-chain Monte Carlo methods to produce joint marginal probability distributions for source ranges and depths, from which source locations are obtained. This approach also provides quantitative uncertainty analysis for all parameters, which can aid in understanding of the inverse problem and may be of practical interest (e.g., source-strength probability distributions). In both approaches, closed-form maximum-likelihood expressions for source strengths and noise variance at each frequency allow these parameters to be sampled implicitly, substantially reducing the dimensionality and difficulty of the inversion. Examples are presented of both approaches applied to single- and multi-frequency localization of multiple sources in an uncertain shallow-water environment, and a Monte Carlo performance evaluation study is carried out. © 2011 Acoustical Society of America
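    The marginalization step can be illustrated with a minimal random-walk Metropolis sampler, a simpler relative of the hybrid MCMC used in the paper. The two-parameter Gaussian posterior below is only a stand-in for the acoustic PPD, and all numbers (range, depth, scales) are invented for illustration:

```python
import numpy as np

def metropolis(log_post, x0, steps=20000, prop_std=1.0, seed=0):
    """Random-walk Metropolis sampler; the chain's histogram approximates
    the marginal posterior of each parameter."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    lp = log_post(x)
    out = np.empty((steps, x.size))
    for i in range(steps):
        cand = x + rng.normal(scale=prop_std, size=x.size)
        lp_c = log_post(cand)
        if np.log(rng.random()) < lp_c - lp:   # Metropolis accept/reject
            x, lp = cand, lp_c
        out[i] = x
    return out

# Toy posterior over (range in km, depth in m), standing in for the true PPD
true = np.array([5.0, 50.0])
scales = np.array([0.5, 5.0])
log_post = lambda x: -0.5 * np.sum(((x - true) / scales) ** 2)

chain = metropolis(log_post, x0=[4.0, 40.0], prop_std=scales)
post_mean = chain[5000:].mean(axis=0)   # discard burn-in, then average
```

Histogramming `chain[5000:, 0]` and `chain[5000:, 1]` gives the joint and marginal range/depth distributions from which locations and uncertainties are read off.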

  12. Evaluation of fluoride levels in bottled water and their contribution to health and teeth problems in the United Arab Emirates.

    PubMed

    Abouleish, Mohamed Yehia Z

    2016-10-01

    Fluoride is needed for good health, yet if ingested at high levels it may lead to health problems. Fluoride can be obtained from different sources, with drinking water being a major contributor. In the United Arab Emirates (UAE), bottled water is the major source of drinking water. The aim of this research is to measure fluoride levels in different bottled water brands sold in the UAE, to determine whether fluoride contributes to better health or to health problems. The results were compared to international and local standards. Fluoride was present in seven out of 23 brands. One brand exhibited high fluoride levels that exceeded all standards, suggesting it may pose health problems. Other brands were either below or above standards, suggesting a contribution to either better health or health problems, depending on the amount ingested. A risk assessment suggested a potential for non-cancer effects from some brands. The results were compared to fluoride levels in bottled water sold in the UAE and neighboring countries (e.g. Saudi Arabia, Qatar, Kuwait, and Bahrain) over 24 years, to reflect on changes in fluoride levels in bottled water in this region. The research highlights the need for stricter regulations requiring careful fluoride monitoring, and for new regulations requiring the fluoride level to be listed on bottled water labels, internationally and regionally. The research has local and global health relevance, as bottled water sold in the UAE and neighboring countries is produced locally and imported from countries such as Switzerland, the USA, France, Italy, New Zealand, and Fiji.

  13. Enabling and enacting 'practical action' in catchments: responding to the 'wicked problem' of nonpoint source pollution in coastal subtropical Australia.

    PubMed

    Patterson, James J; Smith, Carl; Bellamy, Jennifer

    2015-02-01

    Enabling and enacting 'practical action' (i.e., purposeful and concerted collective action) in catchments is a key challenge in responding to a wide range of pressing catchment and natural resource management (NRM) issues. It is particularly a challenge in responding to 'wicked problems,' where generating action is not straightforward and cannot be brought about solely by any single actor, policy or intervention. This paper responds to the critical need to better understand how practical action can be generated in catchments, by conducting an in-depth empirical case study of efforts to manage nonpoint source (NPS) pollution in South East Queensland (SEQ), Australia. SEQ has seen substantial concerted efforts to manage waterway and catchment issues over two decades, yet NPS pollution remains a major problem for waterway health. A novel framework was applied to empirically analyze practical action in three local catchment cases embedded within the broader SEQ region. The analysis focuses on 'enabling capacities' underpinning practical action in catchments. Findings reveal that capacities manifested in different ways in different cases, yet many commonalities also occurred across cases. Interplay between capacities was critical to the emergence of adaptive and contextual forms of practical action in all cases. These findings imply that in order to enable and enact practical action in catchments, it is vital to recognize and support a diversity of enabling capacities across both local and regional levels of decision making and action. This is likely to have relevance for other 'wicked' catchment and NRM problems requiring local responses within broader multiscalar regional problem situations.

  14. Information-Driven Active Audio-Visual Source Localization

    PubMed Central

    Schult, Niclas; Reineking, Thomas; Kluss, Thorsten; Zetzsche, Christoph

    2015-01-01

    We present a system for sensorimotor audio-visual source localization on a mobile robot. We utilize a particle filter for the combination of audio-visual information and for the temporal integration of consecutive measurements. Although the system only measures the current direction of the source, the position of the source can be estimated because the robot is able to move and can therefore obtain measurements from different directions. These actions by the robot successively reduce uncertainty about the source’s position. An information gain mechanism is used for selecting the most informative actions in order to minimize the number of actions required to achieve accurate and precise position estimates in azimuth and distance. We show that this mechanism is an efficient solution to the action selection problem for source localization, and that it is able to produce precise position estimates despite simplified unisensory preprocessing. Because of the robot’s mobility, this approach is suitable for use in complex and cluttered environments. We present qualitative and quantitative results of the system’s performance and discuss possible areas of application. PMID:26327619
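    The core particle-filter idea, bearing-only measurements becoming a full position estimate through observer motion, can be sketched as follows. This is a toy 2D setup with invented poses and noise values, not the paper's audio-visual implementation:

```python
import numpy as np

rng = np.random.default_rng(1)
source = np.array([3.0, 4.0])                 # ground-truth source position
particles = rng.uniform(-10, 10, (5000, 2))   # uniform prior over the arena
weights = np.full(len(particles), 1.0 / len(particles))
sigma = 0.05                                  # assumed bearing noise (rad)

# The robot takes bearing-only measurements from several poses; it is the
# spatial diversity of the poses that makes range observable.
for pose in [np.array([0.0, 0.0]), np.array([5.0, 0.0]), np.array([0.0, 5.0])]:
    z = np.arctan2(*(source - pose)[::-1]) + rng.normal(0, sigma)
    pred = np.arctan2(particles[:, 1] - pose[1], particles[:, 0] - pose[0])
    err = np.angle(np.exp(1j * (z - pred)))   # wrapped angular error
    weights *= np.exp(-0.5 * (err / sigma) ** 2)
    weights /= weights.sum()
    # systematic resampling plus a little jitter keeps the set focused
    idx = np.minimum(np.searchsorted(np.cumsum(weights), rng.random(len(particles))),
                     len(particles) - 1)
    particles = particles[idx] + rng.normal(0, 0.05, particles.shape)
    weights[:] = 1.0 / len(particles)

estimate = particles.mean(axis=0)   # converges near the true source
```

After the first measurement the particles collapse onto a ray of candidate positions; each additional pose cuts that ray down until both azimuth and distance are pinned, which is exactly the uncertainty reduction the information-gain mechanism tries to maximize per action.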

  15. A Two-moment Radiation Hydrodynamics Module in ATHENA Using a Godunov Method

    NASA Astrophysics Data System (ADS)

    Skinner, M. A.; Ostriker, E. C.

    2013-04-01

    We describe a module for the Athena code that solves the grey equations of radiation hydrodynamics (RHD) using a local variable Eddington tensor (VET) based on the M1 closure of the two-moment hierarchy of the transfer equation. The variables are updated via a combination of explicit Godunov methods to advance the gas and radiation variables including the non-stiff source terms, and a local implicit method to integrate the stiff source terms. We employ the reduced speed of light approximation (RSLA) with subcycling of the radiation variables in order to reduce computational costs. The streaming and diffusion limits are well-described by the M1 closure model, and our implementation shows excellent behavior for problems containing both regimes simultaneously. Our operator-split method is ideally suited for problems with a slowly-varying radiation field and dynamical gas flows, in which the effect of the RSLA is minimal.

  16. Contaminant point source localization error estimates as functions of data quantity and model quality

    DOE PAGES

    Hansen, Scott K.; Vesselinov, Velimir Valentinov

    2016-10-01

    We develop empirically-grounded error envelopes for localization of a point contamination release event in the saturated zone of a previously uncharacterized heterogeneous aquifer into which a number of plume-intercepting wells have been drilled. We assume that flow direction in the aquifer is known exactly and that velocity is known to within a factor of two of our best guess from well observations prior to source identification. Other aquifer and source parameters must be estimated by interpretation of well breakthrough data via the advection-dispersion equation (ADE). We employ high performance computing to generate numerous random realizations of aquifer parameters and well locations, simulate well breakthrough data, and then employ unsupervised machine optimization techniques to estimate the most likely spatial (or space-time) location of the source. Tabulating the accuracy of these estimates from the multiple realizations, we relate the size of 90% and 95% confidence envelopes to the data quantity (number of wells) and model quality (fidelity of the ADE interpretation model to actual concentrations in a heterogeneous aquifer with channelized flow). We find that for purely spatial localization of the contaminant source, increased data quantities can make up for reduced model quality. For space-time localization, we find similar qualitative behavior, but significantly degraded spatial localization reliability and less improvement from extra data collection. Since the space-time source localization problem is much more challenging, we also tried a multiple-initial-guess optimization strategy. This greatly enhanced performance, but gains from additional data collection remained limited.
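    The empirical-envelope idea can be sketched in a few lines: run many synthetic realizations, tabulate localization errors, and report percentiles. The error model below is a deliberately crude stand-in for the full ADE inversion, with invented numbers; only the tabulation logic is the point:

```python
import numpy as np

rng = np.random.default_rng(0)
true_loc = np.array([50.0, 20.0])

def localize(n_wells, model_bias=0.0):
    """Stand-in for the full inversion: scatter shrinks with more wells,
    a systematic offset mimics reduced model quality (illustrative only)."""
    noise = rng.normal(0.0, 10.0 / np.sqrt(n_wells), 2)
    return true_loc + noise + model_bias

def error_envelope(n_wells, model_bias=0.0, n_real=2000, q=(90, 95)):
    """Tabulate localization errors over many realizations and report
    the 90% / 95% empirical confidence envelope radii."""
    errs = [np.linalg.norm(localize(n_wells, model_bias) - true_loc)
            for _ in range(n_real)]
    return np.percentile(errs, q)

env_few = error_envelope(4)                       # few wells: wide envelope
env_many = error_envelope(16)                     # more wells: tighter
env_degraded = error_envelope(16, model_bias=5.0) # poorer model: wider again
```

The trade-off the paper quantifies appears directly: `env_many` is tighter than `env_few`, while the biased-model envelope widens despite the extra wells.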

  17. Source localization in electromyography using the inverse potential problem

    NASA Astrophysics Data System (ADS)

    van den Doel, Kees; Ascher, Uri M.; Pai, Dinesh K.

    2011-02-01

    We describe an efficient method for reconstructing the activity in human muscles from an array of voltage sensors on the skin surface. MRI is used to obtain morphometric data which are segmented into muscle tissue, fat, bone and skin, from which a finite element model for volume conduction is constructed. The inverse problem of finding the current sources in the muscles is solved using a careful regularization technique which adds a priori information, yielding physically reasonable solutions from among those that satisfy the basic potential problem. Several regularization functionals are considered and numerical experiments on a 2D test model are performed to determine which performs best. The resulting scheme leads to numerical difficulties when applied to large-scale 3D problems. We clarify the nature of these difficulties and provide a method to overcome them, which is shown to perform well in the large-scale problem setting.
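    A minimal example of the kind of regularization the paper relies on is Tikhonov-regularized inversion of an ill-posed linear forward model. Here a 1D Gaussian blur stands in for volume conduction, and all sizes and values are illustrative, not from the paper's finite element model:

```python
import numpy as np

def tikhonov(A, b, lam):
    """Solve argmin ||A x - b||^2 + lam * ||x||^2 via the normal equations."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

# Ill-posed toy forward model: a row-normalized Gaussian blur matrix
rng = np.random.default_rng(0)
n = 40
t = np.linspace(0.0, 1.0, n)
A = np.exp(-((t[:, None] - t[None, :]) ** 2) / (2 * 0.05 ** 2))
A /= A.sum(axis=1, keepdims=True)

x_true = np.exp(-0.5 * ((t - 0.4) / 0.08) ** 2)   # smooth "source activity"
b = A @ x_true + 1e-3 * rng.normal(size=n)         # noisy "sensor" data

x_naive = np.linalg.solve(A, b)     # unregularized: noise is amplified hugely
x_reg = tikhonov(A, b, lam=1e-4)    # a priori smallness keeps it physical
```

Even tiny measurement noise destroys the naive solution, while the regularized one stays close to the true profile; the paper's regularization functionals play the role of the `lam * ||x||^2` term here, encoding more refined a priori information.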

  18. Neuromorphic audio-visual sensor fusion on a sound-localizing robot.

    PubMed

    Chan, Vincent Yue-Sek; Jin, Craig T; van Schaik, André

    2012-01-01

    This paper presents the first robotic system featuring audio-visual (AV) sensor fusion with neuromorphic sensors. We combine a pair of silicon cochleae and a silicon retina on a robotic platform to allow the robot to learn sound localization through self-motion and visual feedback, using an adaptive ITD-based sound localization algorithm. After training, the robot can localize sound sources (white or pink noise) in a reverberant environment with an RMS error of 4-5° in azimuth. We also investigate the AV source binding problem, and an experiment is conducted to test the effectiveness of matching an audio event with a corresponding visual event based on their onset time. Despite the simplicity of this method and a large number of false visual events in the background, a correct match can be made 75% of the time during the experiment.
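    The ITD cue that the adaptive algorithm is built on can be estimated by cross-correlating the two microphone signals and taking the lag of the correlation peak. A sketch with synthetic broadband noise and an assumed sample rate (not the silicon-cochlea front end of the paper):

```python
import numpy as np

fs = 44100                                # assumed sample rate (Hz)
rng = np.random.default_rng(0)
sig = rng.normal(size=2048)               # broadband noise source
delay = 7                                 # true ITD in samples (right lags left)
left = sig
right = np.concatenate([np.zeros(delay), sig[:-delay]])

# ITD estimate: the lag that maximizes the cross-correlation
corr = np.correlate(right, left, mode="full")
lags = np.arange(-len(left) + 1, len(left))
itd_samples = lags[np.argmax(corr)]       # positive: right lags left
itd_seconds = itd_samples / fs
```

Mapping `itd_seconds` to azimuth is the part the robot learns adaptively from self-motion and visual feedback, rather than assuming a fixed head model.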

  19. Performance evaluation of the Champagne source reconstruction algorithm on simulated and real M/EEG data.

    PubMed

    Owen, Julia P; Wipf, David P; Attias, Hagai T; Sekihara, Kensuke; Nagarajan, Srikantan S

    2012-03-01

    In this paper, we present an extensive performance evaluation of a novel source localization algorithm, Champagne. It is derived in an empirical Bayesian framework that yields sparse solutions to the inverse problem. It is robust to correlated sources and learns the statistics of non-stimulus-evoked activity to suppress the effect of noise and interfering brain activity. We tested Champagne on both simulated and real M/EEG data. The source locations used for the simulated data were chosen to test the performance on challenging source configurations. In simulations, we found that Champagne outperforms the benchmark algorithms in terms of both the accuracy of the source localizations and the correct estimation of source time courses. We also demonstrate that Champagne is more robust to correlated brain activity present in real MEG data and is able to resolve many distinct and functionally relevant brain areas with real MEG and EEG data. Copyright © 2011 Elsevier Inc. All rights reserved.

  20. Low resolution brain electromagnetic tomography in a realistic geometry head model: a simulation study

    NASA Astrophysics Data System (ADS)

    Ding, Lei; Lai, Yuan; He, Bin

    2005-01-01

    It is important to localize neural sources from scalp-recorded EEG. Low resolution brain electromagnetic tomography (LORETA) has received considerable attention for localizing brain electrical sources. However, most such efforts have used spherical head models in representing the head volume conductor. Investigation of the performance of LORETA in a realistic geometry head model, as compared with the spherical model, will provide useful information guiding interpretation of data obtained by using the spherical head model. The performance of LORETA was evaluated by means of computer simulations. The boundary element method was used to solve the forward problem. A three-shell realistic geometry (RG) head model was constructed from MRI scans of a human subject. Source configurations of a single dipole located in different regions of the brain at varying depths were used to assess the performance of LORETA in different regions of the brain. A three-sphere head model was also used to approximate the RG head model, similar simulations were performed, and the results were compared with those of the RG-LORETA with reference to the locations of the simulated sources. Multi-source localizations were discussed and examples given in the RG head model. Localization errors employing the spherical LORETA, with reference to the source locations within the realistic geometry head, were about 20-30 mm for the four brain regions evaluated: frontal, parietal, temporal and occipital. Localization errors employing the RG head model were about 10 mm over the same four brain regions. The present simulation results suggest that the use of the RG head model reduces the localization error of LORETA, and that the RG head model based LORETA is desirable if high localization accuracy is needed.

  1. Unsupervised Segmentation of Head Tissues from Multi-modal MR Images for EEG Source Localization.

    PubMed

    Mahmood, Qaiser; Chodorowski, Artur; Mehnert, Andrew; Gellermann, Johanna; Persson, Mikael

    2015-08-01

    In this paper, we present and evaluate an automatic unsupervised segmentation method, hierarchical segmentation approach-Bayesian-based adaptive mean shift (HSA-BAMS), for use in the construction of a patient-specific head conductivity model for electroencephalography (EEG) source localization. It is based on a HSA and BAMS for segmenting the tissues from multi-modal magnetic resonance (MR) head images. The evaluation of the proposed method was done both directly, in terms of segmentation accuracy, and indirectly, in terms of source localization accuracy. The direct evaluation was performed relative to a commonly used reference method, BET-FAST (brain extraction tool followed by FMRIB's automated segmentation tool), and four variants of the HSA, using both synthetic data and real data from ten subjects. The synthetic data include multiple realizations of four different noise levels and several realizations of typical noise with a 20% bias field level. The Dice index and Hausdorff distance were used to measure the segmentation accuracy. The indirect evaluation was performed relative to the reference method BET-FAST using synthetic two-dimensional (2D) multimodal MR data with 3% noise and synthetic EEG (generated for a prescribed source). The source localization accuracy was determined in terms of localization error and relative error of potential. The experimental results demonstrate the efficacy of HSA-BAMS, its robustness to noise and the bias field, and that it provides better segmentation accuracy than the reference method and the variants of the HSA. They also show that it leads to higher localization accuracy than the commonly used reference method, and suggest that it has potential as a surrogate for expert manual segmentation for the EEG source localization problem.
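    The two segmentation-accuracy measures named above have compact definitions for binary masks; a sketch on toy masks (illustrative only, not the paper's data):

```python
import numpy as np

def dice(a, b):
    """Dice similarity index: 2|A∩B| / (|A| + |B|), 1.0 for identical masks."""
    a, b = a.astype(bool), b.astype(bool)
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

def hausdorff(a, b):
    """Symmetric Hausdorff distance between the foreground pixel sets."""
    pa, pb = np.argwhere(a), np.argwhere(b)
    d = np.linalg.norm(pa[:, None, :] - pb[None, :, :], axis=-1)
    return max(d.min(axis=1).max(), d.min(axis=0).max())

# Two overlapping square "tissue" masks, offset by one pixel diagonally
seg = np.zeros((10, 10), bool); seg[2:8, 2:8] = True   # 36 px
ref = np.zeros((10, 10), bool); ref[3:9, 3:9] = True   # 36 px
print(dice(seg, ref))       # 2*25 / (36+36) ≈ 0.694
print(hausdorff(seg, ref))  # sqrt(2): the worst-case corner mismatch
```

Dice rewards bulk overlap while the Hausdorff distance penalizes the single worst boundary error, which is why segmentation papers typically report both.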

  2. A physiologically motivated sparse, compact, and smooth (SCS) approach to EEG source localization.

    PubMed

    Cao, Cheng; Akalin Acar, Zeynep; Kreutz-Delgado, Kenneth; Makeig, Scott

    2012-01-01

    Here, we introduce a novel approach to the EEG inverse problem based on the assumption that the principal cortical sources of multi-channel EEG recordings may be assumed to be spatially sparse, compact, and smooth (SCS). To enforce these characteristics of solutions to the EEG inverse problem, we propose a correlation-variance model which factors a cortical source space covariance matrix into the product of a pre-given correlation coefficient matrix and the square root of the diagonal variance matrix learned from the data under a Bayesian learning framework. We tested the SCS method using simulated EEG data with various SNRs and applied it to a real ECoG data set. We compare the results of SCS to those of an established sparse Bayesian learning (SBL) algorithm.

  3. Time-dependent wave splitting and source separation

    NASA Astrophysics Data System (ADS)

    Grote, Marcus J.; Kray, Marie; Nataf, Frédéric; Assous, Franck

    2017-02-01

    Starting from classical absorbing boundary conditions, we propose a method for the separation of time-dependent scattered wave fields due to multiple sources or obstacles. In contrast to previous techniques, our method is local in space and time, deterministic, and avoids a priori assumptions on the frequency spectrum of the signal. Numerical examples in two space dimensions illustrate the usefulness of wave splitting for time-dependent scattering problems.

  4. The Green's functions for peridynamic non-local diffusion.

    PubMed

    Wang, L J; Xu, J F; Wang, J X

    2016-09-01

    In this work, we develop the Green's function method for the solution of the peridynamic non-local diffusion model in which the spatial gradient of the generalized potential in the classical theory is replaced by an integral of a generalized response function in a horizon. We first show that the general solutions of the peridynamic non-local diffusion model can be expressed as functionals of the corresponding Green's functions for point sources, along with volume constraints for non-local diffusion. Then, we obtain the Green's functions by the Fourier transform method for unsteady and steady diffusions in infinite domains. We also demonstrate that the peridynamic non-local solutions converge to the classical differential solutions when the non-local length approaches zero. Finally, the peridynamic analytical solutions are applied to an infinite plate heated by a Gauss source, and the predicted variations of temperature are compared with the classical local solutions. The peridynamic non-local diffusion model predicts a lower rate of variation of the field quantities than that of the classical theory, which is consistent with experimental observations. The developed method is applicable to general diffusion-type problems.

  5. Blind source separation and localization using microphone arrays

    NASA Astrophysics Data System (ADS)

    Sun, Longji

    The blind source separation and localization problem for audio signals is studied using microphone arrays. Pure delay mixtures of source signals typically encountered in outdoor environments are considered. Our proposed approach utilizes the subspace methods, including multiple signal classification (MUSIC) and estimation of signal parameters via rotational invariance techniques (ESPRIT) algorithms, to estimate the directions of arrival (DOAs) of the sources from the collected mixtures. Since audio signals are generally considered broadband, the DOA estimates at frequencies with the large sum of squared amplitude values are combined to obtain the final DOA estimates. Using the estimated DOAs, the corresponding mixing and demixing matrices are computed, and the source signals are recovered using the inverse short time Fourier transform. Subspace methods take advantage of the spatial covariance matrix of the collected mixtures to achieve robustness to noise. While the subspace methods have been studied for localizing radio frequency signals, audio signals have their special properties. For instance, they are nonstationary, naturally broadband and analog. All of these make the separation and localization for the audio signals more challenging. Moreover, our algorithm is essentially equivalent to the beamforming technique, which suppresses the signals in unwanted directions and only recovers the signals in the estimated DOAs. Several crucial issues related to our algorithm and their solutions have been discussed, including source number estimation, spatial aliasing, artifact filtering, different ways of mixture generation, and source coordinate estimation using multiple arrays. Additionally, comprehensive simulations and experiments have been conducted to examine various aspects of the algorithm. Unlike the existing blind source separation and localization methods, which are generally time consuming, our algorithm needs signal mixtures of only a short duration and therefore supports real-time implementation.
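    A minimal narrowband MUSIC example for a uniform linear array shows the subspace idea underlying such algorithms: the noise-subspace eigenvectors of the spatial covariance matrix are orthogonal to the source steering vectors. The DOAs, SNR, and array geometry below are invented; the broadband audio case in the thesis additionally combines estimates across frequencies:

```python
import numpy as np

rng = np.random.default_rng(0)
M, d = 8, 0.5                          # sensors, spacing in wavelengths
true_doas = np.deg2rad([20.0, -40.0])  # two sources (assumed values)

def steering(theta):
    """ULA steering vectors, shape (M, len(theta))."""
    return np.exp(-2j * np.pi * d * np.arange(M)[:, None] * np.sin(theta))

# Simulate snapshots: two uncorrelated complex sources plus sensor noise
T = 500
S = rng.normal(size=(2, T)) + 1j * rng.normal(size=(2, T))
N = 0.1 * (rng.normal(size=(M, T)) + 1j * rng.normal(size=(M, T)))
X = steering(true_doas) @ S + N

R = X @ X.conj().T / T                 # spatial covariance matrix
w, V = np.linalg.eigh(R)               # ascending eigenvalues
En = V[:, :-2]                         # noise subspace (M - 2 smallest)

grid = np.deg2rad(np.linspace(-90, 90, 1801))
P = 1.0 / np.linalg.norm(En.conj().T @ steering(grid), axis=0) ** 2

# The two largest local maxima of the pseudospectrum are the DOA estimates
locmax = np.where((P[1:-1] > P[:-2]) & (P[1:-1] > P[2:]))[0] + 1
top2 = locmax[np.argsort(P[locmax])[-2:]]
est = np.sort(np.rad2deg(grid[top2]))
```

With adequate snapshots the peaks land within a fraction of a degree of the true angles, illustrating the noise robustness that the covariance-based subspace approach provides.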

  6. Legal and financial methods for reducing low emission sources: Options for incentives

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Samitowski, W.

    1995-12-31

    There are two types of the so-called low emission sources in Cracow: over 1,000 local boiler houses and several thousand solid fuel-fired stoves. The accomplishment of each of the 5 sub-projects offered under the American-Polish program entails solving technical, financial, legal and public relations-related problems. The elimination of low emission sources therefore requires a joint effort of the following parties: (a) local authorities, (b) investors, (c) owners and users of low emission sources, and (d) inhabitants involved in particular projects. The results of the studies developed by POLINVEST indicate that the accomplishment of the projects for the elimination of low emission sources will require financial incentives. Bearing in mind today's resources available from the community budget, this process may last as long as a dozen or so years. The task of the authorities of Cracow City is to make a long-range operational strategy enabling reduction of low emission sources in Cracow.

  7. In Search for Sustainable Coastal Management: A Case Study of Semarang, Indonesia

    NASA Astrophysics Data System (ADS)

    Hadi, Sudharto P.

    2017-02-01

    As a coastal town, Semarang is currently facing environmental problems such as flooding, tidal flooding (locally called rob), coastal abrasion, emerging land, land subsidence and sea water intrusion. These phenomena severely affect citizens, communities and businesses, disrupting day-to-day activities, threatening people's health, imposing economic burdens and reducing property values. Government policies for dealing with these problems are focused on the phenomena themselves, such as normalizing rivers for floods and building polder systems for tidal floods. Affected people have been implementing various initiatives. People in the Tanah Mas Estate set up collective efforts to reduce tidal flooding by building a pumping system, while people in Kampong Tambaklorok conduct regular mutual assistance in cleaning waste and sedimentation, rehabilitating local drainages and dikes, reconstructing local streets and maintaining the pumping system. People in Mangunharjo, in the district of Tugu, have built a coastal belt and cultivate mangrove. The various government and local initiatives have been effective in dealing with floods and tidal floods temporarily. More comprehensive approaches focused on the sources of the problems are required to achieve sustainable coastal management.

  8. An HL7/CDA Framework for the Design and Deployment of Telemedicine Services

    DTIC Science & Technology

    2001-10-25

    schemes and prescription databases. Furthermore, interoperability with the Electronic Health Record (EHR) facilitates automatic retrieval of relevant...local EHR system or the integrated electronic health record (I-EHR) [9], which indexes all medical contacts of a patient in the regional network...suspected medical problem. Interoperability with middleware services of the HII and other data sources such as the local EHR system affects

  9. Quantum transport through disordered 1D wires: Conductance via localized and delocalized electrons

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gopar, Víctor A.

    Coherent electronic transport through disordered systems, like quantum wires, is a topic of fundamental and practical interest. In particular, the exponential localization of electron wave functions due to the presence of disorder (Anderson localization) has been widely studied. In fact, Anderson localization is not a phenomenon exclusive to electrons: it has been observed in microwave and acoustic experiments, photonic materials, cold atoms, etc. Nowadays, many properties of electronic transport in quantum wires have been successfully described within a scaling approach to Anderson localization. On the other hand, anomalous localization, or delocalization, is, in relation to the Anderson problem, a less studied phenomenon, although one can find signatures of anomalous localization in very different systems in nature. In the problem of electronic transport, a source of delocalization may come from symmetries present in the system and particular disorder configurations, like the so-called Lévy-type disorder. We have developed a theoretical model to describe the statistical properties of transport when electron wave functions are delocalized. In particular, we show that only two physical parameters determine the complete conductance distribution.

  10. Joint Source-Channel Coding by Means of an Oversampled Filter Bank Code

    NASA Astrophysics Data System (ADS)

    Marinkovic, Slavica; Guillemot, Christine

    2006-12-01

    Quantized frame expansions based on block transforms and oversampled filter banks (OFBs) have been considered recently as joint source-channel codes (JSCCs) for erasure and error-resilient signal transmission over noisy channels. In this paper, we consider a coding chain involving an OFB-based signal decomposition followed by scalar quantization and a variable-length code (VLC) or a fixed-length code (FLC). This paper first examines the problem of channel error localization and correction in quantized OFB signal expansions. The error localization problem is treated as an M-ary hypothesis testing problem. The likelihood values are derived from the joint pdf of the syndrome vectors under various hypotheses of impulse noise positions, and in a number of consecutive windows of the received samples. The error amplitudes are then estimated by solving the syndrome equations in the least-square sense. The message signal is reconstructed from the corrected received signal by a pseudoinverse receiver. We then improve the error localization procedure by introducing per-symbol reliability information in the hypothesis testing procedure of the OFB syndrome decoder. The per-symbol reliability information is produced by the soft-input soft-output (SISO) VLC/FLC decoders. This leads to the design of an iterative algorithm for joint decoding of an FLC and an OFB code. The performance of the algorithms developed is evaluated in a wavelet-based image coding system.
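    The least-squares step, estimating error amplitudes from the syndrome under a hypothesized set of impulse positions, can be sketched generically. A random real-valued matrix stands in for the OFB parity structure, and all dimensions and values are invented:

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 12, 8
H = rng.normal(size=(n - k, n))     # stand-in "syndrome" matrix (4 x 12)

e = np.zeros(n)                     # impulse-noise error pattern
pos = [3, 9]
e[pos] = [1.5, -0.8]
s = H @ e                           # syndrome observed at the receiver

# Hypothesis: impulses at `pos`. Solve H[:, pos] @ a = s in the LS sense.
a, *_ = np.linalg.lstsq(H[:, pos], s, rcond=None)
res_good = np.linalg.norm(H[:, pos] @ a - s)

# A wrong position hypothesis cannot explain the syndrome: the residual
# stays large, which is what the likelihood-based test discriminates on.
a_bad, *_ = np.linalg.lstsq(H[:, [2, 5]], s, rcond=None)
res_bad = np.linalg.norm(H[:, [2, 5]] @ a_bad - s)
```

Under the correct hypothesis the system is consistent, so the recovered amplitudes match the injected ones and the residual is numerically zero; comparing residuals across candidate position sets is the essence of the hypothesis test.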

  11. Treatment of internal sources in the finite-volume ELLAM

    USGS Publications Warehouse

    Healy, R.W.; ,; ,; ,; ,; ,

    2000-01-01

    The finite-volume Eulerian-Lagrangian localized adjoint method (FVELLAM) is a mass-conservative approach for solving the advection-dispersion equation. The method has been shown to be accurate and efficient for solving advection-dominated problems of solute transport in ground water in 1, 2, and 3 dimensions. Previous implementations of FVELLAM have had difficulty in representing internal sources because the standard assumption of a lowest-order Raviart-Thomas velocity field does not hold for source cells; tracking of particles within source cells is therefore problematic. A new approach has been developed to account for internal sources in FVELLAM. It is assumed that the source is uniformly distributed across a grid cell and that instantaneous mixing takes place within the cell, such that concentration is uniform across the cell at any time. Sub-time steps are used in the time-integration scheme to track mass outflow from the edges of the source cell, which avoids the need for tracking within the source cell. We describe the new method and compare results for a test problem with a wide range of cell Peclet numbers.

  12. Ensemble-based data assimilation and optimal sensor placement for scalar source reconstruction

    NASA Astrophysics Data System (ADS)

    Mons, Vincent; Wang, Qi; Zaki, Tamer

    2017-11-01

    Reconstructing the characteristics of a scalar source from limited remote measurements in a turbulent flow is a problem of great interest for environmental monitoring, and is challenging for several reasons. Firstly, the numerical estimation of scalar dispersion in a turbulent flow requires significant computational resources. Secondly, in practice, only a limited number of observations are available, which generally makes the corresponding inverse problem ill-posed. Ensemble-based variational data assimilation techniques are adopted to solve the problem of scalar source localization in a turbulent channel flow at Reτ = 180. This approach combines the components of variational data assimilation and ensemble Kalman filtering, inheriting the robustness of the former and the ease of implementation of the latter. An ensemble-based methodology for optimal sensor placement is also proposed in order to improve the conditioning of the inverse problem, which enhances the performance of the data assimilation scheme. This work has been partially funded by the Office of Naval Research (Grant N00014-16-1-2542) and by the National Science Foundation (Grant 1461870).
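    The ensemble Kalman analysis at the heart of such ensemble-based schemes can be sketched on a toy scalar-source problem. A linear observation operator stands in for the turbulent-dispersion solver, and all values are invented:

```python
import numpy as np

rng = np.random.default_rng(0)
Ne = 200                         # ensemble size
theta_true = 2.5                 # unknown source strength (toy ground truth)

def forward(theta):
    """Toy linear observation operator standing in for the scalar solver."""
    return np.array([1.0, 0.5]) * theta

obs = forward(theta_true) + rng.normal(0, 0.1, 2)   # two noisy sensors
R = 0.1 ** 2 * np.eye(2)                            # observation error cov

theta = rng.normal(0.0, 2.0, Ne)                    # prior ensemble
Y = np.stack([forward(t) for t in theta])           # predicted obs (Ne, 2)

# Ensemble Kalman analysis: K = C_ty (C_yy + R)^-1, from sample covariances
ty = theta - theta.mean()
Yc = Y - Y.mean(axis=0)
C_ty = ty @ Yc / (Ne - 1)
C_yy = Yc.T @ Yc / (Ne - 1)
K = C_ty @ np.linalg.inv(C_yy + R)

# Perturbed-observation update pulls each member toward the data
perturbed = obs + rng.multivariate_normal(np.zeros(2), R, Ne)
theta_post = theta + (perturbed - Y) @ K
```

The posterior ensemble mean concentrates near the true strength, and its spread is the uncertainty estimate that the sensor-placement criterion seeks to shrink.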

  13. [A landscape ecological approach for urban non-point source pollution control].

    PubMed

    Guo, Qinghai; Ma, Keming; Zhao, Jingzhu; Yang, Liu; Yin, Chengqing

    2005-05-01

    Urban non-point source pollution is a new problem that has appeared with rapid urbanization. The particularity of urban land use and the increase in impervious surface area make urban non-point source pollution differ from agricultural non-point source pollution, and more difficult to control. Best Management Practices (BMPs) are the effective practices commonly applied in controlling urban non-point source pollution, mainly adopting local remediation practices to control the pollutants in surface runoff. Because of the close relationship between urban land use patterns and non-point source pollution, it would be rational to combine landscape ecological planning with local BMPs to control urban non-point source pollution. This requires, first, analyzing and evaluating the influence of landscape structure on water bodies, pollution sources and pollutant removal processes, so as to define the relationships between landscape spatial pattern and non-point source pollution and to identify the key polluted areas; and second, adjusting inherent landscape structures or adding new landscape elements to form a new landscape pattern, combining landscape planning and management by incorporating BMPs into planning, so as to improve urban landscape heterogeneity and control urban non-point source pollution.

  14. Mixed-norm estimates for the M/EEG inverse problem using accelerated gradient methods.

    PubMed

    Gramfort, Alexandre; Kowalski, Matthieu; Hämäläinen, Matti

    2012-04-07

    Magneto- and electroencephalography (M/EEG) measure the electromagnetic fields produced by the neural electrical currents. Given a conductor model for the head, and the distribution of source currents in the brain, Maxwell's equations allow one to compute the ensuing M/EEG signals. Given the actual M/EEG measurements and the solution of this forward problem, one can localize, in space and in time, the brain regions that have produced the recorded data. However, due to the physics of the problem, the limited number of sensors compared to the number of possible source locations, and measurement noise, this inverse problem is ill-posed. Consequently, additional constraints are needed. Classical inverse solvers, often called minimum norm estimates (MNE), promote source estimates with a small ℓ₂ norm. Here, we consider a more general class of priors based on mixed norms. Such norms have the ability to structure the prior in order to incorporate some additional assumptions about the sources. We refer to such solvers as mixed-norm estimates (MxNE). In the context of M/EEG, MxNE can promote spatially focal sources with smooth temporal estimates with a two-level ℓ₁/ℓ₂ mixed-norm, while a three-level mixed-norm can be used to promote spatially non-overlapping sources between different experimental conditions. In order to efficiently solve the optimization problems of MxNE, we introduce fast first-order iterative schemes that, for the ℓ₁/ℓ₂ norm, give solutions in a few seconds, making such a prior as convenient as the simple MNE. Furthermore, thanks to the convexity of the optimization problem, we can provide optimality conditions that guarantee global convergence. The utility of the methods is demonstrated both with simulations and experimental MEG data.
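    First-order schemes of this kind handle the ℓ₁/ℓ₂ prior through its proximal operator, which block-soft-thresholds the rows of the source matrix (rows indexing sources, columns time samples). A minimal sketch of that operator, with illustrative names:

```python
# Proximal (block soft-thresholding) operator of the l1/l2 mixed norm used by
# MxNE-type solvers. Each row of X holds one source's time course; rows with
# small l2 norm are zeroed, promoting spatially focal sources.
import numpy as np

def prox_l21(X, alpha):
    norms = np.linalg.norm(X, axis=1, keepdims=True)
    scale = np.maximum(1.0 - alpha / np.maximum(norms, 1e-12), 0.0)
    return X * scale

X = np.array([[3.0, 4.0], [0.1, 0.1]])
Xs = prox_l21(X, 1.0)   # row norms are 5.0 and ~0.14: first shrunk, second zeroed
```

    Inside an ISTA/FISTA loop this operator is applied after each gradient step on the data-fit term, which is what makes the convex MxNE problem solvable in seconds.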

  15. Mixed-norm estimates for the M/EEG inverse problem using accelerated gradient methods

    PubMed Central

    Gramfort, Alexandre; Kowalski, Matthieu; Hämäläinen, Matti

    2012-01-01

    Magneto- and electroencephalography (M/EEG) measure the electromagnetic fields produced by the neural electrical currents. Given a conductor model for the head, and the distribution of source currents in the brain, Maxwell's equations allow one to compute the ensuing M/EEG signals. Given the actual M/EEG measurements and the solution of this forward problem, one can localize, in space and in time, the brain regions that have produced the recorded data. However, due to the physics of the problem, the limited number of sensors compared to the number of possible source locations, and measurement noise, this inverse problem is ill-posed. Consequently, additional constraints are needed. Classical inverse solvers, often called Minimum Norm Estimates (MNE), promote source estimates with a small ℓ2 norm. Here, we consider a more general class of priors based on mixed-norms. Such norms have the ability to structure the prior in order to incorporate some additional assumptions about the sources. We refer to such solvers as Mixed-Norm Estimates (MxNE). In the context of M/EEG, MxNE can promote spatially focal sources with smooth temporal estimates with a two-level ℓ1/ℓ2 mixed-norm, while a three-level mixed-norm can be used to promote spatially non-overlapping sources between different experimental conditions. In order to efficiently solve the optimization problems of MxNE, we introduce fast first-order iterative schemes that, for the ℓ1/ℓ2 norm, give solutions in a few seconds, making such a prior as convenient as the simple MNE. Furthermore, thanks to the convexity of the optimization problem, we can provide optimality conditions that guarantee global convergence. The utility of the methods is demonstrated both with simulations and experimental MEG data. PMID:22421459

  16. Localization of source with unknown amplitude using IPMC sensor arrays

    NASA Astrophysics Data System (ADS)

    Abdulsadda, Ahmad T.; Zhang, Feitian; Tan, Xiaobo

    2011-04-01

    The lateral line system, consisting of arrays of neuromasts functioning as flow sensors, is an important sensory organ for fish that enables them to detect predators, locate prey, perform rheotaxis, and coordinate schooling. Creating artificial lateral line systems is of significant interest since they would provide a new sensing mechanism for the control and coordination of underwater robots and vehicles. In this paper we propose recursive algorithms for localizing a vibrating sphere, also known as a dipole source, based on measurements from an array of flow sensors. A dipole source is frequently used in the study of biological lateral lines as a surrogate for underwater motion sources such as a flapping fish fin. We first formulate a nonlinear estimation problem based on an analytical model for the dipole-generated flow field. Two algorithms are presented to estimate both the source location and the vibration amplitude, one based on the least squares method and the other based on the Newton-Raphson method. Simulation results show that both methods deliver comparable performance in source localization. A prototype artificial lateral line system comprising four ionic polymer-metal composite (IPMC) sensors has been built, and experimental results are presented to demonstrate the effectiveness of IPMC lateral line systems and the proposed estimation algorithms.
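    The least-squares approach can be sketched as separable nonlinear least squares: the vibration amplitude enters the measurement model linearly, so it can be solved in closed form at each candidate location while the location is found by search. The sketch below substitutes a simple 1/r² amplitude decay for the actual dipole flow-field model; the sensor geometry and all names are illustrative.

```python
# Separable least-squares source localization from an array of sensors
# (sketch; 1/r^2 decay stands in for the dipole flow field).
import numpy as np

sensors = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0], [3.0, 0.0]])

def model(src, a):
    r = np.linalg.norm(sensors - src, axis=1)
    return a / r**2

true_src, true_a = np.array([1.5, 1.0]), 2.0
meas = model(true_src, true_a)

# Coarse grid search over candidate locations; the amplitude is linear in the
# model, so the optimal a at each candidate has a closed form.
best, best_err = None, np.inf
for x in np.linspace(0, 3, 61):
    for y in np.linspace(0.2, 2, 37):
        g = model(np.array([x, y]), 1.0)
        a = g @ meas / (g @ g)
        err = np.sum((meas - a * g) ** 2)
        if err < best_err:
            best, best_err = (x, y, a), err
```

    A Newton-Raphson or Gauss-Newton refinement would replace the grid search in a recursive, real-time implementation.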

  17. A recursive algorithm for the three-dimensional imaging of brain electric activity: Shrinking LORETA-FOCUSS.

    PubMed

    Liu, Hesheng; Gao, Xiaorong; Schimpf, Paul H; Yang, Fusheng; Gao, Shangkai

    2004-10-01

    Estimation of intracranial electric activity from the scalp electroencephalogram (EEG) requires a solution to the EEG inverse problem, which is known to be ill-conditioned. In order to yield a unique solution, weighted minimum norm least squares (MNLS) inverse methods are generally used. This paper proposes a recursive algorithm, termed Shrinking LORETA-FOCUSS, which combines and expands upon the central features of two well-known weighted MNLS methods: LORETA and FOCUSS. The recursive algorithm makes iterative adjustments to the solution space as well as to the weighting matrix, thereby dramatically reducing the computational load and increasing local source resolution. Simulations are conducted on a 3-shell spherical head model registered to the Talairach human brain atlas. A comparative study of four different inverse methods (standard weighted minimum norm, L1-norm, LORETA-FOCUSS, and Shrinking LORETA-FOCUSS) is presented. The results demonstrate that Shrinking LORETA-FOCUSS is able to reconstruct a three-dimensional source distribution with smaller localization and energy errors than the other methods.
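    The weighted MNLS solution that LORETA- and FOCUSS-style methods build on has the closed form x = W Lᵀ (L W Lᵀ + λI)⁻¹ y for lead field L, weighting matrix W and regularization λ. A sketch with illustrative sizes (a FOCUSS-style recursion would re-derive W from the previous solution at each iteration; here W is simply the identity, i.e. plain minimum norm):

```python
# Weighted minimum-norm least-squares inverse solution (illustrative sizes).
import numpy as np

rng = np.random.default_rng(1)
L = rng.normal(size=(32, 500))        # lead field: 32 electrodes, 500 sources
W = np.eye(500)                       # weighting matrix (identity = plain MNE)
lam = 1e-2                            # Tikhonov regularization

x_true = np.zeros(500)
x_true[42] = 1.0                      # single focal source
y = L @ x_true                        # noiseless measurements

x_hat = W @ L.T @ np.linalg.solve(L @ W @ L.T + lam * np.eye(32), y)
```

    The recursive shrinking step then discards sources whose estimated power falls below a threshold, restricting L and W to the surviving columns.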

  18. [Case study of red water phenomenon in drinking water distribution systems caused by water source switch].

    PubMed

    Wang, Yang; Zhang, Xiao-jian; Chen, Chao; Pan, An-jun; Xu, Yang; Liao, Ping-an; Zhang, Su-xia; Gu, Jun-nong

    2009-12-01

    A red water phenomenon occurred in some communities of a city in China shortly after a water source switch. The origin of this red water problem and the mechanism of iron release were investigated in this study. The water quality of the local and new water sources was tested, and tap water quality in the affected area was monitored for 3 months after the red water occurred. Interior corrosion scales on pipes obtained from the affected area were analyzed by XRD, SEM, and EDS. Corrosion rates of cast iron under the two source waters were measured with an annular reactor. The influence of the different source waters on iron release was studied in a pipe-section reactor simulating the distribution system. The results indicated that the large increase in sulfate concentration caused by the source switch was the cause of the red water problem. The Larson ratio increased from about 0.4 to 1.7-1.9, and red water appeared at the taps of some urban communities just a few days after the new water source was introduced. The mechanism of iron release was that the stable shell of the pipe scales was corrupted by the high-sulfate source water and could not recover spontaneously in the short term. The effect of sulfate on iron release from the old cast iron was more significant than its effect on enhancing iron corrosion. The rate of iron release increased with increasing Larson ratio, with a nonlinear correlation for the old cast iron. The problem persisted for quite a long time even after the water source was shifted back to a blend containing only a small proportion of the new source and the Larson ratio was reduced to about 0.6.
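    The Larson ratio cited above is commonly computed as the ratio of corrosive anions to bicarbonate in milliequivalents per litre, LR = ([Cl⁻] + [SO₄²⁻]) / [HCO₃⁻]. A small helper with mg/L inputs; note the concentrations below are invented purely to illustrate how a sulfate increase alone can push LR from about 0.4 into the reported 1.7-1.9 range, and are not the study's data.

```python
# Larson ratio from mg/L concentrations, via milliequivalents per litre.
def larson_ratio(cl_mg_l, so4_mg_l, hco3_mg_l):
    cl = cl_mg_l / 35.45          # meq/L of chloride (eq. weight 35.45)
    so4 = so4_mg_l / 48.03        # meq/L of sulfate (96.06 g/mol, charge 2)
    hco3 = hco3_mg_l / 61.02      # meq/L of bicarbonate
    return (cl + so4) / hco3

# Hypothetical waters: quadrupling-plus sulfate at fixed chloride/alkalinity.
lr_before = larson_ratio(20.0, 30.0, 180.0)    # ~0.4
lr_after = larson_ratio(20.0, 220.0, 180.0)    # ~1.7
```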

  19. A Direct Position-Determination Approach for Multiple Sources Based on Neural Network Computation.

    PubMed

    Chen, Xin; Wang, Ding; Yin, Jiexin; Wu, Ying

    2018-06-13

    The most widely used localization technology is the two-step method that localizes transmitters by measuring one or more specified positioning parameters. Direct position determination (DPD) is a promising technique that directly localizes transmitters from sensor outputs and can offer superior localization performance. However, existing DPD algorithms such as maximum likelihood (ML)-based and multiple signal classification (MUSIC)-based estimations are computationally expensive, making it difficult to satisfy real-time demands. To solve this problem, we propose the use of a modular neural network for multiple-source DPD. In this method, the area of interest is divided into multiple sub-areas. Multilayer perceptron (MLP) neural networks are employed to detect the presence of a source in a sub-area and filter sources in other sub-areas, and radial basis function (RBF) neural networks are utilized for position estimation. Simulation results show that a number of appropriately trained neural networks can be successfully used for DPD. The performance of the proposed MLP-MLP-RBF method is comparable to the performance of the conventional MUSIC-based DPD algorithm for various signal-to-noise ratios and signal power ratios. Furthermore, the MLP-MLP-RBF network is less computationally intensive than the classical DPD algorithm and is therefore an attractive choice for real-time applications.
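    A rough sketch of the RBF position-estimation stage alone (the MLP detection and filtering stages are omitted): Gaussian features over a grid of centres, with output weights fitted by linear least squares. The one-dimensional mapping from a sensor summary statistic to position is a toy stand-in, not the paper's array model.

```python
# RBF-network regression for position estimation (sketch; names illustrative).
import numpy as np

rng = np.random.default_rng(2)
centres = np.linspace(0.0, 1.0, 10)     # RBF centres over the sub-area

def rbf_features(s, width=0.15):
    return np.exp(-((s[:, None] - centres[None, :]) ** 2) / (2 * width**2))

# Toy training data: measurement s varies monotonically with position p.
p = rng.uniform(0, 1, 200)
s = p + 0.01 * rng.normal(size=200)     # stand-in for sensor-array outputs
Phi = rbf_features(s)
w, *_ = np.linalg.lstsq(Phi, p, rcond=None)   # linear output layer

p_hat = (rbf_features(np.array([0.5])) @ w)[0]
```

    Once trained, a forward pass through such a network is far cheaper than evaluating a MUSIC pseudospectrum over a dense grid, which is the computational advantage the paper exploits.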

  20. The Green’s functions for peridynamic non-local diffusion

    PubMed Central

    Wang, L. J.; Xu, J. F.

    2016-01-01

    In this work, we develop the Green’s function method for the solution of the peridynamic non-local diffusion model in which the spatial gradient of the generalized potential in the classical theory is replaced by an integral of a generalized response function in a horizon. We first show that the general solutions of the peridynamic non-local diffusion model can be expressed as functionals of the corresponding Green’s functions for point sources, along with volume constraints for non-local diffusion. Then, we obtain the Green’s functions by the Fourier transform method for unsteady and steady diffusions in infinite domains. We also demonstrate that the peridynamic non-local solutions converge to the classical differential solutions when the non-local length approaches zero. Finally, the peridynamic analytical solutions are applied to an infinite plate heated by a Gauss source, and the predicted variations of temperature are compared with the classical local solutions. The peridynamic non-local diffusion model predicts a lower rate of variation of the field quantities than that of the classical theory, which is consistent with experimental observations. The developed method is applicable to general diffusion-type problems. PMID:27713658
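    For comparison with the non-local model, the classical local counterpart is explicit: the free-space Green's function of unsteady diffusion in one dimension is the heat kernel G(x, t) = exp(-x²/(4kt)) / √(4πkt), which carries unit mass for all t > 0. A quick numerical check of that unit-mass property (illustrative only):

```python
# 1D heat kernel (classical local diffusion Green's function) and a crude
# midpoint-quadrature check that it integrates to 1 over x.
import math

def heat_kernel(x, t, k=1.0):
    return math.exp(-x * x / (4 * k * t)) / math.sqrt(4 * math.pi * k * t)

dx, total = 0.01, 0.0
x = -10.0
while x < 10.0:
    total += heat_kernel(x + dx / 2, t=0.5) * dx
    x += dx
```

    In the peridynamic non-local model this kernel is replaced by a Green's function obtained by Fourier transform of the integral operator, which reduces to the heat kernel as the non-local length (horizon) goes to zero.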

  1. Distributed single source coding with side information

    NASA Astrophysics Data System (ADS)

    Vila-Forcen, Jose E.; Koval, Oleksiy; Voloshynovskiy, Sviatoslav V.

    2004-01-01

    In this paper we advocate an image compression technique within the distributed source coding framework. The novelty of the proposed approach is twofold: classical image compression is considered from the standpoint of source coding with side information and, contrary to existing scenarios in which side information is given explicitly, the side information is created from a deterministic approximation of local image features. We consider an image in the transform domain as a realization of a source with a bounded codebook of symbols, where each symbol represents a particular edge shape. The codebook is image-independent and plays the role of an auxiliary source. Due to the partial availability of side information at both encoder and decoder, we treat our problem as a modification of the Berger-Flynn-Gray problem and investigate the possible gain over solutions in which side information is either unavailable or available only at the decoder. Finally, we present a practical compression algorithm for passport photo images based on our concept that demonstrates superior performance in the very low bit rate regime.

  2. Source calibrations and SDC calorimeter requirements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Green, D.

    Several studies of the problem of calibration of the SDC calorimeter exist. In this note an attempt is made to give a connected account of the requirements on the source calibration from the point of view of the desired, and acceptable, constant term induced in the EM resolution. It is assumed that a "local" calibration resulting from exposing each tower to a beam of electrons is not feasible. It is further assumed that an "in situ" calibration is either not yet performed, or is unavailable due to tracking alignment problems or high-luminosity operation rendering tracking inoperative. The assumptions used are therefore rather conservative. In this scenario, each scintillator plate of each tower is exposed to a moving radioactive source. That reading is used to "mask" an optical "cookie" in a grey code chosen so as to make the response uniform. The source is assumed to be the sole calibration of the tower. Therefore, the phrase "global" calibration of towers by movable radioactive sources is adopted.

  4. Dynamic Response of a Magnetized Plasma to AN External Source: Application to Space and Solid State Plasmas

    NASA Astrophysics Data System (ADS)

    Zhou, Huai-Bei

    This dissertation examines the dynamic response of a magnetoplasma to an external time-dependent current source. To achieve this goal, a new method combining analytic and numerical techniques was developed to study the dynamic response of a 3-D magnetoplasma to a time-dependent current source imposed across the magnetic field. The set of cold electron and/or ion plasma equations and Maxwell's equations are first solved analytically in (k, omega) space; inverse Laplace and 3-D complex Fast Fourier Transform (FFT) techniques are subsequently used to numerically transform the radiation fields and plasma currents from (k, omega) space to (r, t) space. The dynamic responses of the electron plasma and of the compensated two-component plasma to external current sources are studied separately. The results show that the electron plasma responds to a time-varying current source imposed across the magnetic field by exciting whistler/helicon waves and forming an expanding local current loop, induced by field-aligned plasma currents. The current loop consists of two anti-parallel field-aligned current channels concentrated at the ends of the imposed current and a cross-field current region connecting these channels. The latter is driven by an electron Hall drift. A compensated two-component plasma responds to the same current source as follows: (a) for slow time scales tau > Omega_i^-1 (the inverse ion cyclotron frequency), it generates Alfven waves and forms a non-local current loop in which the ion polarization currents dominate the cross-field current; (b) for fast time scales tau < Omega_i^-1, the dynamic response of the compensated two-component plasma is the same as that of the electron plasma. The characteristics of the current closure region are determined by the background plasma density, the magnetic field and the time scale of the current source. This study has applications to a diverse range of space and solid-state plasma problems.
These problems include current closure in emf-inducing tethered satellite systems (TSS), generation of ELF/VLF waves by ionospheric heating, current closure and quasineutrality in thin magnetopause transitions, and short electromagnetic pulse generation in solid-state plasmas. The cross-field current in TSS builds up on a time scale corresponding to the whistler waves and results in local current closure. Amplitude-modulated HF ionospheric heating generates ELF/VLF waves by forming a horizontal magnetic dipole, created by the current closure in the modified region. For thin transitions, the time-dependent cross-field polarization field at the magnetopause can be neutralized by the formation of field-aligned current loops that close via a cross-field electron Hall current. A moving current source in a solid-state plasma results in microwave emission if the speed of the source exceeds the local phase velocity of the helicon or Alfven waves. A detailed analysis of the above problems is presented in the thesis.

  5. Comparison of imaging modalities and source-localization algorithms in locating the induced activity during deep brain stimulation of the STN.

    PubMed

    Mideksa, K G; Singh, A; Hoogenboom, N; Hellriegel, H; Krause, H; Schnitzler, A; Deuschl, G; Raethjen, J; Schmidt, G; Muthuraman, M

    2016-08-01

    One of the most commonly used therapies to treat patients with Parkinson's disease (PD) is deep brain stimulation (DBS) of the subthalamic nucleus (STN). Identifying the optimal target area for placement of the DBS electrodes has become an area of intensive research. In this study, the first aim is to investigate the capabilities of different source-analysis techniques in detecting deep sources located at the sub-cortical level, validating them with a-priori information about the location of the source, that is, the STN. Secondly, we aim to investigate whether EEG or MEG is better suited to mapping DBS-induced brain activity. To do this, simultaneous EEG and MEG measurements were used to record the DBS-induced electromagnetic potentials and fields. The boundary-element method (BEM) was used to solve the forward problem. The position of the DBS electrodes was then estimated using dipole approaches (moving, rotating, and fixed MUSIC) and current-density-reconstruction (CDR) approaches (minimum-norm and sLORETA). The source-localization results from the dipole approaches demonstrated that the fixed MUSIC algorithm best localizes deep focal sources, whereas the moving dipole detects not only the region of interest but also neighboring regions affected by stimulating the STN. The results from the CDR approaches confirmed the capability of sLORETA, compared with the minimum norm, in detecting the STN. Moreover, the source-localization results using the EEG modality outperformed those of the MEG by locating the DBS-induced activity in the STN.

  6. Protecting Our Own. Community Child Passenger Safety Programs.

    ERIC Educational Resources Information Center

    National Highway Traffic Safety Administration (DOT), Washington, DC.

    This manual provides information on implementing a local child passenger safety program. It covers understanding the problems and solutions; deciding what can be done; planning and carrying out a project; providing adequate, accurate, and current technical information; and reaching additional sources of information. Chapter 1 provides community…

  7. Strange metal from local quantum chaos

    NASA Astrophysics Data System (ADS)

    Ben-Zion, Daniel; McGreevy, John

    2018-04-01

    How to make a model of a non-Fermi-liquid metal with efficient current dissipation is a long-standing problem. Results from holographic duality suggest a framework where local critical fermionic degrees of freedom provide both a source of decoherence for the Landau quasiparticle, and a sink for its momentum. This leads us to study a Kondo lattice type model with SYK models in place of the spin impurities. We find evidence for a stable phase at intermediate couplings.

  8. Temporal Constraint Reasoning With Preferences

    NASA Technical Reports Server (NTRS)

    Khatib, Lina; Morris, Paul; Morris, Robert; Rossi, Francesca

    2001-01-01

    A number of reasoning problems involving the manipulation of temporal information can naturally be viewed as implicitly inducing an ordering of potential local decisions involving time (specifically, associated with durations or orderings of events) on the basis of preferences. For example, a pair of events might be constrained to occur in a certain order, and, in addition, it might be preferable that the delay between them be as large, or as small, as possible. This paper explores problems in which a set of temporal constraints is specified, where each constraint is associated with preference criteria for making local decisions about the events involved in the constraint, and a reasoner must infer a complete solution to the problem such that, to the extent possible, these local preferences are met in the best way. A constraint framework for reasoning about time is generalized to allow for preferences over event distances and durations, and we study the complexity of solving problems in the resulting formalism. It is shown that while such problems are NP-hard in general, some restrictions on the shape of the preference functions, and on the structure of the preference set, can be enforced to achieve tractability. In these cases, a simple generalization of a single-source shortest path algorithm can be used to compute a globally preferred solution in polynomial time.
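    In the tractable cases, computing a preferred solution reduces to a single-source shortest-path computation on the distance graph of the temporal network, where an edge u → v with weight w encodes the constraint t_v − t_u ≤ w. A minimal Dijkstra sketch (event names and bounds are illustrative; Dijkstra assumes non-negative weights, and general temporal networks with negative edges need a Bellman-Ford-style algorithm):

```python
# Single-source shortest paths on a distance graph (Dijkstra, non-negative weights).
import heapq

def shortest_paths(graph, src):
    dist = {src: 0}
    pq = [(0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue                      # stale queue entry
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

# Edge u -> v with weight w encodes t_v - t_u <= w.
graph = {"start": [("a", 5), ("b", 2)], "b": [("a", 1)], "a": []}
d = shortest_paths(graph, "start")        # tightest implied upper bounds
```

    Here the path start → b → a tightens the bound on event a from 5 to 3, which is the kind of propagation the preference-aware generalization performs.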

  9. Quantum Change Point

    NASA Astrophysics Data System (ADS)

    Sentís, Gael; Bagan, Emilio; Calsamiglia, John; Chiribella, Giulio; Muñoz-Tapia, Ramon

    2016-10-01

    Sudden changes are ubiquitous in nature. Identifying them is crucial for a number of applications in biology, medicine, and social sciences. Here we take the problem of detecting sudden changes to the quantum domain. We consider a source that emits quantum particles in a default state, until a point where a mutation occurs that causes the source to switch to another state. The problem is then to find out where the change occurred. We determine the maximum probability of correctly identifying the change point, allowing for collective measurements on the whole sequence of particles emitted by the source. Then, we devise online strategies where the particles are measured individually and an answer is provided as soon as a new particle is received. We show that these online strategies substantially underperform the optimal quantum measurement, indicating that quantum sudden changes, although happening locally, are better detected globally.

  10. Review on solving the forward problem in EEG source analysis

    PubMed Central

    Hallez, Hans; Vanrumste, Bart; Grech, Roberta; Muscat, Joseph; De Clercq, Wim; Vergult, Anneleen; D'Asseler, Yves; Camilleri, Kenneth P; Fabri, Simon G; Van Huffel, Sabine; Lemahieu, Ignace

    2007-01-01

    Background The aim of electroencephalogram (EEG) source localization is to find the brain areas responsible for EEG waves of interest. It consists of solving forward and inverse problems. The forward problem is solved by starting from a given electrical source and calculating the potentials at the electrodes. These evaluations are necessary to solve the inverse problem, which is defined as finding the brain sources responsible for the measured potentials at the EEG electrodes. Methods While other reviews give an extensive summary of both the forward and inverse problems, this review article focuses on different aspects of solving the forward problem and is intended for newcomers to this research field. Results It starts by focusing on the generators of the EEG: the post-synaptic potentials in the apical dendrites of pyramidal neurons. These cells generate an extracellular current which can be modeled by Poisson's differential equation with Neumann and Dirichlet boundary conditions. The compartments in which these currents flow can be anisotropic (e.g. skull and white matter). In a three-shell spherical head model an analytical expression exists to solve the forward problem. During the last two decades researchers have tried to solve Poisson's equation in a realistically shaped head model obtained from 3D medical images, which requires numerical methods. The following methods are compared with each other: the boundary element method (BEM), the finite element method (FEM) and the finite difference method (FDM). In the last two methods anisotropic conducting compartments can conveniently be introduced. The focus is then set on the use of reciprocity in EEG source localization. It is introduced to speed up the forward calculations, which are then performed for each electrode position rather than for each dipole position. Solving Poisson's equation utilizing FEM and FDM corresponds to solving a large sparse linear system. Iterative methods are required to solve these sparse linear systems. The following iterative methods are discussed: successive over-relaxation, the conjugate gradient method and the algebraic multigrid method. Conclusion Solving the forward problem has been well documented in the past decades. In the past, simplified spherical head models were used, whereas nowadays a combination of imaging modalities is used to accurately describe the geometry of the head model. Efforts have been made to realistically describe the shape of the head model, as well as the heterogeneity of the tissue types, and to realistically determine the conductivity. However, the determination and validation of in vivo conductivity values is still an important topic in this field. In addition, more studies have to be done on the influence of all the parameters of the head model and of the numerical techniques on the solution of the forward problem. PMID:18053144
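    Of the iterative solvers listed, the conjugate gradient method is the standard choice for the symmetric positive-definite systems that FEM/FDM discretisations of Poisson's equation produce. A bare-bones, unpreconditioned sketch, with a small 1D Laplacian standing in for a real head-model mesh:

```python
# Unpreconditioned conjugate gradient for a symmetric positive-definite system.
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
    x = np.zeros_like(b)
    r = b - A @ x                 # initial residual
    p = r.copy()                  # initial search direction
    rs = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs / (p @ Ap)     # optimal step along p
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p # new conjugate direction
        rs = rs_new
    return x

# 1D Laplacian (tridiagonal, SPD) as a tiny stand-in for a head-model matrix.
n = 50
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
x = conjugate_gradient(A, b)
```

    Production solvers store A in a sparse format and add a preconditioner (e.g. algebraic multigrid), but the iteration structure is the same.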

  11. A Decision Support Tool to Evaluate Sources and Sinks of Nitrogen within a Watershed Framework

    EPA Science Inventory

    Human transformation of the nitrogen (N) cycle is causing a number of environmental and human health problems. Federal, state and local authorities focusing on management of N loadings face both technical and non-technical challenges. One technical issue is that we need a bette...

  12. Information Theoretic Studies and Assessment of Space Object Identification

    DTIC Science & Technology

    2014-03-24

    localization are contained in Ref. [5]. 1.7.1 A Bayesian MPE Based Analysis of 2D Point-Source-Pair Superresolution In a second recently submitted paper [6], a...related problem of the optical superresolution (OSR) of a pair of equal-brightness point sources separated spatially by a distance (or angle) smaller...1403.4897 [physics.optics] (19 March 2014). 6. S. Prasad, “Asymptotics of Bayesian error probability and 2D pair superresolution ,” submitted to Opt. Express

  13. Recording and quantification of ultrasonic echolocation clicks from free-ranging toothed whales

    NASA Astrophysics Data System (ADS)

    Madsen, P. T.; Wahlberg, M.

    2007-08-01

    Toothed whales produce short, ultrasonic clicks of high directionality and source level to probe their environment acoustically. This process, termed echolocation, is to a large part governed by the properties of the emitted clicks. Therefore, the derivation of click source parameters from free-ranging animals is of increasing importance for understanding both how toothed whales use echolocation in the wild and how they may be monitored acoustically. This paper addresses how source parameters can be derived from free-ranging toothed whales using calibrated multi-hydrophone arrays and digital recorders. We outline the properties required of hydrophones, amplifiers and analog-to-digital converters, and discuss the problems of recording echolocation clicks on the axis of a directional sound beam. For accurate localization, the hydrophone array apertures must be adapted and scaled to the behavior of, and the range to, the clicking animal, and precise information on hydrophone locations is critical. We provide examples of localization routines and outline sources of error that lead to uncertainties in localizing clicking animals in time and space. Furthermore, we explore approaches to time series analysis of discrete versions of toothed whale clicks that are meaningful in a biosonar context.
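    As a toy version of such localization routines, the time-of-arrival differences of a click across a hydrophone array can be inverted by a grid search over candidate source positions: each delay relative to a reference hydrophone fixes a range difference. The 2D geometry, sound speed and names below are illustrative, not from the paper.

```python
# Grid-search source localization from time-difference-of-arrival (TDOA)
# measurements on a small hydrophone array (sketch).
import numpy as np

C = 1500.0                                     # m/s, nominal sound speed in water
hydrophones = np.array([[0, 0], [10, 0], [0, 10], [10, 10]], dtype=float)

def tdoas(src):
    r = np.linalg.norm(hydrophones - src, axis=1)
    return (r[1:] - r[0]) / C                  # delays relative to hydrophone 0

true_src = np.array([4.0, 7.0])
meas = tdoas(true_src)                         # noiseless "measured" delays

best, best_err = None, np.inf
for x in np.linspace(0, 10, 101):
    for y in np.linspace(0, 10, 101):
        err = np.sum((tdoas(np.array([x, y])) - meas) ** 2)
        if err < best_err:
            best, best_err = (x, y), err
```

    In practice the array aperture relative to the animal's range sets how sharply this error surface is curved, which is the scaling issue the paper discusses.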

  14. Methanol poisoning among travellers to Indonesia.

    PubMed

    Giovanetti, Franco

    2013-01-01

    Common travel medicine sources generally do not provide information on the risk of methanol poisoning among travellers who visit Indonesia. The aim of this analysis was to increase knowledge of this topic through reports from bibliographic databases and Internet sources. Case reports and studies on methanol poisoning in Indonesia were retrieved by searching the PubMed, Embase and Google Scholar databases. A Google search was used to retrieve web media articles reporting fatal and non-fatal methanol poisoning in Indonesia in the timeframe from 01.01.2009 to 03.03.2013. Three case reports of methanol poisoning involving four travellers to Indonesia were found in the bibliographic databases. The media search identified 14 articles published online, reporting 22 cases of methanol poisoning among travellers after consumption of local alcoholic beverages. The total number of deaths was 18. Some sources also report a large number of cases among the local population. Methanol poisoning is likely to be an emerging public health problem in Indonesia, with associated morbidity and mortality among travellers and local people. Several strategies can be implemented to prevent or reduce harm among travellers. Copyright © 2013 Elsevier Ltd. All rights reserved.

  15. High contrast imaging through adaptive transmittance control in the focal plane

    NASA Astrophysics Data System (ADS)

    Dhadwal, Harbans S.; Rastegar, Jahangir; Feng, Dake

    2016-05-01

    High contrast imaging in the presence of a bright background is a challenging problem encountered in diverse applications, ranging from the daily chore of driving into a sun-drenched scene to the in vivo use of biomedical imaging in various types of keyhole surgery. Imaging in the presence of bright sources saturates the vision system, resulting in loss of scene fidelity, corresponding to low image contrast and reduced resolution. The problem is exacerbated in retro-reflective imaging systems, where the light sources illuminating the object are unavoidably strong and typically mask the object features. This manuscript presents a novel theoretical framework, based on nonlinear analysis and adaptive focal-plane transmittance, to selectively remove object-domain sources of background light from the image plane, resulting in local and global increases in image contrast. The background signal can either be of a global specular nature, giving rise to parallel illumination from the entire object surface, or can be represented by a mosaic of randomly orientated small specular surfaces. The latter is more representative of practical real-world imaging systems. Thus, the background signal comprises groups of oblique rays corresponding to the distributions of the mosaic surfaces. Through the imaging system, light from a group of like surfaces converges to a localized spot in the focal plane of the lens and then diverges to cast a localized bright spot in the image plane. Thus, the transmittance of a spatial light modulator positioned in the focal plane can be adaptively controlled to block a particular source of background light. Consequently, the image plane intensity is entirely due to the object features. Experimental image data are presented to verify the efficacy of the methodology.

  16. Drinking Water Sodium and Elevated Blood Pressure of Healthy Pregnant Women in Salinity-Affected Coastal Areas.

    PubMed

    Scheelbeek, Pauline F D; Khan, Aneire E; Mojumder, Sontosh; Elliott, Paul; Vineis, Paolo

    2016-08-01

    Coastal areas in Southeast Asia are experiencing high sodium concentrations in drinking water sources that are commonly consumed by local populations. Salinity problems caused by episodic cyclones and subsequent seawater inundations are likely (partly) related to climate change and further exacerbated by changes in upstream river flow and local land-use activities. Dietary (food) sodium plays an important role in the global burden of hypertensive disease. It remains unknown, however, if sodium in drinking water-rather than food-has similar effects on blood pressure and disease risk. In this study, we examined the effect of drinking water sodium on blood pressure of pregnant women: increases in blood pressure in this group could severely affect maternal and fetal health. Data on blood pressure, drinking water source, and personal, lifestyle, and environmental confounders were obtained from 701 normotensive pregnant women residing in coastal Bangladesh. Generalized linear mixed regression models were used to investigate the association of systolic and diastolic blood pressure of these-otherwise healthy-women with their water source. After adjustment for confounders, drinkers of tube well and pond water (high saline sources) were found to have significantly higher average systolic (+4.85 and +3.62 mm Hg) and diastolic (+2.30 and +1.72 mm Hg) blood pressures than rainwater drinkers. Drinking water salinity problems are expected to worsen in the future, putting millions of coastal people-including pregnant women-at increased risk of hypertension and associated diseases. There is an urgent need to further explore the health risks associated with this understudied environmental health problem and the feasibility of possible adaptation strategies. © 2016 American Heart Association, Inc.

  17. Models, Measurements, and Local Decisions: Assessing and ...

    EPA Pesticide Factsheets

    This presentation includes a combination of modeling and measurement results to characterize near-source air quality in Newark, New Jersey, with consideration of how this information could be used to inform decision making to reduce the risk of health impacts. Decisions could include either exposure or emissions reductions and could involve a host of stakeholders, including residents, academics, NGOs, and local and federal agencies. This presentation includes results from the C-PORT modeling system and from a citizen science project in the local area. The National Exposure Research Laboratory (NERL) Computational Exposure Division (CED) develops and evaluates data, decision-support tools, and models to be applied to media-specific or receptor-specific problem areas. CED uses modeling-based approaches to characterize exposures, evaluate fate and transport, and support environmental diagnostics/forensics with input from multiple data sources. It also develops media- and receptor-specific models, process models, and decision support tools for use both within and outside of EPA.

  18. Harmony: EEG/MEG Linear Inverse Source Reconstruction in the Anatomical Basis of Spherical Harmonics

    PubMed Central

    Petrov, Yury

    2012-01-01

    EEG/MEG source localization based on a “distributed solution” is severely underdetermined, because the number of sources is much larger than the number of measurements. In particular, this makes the solution strongly affected by sensor noise. A new way to constrain the problem is presented. By using the anatomical basis of spherical harmonics (or spherical splines) instead of single dipoles, the dimensionality of the inverse solution is greatly reduced without sacrificing the quality of the data fit. The smoothness of the resulting solution reduces the surface bias and scatter of the sources (incoherency) compared to the popular minimum-norm algorithms where a single-dipole basis is used (MNE, depth-weighted MNE, dSPM, sLORETA, LORETA, IBF) and allows the effect of sensor noise to be reduced efficiently. This approach, termed Harmony, performed well when applied to experimental data (two exemplars of early evoked potentials) and showed better localization precision and solution coherence than the other tested algorithms when applied to realistically simulated data. PMID:23071497

  19. Kernel temporal enhancement approach for LORETA source reconstruction using EEG data.

    PubMed

    Torres-Valencia, Cristian A; Santamaria, M Claudia Joana; Alvarez, Mauricio A

    2016-08-01

    Reconstruction of brain sources from magnetoencephalography and electroencephalography (M/EEG) data is a well-known problem in the neuroengineering field. An inverse problem must be solved, and several methods have been proposed. Low Resolution Electromagnetic Tomography (LORETA) and its proposed variants, standardized LORETA (sLORETA) and standardized weighted LORETA (swLORETA), solve the inverse problem following a non-parametric approach, that is, by setting dipoles in the whole brain domain in order to estimate the dipole positions from the M/EEG data while assuming some spatial priors. Errors in the reconstruction of sources arise due to the low spatial resolution of the LORETA framework and the influence of noise in the observable data. In this work a kernel temporal enhancement (kTE) is proposed as a preprocessing stage of the data that, in combination with the swLORETA method, improves the source reconstruction. The results are quantified in terms of three dipole localization error metrics, and the swLORETA + kTE strategy obtained the best results across different signal-to-noise ratios (SNRs) in random-dipole simulations from synthetic EEG data.

  20. Real-Time Localization of Moving Dipole Sources for Tracking Multiple Free-Swimming Weakly Electric Fish

    PubMed Central

    Jun, James Jaeyoon; Longtin, André; Maler, Leonard

    2013-01-01

    In order to survive, animals must quickly and accurately locate prey, predators, and conspecifics using the signals they generate. The signal source location can be estimated using multiple detectors and the inverse relationship between the received signal intensity (RSI) and the distance, but the difficulty of source localization increases if there is an additional dependence on the orientation of the signal source. In such cases, the signal source can be approximated as an ideal dipole for simplification. Based on a theoretical model, the RSI can be directly predicted from a known dipole location; but estimating a dipole location from RSIs has no direct analytical solution. Here, we propose an efficient solution to the dipole localization problem by using a lookup table (LUT) to store RSIs predicted by our theoretically derived dipole model at many possible dipole positions and orientations. For a given set of RSIs measured at multiple detectors, our algorithm found the dipole location having the closest matching normalized RSIs in the LUT, and further refined the location at higher resolution. Studying the natural behavior of weakly electric fish (WEF) requires efficiently computing their location and the temporal pattern of their electric signals over extended periods. Our dipole localization method was successfully applied to track single or multiple freely swimming WEF in shallow water in real time, as each fish could be closely approximated by an ideal current dipole in two dimensions. Our optimized search algorithm found the animals’ positions, orientations, and tail-bending angles quickly and accurately under various conditions, without the need for calibrating individual-specific parameters. Our dipole localization method is directly applicable to studying the role of active sensing during spatial navigation, or social interactions between multiple WEF. Furthermore, our method could be extended to other application areas involving dipole source localization. PMID:23805244
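
The table-based search described in this record can be sketched in a few lines of Python. Everything here is hypothetical: the detector layout, the simplified 2-D dipole model (a cos(theta)/r^2 falloff standing in for the paper's derived model), and the grid resolution; the higher-resolution refinement step is omitted.

```python
import math

# Hypothetical detector layout (x, y positions in arbitrary units).
DETECTORS = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0), (0.5, -0.5)]

def dipole_rsi(pos, angle):
    """Ideal 2-D current dipole: signal at each detector falls off as
    cos(theta)/r^2, where theta is the angle between the dipole axis and
    the detector direction (a simplified stand-in for the paper's model)."""
    px, py = math.cos(angle), math.sin(angle)
    out = []
    for dx, dy in DETECTORS:
        rx, ry = dx - pos[0], dy - pos[1]
        r = math.hypot(rx, ry) or 1e-9
        out.append((px * rx + py * ry) / r**3)  # (p . r_hat) / r^2
    return out

def normalize(v):
    n = math.sqrt(sum(x * x for x in v)) or 1.0
    return [x / n for x in v]

# Precompute the lookup table over a coarse grid of positions/orientations.
LUT = []
for i in range(11):
    for j in range(11):
        for k in range(16):
            pos = (i / 10.0, j / 10.0)
            ang = 2 * math.pi * k / 16
            LUT.append((pos, ang, normalize(dipole_rsi(pos, ang))))

def locate(measured_rsi):
    """Return the LUT entry whose normalized RSIs best match the measurement."""
    m = normalize(measured_rsi)
    return max(LUT, key=lambda e: sum(a * b for a, b in zip(e[2], m)))

# A dipole placed at (0.3, 0.7), oriented at 45 degrees, should be recovered.
pos, ang, _ = locate(dipole_rsi((0.3, 0.7), math.pi / 4))
```

Because the RSIs are normalized before matching, the search is insensitive to the unknown overall source strength, which is the property the LUT approach relies on.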

  1. Standardized shrinking LORETA-FOCUSS (SSLOFO): a new algorithm for spatio-temporal EEG source reconstruction.

    PubMed

    Liu, Hesheng; Schimpf, Paul H; Dong, Guoya; Gao, Xiaorong; Yang, Fusheng; Gao, Shangkai

    2005-10-01

    This paper presents a new algorithm called Standardized Shrinking LORETA-FOCUSS (SSLOFO) for solving the electroencephalogram (EEG) inverse problem. Multiple techniques are combined in a single procedure to robustly reconstruct the underlying source distribution with high spatial resolution. This algorithm uses a recursive process which takes the smooth estimate of sLORETA as initialization and then employs the re-weighted minimum norm introduced by FOCUSS. An important technique called standardization is involved in the recursive process to enhance the localization ability. The algorithm is further improved by automatically adjusting the source space according to the estimate of the previous step, and by the inclusion of temporal information. Simulation studies are carried out on both spherical and realistic head models. The algorithm achieves very good localization ability on noise-free data. It is capable of recovering complex source configurations with arbitrary shapes and can produce high quality images of extended source distributions. We also characterized the performance with noisy data in a realistic head model. An important feature of this algorithm is that the temporal waveforms are clearly reconstructed, even for closely spaced sources. This provides a convenient way to estimate neural dynamics directly from the cortical sources.
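
The two core ingredients of the recursion described above, a smooth minimum-norm initialization followed by FOCUSS-style re-weighting, can be sketched as follows. This is a toy illustration with a random leadfield, not the paper's full procedure: the standardization step, the shrinking source space, and the temporal processing are all omitted.

```python
import numpy as np

rng = np.random.default_rng(1)
L = rng.standard_normal((8, 30))        # toy leadfield: 8 sensors, 30 sources
s_true = np.zeros(30)
s_true[[4, 20]] = [1.0, -0.5]           # two focal sources
b = L @ s_true                          # noise-free sensor data

# Step 1: smooth minimum-norm initialization (standing in for sLORETA).
s = np.linalg.pinv(L) @ b
init_residual = np.linalg.norm(L @ s - b)

# Step 2: FOCUSS-style re-weighted minimum norm. Each pass re-solves the
# problem with weights proportional to the previous estimate, which
# progressively concentrates the initially smooth solution.
for _ in range(10):
    W = np.diag(np.abs(s) + 1e-12)      # small floor keeps W invertible
    s = W @ np.linalg.pinv(L @ W) @ b
```

Each re-weighting pass amplifies entries that were already large and suppresses the rest, which is how the recursion sharpens the smooth initial estimate into a focal one.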

  2. Summary appraisals of the Nation's ground-water resources; South Atlantic Gulf region

    USGS Publications Warehouse

    Cederstrom, D.J.; Boswell, E.H.; Tarver, G.R.

    1979-01-01

    Ground-water problems generally are not severe. Critical situations are restricted to areas where large quantities of ground water are being withdrawn or where aquifers are contaminated by oil-field or industrial waste. Large withdrawals in coastal areas have caused some saltwater intrusion. In other localities, highly mineralized water may have migrated along fault zones to freshwater aquifers. Many of the present problems can be resolved or ameliorated by redistributing withdrawals or developing alternative water sources.

  3. High frequency source localization in a shallow ocean sound channel using frequency difference matched field processing.

    PubMed

    Worthmann, Brian M; Song, H C; Dowling, David R

    2015-12-01

    Matched field processing (MFP) is an established technique for source localization in known multipath acoustic environments. Unfortunately, in many situations, particularly those involving high frequency signals, imperfect knowledge of the actual propagation environment prevents accurate propagation modeling and source localization via MFP fails. For beamforming applications, this actual-to-model mismatch problem was mitigated through a frequency downshift, made possible by a nonlinear array-signal-processing technique called frequency difference beamforming [Abadi, Song, and Dowling (2012). J. Acoust. Soc. Am. 132, 3018-3029]. Here, this technique is extended to conventional (Bartlett) MFP using simulations and measurements from the 2011 Kauai Acoustic Communications MURI experiment (KAM11) to produce ambiguity surfaces at frequencies well below the signal bandwidth where the detrimental effects of mismatch are reduced. Both the simulation and experimental results suggest that frequency difference MFP can be more robust against environmental mismatch than conventional MFP. In particular, signals of frequency 11.2 kHz-32.8 kHz were broadcast 3 km through a 106-m-deep shallow ocean sound channel to a sparse 16-element vertical receiving array. Frequency difference MFP unambiguously localized the source in several experimental data sets with average peak-to-side-lobe ratio of 0.9 dB, average absolute-value range error of 170 m, and average absolute-value depth error of 10 m.
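
The core idea behind frequency difference processing can be illustrated with a toy plane-wave calculation (a sketch only; the paper applies the idea to full MFP ambiguity surfaces). Multiplying the field at one frequency by the conjugate field at a second frequency yields a quantity whose phase accumulates as if it were radiated at the difference frequency, which is where the mismatch-reducing downshift comes from.

```python
import cmath
import math

C = 1500.0  # nominal sound speed in water, m/s

def plane_wave(freq_hz, range_m):
    """Unit-amplitude plane wave: exp(i k x) with k = 2 pi f / c."""
    k = 2 * math.pi * freq_hz / C
    return cmath.exp(1j * k * range_m)

f1, f2, x = 30_000.0, 25_000.0, 100.0  # two in-band frequencies, one range

# The "autoproduct" p(f1) p*(f2) carries the phase of the 5 kHz
# difference frequency, even though no 5 kHz energy was transmitted.
autoproduct = plane_wave(f1, x) * plane_wave(f2, x).conjugate()
direct_low = plane_wave(f1 - f2, x)
```

In a realistic multipath channel the autoproduct is not exactly the low-frequency field, but it is close enough for low-frequency replicas to match it, which is the robustness to environmental mismatch the abstract reports.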

  4. Local re-acceleration and a modified thick target model of solar flare electrons

    NASA Astrophysics Data System (ADS)

    Brown, J. C.; Turkmani, R.; Kontar, E. P.; MacKinnon, A. L.; Vlahos, L.

    2009-12-01

    Context: The collisional thick target model (CTTM) of solar hard X-ray (HXR) bursts has become an almost “standard model” of flare impulsive phase energy transport and radiation. However, it faces various problems in the light of recent data, particularly the high electron beam density and anisotropy it involves. Aims: We consider how photon yield per electron can be increased, and hence fast electron beam intensity requirements reduced, by local re-acceleration of fast electrons throughout the HXR source itself, after injection. Methods: We show parametrically that, if net re-acceleration rates due to e.g. waves or local current sheet electric (E) fields are a significant fraction of collisional loss rates, electron lifetimes, and hence the net radiative HXR output per electron, can be substantially increased over the CTTM values. In this local re-acceleration thick target model (LRTTM) fast electron number requirements and anisotropy are thus reduced. One specific possible scenario involving such re-acceleration is discussed, viz. a current sheet cascade (CSC) in a randomly stressed magnetic loop. Results: Combined MHD and test particle simulations show that local E fields in CSCs can efficiently accelerate electrons in the corona and re-accelerate them after injection into the chromosphere. In this HXR source scenario, rapid synchronisation and variability of impulsive footpoint emissions can still occur since primary electron acceleration is in the high Alfvén speed corona with fast re-acceleration in chromospheric CSCs. It is also consistent with the energy-dependent time-of-flight delays in HXR features. Conclusions: Including electron re-acceleration in the HXR source allows an LRTTM modification of the CTTM in which beam density and anisotropy are much reduced, and alleviates theoretical problems with the CTTM, while making it more compatible with radio and interplanetary electron numbers. The LRTTM is, however, different in some respects, such as the spatial distribution of atmospheric heating by fast electrons.

  5. Pacemakers in large arrays of oscillators with nonlocal coupling

    NASA Astrophysics Data System (ADS)

    Jaramillo, Gabriela; Scheel, Arnd

    2016-02-01

    We model pacemaker effects of an algebraically localized heterogeneity in a one-dimensional array of oscillators with nonlocal coupling. We assume the oscillators obey simple phase dynamics and that the array is large enough so that it can be approximated by a continuous nonlocal evolution equation. We concentrate on the case of heterogeneities with positive average and show that steady solutions to the nonlocal problem exist. In particular, we show that these heterogeneities act as a wave source. This effect is not possible in three-dimensional systems, such as the complex Ginzburg-Landau equation, where the wavenumber of weak sources decays at infinity. To obtain our results we use a series of isomorphisms to relate the nonlocal problem to the viscous eikonal equation. We then use Fredholm properties of the Laplace operator in Kondratiev spaces to obtain solutions to the eikonal equation, and by extension to the nonlocal problem.

  6. Precision time distribution within a deep space communications complex

    NASA Technical Reports Server (NTRS)

    Curtright, J. B.

    1972-01-01

    The Precision Time Distribution System (PTDS) at the Goldstone Deep Space Communications Complex is a practical application of existing technology to the solution of a local problem. The problem was to synchronize four station timing systems to a master source with a relative accuracy consistently and significantly better than 10 microseconds. The solution involved combining a precision timing source, an automatic error detection assembly, and a microwave distribution network into an operational system. Upon activation of the completed PTDS two years ago, synchronization accuracy at Goldstone (two station relative) was improved by an order of magnitude. It is felt that the validation of the PTDS mechanization is now completed. Other facilities which have site dispersion and synchronization accuracy requirements similar to Goldstone may find the PTDS mechanization useful in solving their problem. At present, the two station relative synchronization accuracy at Goldstone is better than one microsecond.

  7. Wideband RELAX and wideband CLEAN for aeroacoustic imaging

    NASA Astrophysics Data System (ADS)

    Wang, Yanwei; Li, Jian; Stoica, Petre; Sheplak, Mark; Nishida, Toshikazu

    2004-02-01

    Microphone arrays can be used for acoustic source localization and characterization in wind tunnel testing. In this paper, the wideband RELAX (WB-RELAX) and the wideband CLEAN (WB-CLEAN) algorithms are presented for aeroacoustic imaging using an acoustic array. WB-RELAX is a parametric approach that can be used efficiently for point source imaging without the sidelobe problems suffered by the delay-and-sum beamforming approaches. WB-CLEAN does not have sidelobe problems either, but it behaves more like a nonparametric approach and can be used for both point source and distributed source imaging. Moreover, neither of the algorithms suffers from the severe performance degradations encountered by the adaptive beamforming methods when the number of snapshots is small and/or the sources are highly correlated or coherent with each other. A two-step optimization procedure is used to implement the WB-RELAX and WB-CLEAN algorithms efficiently. The performance of WB-RELAX and WB-CLEAN is demonstrated by applying them to measured data obtained at the NASA Langley Quiet Flow Facility using a small aperture directional array (SADA). Somewhat surprisingly, using these approaches, not only were the parameters of the dominant source accurately determined, but a highly correlated multipath of the dominant source was also discovered.

  8. Wideband RELAX and wideband CLEAN for aeroacoustic imaging.

    PubMed

    Wang, Yanwei; Li, Jian; Stoica, Petre; Sheplak, Mark; Nishida, Toshikazu

    2004-02-01

    Microphone arrays can be used for acoustic source localization and characterization in wind tunnel testing. In this paper, the wideband RELAX (WB-RELAX) and the wideband CLEAN (WB-CLEAN) algorithms are presented for aeroacoustic imaging using an acoustic array. WB-RELAX is a parametric approach that can be used efficiently for point source imaging without the sidelobe problems suffered by the delay-and-sum beamforming approaches. WB-CLEAN does not have sidelobe problems either, but it behaves more like a nonparametric approach and can be used for both point source and distributed source imaging. Moreover, neither of the algorithms suffers from the severe performance degradations encountered by the adaptive beamforming methods when the number of snapshots is small and/or the sources are highly correlated or coherent with each other. A two-step optimization procedure is used to implement the WB-RELAX and WB-CLEAN algorithms efficiently. The performance of WB-RELAX and WB-CLEAN is demonstrated by applying them to measured data obtained at the NASA Langley Quiet Flow Facility using a small aperture directional array (SADA). Somewhat surprisingly, using these approaches, not only were the parameters of the dominant source accurately determined, but a highly correlated multipath of the dominant source was also discovered.

  9. Electrophysiological correlates of cocktail-party listening.

    PubMed

    Lewald, Jörg; Getzmann, Stephan

    2015-10-01

    Detecting, localizing, and selectively attending to a particular sound source of interest in complex auditory scenes composed of multiple competing sources is a remarkable capacity of the human auditory system. The neural basis of this so-called "cocktail-party effect" has remained largely unknown. Here, we studied the cortical network engaged in solving the "cocktail-party" problem, using event-related potentials (ERPs) in combination with two tasks demanding horizontal localization of a naturalistic target sound presented either in silence or in the presence of multiple competing sound sources. Presentation of multiple sound sources, as compared to single sources, induced an increased P1 amplitude, a reduction in N1, and a strong N2 component, resulting in a pronounced negativity in the ERP difference waveform (N2d) around 260 ms after stimulus onset. About 100 ms later, the anterior contralateral N2 subcomponent (N2ac) occurred in the multiple-sources condition, as computed from the amplitude difference for targets in the left minus right hemispaces. Cortical source analyses of the ERP modulation, resulting from the contrast of multiple vs. single sources, generally revealed an initial enhancement of electrical activity in right temporo-parietal areas, including auditory cortex, by multiple sources (at P1) that is followed by a reduction, with the primary sources shifting from right inferior parietal lobule (at N1) to left dorso-frontal cortex (at N2d). Thus, cocktail-party listening, as compared to single-source localization, appears to be based on a complex chronology of successive electrical activities within a specific cortical network involved in spatial hearing in complex situations. Copyright © 2015 Elsevier B.V. All rights reserved.

  10. What State Legislators Think about School Finance. An Opinion Survey of State Legislature Education Committee Chairman.

    ERIC Educational Resources Information Center

    Falcon, James C.

    The attitudes of State legislative education committee chairmen concerning possible changes in the financing and governance of education were surveyed. The chairmen provided comments on the deficiencies in federal, State, and local revenue sources; discussed problems of governance; gave their opinions on educational innovations and program…

  11. Drug Impact Index.

    ERIC Educational Resources Information Center

    Western Center for Drug-Free Schools and Communities.

    The Drug Impact Index provides a set of indicators designed to determine the extent of the local drug problem in a community. Each indicator includes a technical note on the data sources, a graph showing comparative statistics on that indicator for the Portland area and for the State of Oregon, and brief remarks on the implications of the data.…

  12. A Radio-Map Automatic Construction Algorithm Based on Crowdsourcing

    PubMed Central

    Yu, Ning; Xiao, Chenxian; Wu, Yinfeng; Feng, Renjian

    2016-01-01

    Traditional radio-map-based localization methods need to sample a large number of location fingerprints offline, which requires a huge amount of human and material resources. To solve the high sampling cost problem, an automatic radio-map construction algorithm based on crowdsourcing is proposed. The algorithm employs the crowd-sourced information provided by a large number of users as they walk through the buildings as the source of location fingerprint data. Through the variation characteristics of users’ smartphone sensors, the indoor anchors (doors) are identified and their locations are regarded as reference positions of the whole radio-map. The AP-Cluster method is used to cluster the crowdsourced fingerprints to acquire the representative fingerprints. According to the reference positions and the similarity between fingerprints, the representative fingerprints are linked to their corresponding physical locations and the radio-map is generated. Experimental results demonstrate that the proposed algorithm reduces the cost of fingerprint sampling and radio-map construction and guarantees the localization accuracy. The proposed method does not require users’ explicit participation, which effectively solves the resource-consumption problem when a location fingerprint database is established. PMID:27070623
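
Once a radio map of representative fingerprints exists, online localization reduces to matching a measured fingerprint against it. The sketch below uses invented RSSI values and plain nearest-neighbor matching in signal space; the paper's AP-Cluster construction and anchor (door) detection are not reproduced here.

```python
import math

# Representative fingerprints after clustering: RSSI (dBm) from three access
# points, each linked to a physical location (all values hypothetical).
RADIO_MAP = {
    (0.0, 0.0): [-40, -70, -80],
    (5.0, 0.0): [-70, -42, -75],
    (0.0, 5.0): [-72, -78, -41],
    (5.0, 5.0): [-80, -55, -50],
}

def locate(rssi):
    """Nearest-fingerprint localization: pick the mapped location whose
    stored fingerprint is closest to the measurement in signal space."""
    def dist(entry):
        return math.dist(entry[1], rssi)
    return min(RADIO_MAP.items(), key=dist)[0]

estimate = locate([-68, -45, -73])  # closest to the (5.0, 0.0) fingerprint
```

Matching happens entirely in signal space, so the quality of the estimate depends on how well the clustered representative fingerprints cover the building, which is exactly what the crowdsourced construction is meant to ensure.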

  13. Folklore and traditional ecological knowledge of geckos in Southern Portugal: implications for conservation and science.

    PubMed

    Ceríaco, Luis M P; Marques, Mariana P; Madeira, Natália C; Vila-Viçosa, Carlos M; Mendes, Paula

    2011-09-05

    Traditional Ecological Knowledge (TEK) and folklore are repositories of large amounts of information about the natural world. Ideas, perceptions and empirical data held by human communities regarding local species are important sources which enable new scientific discoveries to be made, as well as offering the potential to solve a number of conservation problems. We documented the gecko-related folklore and TEK of the people of southern Portugal, with the particular aim of understanding the main ideas relating to gecko biology and ecology. Our results suggest that local knowledge of gecko ecology and biology is both accurate and relevant. As a result of information provided by local inhabitants, knowledge of the current geographic distribution of Hemidactylus turcicus was expanded, with its presence reported in nine new locations. It was also discovered that locals still have some misconceptions of geckos as poisonous and carriers of dermatological diseases. The presence of these ideas has led the population to a fear of and aversion to geckos, resulting in direct persecution being one of the major conservation problems facing these animals. It is essential, from both a scientific and conservationist perspective, to understand the knowledge and perceptions that people have towards the animals, since, only then, may hitherto unrecognized pertinent information and conservation problems be detected and resolved.

  14. Folklore and traditional ecological knowledge of geckos in Southern Portugal: implications for conservation and science

    PubMed Central

    2011-01-01

    Traditional Ecological Knowledge (TEK) and folklore are repositories of large amounts of information about the natural world. Ideas, perceptions and empirical data held by human communities regarding local species are important sources which enable new scientific discoveries to be made, as well as offering the potential to solve a number of conservation problems. We documented the gecko-related folklore and TEK of the people of southern Portugal, with the particular aim of understanding the main ideas relating to gecko biology and ecology. Our results suggest that local knowledge of gecko ecology and biology is both accurate and relevant. As a result of information provided by local inhabitants, knowledge of the current geographic distribution of Hemidactylus turcicus was expanded, with its presence reported in nine new locations. It was also discovered that locals still have some misconceptions of geckos as poisonous and carriers of dermatological diseases. The presence of these ideas has led the population to a fear of and aversion to geckos, resulting in direct persecution being one of the major conservation problems facing these animals. It is essential, from both a scientific and conservationist perspective, to understand the knowledge and perceptions that people have towards the animals, since, only then, may hitherto unrecognized pertinent information and conservation problems be detected and resolved. PMID:21892925

  15. The proton and helium anomalies in the light of the Myriad model

    NASA Astrophysics Data System (ADS)

    Salati, Pierre; Génolini, Yoann; Serpico, Pasquale; Taillet, Richard

    2017-03-01

    A hardening of the proton and helium fluxes is observed above a few hundreds of GeV/nuc. The distribution of local sources of primary cosmic rays has been suggested as a potential solution to this puzzling behavior. Some authors even claim that a single source is responsible for the observed anomalies. But how probable are these explanations? To answer that question, our current description of cosmic ray Galactic propagation needs to be replaced by the Myriad model. In the former approach, sources of protons and helium nuclei are treated as a jelly continuously spread over space and time. A more accurate description is provided by the Myriad model, where sources are considered as point-like events. This leads to a probabilistic derivation of the fluxes of primary species, and opens the possibility that larger-than-average values may be observed at the Earth. For a long time though, a major obstacle has been the infinite variance associated with the probability distribution function that the fluxes follow. Several suggestions have been made to cure this problem but none is entirely satisfactory. We go a step further here and solve the infinite variance problem of the Myriad model by making use of the generalized central limit theorem. We find that primary fluxes are distributed according to a stable law with heavy tail, well known to financial analysts. The probability that the proton and helium anomalies are sourced by local SNRs can then be calculated. The p-values associated with the CREAM measurements turn out to be small, unless somewhat unrealistic propagation parameters are assumed.

  16. Combining Radiography and Passive Measurements for Radiological Threat Localization in Cargo

    NASA Astrophysics Data System (ADS)

    Miller, Erin A.; White, Timothy A.; Jarman, Kenneth D.; Kouzes, Richard T.; Kulisek, Jonathan A.; Robinson, Sean M.; Wittman, Richard A.

    2015-10-01

    Detecting shielded special nuclear material (SNM) in a cargo container is a difficult problem, since shielding reduces the amount of radiation escaping the container. Radiography provides information that is complementary to that provided by passive gamma-ray detection systems: while not directly sensitive to radiological materials, radiography can reveal highly shielded regions that may mask a passive radiological signal. Combining these measurements has the potential to improve SNM detection, either through improved sensitivity or by providing a solution to the inverse problem to estimate source properties (strength and location). We present a data-fusion method that uses a radiograph to provide an estimate of the radiation-transport environment for gamma rays from potential sources. This approach makes quantitative use of radiographic images without relying on image interpretation, and results in a probabilistic description of likely source locations and strengths. We present results for this method for a modeled test case of a cargo container passing through a plastic-scintillator-based radiation portal monitor and a transmission-radiography system. We find that a radiograph-based inversion scheme allows for localization of a low-noise source placed randomly within the test container to within 40 cm, compared to 70 cm for triangulation alone, while strength estimation accuracy is improved by a factor of six. Improvements are seen in regions of both high and low shielding, but are most pronounced in highly shielded regions. The approach proposed here combines transmission and emission data in a manner that has not been explored in the cargo-screening literature, advancing the ability to accurately describe a hidden source based on currently-available instrumentation.

  17. EEG minimum-norm estimation compared with MEG dipole fitting in the localization of somatosensory sources at S1.

    PubMed

    Komssi, S; Huttunen, J; Aronen, H J; Ilmoniemi, R J

    2004-03-01

    Dipole models, which are frequently used in attempts to solve the electromagnetic inverse problem, require explicit a priori assumptions about the cerebral current sources. This is not the case for solutions based on minimum-norm estimates. In the present study, we evaluated the spatial accuracy of the L2 minimum-norm estimate (MNE) in realistic noise conditions by assessing its ability to localize sources of evoked responses at the primary somatosensory cortex (SI). Multichannel somatosensory evoked potentials (SEPs) and magnetic fields (SEFs) were recorded in 5 subjects while stimulating the median and ulnar nerves at the left wrist. A Tikhonov-regularized L2-MNE, constructed on a spherical surface from the SEP signals, was compared with an equivalent current dipole (ECD) solution obtained from the SEFs. Primarily tangential current sources accounted for both SEP and SEF distributions at around 20 ms (N20/N20m) and 70 ms (P70/P70m), which deflections were chosen for comparative analysis. The distances between the locations of the maximum current densities obtained from MNE and the locations of ECDs were on the average 12-13 mm for both deflections and nerves stimulated. In accordance with the somatotopical order of SI, both the MNE and ECD tended to localize median nerve activation more laterally than ulnar nerve activation for the N20/N20m deflection. Simulation experiments further indicated that, with a proper estimate of the source depth and with a good fit of the head model, the MNE can reach a mean accuracy of 5 mm in 0.2-microV root-mean-square noise. When compared with previously reported localizations based on dipole modelling of SEPs, it appears that equally accurate localization of S1 can be obtained with the MNE. MNE can be used to verify parametric source modelling results. 
Having a relatively good localization accuracy and requiring minimal assumptions, the MNE may be useful for the localization of poorly known activity distributions and for tracking activity changes between brain areas as a function of time.
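
    The Tikhonov-regularized L2-MNE described above has a standard closed form, x_hat = L^T (L L^T + lambda*I)^{-1} y. The sketch below illustrates that computation on a random toy lead field; the dimensions, noise level, and regularization value are illustrative assumptions of mine, not the paper's spherical head model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy setup: n_sensors channels, n_sources cortical sources.
n_sensors, n_sources = 32, 200
L = rng.standard_normal((n_sensors, n_sources))          # lead-field (gain) matrix
x_true = np.zeros(n_sources)
x_true[40] = 1.0                                         # a single active source
y = L @ x_true + 0.01 * rng.standard_normal(n_sensors)   # noisy measurements

# Tikhonov-regularized L2 minimum-norm estimate:
#   x_hat = L^T (L L^T + lambda I)^{-1} y
lam = 1e-2
x_hat = L.T @ np.linalg.solve(L @ L.T + lam * np.eye(n_sensors), y)

# The estimated source location is taken at the maximum of the current map.
print(int(np.argmax(np.abs(x_hat))))
```

The estimate fits the data while keeping the minimum-norm source distribution; localization is then read off the peak of the current-density map.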

  18. Locating the source of diffusion in complex networks by time-reversal backward spreading.

    PubMed

    Shen, Zhesi; Cao, Shinan; Wang, Wen-Xu; Di, Zengru; Stanley, H Eugene

    2016-03-01

    Locating the source that triggers a dynamical process is a fundamental but challenging problem in complex networks, with applications ranging from epidemic spreading in society and on the Internet to cancer metastasis in the human body. Accurate localization of the source is inherently limited by our ability to simultaneously access the information of all nodes in a large-scale complex network. This raises two critical questions: how do we locate the source from incomplete information, and can we achieve full localization of sources at any possible location from a given set of observable nodes? Here we develop a time-reversal backward spreading algorithm to locate the source of a diffusion-like process efficiently and propose a general locatability condition. We test the algorithm by employing epidemic spreading and consensus dynamics as typical dynamical processes and apply it to the H1N1 pandemic in China. We find that the sources can be precisely located in arbitrary networks insofar as the locatability condition is assured. Our tools greatly improve our ability to locate the source of diffusion in complex networks based on limited accessibility of nodal information. Moreover, they have implications for controlling a variety of dynamical processes taking place on complex networks, such as inhibiting epidemics, slowing the spread of rumors, pollution control, and environmental protection.
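
    A minimal reading of the time-reversal idea can be sketched as follows: subtracting each observer's graph distance from a candidate source from its recorded arrival time yields mutually consistent origin-time estimates only at the true source. The graph, the observer set, and the consistency measure below are illustrative choices of mine, not the authors' algorithm.

```python
from collections import deque

def bfs_dist(adj, start):
    """Hop distances from `start` in an unweighted graph."""
    dist = {start: 0}
    q = deque([start])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def locate_source(adj, arrivals):
    """arrivals: {observer: arrival time}.  Returns the most consistent candidate."""
    best, best_spread = None, float("inf")
    for s in adj:
        d = bfs_dist(adj, s)
        # "Time-reversed" origin-time estimates for candidate source s.
        origins = [t - d[o] for o, t in arrivals.items()]
        spread = max(origins) - min(origins)   # consistency of reversed times
        if spread < best_spread:
            best, best_spread = s, spread
    return best

# Small path graph 0-1-2-3-4; true source is node 2, unit propagation speed.
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
arrivals = {0: 2, 4: 2}     # observers at both ends; process started at t=0
print(locate_source(adj, arrivals))   # -> 2
```

Only two of the five nodes are observed, yet the source is recovered, which mirrors the paper's point about localization from incomplete nodal information.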

  19. Locating the source of diffusion in complex networks by time-reversal backward spreading

    NASA Astrophysics Data System (ADS)

    Shen, Zhesi; Cao, Shinan; Wang, Wen-Xu; Di, Zengru; Stanley, H. Eugene

    2016-03-01

    Locating the source that triggers a dynamical process is a fundamental but challenging problem in complex networks, with applications ranging from epidemic spreading in society and on the Internet to cancer metastasis in the human body. Accurate localization of the source is inherently limited by our ability to simultaneously access the information of all nodes in a large-scale complex network. This raises two critical questions: how do we locate the source from incomplete information, and can we achieve full localization of sources at any possible location from a given set of observable nodes? Here we develop a time-reversal backward spreading algorithm to locate the source of a diffusion-like process efficiently and propose a general locatability condition. We test the algorithm by employing epidemic spreading and consensus dynamics as typical dynamical processes and apply it to the H1N1 pandemic in China. We find that the sources can be precisely located in arbitrary networks insofar as the locatability condition is assured. Our tools greatly improve our ability to locate the source of diffusion in complex networks based on limited accessibility of nodal information. Moreover, they have implications for controlling a variety of dynamical processes taking place on complex networks, such as inhibiting epidemics, slowing the spread of rumors, pollution control, and environmental protection.

  20. An evolving problem: methamphetamine production and trafficking in the United States.

    PubMed

    Shukla, Rashi K; Crump, Jordan L; Chrisco, Emelia S

    2012-11-01

    Methamphetamine is a serious illicit drug problem in the United States and globally. For decades, methamphetamine has been supplied to the illicit market through local clandestine manufacturing and trafficking. In the early stages, illicit methamphetamine was produced and trafficked by motorcycle gangs and Mexican criminal groups. Over time, local clandestine manufacturing increasingly contributed to the illicit supply and broader methamphetamine problem. This review examines the evolution of the illicit methamphetamine supply in the U.S. A review of the literature on methamphetamine production and trafficking was conducted. Information was obtained from numerous sources including governmental reports, books and academic articles. Attempts to control the supply of methamphetamine have only led to short term disruptions in availability. Clandestine manufacturing and trafficking have undergone significant changes over the past several decades. Shifts in local production have regularly been counterbalanced by changes in production and trafficking from criminal organizations in Mexico. Transnational criminal organizations now control much of the methamphetamine supply in the U.S. and methamphetamine remains widely available. The supply of methamphetamine in the United States is dynamic. Producers and traffickers have adapted to control efforts and the problem continues. Control efforts focused on eliminating supply are limited at best. Copyright © 2012 Elsevier B.V. All rights reserved.

  1. The study of the application of crystalline silicone solar cell type for a temporary flood camp

    NASA Astrophysics Data System (ADS)

    Hendarti, R.; Katarina, W.; Wangidjaja, W.

    2017-12-01

    During flood periods, most temporary evacuation shelters in Jakarta lack electricity, because the local electricity company switches off the supply to avoid electrical hazards from the high water level in the flooded area. Since the local grid is the main energy source for lighting and the water pump, the energy supply becomes a significant issue during this period. The local government has already provided diesel generators to substitute for the grid when it is off; however, the number of generators is still limited. This study therefore investigated solar energy as an alternative source of electricity. This paper presents an analysis of the duration of sunshine in Jakarta during rainy seasons, in order to determine which crystalline silicon solar cell type can be implemented optimally for energy supply during flood evacuation as well as for the shelter itself. A detailed literature review was conducted on the three types of crystalline silicon solar cell and on Jakarta's local weather. Furthermore, the standard of the International Federation of Red Cross and Red Crescent Societies (IFRC) was studied for the shelter design. The results of this study can serve as a reference for the local authority in providing substitute energy supply in temporary evacuation areas during flood periods, with the solar modules also attached to the shelter.

  2. Hiding the Source Based on Limited Flooding for Sensor Networks.

    PubMed

    Chen, Juan; Lin, Zhengkui; Hu, Ying; Wang, Bailing

    2015-11-17

    Wireless sensor networks are widely used to monitor valuable objects such as rare animals or armies. Once an object is detected, the source, i.e., the sensor nearest to the object, generates and periodically sends a packet about the object to the base station. Since attackers can capture the object by localizing the source, many protocols have been proposed to protect source location. Instead of transmitting the packet to the base station directly, typical source location protection protocols first transmit packets randomly for a few hops to a phantom location, and then forward the packets to the base station. The problem with these protocols is that the generated phantom locations are usually not only near the true source but also close to each other. As a result, attackers can easily trace a route back to the source from the phantom locations. To address the above problem, we propose a new protocol for source location protection based on limited flooding, named SLP. Compared with existing protocols, SLP can generate phantom locations that are not only far away from the source, but also widely distributed. It improves source location security significantly with low communication cost. We further propose a protocol, namely SLP-E, to protect source location against more powerful attackers with wider fields of vision. The performance of our SLP and SLP-E are validated by both theoretical analysis and simulation results.

  3. Modelling and approaching pragmatic interoperability of distributed geoscience data

    NASA Astrophysics Data System (ADS)

    Ma, Xiaogang

    2010-05-01

    Interoperability of geodata, which is essential for sharing information and discovering insights within a cyberinfrastructure, is receiving increasing attention. A key requirement of interoperability in the context of geodata sharing is that data provided by local sources can be accessed, decoded, understood and appropriately used by external users. Various researchers have discussed four levels of data interoperability issues: system, syntax, schematics and semantics, which relate, respectively, to the platform, encoding, structure and meaning of geodata. Ontology-driven approaches to the schematic and semantic interoperability of geodata have been studied extensively in the last decade. Ontologies come in different types (e.g., top-level ontologies, domain ontologies and application ontologies) and display forms (e.g., glossaries, thesauri, conceptual schemas and logical theories). Many geodata providers maintain their own local application ontologies in order to drive standardization in local databases. However, semantic heterogeneities often exist between these local ontologies, even when they are derived from equivalent disciplines. In contrast, common ontologies are being studied in different geoscience disciplines (e.g., NAMD, SWEET) as a standardization procedure to coordinate diverse local ontologies. Semantic mediation, e.g. mapping between local ontologies or mapping local ontologies to common ontologies, has been studied as an effective way of achieving semantic interoperability between local ontologies, thus reconciling semantic heterogeneities in multi-source geodata. Nevertheless, confusion still exists in the research field of semantic interoperability. One problem is caused by eliminating elements of local pragmatic contexts in semantic mediation. 
Compared with the context-independent character of a common domain ontology, local application ontologies are closely related to elements (e.g., people, time, location, intention, procedure, consequence) of local pragmatic contexts and are thus context-dependent. Eliminating these elements inevitably leads to information loss in semantic mediation between local ontologies. Correspondingly, the understanding and effect of exchanged data in a new context may differ from those in its original context. Another problem is the dilemma of how to balance flexibility against standardization of local ontologies, because ontologies are not fixed but continuously evolving. It is commonly recognized that we cannot use a unified ontology to replace all local ontologies, because they are context-dependent and need flexibility. However, without coordinating standards, freely developed local ontologies and databases would require enormous mediation work between them. Finding a balance between standardization and flexibility for evolving ontologies, in a practical sense, requires negotiations (i.e. conversations, agreements and collaborations) between different local pragmatic contexts. The purpose of this work is to set up a computer-friendly model representing local pragmatic contexts (i.e. geodata sources) and to propose a practical semantic negotiation procedure for approaching pragmatic interoperability between them. Information agents, objective facts and subjective dimensions are reviewed as elements of a conceptual model for representing pragmatic contexts, and the author uses them to develop a practical semantic negotiation procedure for approaching pragmatic interoperability of distributed geodata. 
The proposed conceptual model and semantic negotiation procedure were encoded with Description Logic, and then applied to analyze and manipulate semantic negotiations between different local ontologies within the National Mineral Resources Assessment (NMRA) project of China, which involves multi-source and multi-subject geodata sharing.

  4. [Gonorrheal proctitis imitating proctalgia fugax].

    PubMed

    Nechvátal, A; Masek, T; Hoch, J; Hercogová, J

    2004-01-01

    Proctalgia fugax is usually a source of many diagnostic and therapeutic problems, and it is often very difficult to find the cause of the pain. We report the case of a 27-year-old patient examined by surgeons for cramp-like pain localized to the rectum. A careful history and laboratory examination confirmed gonorrheal proctitis. She was then successfully treated with ceftriaxone.

  5. Collective Impact Approach: A "Tool" for Managing Complex Problems and Business Clusters Sustainability

    ERIC Educational Resources Information Center

    De Chiara, Alessandra

    2017-01-01

    Environmental pollution occurring in industrial districts represents a serious issue not only for local communities but also for those industrial productions that draw from the territory the source of their competitiveness. Due to its ability to take into account the needs of different stakeholders, the collective impact approach has the potential…

  6. Applying local binary patterns in image clustering problems

    NASA Astrophysics Data System (ADS)

    Skorokhod, Nikolai N.; Elizarov, Alexey I.

    2017-11-01

    Because cloudiness plays a critical role in the Earth's radiative balance, studying the distribution of different cloud types and their movements is relevant. The main sources of such information are artificial satellites, which provide data in the form of images. The most common approach to processing and classifying cloud images is based on descriptions of texture features. We propose using a set of local binary patterns to describe the image texture.
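
    The classic 8-neighbour local binary pattern can be computed as sketched below: each pixel's 8 neighbours are thresholded against the centre value and the resulting bits read as a byte, and the texture feature is the histogram of these codes. The exact LBP variant used in the paper is not specified, so this is the standard formulation, not necessarily the authors'.

```python
import numpy as np

def lbp_codes(img):
    """8-bit LBP code for every interior pixel of a 2-D grayscale array."""
    img = np.asarray(img, dtype=float)
    c = img[1:-1, 1:-1]                       # centre pixels
    # Neighbour offsets, clockwise from top-left; each contributes one bit.
    offs = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
            (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros_like(c, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(offs):
        nb = img[1 + dy: img.shape[0] - 1 + dy, 1 + dx: img.shape[1] - 1 + dx]
        codes |= ((nb >= c).astype(np.uint8) << bit)
    return codes

def lbp_histogram(img):
    """Normalized 256-bin histogram used as the texture feature vector."""
    h = np.bincount(lbp_codes(img).ravel(), minlength=256)
    return h / h.sum()

flat = np.full((8, 8), 5)           # uniform patch: every neighbour equals centre
print(lbp_histogram(flat)[255])     # all 8 bits set -> code 255 everywhere
```

Histograms of such codes can then be compared (e.g. with a chi-square distance) to cluster or classify cloud-texture patches.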

  7. Quantifying sources of elemental carbon over the Guanzhong Basin of China: A consistent network of measurements and WRF-Chem modeling.

    PubMed

    Li, Nan; He, Qingyang; Tie, Xuexi; Cao, Junji; Liu, Suixin; Wang, Qiyuan; Li, Guohui; Huang, Rujin; Zhang, Qiang

    2016-07-01

    We conducted a year-long WRF-Chem (Weather Research and Forecasting Chemical) model simulation of elemental carbon (EC) aerosol and compared the modeling results to surface EC measurements in the Guanzhong (GZ) Basin of China. The main goals of this study were to quantify the individual contributions of different EC sources to EC pollution and to find the major cause of EC pollution in this region. The EC measurements were conducted simultaneously at 10 urban, rural, and background sites over the GZ Basin from May 2013 to April 2014 and provided a good base against which to evaluate the model simulation. The model evaluation showed that the calculated annual mean EC concentration was 5.1 μgC m(-3), consistent with the observed value of 5.3 μgC m(-3). The model also reproduced the magnitude of measured EC in all seasons (regression slope = 0.98-1.03), as well as the spatial and temporal variations (r = 0.55-0.78). We conducted several sensitivity studies to quantify the individual contributions of EC sources to EC pollution. The sensitivity simulations showed that local and outside sources contributed about 60% and 40% of the annual mean EC concentration, respectively, implying that local sources were the major contributors to EC pollution in the GZ Basin. Among the local sources, residential sources contributed the most, followed by industry and transportation. A further analysis suggested that a 50% reduction of industry or transportation emissions caused only a 6% decrease in the annual mean EC concentration, while a 50% reduction of residential emissions reduced the winter surface EC concentration by up to 25%. With respect to the serious air pollution problems (including EC pollution) in the GZ Basin, our findings provide insight for local air pollution control strategies. Copyright © 2016 Elsevier Ltd. All rights reserved.

  8. Assessing the short-term clock drift of early broadband stations with burst events of the 26 s persistent and localized microseism

    NASA Astrophysics Data System (ADS)

    Xie, J.; Ni, S.; Chu, R.; Xia, Y.

    2017-12-01

    Accurate seismometer clocks play an important role in seismological studies, including earthquake location and tomography. However, some seismic stations may have clock drifts larger than 1 second, especially in the early days of the global seismic network. The 26 s Persistent Localized (PL) microseism source in the Gulf of Guinea sometimes excites strong and coherent signals and can be used as a repeating source for assessing the stability of seismometer clocks. Taking station GSC/TS in southern California, USA as an example, the 26 s PL signal can be easily observed in the ambient Noise Cross-correlation Function (NCF) between GSC/TS and a remote station. The variation of the travel time of this 26 s signal in the NCF is used to infer clock error. A drastic clock error, with a magnitude of ±25 s, is detected during June 1992 and is confirmed by both teleseismic and local earthquake records. Using the 26 s PL source, the clock can be validated for historical records of sparsely distributed stations, where the usual NCF of short-period microseisms (<20 s) might be less effective due to attenuation over long interstation distances. However, this method suffers from a cycle ambiguity problem and should be verified with teleseismic/local P waves. Because a change in the location of the 26 s PL source may influence the measured clock drift, we use regional stations with stable clocks to estimate the possible location change of the source.
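
    The measurement principle can be illustrated simply: a clock error shifts the repeating 26 s arrival in the NCF, and cross-correlating a current NCF against a reference NCF recovers that shift as the peak lag. The synthetic waveform, sampling rate, and window below are my own toy choices, not the study's actual data processing.

```python
import numpy as np

def clock_drift(ref, cur, dt):
    """Lag (seconds) by which `cur` is delayed relative to `ref`."""
    xc = np.correlate(cur, ref, mode="full")
    lag = np.argmax(xc) - (len(ref) - 1)     # convert array index to lag
    return lag * dt

dt = 1.0                                     # 1 Hz sampling
t = np.arange(0, 600, dt)
# Reference NCF arrival: 26 s oscillation under a Gaussian envelope.
ref = np.sin(2 * np.pi * t / 26) * np.exp(-((t - 300) ** 2) / (2 * 60 ** 2))
drift_samples = 25                           # simulate a +25 s clock error
cur = np.roll(ref, drift_samples)
print(clock_drift(ref, cur, dt))             # recovers 25.0
```

Note the cycle ambiguity the abstract warns about: for a narrowband 26 s signal, correlation side lobes sit 26 s from the true peak, which is why independent P-wave checks are needed.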

  9. Constraining the Mass of the Local Group through Proper Motion Measurements of Local Group Galaxies

    NASA Astrophysics Data System (ADS)

    Sohn, S. Tony; van der Marel, R.; Anderson, J.

    2012-01-01

    The Local Group and its two dominant spiral galaxies have been the benchmark for testing many aspects of cosmological and galaxy formation theories, including, e.g., dark halo profiles and shapes, substructure and the "missing satellite" problem, and the minimum mass for galaxy formation. But despite the extensive work in all of these areas, the masses of the Milky Way and M31, and thus the total mass of the Local Group, remain among the most poorly established astronomical parameters (uncertain by a factor of 4). One important reason is the lack of information on the tangential motions of galaxies, which can only be obtained through proper motion measurements. In this study, we introduce our projects for measuring absolute proper motions of (1) the dwarf spheroidal galaxy Leo I, (2) M31, and (3) the 4 dwarf galaxies near the edge of the Local Group (Cetus, Leo A, Tucana, and Sag DIG). Results from these three independent measurements will provide important clues to the masses of the Milky Way, M31, and the Local Group as a whole, respectively. We also present our proper motion measurement technique, which uses compact background galaxies as astrometric reference sources.

  10. Intercontinental Transport of Aerosols: Implication for Regional Air Quality

    NASA Technical Reports Server (NTRS)

    Chin, Mian; Diehl, Thomas; Ginoux, Paul

    2006-01-01

    Aerosol particles, measured as PM2.5 (particle diameter less than 2.5 microns) and PM10 (particle diameter less than 10 microns), are among the key atmospheric components that determine ambient air quality. Current US air quality standards for PM10 and PM2.5 are 50 µg/m³ and 15 µg/m³, respectively. While local and regional emission sources are the main cause of air pollution problems, aerosols can be transported on a hemispheric or global scale. In this study, we use the Goddard Chemistry Aerosol Radiation and Transport (GOCART) model to quantify the contributions of long-range transport vs. local/regional pollution sources and of natural vs. anthropogenic sources to PM concentrations in different regions. In particular, we estimate the hemispheric impact of anthropogenic sulfate aerosols and dust from major source areas on other regions of the world. The GOCART model results are compared with satellite remote sensing and ground-based network measurements of aerosol optical depth and concentrations.

  11. Opendf - An Implementation of the Dual Fermion Method for Strongly Correlated Systems

    NASA Astrophysics Data System (ADS)

    Antipov, Andrey E.; LeBlanc, James P. F.; Gull, Emanuel

    The dual fermion method is a multiscale approach for solving lattice problems of interacting strongly correlated systems. In this paper, we present opendf, an open-source implementation of the dual fermion method applicable to fermionic single-orbital lattice models in dimensions D = 1, 2, 3 and 4. The method is built on a dynamical mean-field starting point, which neglects all non-local correlations, and perturbatively adds spatial correlations. Our code is distributed as an open-source package under the GNU public license version 2.

  12. Drainage network optimization for inundation mitigation case study of ITS Surabaya

    NASA Astrophysics Data System (ADS)

    Savitri, Yang Ratri; Lasminto, Umboro

    2017-06-01

    Institut Teknologi Sepuluh Nopember (ITS) Surabaya is an engineering campus in Surabaya with an area of ± 187 ha, consisting of buildings and campus facilities. The campus is served by a drainage system planned according to the 2002 ITS Master Plan, with a number of retention and detention ponds based on the city's Zero Delta Q concept. However, in the rainy season, inundation frequently occurs at several locations. The problems can be traced to two major sources, namely the internal campus facilities and the external condition connected with the city drainage system. This paper describes the capability of drainage network optimization to mitigate a local urban drainage problem. The hydrologic-hydraulic investigation was carried out using the Storm Water Management Model (SWMM) developed by the US Environmental Protection Agency (EPA). The mitigation considers several alternatives based on the existing condition and the social setting. The study results showed that managing the flow from the external source could reduce the final stored volume of the campus main channel by 31.75%.

  13. MUSIC for localization of thunderstorm cells

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mosher, J.C.; Lewis, P.S.; Rynne, T.M.

    1993-12-31

    Lightning represents an event detectable optically, electrically, and acoustically, and several systems are already in place to monitor such activity. Unfortunately, such detection of lightning can occur too late, since operations need to be protected in advance of the first lightning strike. Additionally, the bolt itself can traverse several kilometers before striking the ground, leaving a large region of uncertainty as to the center of the storm and its possible strike regions. NASA Kennedy Space Center has in place an array of electric field mills that monitor the (effectively) DC electric field. Prior to the first lightning strike, the surface electric fields rise as the storm generator within a thundercloud begins charging. Extending methods we developed for an analogous source localization problem in magnetoencephalography, we present Cramer-Rao lower bounds and MUSIC scans for fitting a point-charge source model to the electric field mill data. Such techniques can allow for the identification and localization of charge centers in cloud structures.
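
    The forward model behind point-charge fitting is the textbook image-charge result: a charge Q at height h above conducting ground produces, at a field mill a horizontal distance r away, a purely vertical surface field E_z = 2Qh / (4·pi·eps0·(r² + h²)^(3/2)). Fitting (Q, h, horizontal position) to an array of mills is then the nonlinear inverse problem the abstract describes. The charge value and height below are illustrative, not from the study.

```python
import math

EPS0 = 8.8541878128e-12  # vacuum permittivity, F/m

def surface_field(q_coulomb, h, r):
    """Vertical E-field (V/m) at the ground, horizontal distance r from the
    axis of a point charge q_coulomb at height h (image charge included)."""
    return q_coulomb * 2 * h / (4 * math.pi * EPS0 * (r**2 + h**2) ** 1.5)

# An assumed -40 C charge centre at 8 km altitude, directly overhead:
print(surface_field(-40.0, 8000.0, 0.0))   # field of order -10 kV/m
```

The field falls off with horizontal distance, which is what gives a mill network the spatial diversity needed to localize the charge centre.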

  14. Computationally Efficient Radio Frequency Source Localization for Radio Interferometric Arrays

    NASA Astrophysics Data System (ADS)

    Steeb, J.-W.; Davidson, David B.; Wijnholds, Stefan J.

    2018-03-01

    Radio frequency interference (RFI) is an ever-increasing problem for remote sensing and radio astronomy, with radio telescope arrays especially vulnerable to RFI. Localizing the RFI source is the first step in dealing with the culprit system. In this paper, a new localization algorithm for interferometric arrays with low array beam sidelobes is presented. The algorithm works both in the near field and the far field (only the direction of arrival can be recovered when the source is in the far field). In the near field, the computational complexity of the algorithm is linear in the search grid size, compared to the cubic scaling of the state-of-the-art 3-D MUltiple SIgnal Classification (MUSIC) method, while the new method is as accurate as 3-D MUSIC. The trade-off is that the proposed algorithm requires a once-off a priori calculation and storage of weighting matrices. The accuracy of the algorithm is validated using data recorded by a low-frequency array while a hexacopter was flying around it broadcasting a continuous-wave signal. For the flight, the mean distance between the differential GPS positions and the corresponding estimated positions of the hexacopter was 2 m at a wavelength of 6.7 m.
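
    For context, the 3-D MUSIC baseline the paper compares against extends the standard subspace idea: project candidate steering vectors onto the noise subspace of the array covariance and pick the candidate with the smallest projection. The far-field uniform-linear-array sketch below shows only that principle; the array geometry, frequency, and noise level are my own toy choices, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(1)

n_ant, wavelength, spacing = 8, 1.0, 0.5   # assumed half-wavelength ULA
true_deg = 20.0

def steering(theta_deg):
    """Far-field ULA steering vector for arrival angle theta."""
    k = 2 * np.pi / wavelength
    n = np.arange(n_ant)
    return np.exp(1j * k * spacing * n * np.sin(np.radians(theta_deg)))

# Simulate snapshots of one source plus noise, form the sample covariance.
snaps = 200
s = rng.standard_normal(snaps) + 1j * rng.standard_normal(snaps)
X = np.outer(steering(true_deg), s)
X += 0.05 * (rng.standard_normal(X.shape) + 1j * rng.standard_normal(X.shape))
R = X @ X.conj().T / snaps

# Noise subspace = eigenvectors beyond the number of sources (1 here).
w, V = np.linalg.eigh(R)        # eigh sorts eigenvalues ascending
En = V[:, :-1]                  # drop the largest (signal) eigenvector

grid = np.linspace(-90, 90, 361)
spectrum = [1 / np.linalg.norm(En.conj().T @ steering(g)) ** 2 for g in grid]
print(grid[int(np.argmax(spectrum))])
```

The cost the paper attacks is visible here: the spectrum is evaluated once per grid point, and a 3-D near-field grid makes that scan expensive.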

  15. Small gas-turbine units for the power industry: Ways for improving the efficiency and the scale of implementation

    NASA Astrophysics Data System (ADS)

    Kosoi, A. S.; Popel', O. S.; Beschastnykh, V. N.; Zeigarnik, Yu. A.; Sinkevich, M. V.

    2017-10-01

    Small power units (<1 MW) are seeing increasing application due to the rapid growth of distributed power generation and smart power supply systems. They are usually used for feeding facilities whose connection to centralized networks involves engineering or economic problems. Small power generation is based on a wide range of processes and primary sources, including renewable and local ones, such as nonconventional hydrocarbon fuels comprising associated gas, biogas, coal-mine methane, etc. Characteristics of the small gas-turbine units (GTUs) most widely available on the world market are reviewed, and the most promising lines of development for the new generation of small GTUs are examined. Special emphasis is placed on three selected ways of improving small GTUs: increasing fuel efficiency, cutting maintenance costs, and integration with local or renewable power sources. It is demonstrated that, in terms of specific fuel consumption, small GTUs of the new generation can be 20-25% more efficient than those of the previous generation, require no maintenance between overhauls, and can be efficiently integrated into intelligent electrical networks with power facilities operating on renewable or local power sources.

  16. Bayesian source term determination with unknown covariance of measurements

    NASA Astrophysics Data System (ADS)

    Belal, Alkomiet; Tichý, Ondřej; Šmídl, Václav

    2017-04-01

    Determination of the source term of a release of hazardous material into the atmosphere is a very important task for emergency response. We are concerned with estimating the source term in the conventional linear inverse problem y = Mx, where the relationship between the vector of observations y and the unknown source term x is described by the source-receptor-sensitivity (SRS) matrix M. Since the system is typically ill-conditioned, the problem is recast as the optimization problem min_x (y - Mx)^T R^{-1} (y - Mx) + x^T B^{-1} x. The first term minimizes the error of the measurements with covariance matrix R, and the second term is a regularization of the source term. Different types of regularization arise for different choices of the matrices R and B; for example, Tikhonov regularization takes the covariance matrix B to be the identity matrix multiplied by a scalar parameter. In this contribution, we adopt a Bayesian approach to make inference on the unknown source term x as well as the unknown R and B. We assume a Gaussian prior on x with zero mean and unknown diagonal covariance matrix B. The covariance matrix R of the likelihood is also unknown. We consider two potential choices for the structure of the matrix R: first, a diagonal matrix, and second, a locally correlated structure using information on the topology of the measuring network. Since inference in the model is intractable, an iterative variational Bayes algorithm is used for simultaneous estimation of all model parameters. The practical usefulness of our contribution is demonstrated by applying the resulting algorithm to real data from the European Tracer Experiment (ETEX). This research is supported by the EEA/Norwegian Financial Mechanism under project MSMT-28477/2014 Source-Term Determination of Radionuclide Releases by Inverse Atmospheric Dispersion Modelling (STRADI).
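
    With R and B held fixed (the paper's point is precisely to infer them, which this sketch does not attempt), setting the gradient of (y - Mx)^T R^{-1} (y - Mx) + x^T B^{-1} x to zero gives the closed form x_hat = (M^T R^{-1} M + B^{-1})^{-1} M^T R^{-1} y. All dimensions and covariance values below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

m_obs, n_src = 50, 20
M = rng.standard_normal((m_obs, n_src))              # toy SRS matrix
x_true = np.maximum(rng.standard_normal(n_src), 0)   # non-negative release rates
y = M @ x_true + 0.01 * rng.standard_normal(m_obs)   # noisy observations

R_inv = np.eye(m_obs) / 0.01**2    # measurement precision (known here)
B_inv = np.eye(n_src) / 10.0**2    # weak zero-mean prior on the source term

# Closed-form regularized solution of min_x (y-Mx)^T R^-1 (y-Mx) + x^T B^-1 x
x_hat = np.linalg.solve(M.T @ R_inv @ M + B_inv, M.T @ R_inv @ y)
print(np.linalg.norm(x_hat - x_true))
```

The variational Bayes algorithm of the abstract alternates updates of this kind with updates of the (here fixed) diagonal of B and the structure of R.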

  17. Ambiguity Resolution for Phase-Based 3-D Source Localization under Fixed Uniform Circular Array.

    PubMed

    Chen, Xin; Liu, Zhen; Wei, Xizhang

    2017-05-11

    Under a fixed uniform circular array (UCA), 3-D parameter estimation of a source whose half-wavelength is smaller than the array aperture suffers from a serious phase ambiguity problem, which also appears in a recently proposed phase-based algorithm. In this paper, by using the centro-symmetry of a UCA with an even number of sensors, the source's angles and range can be decoupled, and a novel algorithm named subarray grouping and ambiguity searching (SGAS) is proposed to resolve the angle ambiguity. In the SGAS algorithm, each subarray formed by two pairs of centro-symmetric sensors obtains a batch of results under different ambiguities; by searching for the nearest value among subarrays, which always corresponds to the correct ambiguity, rough angle estimation with no ambiguity is achieved. The unambiguous angles are then employed to resolve phase ambiguity in a phase-based 3-D parameter estimation algorithm, and the source's range, as well as more precise angles, can be obtained. Moreover, to improve the practical performance of SGAS, the optimal structure of the subarrays and subarray selection criteria are further investigated. Simulation results demonstrate the satisfying performance of the proposed method in 3-D source localization.

  18. Towards 3D Noise Source Localization using Matched Field Processing

    NASA Astrophysics Data System (ADS)

    Umlauft, J.; Walter, F.; Lindner, F.; Flores Estrella, H.; Korn, M.

    2017-12-01

    Matched Field Processing (MFP) is an array-processing and beamforming method, initially developed in ocean acoustics, that locates noise sources in range, depth, and azimuth. In this study, we discuss the applicability of MFP to geophysical problems on the exploration scale and its suitability as a monitoring tool for near-surface processes. First, we used synthetic seismograms to analyze the resolution and sensitivity of MFP in a 3-D environment. The inversion shows how localization accuracy is affected by the array design, pre-processing techniques, the velocity model, and the considered wave-field characteristics; from this we formulate guidelines for improved MFP handling. Additionally, we present field datasets acquired in two different environmental settings and in the presence of different source types. Small-scale, dense-aperture arrays (Ø < 1 km) were installed on a natural CO2 degassing field (Czech Republic) and on a glacier site (Switzerland). The located noise sources form distinct three-dimensional zones and channel-like structures (several 100 m depth range), which could be linked to the environmental processes expected at each test site. Furthermore, fast spatio-temporal variations (hours to days) of the source distribution could be successfully monitored.
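
    The core of MFP is matching the data covariance against replica fields computed for candidate source positions. A minimal Bartlett processor in 2-D, using free-space phase replicas only (real applications use full environment Green's functions), can be sketched as follows; the geometry, frequency, and wave speed are illustrative assumptions.

```python
import numpy as np

c, f = 1500.0, 50.0                        # assumed wave speed (m/s), frequency (Hz)
k = 2 * np.pi * f / c
sensors = np.array([[x, 0.0] for x in np.linspace(0, 400, 9)])   # surface line array

def replica(pos):
    """Normalized free-space replica vector for a candidate source position."""
    r = np.linalg.norm(sensors - pos, axis=1)
    v = np.exp(-1j * k * r) / r            # phase delay + spherical spreading
    return v / np.linalg.norm(v)

true_pos = np.array([150.0, 80.0])
d = replica(true_pos)                      # noise-free data snapshot
K = np.outer(d, d.conj())                  # data covariance matrix

# Bartlett power B(x) = w(x)^H K w(x) over a coarse range-depth search grid.
grid = [(x, z) for x in range(0, 401, 10) for z in range(10, 201, 10)]
power = [np.real(replica(np.array(g)).conj() @ K @ replica(np.array(g)))
         for g in grid]
print(grid[int(np.argmax(power))])
```

The grid point whose replica best matches the measured covariance is taken as the source location; repeating the scan over time windows gives the spatio-temporal monitoring the abstract describes.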

  19. Wavelet-based localization of oscillatory sources from magnetoencephalography data.

    PubMed

    Lina, J M; Chowdhury, R; Lemay, E; Kobayashi, E; Grova, C

    2014-08-01

    Transient brain oscillatory activities recorded with electroencephalography (EEG) or magnetoencephalography (MEG) are characteristic features of physiological and pathological processes. This study aims to describe, evaluate, and illustrate with clinical data a new method for localizing the sources of oscillatory cortical activity recorded by MEG. The method combines time-frequency representation and an entropic regularization technique in a common framework, assuming that brain activity is sparse in time and space. Spatial sparsity relies on the assumption that brain activity is organized among cortical parcels; sparsity in time is achieved by transposing the inverse problem into the wavelet representation, for both data and sources. We propose an estimator of the wavelet coefficients of the sources based on the maximum entropy on the mean (MEM) principle. The full dynamics of the sources is obtained from the inverse wavelet transform, and principal component analysis of the reconstructed time courses is applied to extract oscillatory components. This methodology is evaluated using realistic simulations of single-trial signals, combining fast and sudden discharges (spikes) with bursts of oscillating activity. The method is finally illustrated with a clinical application using MEG data acquired on a patient with right orbitofrontal epilepsy.

  20. Waves on the Free Surface Described by Linearized Equations of Hydrodynamics with Localized Right-Hand Sides

    NASA Astrophysics Data System (ADS)

    Dobrokhotov, S. Yu.; Nazaikinskii, V. E.

    2018-01-01

    A linearized system of equations of hydrodynamics with a time-dependent, spatially localized right-hand side, placed on the free surface, on the bottom of the basin, or in the layer of the liquid, is considered in a layer of variable depth with a given basic plane-parallel flow. A method of constructing asymptotic solutions of this problem is suggested; it consists of two stages: (1) reduction of the three-dimensional problem to a two-dimensional inhomogeneous pseudodifferential equation on the unperturbed free surface of the liquid; (2) representation of the localized right-hand side in the form of a Maslov canonical operator on a special Lagrangian manifold, followed by application of a generalization to evolution problems of an approach recently suggested in [A. Yu. Anikin, S. Yu. Dobrokhotov, V. E. Nazaikinskii, and M. Rouleux, Dokl. Ross. Akad. Nauk 475 (6), 624-628 (2017); Engl. transl.: Dokl. Math. 96 (1), 406-410 (2017)] for solving stationary problems with localized right-hand sides, combined with "nonstandard" characteristics. A method of calculation (generalizing long-standing results of Dobrokhotov and Zhevandrov) of an analog of the Kelvin wedge and of the wave fields inside the wedge and in its neighborhood is suggested; it rests on the observation that the wedge is the projection to the extended configuration space of a Lagrangian manifold formed by the trajectories of the Hamiltonian vector field issuing from the intersection of the set of zeros of the extended Hamiltonian of the problem with the conormal bundle to the graph of the vector function defining the trajectory of motion of an equivalent source on the surface of the liquid.

  1. Linear SFM: A hierarchical approach to solving structure-from-motion problems by decoupling the linear and nonlinear components

    NASA Astrophysics Data System (ADS)

    Zhao, Liang; Huang, Shoudong; Dissanayake, Gamini

    2018-07-01

    This paper presents a novel hierarchical approach to solving structure-from-motion (SFM) problems. The algorithm begins with small local reconstructions based on nonlinear bundle adjustment (BA). These are then joined in a hierarchical manner using a strategy that requires solving a linear least squares optimization problem followed by a nonlinear transform. The algorithm can handle ordered monocular and stereo image sequences. Two stereo images or three monocular images are adequate for building each initial reconstruction. The bulk of the computation involves solving a linear least squares problem and, therefore, the proposed algorithm avoids three major issues associated with most of the nonlinear optimization algorithms currently used for SFM: the need for a reasonably accurate initial estimate, the need for iterations, and the possibility of being trapped in a local minimum. Also, by summarizing all the original observations into the small local reconstructions with associated information matrices, the proposed Linear SFM manages to preserve all the information contained in the observations. The paper also demonstrates that the proposed problem formulation results in a sparse structure that leads to an efficient numerical implementation. The experimental results using publicly available datasets show that the proposed algorithm yields solutions that are very close to those obtained using a global BA starting with an accurate initial estimate. The C/C++ source code of the proposed algorithm is publicly available at https://github.com/LiangZhaoPKUImperial/LinearSFM.

  2. Waves on Thin Plates: A New (Energy Based) Method on Localization

    NASA Astrophysics Data System (ADS)

    Turkaya, Semih; Toussaint, Renaud; Kvalheim Eriksen, Fredrik; Lengliné, Olivier; Daniel, Guillaume; Grude Flekkøy, Eirik; Jørgen Måløy, Knut

    2016-04-01

    Noisy acoustic signal localization is a difficult problem with a wide range of applications. We propose a new localization method, applicable to thin plates, based on energy amplitude attenuation and comparison of the inverted source amplitudes. This inversion is tested on synthetic data using a direct model of Lamb wave propagation and on an experimental dataset (recorded with 4 Brüel & Kjær Type 4374 miniature piezoelectric shock accelerometers, 1-26 kHz frequency range). We compare the performance of this technique with classical source localization algorithms: arrival-time localization, time-reversal localization, and localization based on energy amplitude. The experimental setup consists of a glass or plexiglass plate of dimensions 80 cm x 40 cm x 1 cm equipped with four accelerometers and an acquisition card. Signals are generated by a quasi-perpendicular hit of a steel, glass, or polyamide ball (of various sizes) dropped from a height of 2-3 cm onto the plate, and are captured by sensors placed at different locations on the plate. We measure and compare the accuracy of these techniques as a function of sampling rate, dynamic range, array geometry, signal-to-noise ratio, and computational time. We show that this new, very versatile technique works better than conventional techniques over a range of sampling rates from 8 kHz to 1 MHz, and that a decent resolution (3 cm mean error) is possible with very inexpensive equipment. The numerical simulations allow us to track the contributions of different error sources in the different methods. The effect of reflections is also included in our simulation by placing imaginary sources outside the plate boundaries. The proposed method can easily be extended to applications in three-dimensional environments, to monitor industrial activities (e.g., borehole drilling/production activities) or natural brittle systems (e.g., earthquakes, volcanoes, avalanches).
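
    A minimal sketch of the amplitude-attenuation inversion idea (not the authors' exact Lamb-wave model): assuming sensor amplitudes decay as A_i ≈ S / r_i, each candidate position implies per-sensor source amplitudes A_i·r_i, and the position where these implied amplitudes agree best is selected. The plate geometry matches the paper; the decay law is an assumed simplification.

```python
import numpy as np

sensors = np.array([[0.05, 0.05], [0.75, 0.05],
                    [0.75, 0.35], [0.05, 0.35]])   # 80 cm x 40 cm plate
true_src, S = np.array([0.30, 0.22]), 1.0

# Synthetic noiseless amplitudes under the assumed decay law A_i = S / r_i
r_true = np.linalg.norm(sensors - true_src, axis=1)
amps = S / r_true

# Grid search: the implied source amplitudes A_i * r_i agree only at the
# true position, so minimize their relative spread across sensors
xs = np.linspace(0.0, 0.8, 161)
ys = np.linspace(0.0, 0.4, 81)
best, best_pos = np.inf, None
for x in xs:
    for y in ys:
        r = np.linalg.norm(sensors - np.array([x, y]), axis=1)
        s_imp = amps * r                 # implied source amplitudes
        spread = np.std(s_imp) / np.mean(s_imp)
        if spread < best:
            best, best_pos = spread, (x, y)

print("estimated source: (%.3f, %.3f) m" % best_pos)
```

    The same comparison-of-inverted-amplitudes criterion carries over to noisy data, where the minimum spread is no longer zero but still marks the most consistent source position.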

  3. Odometry and Laser Scanner Fusion Based on a Discrete Extended Kalman Filter for Robotic Platooning Guidance

    PubMed Central

    Espinosa, Felipe; Santos, Carlos; Marrón-Romera, Marta; Pizarro, Daniel; Valdés, Fernando; Dongil, Javier

    2011-01-01

    This paper describes a relative localization system used to achieve the navigation of a convoy of robotic units in indoor environments. This positioning system fuses two sensor sources: (a) an odometric system and (b) a laser scanner together with artificial landmarks located on top of the units. The laser source allows one to compensate for the cumulative error inherent in dead reckoning, whereas the odometry source provides less pose uncertainty over short trajectories. A discrete Extended Kalman Filter, customized for this application, is used to accomplish this aim under real-time constraints. Different experimental results with a convoy of Pioneer P3-DX units tracking non-linear trajectories are shown. The paper shows that a simple setup based on low-cost laser range systems and the robots' built-in odometry sensors is able to give a high degree of robustness and accuracy to the relative localization problem of convoy units for indoor applications. PMID:22164079

  4. Odometry and laser scanner fusion based on a discrete extended Kalman Filter for robotic platooning guidance.

    PubMed

    Espinosa, Felipe; Santos, Carlos; Marrón-Romera, Marta; Pizarro, Daniel; Valdés, Fernando; Dongil, Javier

    2011-01-01

    This paper describes a relative localization system used to achieve the navigation of a convoy of robotic units in indoor environments. This positioning system fuses two sensor sources: (a) an odometric system and (b) a laser scanner together with artificial landmarks located on top of the units. The laser source allows one to compensate for the cumulative error inherent in dead reckoning, whereas the odometry source provides less pose uncertainty over short trajectories. A discrete Extended Kalman Filter, customized for this application, is used to accomplish this aim under real-time constraints. Different experimental results with a convoy of Pioneer P3-DX units tracking non-linear trajectories are shown. The paper shows that a simple setup based on low-cost laser range systems and the robots' built-in odometry sensors is able to give a high degree of robustness and accuracy to the relative localization problem of convoy units for indoor applications.
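
    A minimal discrete EKF sketch in the spirit of the paper: noisy odometry drives the prediction step, and a range-bearing observation of one known landmark (standing in for the laser scanner plus artificial landmark) corrects the accumulated dead-reckoning drift. The noise covariances, trajectory, and landmark position are illustrative assumptions, not the Pioneer P3-DX setup.

```python
import numpy as np

def wrap(a):
    """Wrap an angle to (-pi, pi]."""
    return (a + np.pi) % (2 * np.pi) - np.pi

landmark = np.array([4.0, 3.0])          # known landmark position
Q = np.diag([0.02, 0.02, 0.01]) ** 2     # process noise covariance
R = np.diag([0.10, 0.05]) ** 2           # range / bearing noise covariance

x = np.array([0.0, 0.0, 0.0])            # estimated pose (x, y, theta)
P = np.eye(3) * 0.01
truth = x.copy()
rng = np.random.default_rng(1)
dt, v, w = 0.1, 1.0, 0.2                 # commanded speed and turn rate

for _ in range(100):
    # Simulated truth and noisy odometry readings
    truth = truth + dt * np.array(
        [v * np.cos(truth[2]), v * np.sin(truth[2]), w])
    v_m, w_m = v + rng.normal(0, 0.05), w + rng.normal(0, 0.02)

    # EKF predict step driven by odometry
    th = x[2]
    x = x + dt * np.array([v_m * np.cos(th), v_m * np.sin(th), w_m])
    F = np.array([[1, 0, -dt * v_m * np.sin(th)],
                  [0, 1,  dt * v_m * np.cos(th)],
                  [0, 0, 1]])
    P = F @ P @ F.T + Q

    # Noisy range-bearing observation of the landmark
    d_t = landmark - truth[:2]
    z = np.array([np.hypot(d_t[0], d_t[1]),
                  wrap(np.arctan2(d_t[1], d_t[0]) - truth[2])])
    z = z + rng.multivariate_normal([0.0, 0.0], R)

    # EKF update step
    d = landmark - x[:2]
    q = d @ d
    h = np.array([np.sqrt(q), wrap(np.arctan2(d[1], d[0]) - x[2])])
    H = np.array([[-d[0] / np.sqrt(q), -d[1] / np.sqrt(q), 0],
                  [d[1] / q, -d[0] / q, -1]])
    innov = z - h
    innov[1] = wrap(innov[1])
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
    x = x + K @ innov
    x[2] = wrap(x[2])
    P = (np.eye(3) - K @ H) @ P

print("true pose:", truth.round(3), "estimated:", x.round(3))
```

    Without the update step the pose error grows without bound, which is exactly the dead-reckoning drift the landmark observation is fused in to remove.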

  5. Analysis of jet-airfoil interaction noise sources by using a microphone array technique

    NASA Astrophysics Data System (ADS)

    Fleury, Vincent; Davy, Renaud

    2016-03-01

    The paper is concerned with the characterization of jet noise sources and jet-airfoil interaction sources by using microphone array data. The measurements were carried out in the anechoic open-test-section wind tunnel of Onera, Cepra19. The microphone array technique relies on the convected acoustic analogy equation of Lighthill and of Ffowcs Williams and Hawkings. The cross-spectrum of the source term of the analogy equation is sought as the optimal solution to a minimal-error equation using the measured microphone cross-spectra as reference. This inverse problem is, however, ill-posed; a penalty term based on a localization operator is therefore added to improve the recovery of jet noise sources. The analysis of isolated jet noise data in the subsonic regime shows the contribution of the conventional mixing noise source in the low frequency range, as expected, and of uniformly distributed, uncorrelated noise sources in the jet flow at higher frequencies. In the underexpanded supersonic regime, a shock-associated noise source is clearly identified, too. An additional source is detected in the vicinity of the nozzle exit in both supersonic and subsonic regimes. In the presence of the airfoil, the distribution of the noise sources is deeply modified; in particular, a strong noise source is localized on the flap. For Strouhal numbers higher than about 2 (based on the jet mixing velocity and diameter), a significant contribution from the shear layer near the flap is also observed. Indications of acoustic reflections on the airfoil are also discerned.

  6. Multicriteria hierarchical iterative interactive algorithm for organizing operational modes of large heat supply systems

    NASA Astrophysics Data System (ADS)

    Korotkova, T. I.; Popova, V. I.

    2017-11-01

    The generalized mathematical model of decision-making in the problem of planning and selecting operational modes that provide the required heat loads in a large heat supply system is considered. The system is multilevel, decomposed into levels of main and distribution heating networks with intermediate control stages. Evaluation of the effectiveness, reliability, and safety of such a complex system is carried out according to several indicators simultaneously, in particular pressure, flow, and temperature. This global multicriteria optimization problem with constraints is decomposed into a number of local optimization problems and a coordination problem; an agreed solution of the local problems provides a solution to the global multicriteria decision-making problem in the complex system. The choice of the optimal operational mode of a complex heat supply system is made on the basis of an iterative coordination process that converges to the coordinated solution of the local optimization tasks. The interactive principle of multicriteria decision-making includes, in particular, periodic adjustments, if necessary, guaranteeing optimal safety, reliability, and efficiency of the system as a whole during operation. The required accuracy of the solution, for example the permitted deviation of the internal air temperature from the required value, can also be changed interactively. This makes it possible to carry out adjustment activities in the best way and to improve the quality of heat supply to consumers. At the same time, an energy-saving task is solved to determine the minimum required heads at sources and pumping stations.

  7. Reynolds stress of localized toroidal modes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Y.Z.; Mahajan, S.M.

    1995-02-01

    An investigation of the 2D toroidal eigenmode problem reveals the possibility of a new consistent 2D structure, the dissipative BM-II mode. In contrast to the conventional ballooning mode, the new mode is poloidally localized at {pi}/2 (or -{pi}/2), and possesses significant radial asymmetry. The radial asymmetry, in turn, allows the dissipative BM-II to generate considerably larger Reynolds stress as compared to the standard slab drift type modes. It is also shown that a wide class of localized dissipative toroidal modes are likely to be of the dissipative BM-II nature, suggesting that at the tokamak edge, the fluctuation generated Reynolds stress (a possible source of poloidal flow) can be significant.

  8. Combining Radiography and Passive Measurements for Radiological Threat Localization in Cargo

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miller, Erin A.; White, Timothy A.; Jarman, Kenneth D.

    Detecting shielded special nuclear material (SNM) in a cargo container is a difficult problem, since shielding reduces the amount of radiation escaping the container. Radiography provides information that is complementary to that provided by passive gamma-ray detection systems: while not directly sensitive to radiological materials, radiography can reveal highly shielded regions that may mask a passive radiological signal. Combining these measurements has the potential to improve SNM detection, either through improved sensitivity or by providing a solution to the inverse problem to estimate source properties (strength and location). We present a data-fusion method that uses a radiograph to provide an estimate of the radiation-transport environment for gamma rays from potential sources. This approach makes quantitative use of radiographic images without relying on image interpretation, and results in a probabilistic description of likely source locations and strengths. We present results for this method for a modeled test case of a cargo container passing through a plastic-scintillator-based radiation portal monitor and a transmission-radiography system. We find that a radiograph-based inversion scheme allows for localization of a low-noise source placed randomly within the test container to within 40 cm, compared to 70 cm for triangulation alone, while strength estimation accuracy is improved by a factor of six. Improvements are seen in regions of both high and low shielding, but are most pronounced in highly shielded regions. The approach proposed here combines transmission and emission data in a manner that has not been explored in the cargo-screening literature, advancing the ability to accurately describe a hidden source based on currently-available instrumentation.

  9. A sparse equivalent source method for near-field acoustic holography.

    PubMed

    Fernandez-Grande, Efren; Xenaki, Angeliki; Gerstoft, Peter

    2017-01-01

    This study examines a near-field acoustic holography method consisting of a sparse formulation of the equivalent source method, based on the compressive sensing (CS) framework. The method, denoted Compressive-Equivalent Source Method (C-ESM), encourages spatially sparse solutions (based on the superposition of few waves) that are accurate when the acoustic sources are spatially localized. The importance of obtaining a non-redundant representation, i.e., a sensing matrix with low column coherence, and the inherent ill-conditioning of near-field reconstruction problems is addressed. Numerical and experimental results on a classical guitar and on a highly reactive dipole-like source are presented. C-ESM is valid beyond the conventional sampling limits, making wide-band reconstruction possible. Spatially extended sources can also be addressed with C-ESM, although in this case the obtained solution does not recover the spatial extent of the source.
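
    The sparse equivalent-source idea can be sketched with an l1-regularized least-squares recovery: pressure on a measurement line is modeled as a superposition of monopole sources on a retracted grid, and ISTA (iterative soft thresholding) recovers a spatially sparse coefficient vector. The geometry, frequency, and regularization weight are illustrative assumptions, and ISTA is one generic compressive-sensing solver, not necessarily the authors' choice.

```python
import numpy as np

rng = np.random.default_rng(2)
k = 2 * np.pi * 1000 / 343            # wavenumber at 1 kHz in air

grid = np.linspace(-0.5, 0.5, 60)     # candidate equivalent sources (z = 0)
mics = np.linspace(-0.4, 0.4, 24)     # measurement line at z = 0.1 m

def transfer(src_x, mic_x, dz=0.1):
    """Monopole transfer matrix from source grid to microphone line."""
    r = np.hypot(mic_x[:, None] - src_x[None, :], dz)
    return np.exp(-1j * k * r) / (4 * np.pi * r)

A = transfer(grid, mics)
q_true = np.zeros(len(grid), complex)
q_true[[15, 44]] = [1.0, 0.8]         # two spatially localized sources
p = A @ q_true
p = p + 0.01 * np.linalg.norm(p) / np.sqrt(len(p)) * (
    rng.standard_normal(len(p)) + 1j * rng.standard_normal(len(p)))

# ISTA for min ||A q - p||^2 + lam ||q||_1:
#   q <- soft_threshold(q + (1/L) A^H (p - A q), lam / L)
L = np.linalg.norm(A, 2) ** 2         # Lipschitz constant of the gradient
lam = 0.01 * np.max(np.abs(A.conj().T @ p))
q = np.zeros(len(grid), complex)
for _ in range(2000):
    g = q + (A.conj().T @ (p - A @ q)) / L
    mag = np.abs(g)
    q = np.where(mag > lam / L, (1 - lam / (L * mag + 1e-30)) * g, 0)

left = int(np.argmax(np.abs(q[:30])))
right = 30 + int(np.argmax(np.abs(q[30:])))
print("recovered peaks near grid indices:", left, right)
```

    With a coherent (fine) source grid the l1 solution may spread slightly onto neighboring cells, which is why the recovered peaks are checked against the true indices with a small tolerance rather than exactly.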

  10. Geometric k-nearest neighbor estimation of entropy and mutual information

    NASA Astrophysics Data System (ADS)

    Lord, Warren M.; Sun, Jie; Bollt, Erik M.

    2018-03-01

    Nonparametric estimation of mutual information is used in a wide range of scientific problems to quantify dependence between variables. The k-nearest neighbor (knn) methods are consistent, and therefore expected to work well for a large sample size. These methods use geometrically regular local volume elements. This practice allows maximum localization of the volume elements, but can also induce a bias due to a poor description of the local geometry of the underlying probability measure. We introduce a new class of knn estimators that we call geometric knn estimators (g-knn), which use more complex local volume elements to better model the local geometry of the probability measures. As an example of this class of estimators, we develop a g-knn estimator of entropy and mutual information based on elliptical volume elements, capturing the local stretching and compression common to a wide range of dynamical system attractors. A series of numerical examples in which the thickness of the underlying distribution and the sample sizes are varied suggest that local geometry is a source of problems for knn methods such as the Kraskov-Stögbauer-Grassberger estimator when local geometric effects cannot be removed by global preprocessing of the data. The g-knn method performs well despite the manipulation of the local geometry. In addition, the examples suggest that the g-knn estimators can be of particular relevance to applications in which the system is large, but the data size is limited.
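
    For contrast with the g-knn idea, here is the classic Kozachenko-Leonenko knn entropy estimator, which uses geometrically regular (Euclidean-ball) local volume elements; the g-knn estimator described above replaces these with ellipsoids fitted to the local geometry. It is checked on a 2-D standard Gaussian, whose differential entropy log(2*pi*e) is known in closed form.

```python
import numpy as np
from scipy.spatial import cKDTree
from scipy.special import digamma, gamma

def knn_entropy(x, k=4):
    """Kozachenko-Leonenko knn estimate of differential entropy (nats)."""
    n, d = x.shape
    # Distance to the k-th neighbor (column 0 of query is the point itself)
    eps = cKDTree(x).query(x, k + 1)[0][:, -1]
    log_cd = (d / 2) * np.log(np.pi) - np.log(gamma(d / 2 + 1))  # unit-ball volume
    return digamma(n) - digamma(k) + log_cd + d * np.mean(np.log(eps))

rng = np.random.default_rng(3)
x = rng.normal(0.0, 1.0, size=(5000, 2))
est = knn_entropy(x)
true = np.log(2 * np.pi * np.e)       # entropy of the 2-D standard Gaussian
print(f"estimated {est:.3f}, true {true:.3f}")
```

    An isotropic Gaussian is the friendly case for ball-shaped volume elements; the abstract's point is that strongly stretched distributions (thin attractors) are where this estimator degrades and the elliptical g-knn elements pay off.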

  11. What gets measured gets done: an assessment of local data uses and needs in large urban health departments.

    PubMed

    Castrucci, Brian C; Rhoades, Elizabeth K; Leider, Jonathon P; Hearne, Shelley

    2015-01-01

    The epidemiologic shift in the leading causes of mortality from infectious disease to chronic disease has created significant challenges for public health surveillance at the local level. We describe how the largest US city health departments identify and use data to inform their work, and we identify the data and information that local public health leaders have specified as being necessary to better address specific problems in their communities. We used a mixed-methods design that included key informant interviews as well as a smaller embedded survey to quantify organizational characteristics related to data capacity. Interview data were independently coded and analyzed for major themes around data needs, barriers, and achievements. Participants were 45 public health leaders, drawn from each of 3 specific positions (local health official, chief of policy, and chief science or medical officer) in 16 large urban health departments. These leaders reported that timely data, and data on chronic disease available at smaller geographic units, are difficult to obtain without additional resources. Despite departments' successes in creating ad hoc sources of local data to effect policy change, all participants described the need for more timely data that could be geocoded at a neighborhood or census-tract level to target their resources more effectively. Electronic health records, claims data, and hospital discharge data were identified as sources that could augment the data currently available to local public health leaders. Monitoring the status of community health indicators and using the information to identify priority issues are core functions of all public health departments, and public health professionals must have access to timely "hyperlocal" data to detect trends, allocate resources to areas of greatest priority, and measure the effectiveness of interventions. Although the largest local health departments in large urban areas have established some methods to obtain local data on chronic disease, leaders recognize an urgent need for more timely and more geographically specific data at the neighborhood or census-tract level to address the most pressing problems in public health efficiently and effectively.

  12. The Approximate Bayesian Computation methods in the localization of the atmospheric contamination source

    NASA Astrophysics Data System (ADS)

    Kopka, P.; Wawrzynczak, A.; Borysiewicz, M.

    2015-09-01

    In many areas of application, a central problem is the solution of an inverse problem, especially the estimation of unknown model parameters so as to model the underlying dynamics of a physical system precisely. In this situation, Bayesian inference is a powerful tool for combining observed data with prior knowledge to obtain the probability distribution of the searched-for parameters. We have applied the methodology named Sequential Approximate Bayesian Computation (S-ABC) to the problem of tracing an atmospheric contaminant source. ABC is a technique commonly used in the Bayesian analysis of complex models and dynamical systems; sequential methods can significantly increase its efficiency. In the presented algorithm, the input data are the online-arriving concentrations of the released substance registered by a distributed sensor network from the OVER-LAND ATMOSPHERIC DISPERSION (OLAD) experiment. The algorithm outputs are the probability distributions of the contamination source parameters, i.e., its location, release rate, speed and direction of movement, start time, and duration. The stochastic approach presented in this paper is completely general and can be used in other fields where the parameters of a model best fitted to observable data are to be found.
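
    The rejection flavor of ABC (a deliberate simplification of the sequential scheme in the abstract) can be sketched as follows: draw candidate source parameters from a prior, simulate sensor concentrations with a toy dispersion model, and keep the draws whose simulated readings lie closest to the observations. The forward model, priors, and acceptance rule are illustrative assumptions, not the OLAD configuration.

```python
import numpy as np

rng = np.random.default_rng(4)
sensors = rng.uniform(0.0, 10.0, size=(12, 2))   # sensor network layout

def forward(src, q):
    """Toy dispersion model: concentration decays with squared distance."""
    r2 = np.sum((sensors - src) ** 2, axis=1)
    return q / (1.0 + r2)

true_src, true_q = np.array([6.0, 3.5]), 5.0
obs = forward(true_src, true_q) * (1.0 + 0.05 * rng.standard_normal(12))

# ABC rejection: keep the prior draws whose simulated readings are
# closest (in Euclidean distance) to the observed concentrations
n_draws = 20000
src_draws = rng.uniform(0.0, 10.0, size=(n_draws, 2))
q_draws = rng.uniform(0.1, 10.0, size=n_draws)
dist = np.array([np.linalg.norm(forward(s, q) - obs)
                 for s, q in zip(src_draws, q_draws)])
keep = np.argsort(dist)[:200]                    # accept the best 1%
post_src, post_q = src_draws[keep].mean(axis=0), q_draws[keep].mean()
print("posterior mean location:", post_src.round(2), "rate: %.2f" % post_q)
```

    The accepted draws approximate the posterior; the sequential (S-ABC) refinement replaces this one-shot acceptance with a sequence of shrinking tolerances, reusing good draws to cut the simulation cost.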

  13. Problems and Alternatives in Capital Financing for Minnesota Elementary and Secondary Schools.

    ERIC Educational Resources Information Center

    Hopeman, Alan R.

    The primary sources of capital funds in Minnesota are the local capital expenditure levy and school district bond sales. The state provides assistance to low-wealth districts by providing a capital expenditure equalization aid program and two types of loans under the Maximum Effort School Aid Law. It has been argued that the concepts of equal…

  14. Pinpointing Watershed Pollution on a Virtual Globe

    ERIC Educational Resources Information Center

    Saunders, Cheston; Taylor, Amy

    2014-01-01

    Pollution is not a problem we just read about anymore. It affects the air we breathe, the land we live on, and the water we consume. After noticing a lack of awareness in students, a lesson was developed that used Google Earth to pinpoint sources of pollution in the local area and in others across the country, and their effects on the surrounding…

  15. Integrated scheduling and resource management. [for Space Station Information System

    NASA Technical Reports Server (NTRS)

    Ward, M. T.

    1987-01-01

    This paper examines the problem of integrated scheduling during the Space Station era. Scheduling for Space Station entails coordinating the support of many distributed users who are sharing common resources and pursuing individual, sometimes conflicting, objectives. This paper compares the scheduling integration problems of current missions with those anticipated for the Space Station era, and examines the facilities and the proposed operations environment for Space Station. It concludes that the pattern of interdependencies among the users and facilities, which is the source of the integration problem, is well structured, allowing the larger problem to be divided into smaller ones. It proposes an architecture that supports integrated scheduling by scheduling efficiently at local facilities as a function of dependencies with other facilities of the program. A prototype being developed to demonstrate this integration concept is described.

  16. Village-Level Identification of Nitrate Sources: Collaboration of Experts and Local Population in Benin, Africa

    NASA Astrophysics Data System (ADS)

    Crane, P.; Silliman, S. E.; Boukari, M.; Atoro, I.; Azonsi, F.

    2005-12-01

    Deteriorating groundwater quality, as represented by high nitrates, in the Colline province of Benin, West Africa, was identified by the Benin national water agency, Direction Hydraulique. For unknown reasons the Colline province had consistently higher nitrate levels than any other region of the country. In an effort to address this water quality issue, a collaborative team was created that incorporated professionals from the Universite d'Abomey-Calavi (Benin), the University of Notre Dame (USA), Direction Hydraulique (a government water agency in Benin), Centre Afrika Obota (an educational NGO in Benin), and the local population of the village of Adourekoman. The goals of the project were to: (i) identify the source of nitrates, (ii) test field techniques for long-term, local monitoring, and (iii) identify possible solutions to the high levels of groundwater nitrates. In order to accomplish these goals, the following methods were utilized: regional sampling of groundwater quality, field methods that allowed the local population to regularly monitor village groundwater quality, isotopic analysis, and sociological methods of surveys, focus groups, and observations. It is through the combination of these multi-disciplinary methods that all three goals were successfully addressed, leading to preliminary identification of the sources of nitrates in the village of Adourekoman, confirmation of the utility of the field techniques, and an initial assessment of possible solutions to the contamination problem.

  17. Participatory health impact assessment for the development of local government regulation on hazard control

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Inmuong, Uraiwan, E-mail: uraiwan@kku.ac.t; Faculty of Public Health, Khon Kaen University, Thailand 123 Mittrapharb Road, Khon Kaen 40002; Rithmak, Panee, E-mail: panrit@kku.ac.t

    The Thai Public Health Act 1992 required Thai local governments to issue regulations to control any activities posing possible health hazards, from both commercial and noncommercial sources. Since 1999, power has been decentralized to a new form of local government, namely the Sub-district Administrative Organization (SAO). The SAO is a small-scale local governing structure whose legitimate function is community services, including the control of activities with health impacts. Most elected SAO administrators and officers are new and have little experience with any public health code of practice, particularly health-hazard control. This action research attempted to introduce and apply a participatory health impact assessment (HIA) tool for the development of SAO health-hazard control regulation. The study sites were the Ban Meang and Kok See SAOs, Khon Kaen Province, Thailand, and all intervention activities were conducted during May 2005-April 2006. A set of cooperative activities between researchers and community representatives were planned and organized: surveying and identifying the places and services causing local environmental health problems, organizing community participatory workshops for drafting and proposing the health-hazard control regulation, and establishing appropriate practices for health-hazard control measures. This action research ultimately enabled the SAO administrators and officers to understand local environment-related health problems, and to develop the health-hazard control regulation imposed for the local community.

  18. Reconstructing source terms from atmospheric concentration measurements: Optimality analysis of an inversion technique

    NASA Astrophysics Data System (ADS)

    Turbelin, Grégory; Singh, Sarvesh Kumar; Issartel, Jean-Pierre

    2014-12-01

    In the event of an accidental or intentional contaminant release in the atmosphere, it is imperative, for managing emergency response, to diagnose the release parameters of the source from measured data. Reconstruction of the source information exploiting measured data is called an inverse problem. To solve such a problem, several techniques are currently being developed. The first part of this paper provides a detailed description of one of them, known as the renormalization method. This technique, proposed by Issartel (2005), has been derived using an approach different from that of standard inversion methods and gives a linear solution to the continuous Source Term Estimation (STE) problem. In the second part of this paper, the discrete counterpart of this method is presented. By using matrix notation, common in data assimilation and suitable for numerical computing, it is shown that the discrete renormalized solution belongs to a family of well-known inverse solutions (minimum weighted norm solutions), which can be computed by using the concept of generalized inverse operator. It is shown that, when the weight matrix satisfies the renormalization condition, this operator satisfies the criteria used in geophysics to define good inverses. Notably, by means of the Model Resolution Matrix (MRM) formalism, we demonstrate that the renormalized solution fulfils optimal properties for the localization of single point sources. Throughout the article, the main concepts are illustrated with data from a wind tunnel experiment conducted at the Environmental Flow Research Centre at the University of Surrey, UK.
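
    The discrete solution family the paper places the renormalized estimate in can be sketched directly: for an underdetermined linear model mu = A s (few measurements, many candidate source cells), the minimum weighted norm solution s = W^{-1} A^T (A W^{-1} A^T)^{-1} mu reproduces the data exactly and has the smallest weighted norm s^T W s among all solutions. The sensitivity matrix and the diagonal weight choice below are illustrative assumptions, not the renormalized weights of Issartel (2005).

```python
import numpy as np

rng = np.random.default_rng(5)
n_obs, n_cells = 8, 200
A = np.abs(rng.standard_normal((n_obs, n_cells)))   # sensitivity of each
                                                    # detector to each cell
s_true = np.zeros(n_cells)
s_true[137] = 2.0                 # a single point source
mu = A @ s_true                   # noiseless measurement vector

# A diagonal weight matrix (an assumed choice for illustration)
wdiag = A.sum(axis=0)
W = np.diag(1.0 / wdiag)
Winv = np.diag(wdiag)

# Minimum weighted norm solution: argmin s^T W s subject to A s = mu
s_hat = Winv @ A.T @ np.linalg.solve(A @ Winv @ A.T, mu)

print("reproduces the data:", np.allclose(A @ s_hat, mu))
print("weighted norms (estimate, true source):",
      s_hat @ W @ s_hat, s_true @ W @ s_true)
```

    By construction the estimate's weighted norm cannot exceed that of any other solution, including the true point source; the renormalization condition on W is what additionally makes the retrieved field peak sharply at single point sources.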

  19. Locating multiple diffusion sources in time varying networks from sparse observations.

    PubMed

    Hu, Zhao-Long; Shen, Zhesi; Cao, Shinan; Podobnik, Boris; Yang, Huijie; Wang, Wen-Xu; Lai, Ying-Cheng

    2018-02-08

    Data-based source localization in complex networks has a broad range of applications. Despite recent progress, locating multiple diffusion sources in time-varying networks remains an outstanding problem. Bridging structural observability and sparse signal reconstruction theories, we develop a general framework to locate diffusion sources in time-varying networks based solely on sparse data from a small set of messenger nodes. A general finding is that large-degree nodes produce more valuable information than small-degree nodes, a result that contrasts with the static-network case. Choosing large-degree nodes as the messengers, we find that sparse observations from a few such nodes are often sufficient for any number of diffusion sources to be located, for a variety of model and empirical networks. Counterintuitively, sources in more rapidly varying networks can be identified more readily with fewer required messenger nodes.

  20. Getting it Right? Lessons from the Interwar Years on Pulmonary Tuberculosis Control in England and Wales

    PubMed Central

    Bowden, Sue; Sadler, Alex

    2015-01-01

    This paper examines morbidity and mortality patterns in interwar England and Wales, using previously under-explored primary archival source materials. These materials help us understand not only what local authorities could and did do, but also the reasons for the marked variations in the ability of different authorities to manage the problem. We identify where and why there were problems, and also how and why some authorities were more successful than others in dealing with the disease. Wealth was not an issue. We find that a combination of pro-active preventative measures was significant. PMID:25498440

  1. Local extinction of dragonfly and damselfly populations in low- and high-quality habitat patches.

    PubMed

    Suhonen, Jukka; Hilli-Lukkarinen, Milla; Korkeamäki, Esa; Kuitunen, Markku; Kullas, Johanna; Penttinen, Jouni; Salmela, Jukka

    2010-08-01

    Understanding the risk of extinction of a single population is an important problem in both theoretical and applied ecology. Local extinction risk depends on several factors, including population size, demographic or environmental stochasticity, natural catastrophe, or the loss of genetic diversity. The probability of local extinction may also be higher in low-quality sink habitats than in high-quality source habitats. We tested this hypothesis by comparing local extinction rates of 15 species of Odonata (dragonflies and damselflies) between 1930-1975 and 1995-2003 in central Finland. Local extinction rates were higher in low-quality than in high-quality habitats. Nevertheless, for the three most common species there were no differences in extinction rates between low- and high-quality habitats. Our results suggest that a good understanding of habitat quality is crucial for the conservation of species in heterogeneous landscapes.

  2. Lunar occultations for gamma-ray source measurements

    NASA Technical Reports Server (NTRS)

    Koch, David G.; Hughes, E. B.; Nolan, Patrick L.

    1990-01-01

    The unambiguous association of discrete gamma-ray sources with objects radiating at other wavelengths, the separation of discrete sources from the extended emission within the Galaxy, the mapping of gamma-ray emission from nearby galaxies and the measurement of structure within a discrete source cannot presently be accomplished at gamma-ray energies. In the past, the detection processes used in high-energy gamma-ray astronomy have not allowed for good angular resolution. This problem can be overcome by placing gamma-ray detectors on the moon and using the horizon as an occulting edge to achieve arcsec resolution. For purposes of discussion, this concept is examined for gamma rays above 100 MeV for which pair production dominates the detection process and locally-generated nuclear gamma rays do not contribute to the background.

  3. Breaking the mould without breaking the system: the development and pilot of a clinical dashboard at The Prince Charles Hospital.

    PubMed

    Clark, Kevin W; Whiting, Elizabeth; Rowland, Jeffrey; Thompson, Leah E; Missenden, Ian; Schellein, Gerhard

    2013-06-01

    There is a vast array of clinical and quality data available within healthcare organisations. The availability of this data in a timely and easily visualised form is an essential component of high-performing healthcare teams. It is recognised that good quality information is a driver of performance for clinical teams and helps ensure the best possible care for patients. In 2012 the Internal Medicine Program at The Prince Charles Hospital developed a clinical dashboard that displays locally relevant information alongside relevant hospital and statewide metrics that inform daily clinical decision making. The data reported on the clinical dashboard are sourced in real time from the electronic patient journey board, as well as from other Queensland Health data sources. This provides clinicians with easy access to a wealth of locally captured unit data, presented in a simple graphical format and arranged on a single screen so the information can be monitored at a glance. Local unit data informs daily decisions that identify and confirm patient flow problems, assists in identifying root causes, and enables evaluation of patient flow solutions.

  4. Gas-Solid Dynamics at Disordered and Adsorbate Covered Surfaces

    DTIC Science & Technology

    1992-09-02

    interesting physical problems in which non-linear reactions occur at localized defects. The Lotka-Volterra system is considered, in which the source, sink...designing external optical fields for manipulating molecular scale events. A general formulation of the theory was developed, for treating rotational...interrelated avenues of study were pursued. The goals of the research were achieved, thereby producing a general theoretical framework for both optimal

  5. Galactic cosmic ray composition

    NASA Technical Reports Server (NTRS)

    Meyer, J. P.

    1986-01-01

    An assessment is given of the galactic cosmic ray source (GCRS) elemental composition and its correlation with first ionization potential. The isotopic composition of heavy nuclei; spallation cross sections; energy spectra of primary nuclei; electrons; positrons; local galactic reference abundances; comparison of solar energetic particle and solar coronal compositions; the hydrogen, lead, nitrogen, helium, and germanium deficiency problems; and the excess of elements are among the topics covered.

  6. Passive electrical monitoring and localization of fluid leakages from wells

    NASA Astrophysics Data System (ADS)

    Revil, A.; Mao, D.; Haas, A. K.; Karaoulis, M.; Frash, L.

    2015-02-01

    Electrokinetic phenomena are a class of cross-coupling phenomena involving the relative displacement between the pore water (together with the electrical diffuse layer) with respect to the solid phase of a porous material. We demonstrate that electrical fields of electrokinetic nature can be associated with fluid leakages from wells. These leakages can be remotely monitored and the resulting signals used to localize their causative source distribution both in the laboratory and in field conditions. The first laboratory experiment (Experiment #1) shows how these electrical fields can be recorded at the surface of a cement block during the leakage of a brine from a well. The measurements were performed with a research-grade medical electroencephalograph and were inverted using a genetic algorithm to localize the causative source of electrical current and therefore, localize the leak in the block. Two snapshots of electrical signals were used to show how the leak evolved over time. The second experiment (Experiment #2) was performed to see if we could localize a pulse water injection from a shallow well in field conditions in the case of a heterogeneous subsurface. We used the same equipment as in Experiment #1 and processed the data with a trend removal algorithm, picking the amplitude from 24 receiver channels just after the water injection. The amplitude of the electric signals changed from the background level indicating that a volume of water was indeed flowing inside the well into the surrounding soil and then along the well. We used a least-square inversion algorithm to invert a snapshot of the electrical potential data at the injection time to localize the source of the self-potential signals. The inversion results show positive potential anomalies in the vicinity of the well. 
For both experiments, forward numerical simulations of the problem using a finite element package were performed in order to assess the underlying physics of the causative source of the observed electrical potential anomalies and how they are related to the flow of the water phase.

  7. Arcsec source location measurements in gamma-ray astronomy from a lunar observatory

    NASA Astrophysics Data System (ADS)

    Koch, D. G.; Hughes, B. E.

    1990-03-01

    The physical processes typically used in the detection of high energy gamma-rays do not permit good angular resolution, which makes difficult the unambiguous association of discrete gamma-ray sources with objects emitting at other wavelengths. This problem can be overcome by placing gamma-ray detectors on the moon and using the horizon as an occulting edge to achieve arcsec resolution. For the purpose of discussion, this concept is examined for gamma rays above about 20 MeV for which pair production dominates the detection process and locally-generated nuclear gamma rays do not contribute to the background.

  8. Local people's attitudes towards conservation and wildlife tourism around Sariska Tiger Reserve, India.

    PubMed

    Udaya Sekhar, Nagothu

    2003-12-01

    Conservationists in recent years have viewed local people's support for protected-area management as an important element of biodiversity conservation. This is often linked to the direct benefits that local communities get from the protected areas. These benefits could be in the form of biomass resources, park funds diverted to local villages by state agencies, and revenue from wildlife tourism. Very few studies have attempted to examine the direct relationship between benefits from wildlife tourism and local support for conservation. In India, wildlife tourism is restricted and mostly controlled by state and private agencies. Wildlife conservation policy does not view tourism in protected areas as a source of revenue for the local communities. The present study examines local people's attitudes towards wildlife tourism and the impact of benefits from tourism on local support for Sariska Tiger Reserve (STR), India. STR is a flagship protected area that is increasingly visited by tourists and where local support for wildlife tourism has not been studied adequately. Results indicate that two-thirds of the respondents were positive towards tourism and supported conservation. The respondents were aware that more tourism benefits are possible from a well-conserved protected area. There appears to be a correlation between the benefits local people obtain from wildlife tourism and other sources and their support for the protected area's existence, suggesting that benefits shape people's attitudes towards conservation. Some of the main problems are the unequal distribution of tourism benefits and the lack of local involvement in tourism and development. These issues need to be clearly addressed so that protected areas may gain the support of local people, which may lead to sustainable development.

  9. Seasonal water demand in Benin's agriculture.

    PubMed

    Gruber, Ina; Kloos, Julia; Schopp, Marion

    2009-01-01

    This paper describes and analyzes agricultural water demands for Benin, West Africa. Official statistical data on water quantities, as well as knowledge of the factors influencing water demand, are extremely rare and often reveal only national trends without considering regional or local differences. Policy makers therefore usually work with estimated and aggregated data, which makes it very difficult to adequately address regional and local development goals. Within the framework of an interdisciplinary analysis, this paper provides insight into water quantification and identifies seasonal water problems in agriculture according to regional differences. Following the definition of the Food and Agriculture Organization [FAO, 1995. Water Report 7. Irrigation in Africa in Figures. Rome], agriculture is divided into irrigation and livestock watering, which were analyzed using different field methods. The study reveals that although water supply in absolute terms seems to be sufficient in Benin, seasonal water problems occur both in irrigation and in livestock management. The seasonal water problems that arise are thus not a consequence of general water scarcity but are linked to three major issues: difficulties with farmers' technical equipment and financial means; the specific local conditions influencing access to water sources and the extraction of groundwater; and the overall weak organizational structure of water management. Regional differences, as well as a general improvement of knowledge on better management structures, technical know-how, and access to credit for farmers, therefore need to be considered in national strategies in order to improve agricultural water usage in Benin.

  10. Local censuses in the 18th century.

    PubMed

    Law, C M

    1969-03-01

    Recent work on the population problems of the eighteenth century has been mainly based on the use of parish records. Another source, and one which, surprisingly, has received little attention, is the local census. These are more numerous than is generally realised and can be of great use in demographic studies. This paper examines 125 local censuses, mainly taken in urban areas. They are discussed in terms of how they came to be taken, their reliability, extant manuscript material, and their contents. Whilst most of the censuses confine themselves to basic facts such as total population, number of houses, and number of families, some give details of sex, age, marital status, and occupation. Generally the information is given for the parish or local administrative unit, but in a few instances it is available by streets.

  11. Our environment, our health: a community-based participatory environmental health survey in Richmond, California.

    PubMed

    Cohen, Alison; Lopez, Andrea; Malloy, Nile; Morello-Frosch, Rachel

    2012-04-01

    This study presents a health survey conducted by a community-based participatory research partnership between academic researchers and community organizers to consider environmental health and environmental justice issues in four neighborhoods of Richmond, California, a low-income community of color living along the fence line of a major oil refinery and near other industrial and mobile sources of pollution. The Richmond health survey aimed to assess local concerns and perceptions of neighborhood conditions, health problems, mobile and stationary hazards, access to health care, and other issues affecting residents of Richmond. Although respondents thought their neighborhoods were good places to live, they expressed concerns about neighborhood stressors and particular sources of pollution, and identified elevated asthma rates for children and long-time Richmond residents. The Richmond health survey offers a holistic, community-centered perspective to understanding local environmental health issues, and can inform future environmental health research and organizing efforts for community-university collaboratives.

  12. Source localization of brain activity using helium-free interferometer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dammers, Jürgen, E-mail: J.Dammers@fz-juelich.de; Chocholacs, Harald; Eich, Eberhard

    2014-05-26

    To detect the extremely small magnetic fields generated by the human brain, all commercial magnetoencephalography (MEG) systems are currently equipped with low-Tc superconducting quantum interference device (SQUID) sensors that use liquid helium for cooling. The limited and increasingly expensive supply of helium, which has seen dramatic price increases recently, has become a real problem for such systems, and the situation shows no signs of abating; MEG research is endangered in the long run. In this study, we report MEG source localization utilizing a single, highly sensitive SQUID cooled with liquid nitrogen only. Our findings confirm that localization of neuromagnetic activity is indeed possible using high-Tc SQUIDs. We believe that our findings secure the future of this exquisitely sensitive technique and have major implications for brain research and the development of cost-effective multi-channel, high-Tc SQUID-based MEG systems.

  13. Infrared and visible image fusion method based on saliency detection in sparse domain

    NASA Astrophysics Data System (ADS)

    Liu, C. H.; Qi, Y.; Ding, W. R.

    2017-06-01

    Infrared and visible image fusion is a key problem in the field of multi-sensor image fusion. To better preserve the significant information of the infrared and visible images in the final fused image, saliency maps of the source images are introduced into the fusion procedure. First, under the framework of the joint sparse representation (JSR) model, global and local saliency maps of the source images are obtained from the sparse coefficients. Then, a saliency detection model is proposed that combines the global and local saliency maps to generate an integrated saliency map. Finally, a weighted fusion algorithm based on the integrated saliency map is developed to carry out the fusion. The experimental results show that our method is superior to state-of-the-art methods in terms of several universal quality evaluation indexes, as well as in visual quality.

  14. Olfaction and Hearing Based Mobile Robot Navigation for Odor/Sound Source Search

    PubMed Central

    Song, Kai; Liu, Qi; Wang, Qi

    2011-01-01

    Bionic technology provides a new elicitation for mobile robot navigation since it explores the way to imitate biological senses. In the present study, the challenging problem was how to fuse different biological senses and guide distributed robots to cooperate with each other for target searching. This paper integrates smell, hearing and touch to design an odor/sound tracking multi-robot system. The olfactory robot tracks the chemical odor plume step by step through information fusion from gas sensors and airflow sensors, while two hearing robots localize the sound source by time delay estimation (TDE) and the geometrical position of microphone array. Furthermore, this paper presents a heading direction based mobile robot navigation algorithm, by which the robot can automatically and stably adjust its velocity and direction according to the deviation between the current heading direction measured by magnetoresistive sensor and the expected heading direction acquired through the odor/sound localization strategies. Simultaneously, one robot can communicate with the other robots via a wireless sensor network (WSN). Experimental results show that the olfactory robot can pinpoint the odor source within the distance of 2 m, while two hearing robots can quickly localize and track the olfactory robot in 2 min. The devised multi-robot system can achieve target search with a considerable success ratio and high stability. PMID:22319401
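    Time delay estimation between two microphone signals, as used by the hearing robots, is commonly done by locating the peak of the cross-correlation. A minimal sketch with a synthetic broadband signal and a hypothetical integer-sample delay (real systems add interpolation and generalized cross-correlation weighting):

```python
import numpy as np

fs = 8000                        # sample rate (Hz), illustrative
delay_samples = 37               # hypothetical inter-microphone delay
rng = np.random.default_rng(5)
sig = rng.normal(size=2000)      # broadband "sound source"

mic1 = sig
mic2 = np.concatenate([np.zeros(delay_samples), sig[:-delay_samples]])

# time delay estimation via the cross-correlation peak
corr = np.correlate(mic2, mic1, mode="full")
lag = int(np.argmax(corr)) - (len(mic1) - 1)
tau = lag / fs                   # delay in seconds

assert lag == delay_samples
```

Given delays at two or more microphone pairs and the array geometry, the bearing to the source follows from simple trigonometry.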

  15. Antenna Deployment for the Localization of Partial Discharges in Open-Air Substations

    PubMed Central

    Robles, Guillermo; Fresno, José Manuel; Sánchez-Fernández, Matilde; Martínez-Tarifa, Juan Manuel

    2016-01-01

    Partial discharges are ionization processes inside or on the surface of dielectrics that can unveil insulation problems in electrical equipment. The charge accumulated is released under certain environmental and voltage conditions attacking the insulation both physically and chemically. The final consequence of a continuous occurrence of these events is the breakdown of the dielectric. The electron avalanche provokes a derivative of the electric field with respect to time, creating an electromagnetic impulse that can be detected with antennas. The localization of the source helps in the identification of the piece of equipment that has to be decommissioned. This can be done by deploying antennas and calculating the time difference of arrival (TDOA) of the electromagnetic pulses. However, small errors in this parameter can lead to great displacements of the calculated position of the source. Usually, four antennas are used to find the source but the array geometry has to be correctly deployed to have minimal errors in the localization. This paper demonstrates, by an analysis based on simulation and also experimentally, that the most common layouts are not always the best options and proposes a simple antenna layout to reduce the systematic error in the TDOA calculation due to the positions of the antennas in the array. PMID:27092501
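    The effect of antenna layout can be explored numerically: for each candidate source position, the predicted TDOAs are compared with the measured ones. A brute-force sketch of TDOA-based localization with four antennas, using an idealized noise-free 2-D geometry and hypothetical coordinates:

```python
import numpy as np

c = 3e8                                       # propagation speed (m/s)
antennas = np.array([[0.0, 0.0], [3.0, 0.0],  # hypothetical square layout
                     [0.0, 3.0], [3.0, 3.0]])
source = np.array([1.2, 2.1])                 # true (unknown) PD source

# Simulated TDOAs relative to the first antenna
toa = np.linalg.norm(antennas - source, axis=1) / c
tdoa = toa[1:] - toa[0]

# Grid search: pick the point whose predicted TDOAs best match the measured ones
xs = np.linspace(0.0, 3.0, 301)
grid = np.stack(np.meshgrid(xs, xs, indexing="ij"), axis=-1).reshape(-1, 2)
dists = np.linalg.norm(grid[:, None, :] - antennas[None, :, :], axis=2)
t = dists / c
err = np.sum((t[:, 1:] - t[:, :1] - tdoa) ** 2, axis=1)
best = grid[np.argmin(err)]

assert np.allclose(best, source, atol=1e-6)
```

Repeating the search with small perturbations added to `tdoa` reveals how strongly the localization error depends on the antenna layout, which is the systematic-error question the paper studies.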

  16. Sparse source configurations in radio tomography of asteroids

    NASA Astrophysics Data System (ADS)

    Pursiainen, S.; Kaasalainen, M.

    2014-07-01

    Our research targets progress in non-invasive imaging of asteroids to support future planetary research and extra-terrestrial mining activities. This presentation principally concerns radio tomography, in which the permittivity distribution inside an asteroid is to be recovered from the radio-frequency signal transmitted from the asteroid's surface and gathered by an orbiter. The focus is on a sparse distribution (Pursiainen and Kaasalainen, 2013) of signal sources, which can be necessary in the challenging in situ environment and within tight payload limits. The general goal of our recent research has been to approximate the minimal number of source positions needed for robust localization of anomalies caused, for example, by an internal void. Characteristic of the localization problem are the large relative changes in signal speed caused by the high permittivity of typical asteroid minerals (e.g. basalt), meaning that a signal path can include strong refractions and reflections. This presentation introduces results of a laboratory experiment in which real travel-time data were inverted using a hierarchical Bayesian approach combined with the iterative alternating sequential (IAS) posterior exploration algorithm. Special attention was paid to the robustness of the inverse results with respect to changes of the prior model and source positioning. According to our results, strongly refractive anomalies can be detected with three or four sources independently of their positioning.

  17. SOLAR HARD X-RAY SOURCE SIZES IN A BEAM-HEATED AND IONIZED CHROMOSPHERE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    O'Flannagain, Aidan M.; Gallagher, Peter T.; Brown, John C.

    2015-02-01

    Solar flare hard X-rays (HXRs) are produced as bremsstrahlung when an accelerated population of electrons interacts with the dense chromospheric plasma. HXR observations presented by Kontar et al. using the Ramaty High-Energy Solar Spectroscopic Imager have shown that HXR source sizes are three to six times more extended in height than those predicted by the standard collisional thick target model (CTTM). Several possible explanations have been put forward, including the multi-threaded nature of flare loops, pitch-angle scattering, and magnetic mirroring. However, the nonuniform ionization (NUI) structure along the path of the electron beam has not been fully explored as a solution to this problem. Ionized plasma is known to be less effective at producing nonthermal bremsstrahlung HXRs than neutral plasma. If the peak HXR emission were produced in a locally ionized region within the chromosphere, the intensity of emission would be preferentially reduced around this peak, resulting in a more extended source. Due to this effect, along with the associated density enhancement in the upper chromosphere, injection of a beam of electrons into a partially ionized plasma should result in an HXR source that is substantially more vertically extended relative to that for a neutral target. Here we present the results of a modification to the CTTM which takes into account both a localized form of chromospheric NUI and an increased target density. We find 50 keV HXR source widths, with and without the inclusion of a locally ionized region, of ∼3 Mm and ∼0.7 Mm, respectively. This helps to provide a theoretical solution to the currently open question of overly extended HXR sources.

  18. Concentration of heavy metals in drinking water of different localities in district east Karachi.

    PubMed

    Jaleel, M A; Noreen, R; Baseer, A

    2001-01-01

    Several heavy metals present in drinking water play important roles in the body, provided their levels remain within the range recommended by the WHO. However, with industrialization and rapid urbanization, problems of pollution have surfaced. This study was designed to ascertain the content of some heavy metals, and their variations if any, in drinking water in different localities of district East of Karachi, Pakistan. Drinking water samples were collected from different sources and localities of district East of Karachi. The concentrations of the heavy metals lead, arsenic, copper, iron, mercury, chromium, manganese, nickel, cadmium and zinc were determined by atomic absorption spectrophotometry. pH was measured with a pH meter, and total dissolved solids (TDS) were calculated by formula. These concentrations of heavy metals, pH and TDS were compared with the standards set by the WHO. Concentrations of lead and nickel were significantly elevated compared to WHO recommended levels in all three sources of water (piped water, hand-pump water and tanker water supply). Chromium was raised in hand-pump water. Arsenic and mercury were not detected in any source of water. Copper, iron, manganese, cadmium and zinc were within safe limits in all three sources of water, and pH was within the WHO recommended range in all three sources. TDS was elevated in hand-pump water and tanker water.

  19. Localization of extended brain sources from EEG/MEG: the ExSo-MUSIC approach.

    PubMed

    Birot, Gwénaël; Albera, Laurent; Wendling, Fabrice; Merlet, Isabelle

    2011-05-01

    We propose a new MUSIC-like method, called 2q-ExSo-MUSIC (q ≥ 1). This method is an extension of the 2q-MUSIC (q ≥ 1) approach for solving the EEG/MEG inverse problem, when spatially-extended neocortical sources ("ExSo") are considered. It introduces a novel ExSo-MUSIC principle. The novelty is two-fold: i) the parameterization of the spatial source distribution that leads to an appropriate metric in the context of distributed brain sources and ii) the introduction of an original, efficient and low-cost way of optimizing this metric. In 2q-ExSo-MUSIC, the possible use of higher order statistics (q ≥ 2) offers a better robustness with respect to Gaussian noise of unknown spatial coherence and modeling errors. As a result we reduced the penalizing effects of both the background cerebral activity that can be seen as a Gaussian and spatially correlated noise, and the modeling errors induced by the non-exact resolution of the forward problem. Computer results on simulated EEG signals obtained with physiologically-relevant models of both the sources and the volume conductor show a highly increased performance of our 2q-ExSo-MUSIC method as compared to the classical 2q-MUSIC algorithms. Copyright © 2011 Elsevier Inc. All rights reserved.
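    The classical second-order MUSIC principle that ExSo-MUSIC extends can be illustrated in its simplest narrowband array-processing form: project steering vectors onto the noise subspace of the data covariance and look for pseudospectrum peaks. A sketch for a uniform linear array (not the EEG/MEG lead-field setting of the paper; angles and noise level are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(2)
M, K, d = 8, 2, 0.5                  # sensors, sources, spacing (wavelengths)
true_deg = np.array([-20.0, 35.0])   # hypothetical directions of arrival
T = 200                              # snapshots

def steering(theta_rad):
    """Steering matrix of a uniform linear array (M x len(theta))."""
    return np.exp(2j * np.pi * d * np.arange(M)[:, None] * np.sin(theta_rad))

A = steering(np.deg2rad(true_deg))
S = rng.normal(size=(K, T)) + 1j * rng.normal(size=(K, T))       # source signals
N = 0.1 * (rng.normal(size=(M, T)) + 1j * rng.normal(size=(M, T)))
X = A @ S + N

R = X @ X.conj().T / T               # sample covariance
eigvals, V = np.linalg.eigh(R)       # eigenvalues in ascending order
En = V[:, :M - K]                    # noise subspace

grid_deg = np.linspace(-90.0, 90.0, 1801)
a = steering(np.deg2rad(grid_deg))
pseudo = 1.0 / np.sum(np.abs(En.conj().T @ a) ** 2, axis=0)

# pick the two largest well-separated peaks
i1 = int(np.argmax(pseudo))
masked = pseudo.copy()
masked[max(0, i1 - 30):i1 + 30] = 0.0
i2 = int(np.argmax(masked))
est = np.sort(grid_deg[[i1, i2]])

assert np.allclose(est, np.sort(true_deg), atol=1.0)
```

ExSo-MUSIC replaces the point-source steering vectors with a parameterization of spatially extended cortical patches and, for 2q-MUSIC with q ≥ 2, the covariance with higher-order cumulant matrices.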

  20. Conformational Space Annealing explained: A general optimization algorithm, with diverse applications

    NASA Astrophysics Data System (ADS)

    Joung, InSuk; Kim, Jong Yun; Gross, Steven P.; Joo, Keehyoung; Lee, Jooyoung

    2018-02-01

    Many problems in science and engineering can be formulated as optimization problems. One way to solve them is to develop tailored, problem-specific approaches; since such development is challenging, an alternative is to develop good generally applicable algorithms, which are easy to apply, typically function robustly, and reduce development time. Here we provide a description of one such algorithm, Conformational Space Annealing (CSA), along with its Python version, PyCSA. We have previously applied it to many optimization problems, including protein structure prediction and graph community detection. To demonstrate its utility, we have applied PyCSA to two continuous test functions, the Ackley and Eggholder functions. In addition, to show that PyCSA generalizes to any type of objective function, we demonstrate how it can be applied to a discrete objective function, namely a parameter optimization problem. Based on the benchmarking results for the three problems, the performance of CSA is shown to be better than or similar to that of the most popular optimization method, simulated annealing. For continuous objective functions we found that L-BFGS-B was the best-performing local optimization method, while for the discrete objective function Nelder-Mead was best. The current version of PyCSA can be run in parallel at a coarse-grained level by carrying out multiple independent local optimizations separately. The source code of PyCSA is available from http://lee.kias.re.kr.
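    The Ackley benchmark mentioned in this record is easy to reproduce. The sketch below minimizes it with SciPy's `dual_annealing`, a simulated-annealing-style global optimizer standing in here for CSA/PyCSA (which is not on PyPI); bounds and seed are arbitrary:

```python
import numpy as np
from scipy.optimize import dual_annealing

def ackley(x):
    """2-D Ackley function: global minimum f(0, 0) = 0 amid many local minima."""
    x = np.asarray(x, dtype=float)
    return (-20.0 * np.exp(-0.2 * np.sqrt(np.mean(x ** 2)))
            - np.exp(np.mean(np.cos(2.0 * np.pi * x)))
            + 20.0 + np.e)

res = dual_annealing(ackley, bounds=[(-5.0, 5.0)] * 2, seed=7)

assert res.fun < 1e-4                    # reaches the global basin, not a local minimum
assert np.allclose(res.x, 0.0, atol=1e-2)
```

A plain local method such as L-BFGS-B started from a random point usually stalls in one of Ackley's many local minima, which is exactly why global strategies like CSA or simulated annealing are benchmarked on it.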

  1. Self-Similar Spin Images for Point Cloud Matching

    NASA Astrophysics Data System (ADS)

    Pulido, Daniel

    The rapid growth of Light Detection And Ranging (Lidar) technologies that collect, process, and disseminate 3D point clouds has allowed for increasingly accurate spatial modeling and analysis of the real world. Lidar sensors can generate massive 3D point clouds of a collection area that provide highly detailed spatial and radiometric information. However, a Lidar collection can be expensive and time consuming. Simultaneously, the growth of crowdsourced Web 2.0 data (e.g., Flickr, OpenStreetMap) has provided researchers with a wealth of freely available data sources that cover a variety of geographic areas. Crowdsourced data can be of varying quality and density. In addition, since it is typically not collected as part of a dedicated experiment but rather volunteered, when and where the data is collected is arbitrary. The integration of these two sources of geoinformation can provide researchers the ability to generate products and derive intelligence that mitigate their respective disadvantages and combine their advantages. Therefore, this research will address the problem of fusing two point clouds from potentially different sources. Specifically, we will consider two problems: scale matching and feature matching. Scale matching consists of computing feature metrics of each point cloud and analyzing their distributions to determine scale differences. Feature matching consists of defining local descriptors that are invariant to common dataset distortions (e.g., rotation and translation). Additionally, after matching the point clouds they can be registered and processed further (e.g., change detection). The objective of this research is to develop novel methods to fuse and enhance two point clouds from potentially disparate sources (e.g., Lidar and crowdsourced Web 2.0 datasets). The scope of this research is to investigate both scale and feature matching between two point clouds.
The specific focus of this research will be in developing a novel local descriptor based on the concept of self-similarity to aid in the scale and feature matching steps. An open problem in fusion is how best to extract features from two point clouds and then perform feature-based matching. The proposed approach for this matching step is the use of local self-similarity as an invariant measure to match features. In particular, the proposed approach is to combine the concept of local self-similarity with a well-known feature descriptor, Spin Images, and thereby define "Self-Similar Spin Images". This approach is then extended to the case of matching two point clouds in very different coordinate systems (e.g., a geo-referenced Lidar point cloud and a stereo-image derived point cloud without geo-referencing). The use of Self-Similar Spin Images is again applied to address this problem by introducing a "Self-Similar Keyscale" that matches the spatial scales of two point clouds. Another open problem is how best to detect changes in content between two point clouds. A method is proposed to find changes between two point clouds by analyzing the order statistics of the nearest neighbors between the two clouds, thereby defining the "Nearest Neighbor Order Statistic" method. Note that the well-known Hausdorff distance is a special case, being just the maximum order statistic. By studying the entire histogram of these nearest-neighbor distances, a more robust method is expected for detecting points that are present in one cloud but not the other. This approach is applied at multiple resolutions: changes detected at the coarsest level will yield large missing targets, and those at finer levels will yield smaller targets.
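    The nearest-neighbor order-statistic idea can be sketched directly: compute the nearest-neighbor distance of each point in one cloud to the other cloud, sort these distances, and read off the maximum (the directed Hausdorff distance) or flag unusually large values as changes. A toy example with synthetic clouds (sizes, noise level, and threshold are arbitrary):

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(4)
reference = rng.uniform(0.0, 10.0, size=(500, 3))                 # earlier point cloud
later = reference + rng.normal(0.0, 0.01, size=reference.shape)   # same scene, sensor noise
new_object = rng.uniform(20.0, 21.0, size=(20, 3))                # appears only in the later cloud
later = np.vstack([later, new_object])

# nearest-neighbor distance of each "later" point to the reference cloud
nn_dist, _ = cKDTree(reference).query(later)
order_stats = np.sort(nn_dist)            # the full set of order statistics

directed_hausdorff = order_stats[-1]      # the maximum order statistic
changed = np.flatnonzero(nn_dist > 1.0)   # points with no nearby counterpart

assert directed_hausdorff > 10.0          # dominated by the inserted object
assert len(changed) == 20 and changed.min() >= 500
```

Thresholding the whole distribution, rather than relying only on the maximum as the Hausdorff distance does, is what makes the order-statistic view robust to a few outlier points.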

  2. Visualisation of newly synthesised collagen in vitro and in vivo

    PubMed Central

    Oostendorp, Corien; Uijtdewilligen, Peter J.E.; Versteeg, Elly M.; Hafmans, Theo G.; van den Bogaard, Ellen H.; de Jonge, Paul K.J.D.; Pirayesh, Ali; Von den Hoff, Johannes W.; Reichmann, Ernst; Daamen, Willeke F.; van Kuppevelt, Toin H.

    2016-01-01

    Identifying collagen produced de novo by cells against a background of purified collagenous biomaterials poses a major problem in, for example, the evaluation of tissue-engineered constructs and in cell biological studies of tumor dissemination. We have developed a universal strategy to detect and localize newly deposited collagen based on its inherent association with dermatan sulfate. The method is applicable irrespective of host species and collagen source. PMID:26738984

  3. Model-Free Stochastic Localization of CBRN Releases

    DTIC Science & Technology

    2013-01-01

    Paschalidis, Ioannis Ch.

    We present a novel two-stage methodology for locating a Chemical, Biological, Radiological, or Nuclear (CBRN) source in an urban area using a network of sensors. In contrast to earlier work, our approach does not solve an inverse dispersion problem but relies on data obtained from a simulation of the CBRN dispersion to obtain probabilistic descriptors of sensor measurements under a variety of CBRN

  4. CSI-EPT in Presence of RF-Shield for MR-Coils.

    PubMed

    Arduino, Alessandro; Zilberti, Luca; Chiampi, Mario; Bottauscio, Oriano

    2017-07-01

    Contrast source inversion electric properties tomography (CSI-EPT) is a recently developed technique for electric properties tomography that recovers the electric properties distribution from measurements performed by magnetic resonance imaging scanners. The method is an optimal control approach based on the contrast source inversion technique, and it is distinguished from other electric properties tomography techniques by its ability to also recover the local specific absorption rate distribution, which is essential for online dosimetry. Up to now, CSI-EPT has only been described in terms of integral equations, limiting its applicability to a homogeneous unbounded background. In order to extend the method to domains containing a shield, as in the recurring case of shielded radio frequency coils, a more general formulation of CSI-EPT based on a functional viewpoint is introduced here. Two different implementations of CSI-EPT are proposed for a 2-D transverse magnetic model problem: one dealing with an unbounded domain and one considering the presence of a perfectly conductive shield. The two implementations are applied to the same virtual measurements obtained by numerically simulating a shielded radio frequency coil. The results are compared in terms of both electric properties recovery and local specific absorption rate estimation, in order to assess the need for accurate modeling of the underlying physical problem.

  5. Caresoil: A multidisciplinary project to characterize, remediate, monitor and evaluate the risk of contaminated soils in Madrid (Spain)

    NASA Astrophysics Data System (ADS)

    Muñoz-Martín, Alfonso; Antón, Loreto; Granja, Jose Luis; Villarroya, Fermín; Montero, Esperanza; Rodríguez, Vanesa

    2016-04-01

    Soil contamination can come from diffuse sources (air deposition, agriculture, etc.) or from local sources, the latter being related to anthropogenic activities that can potentially contaminate soil. According to EU data for Spain, and particularly for the Autonomous Community of Madrid, heavy metals, toxic organic compounds (including Non-Aqueous Phase Liquids, NAPLs) and combinations of both are the main problem at point sources of soil contamination in our community. The five aspects that the Caresoil Program (S2013/MAE-2739) will apply in the analysis and remediation of local soil contamination are: 1) locating the source of contamination and characterizing the affected soil and aquifer, 2) evaluating the dispersion of the plume, 3) applying effective remediation techniques, 4) monitoring the evolution of the contaminated soil, and 5) analyzing risk throughout this process. These aspects involve advanced technologies (hydrogeology, geophysics, geochemistry, etc.) that require new knowledge, making necessary the contribution of several research groups specialized in the fields cited above; these groups constitute the CARESOIL Program. Currently, two cases concerning hydrocarbon spills, as representative examples of local soil contamination in the Madrid area, are being studied. The first is being remediated, and we are monitoring this process to evaluate its effectiveness. At the second location, we are defining the extent of contamination in the soil and aquifer in order to select the most effective remediation technique.

  6. Assessing the short-term clock drift of early broadband stations with burst events of the 26 s persistent and localized microseism

    NASA Astrophysics Data System (ADS)

    Xie, Jun; Ni, Sidao; Chu, Risheng; Xia, Yingjie

    2018-01-01

    An accurate seismometer clock plays an important role in seismological studies, including earthquake location and tomography. However, some seismic stations may have clock drifts larger than 1 s (e.g. GSC in 1992), especially in the early days of global seismic networks. The 26 s Persistent Localized (PL) microseism source in the Gulf of Guinea sometimes excites strong and coherent signals and can be used as a repeating source for assessing the stability of seismometer clocks. Taking stations GSC, PAS and PFO in the TERRAscope network as examples, the 26 s PL signal can be easily observed in the ambient noise cross-correlation functions between these stations and the remote station OBN, at an interstation distance of about 9700 km. The travel-time variation of this 26 s signal in the ambient noise cross-correlation function is used to infer clock error. A drastic clock error is detected during June 1992 for station GSC, but not for stations PAS and PFO. This short-term clock error, with a magnitude of 25 s, is confirmed by both teleseismic and local earthquake records. Averaged over the three stations, the accuracy of the ambient noise cross-correlation method with the 26 s source is about 0.3-0.5 s. Using this PL source, clocks can be validated for historical records of sparsely distributed stations, where the usual cross-correlation of short-period (<20 s) ambient noise might be less effective because of its attenuation over long interstation distances. However, this method suffers from a cycling problem and should be verified with teleseismic/local P waves. Further studies are also needed to investigate whether the 26 s source moves spatially and how that would affect clock drift detection.
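
    The heart of the measurement, reading a clock error off the travel-time shift of a repeating 26 s signal, can be illustrated with a small synthetic sketch (the waveform, sampling rate and drift value are illustrative, not the study's data):

```python
import numpy as np

fs = 1.0                                   # 1 Hz sampling, adequate for a 26 s period
t = np.arange(0.0, 2000.0, 1.0 / fs)
# A 26 s carrier under a slowly varying envelope stands in for the PL microseism signal
reference = np.sin(2 * np.pi * t / 26.0) * np.exp(-((t - 1000.0) / 300.0) ** 2)

true_drift = 25                            # seconds of clock error to recover
drifted = np.roll(reference, int(true_drift * fs))

# The lag of the cross-correlation peak estimates the clock error
corr = np.correlate(drifted, reference, mode="full")
lag = (np.argmax(corr) - (len(reference) - 1)) / fs
```

    Note that secondary correlation peaks sit one 26 s cycle away from the true lag and are only slightly lower, which is exactly the cycling ambiguity the abstract says must be checked against teleseismic or local P waves.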

  7. Hybrid Weighted Minimum Norm Method: a new LORETA-based method to solve the EEG inverse problem.

    PubMed

    Song, C; Zhuang, T; Wu, Q

    2005-01-01

    This paper brings forward a new method to solve the EEG inverse problem. It is based on the following physiological characteristics of neural electrical activity sources: first, neighboring neurons tend to activate synchronously; second, the distribution of the source space is sparse; third, the activity of the sources is highly concentrated. We take this prior knowledge as the preconditions for developing the EEG inverse solution, assuming no other characteristics of the solution, in order to realize the most common 3D EEG reconstruction map. The proposed algorithm takes advantage of LORETA, a low-resolution method that emphasizes localization, and FOCUSS, a high-resolution method that emphasizes separability. The method remains within the framework of the weighted minimum norm method. The key is to construct a weighting matrix that draws on the existing smoothness operator, a competition mechanism, and a learning algorithm. The basic procedure is to obtain an initial estimate of the solution, construct a new estimate using the information in the current solution, and repeat this process until the solutions of the last two estimation steps remain unchanged.
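
    The iterative reweighting loop described above can be sketched generically (a FOCUSS-style reweighted minimum norm on a hypothetical 8-sensor, 30-source lead field; the authors' weighting matrix additionally encodes smoothness and competition terms):

```python
import numpy as np

rng = np.random.default_rng(1)
L = rng.normal(size=(8, 30))          # hypothetical lead field: 8 sensors, 30 sources
x_true = np.zeros(30)
x_true[[4, 5]] = [2.0, 1.5]           # sparse, spatially clustered activity
b = L @ x_true                        # noise-free sensor measurements

# Weighted minimum norm: x = W W^T L^T (L W W^T L^T)^{-1} b, where the weights
# are rebuilt from the previous estimate, so strong sources sharpen and weak
# sources fade until successive estimates stop changing.
x = np.ones(30)                       # flat initial weighting = plain minimum norm
for _ in range(20):
    W = np.diag(np.abs(x))
    G = L @ W @ W @ L.T               # W is diagonal, so W W^T = W @ W
    x = W @ W @ L.T @ np.linalg.solve(G + 1e-12 * np.eye(8), b)
```

    Each pass solves only an 8x8 system; the stopping rule in the abstract (unchanged solutions over the last two estimates) would replace the fixed iteration count used here.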

  8. Accounting for model error in Bayesian solutions to hydrogeophysical inverse problems using a local basis approach

    NASA Astrophysics Data System (ADS)

    Irving, J.; Koepke, C.; Elsheikh, A. H.

    2017-12-01

    Bayesian solutions to geophysical and hydrological inverse problems are dependent upon a forward process model linking subsurface parameters to measured data, which is typically assumed to be known perfectly in the inversion procedure. However, in order to make the stochastic solution of the inverse problem computationally tractable using, for example, Markov-chain-Monte-Carlo (MCMC) methods, fast approximations of the forward model are commonly employed. This introduces model error into the problem, which has the potential to significantly bias posterior statistics and hamper data integration efforts if not properly accounted for. Here, we present a new methodology for addressing the issue of model error in Bayesian solutions to hydrogeophysical inverse problems that is geared towards the common case where these errors cannot be effectively characterized globally through some parametric statistical distribution or locally based on interpolation between a small number of computed realizations. Rather than focusing on the construction of a global or local error model, we instead work towards identification of the model-error component of the residual through a projection-based approach. In this regard, pairs of approximate and detailed model runs are stored in a dictionary that grows at a specified rate during the MCMC inversion procedure. At each iteration, a local model-error basis is constructed for the current test set of model parameters using the K-nearest neighbour entries in the dictionary, which is then used to separate the model error from the other error sources before computing the likelihood of the proposed set of model parameters. We demonstrate the performance of our technique on the inversion of synthetic crosshole ground-penetrating radar traveltime data for three different subsurface parameterizations of varying complexity. 
The synthetic data are generated using the eikonal equation, whereas a straight-ray forward model is assumed in the inversion procedure. In each case, the developed model-error approach removes posterior bias and yields a more realistic characterization of uncertainty.
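
    The projection step at the core of the approach can be sketched in a few lines (toy forward models and a synthetic dictionary for illustration; the real dictionary pairs eikonal and straight-ray traveltime runs):

```python
import numpy as np

def detailed(theta):   # stand-in for the accurate but expensive forward model
    return np.array([np.sin(theta[0]) + 0.1 * theta[1] ** 2,
                     theta[0] * theta[1],
                     theta[1]])

def approx(theta):     # fast approximation; its error is structured, not random
    return np.array([theta[0] + 0.1 * theta[1] ** 2,
                     theta[0] * theta[1],
                     theta[1]])

rng = np.random.default_rng(2)
thetas = rng.uniform(-1, 1, size=(50, 2))                     # dictionary grown during MCMC
errors = np.array([detailed(t) - approx(t) for t in thetas])  # paired model errors

def corrected_residual(theta, data, k=5):
    """Project the model-error component out of the residual using a local KNN basis."""
    idx = np.argsort(np.linalg.norm(thetas - theta, axis=1))[:k]
    U, s, _ = np.linalg.svd(errors[idx].T, full_matrices=False)
    basis = U[:, s > 1e-8]                 # keep only significant error directions
    r = data - approx(theta)
    return r - basis @ (basis.T @ r)

theta_test = np.array([0.3, -0.4])
data = detailed(theta_test)                # noise-free synthetic observation
r_raw = data - approx(theta_test)
r_corr = corrected_residual(theta_test, data)
```

    In this toy case the model error lies entirely along one direction, so the projection removes it almost exactly; with noisy data the same projection leaves the measurement-noise part of the residual for the likelihood evaluation.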

  9. A non-invasive implementation of a mixed domain decomposition method for frictional contact problems

    NASA Astrophysics Data System (ADS)

    Oumaziz, Paul; Gosselet, Pierre; Boucard, Pierre-Alain; Guinard, Stéphane

    2017-11-01

    A non-invasive implementation of the Latin domain decomposition method for frictional contact problems is described. The formulation requires dealing with mixed (Robin) conditions on the faces of the subdomains, which is not a standard feature of commercial software. We therefore propose a new implementation of the linear stage of the Latin method with a non-local search direction built from the stiffness of a layer of elements on the interfaces. This choice enables us to implement the method within the open-source software Code_Aster and to derive 2D and 3D examples with performance similar to that of the standard Latin method.

  10. Perceived and measured levels of environmental pollution: interdisciplinary research in the subarctic lowlands of northeast European Russia.

    PubMed

    Walker, Tony R; Habeck, Joachim Otto; Karjalainen, Timo P; Virtanen, Tarmo; Solovieva, Nadia; Jones, Viv; Kuhry, Peter; Ponomarev, Vasily I; Mikkola, Kari; Nikula, Ari; Patova, Elena; Crittenden, Peter D; Young, Scott D; Ingold, Tim

    2006-08-01

    Using interdisciplinary field research in the Usa Basin, northeast European Russia, we compared local inhabitants' perception of environmental problems with chemical and remote-sensing signatures of environmental pollution and their local impacts. Extensive coal mining since the 1930s around Inta and Vorkuta has left a legacy of pollution, detected by measuring snowpack, topsoil, and lichen chemistry, together with remote-sensing techniques and analysis of lake water and sediments. Vorkuta and its environs suffered the worst impacts, with significant metal loading and alkalization in lakes and topsoils, elevated metals and cations in terricolous (reindeer) lichens, and changes in vegetation communities. Although the coal industry has declined recently, the area boasts a booming oil and gas industry, based around Usinsk. Local perceptions of and concern about environmental pollution and protection were stronger in Usinsk, as a result of increased awareness after a major oil spill in 1994, than in Vorkuta, whose inhabitants perceived air pollution as the primary environmental threat. Our studies indicate that the principal sources of atmospheric emissions and local deposition within 25 to 40 km of Vorkuta were coal combustion from power and heating plants, coal mines, and a cement factory. Local people evaluated air pollution from direct observations and personal experiences, such as discoloration of snow and respiratory problems, whereas scientific knowledge played a minor role in shaping these perceptions.

  11. New Results from Fermi-LAT and Their Implications for the Nature of Dark Matter and the Origin of Cosmic Rays

    NASA Technical Reports Server (NTRS)

    Moiseev, Alexander

    2009-01-01

    The measured spectrum is compatible with a power law within our current systematic errors. The spectral index (-3.04) is harder than expected from previous experiments and simple theoretical considerations. The "pre-Fermi" diffusive model requires a harder electron injection spectrum (by 0.12) to fit the Fermi data, but this is inconsistent with the positron excess reported by PAMELA if it extends to higher energy. An additional component of electron flux from a local source (or sources) may solve the problem; its origin, astrophysical or exotic, is still unclear. The measurement also provides a valuable contribution to the calculation of the inverse Compton (IC) component of diffuse gamma radiation.

  12. Workshop on Measurement Needs for Local-Structure Determination in Inorganic Materials

    PubMed Central

    Levin, Igor; Vanderah, Terrell

    2008-01-01

    The functional responses (e.g., dielectric, magnetic, catalytic, etc.) of many industrially-relevant materials are controlled by their local structure—a term that refers to the atomic arrangements on a scale ranging from atomic (sub-nanometer) to several nanometers. Thus, accurate knowledge of local structure is central to understanding the properties of nanostructured materials, thereby placing the problem of determining atomic positions on the nanoscale—the so-called “nanostructure problem”—at the center of modern materials development. Today, multiple experimental techniques exist for probing local atomic arrangements; nonetheless, finding accurate, comprehensive, and robust structural solutions for nanostructured materials still remains a formidable challenge because any one of these methods yields only a partial view of the local structure. The primary goal of this 2-day NIST-sponsored workshop was to bring together experts in the key experimental and theoretical areas relevant to local-structure determination to devise a strategy for the collaborative effort required to develop a comprehensive measurement solution on the local scale. The participants unanimously agreed that solving the nanostructure problem—an ultimate frontier in materials characterization—necessitates a coordinated interdisciplinary effort that transcends the existing capabilities of any single institution, including national laboratories, centers, and user facilities. The discussions converged on an institute dedicated to local-structure determination as the most viable organizational platform for successfully addressing the nanostructure problem. The proposed “institute” would provide an intellectual infrastructure for local-structure determination by (1) developing and maintaining relevant computer software integrated in an open-source global optimization framework (Fig. 2), (2) connecting industrial and academic users with experts in measurement techniques, (3) developing and maintaining pertinent databases, and (4) providing necessary education and training. PMID:27096131

  13. Development of parallel algorithms for electrical power management in space applications

    NASA Technical Reports Server (NTRS)

    Berry, Frederick C.

    1989-01-01

    The application of parallel techniques for electrical power system analysis is discussed. The Newton-Raphson method of load flow analysis was used along with the decomposition-coordination technique to perform load flow analysis. The decomposition-coordination technique enables tasks to be performed in parallel by partitioning the electrical power system into independent local problems. Each independent local problem represents a portion of the total electrical power system on which a load flow analysis can be performed. The load flow analysis is performed on these partitioned elements using the Newton-Raphson load flow method. These independent local problems produce results for voltage and power, which are then passed to the coordinator portion of the solution procedure. The coordinator problem uses the results of the local problems to determine whether any correction is needed on the local problems. The coordinator problem is also solved by an iterative method, much like the local problems, and this iterative method is again the Newton-Raphson method. Each iteration at the coordination level therefore results in new values for the local problems, and the local problems must be solved again, along with the coordinator problem, until convergence conditions are met.
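
    The Newton-Raphson iteration used for both the local and coordinator problems can be sketched generically, here on a hypothetical single two-bus power-balance partition (illustrative values, not the paper's network data):

```python
import numpy as np

def newton_raphson(f, jac, x0, tol=1e-10, max_iter=50):
    """Generic Newton-Raphson: repeatedly solve J dx = -f until the mismatch vanishes."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        fx = f(x)
        if np.linalg.norm(fx) < tol:
            break
        x = x - np.linalg.solve(jac(x), fx)
    return x

B = 10.0                      # hypothetical line susceptance (p.u.)
V1, P_spec = 1.0, 0.5         # slack-bus voltage and specified active injection

def mismatch(x):              # unknowns x = [delta, V2]: angle and load-bus voltage
    d, V2 = x
    return np.array([V1 * V2 * B * np.sin(d) - P_spec,        # active power balance
                     V1 * V2 * B * np.cos(d) - B * V2 ** 2])  # reactive power balance

def jacobian(x):
    d, V2 = x
    return np.array([[V1 * V2 * B * np.cos(d), V1 * B * np.sin(d)],
                     [-V1 * V2 * B * np.sin(d), V1 * B * np.cos(d) - 2 * B * V2]])

sol = newton_raphson(mismatch, jacobian, x0=[0.1, 1.0])   # near-flat-start guess
```

    In the decomposition-coordination scheme, each partition would run this solver on its own mismatch equations, and the coordinator would run the same iteration on the consistency equations built from the partitions' voltage and power results.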

  14. Directional Hearing and Sound Source Localization in Fishes.

    PubMed

    Sisneros, Joseph A; Rogers, Peter H

    2016-01-01

    Evidence suggests that the capacity for sound source localization is common to mammals, birds, reptiles, and amphibians, but surprisingly it is not known whether fish locate sound sources in the same manner (e.g., combining binaural and monaural cues) or what computational strategies they use for successful source localization. Directional hearing and sound source localization in fishes continue to be important topics in neuroethology and in the hearing sciences, but the empirical and theoretical work on these topics has been contradictory and obscure for decades. This chapter reviews the previous behavioral work on directional hearing and sound source localization in fishes, including the most recent experiments on sound source localization by the plainfin midshipman fish (Porichthys notatus), which has proven to be an exceptional species for fish studies of sound localization. In addition, the theoretical models of directional hearing and sound source localization for fishes are reviewed, including a new model that uses a time-averaged intensity approach for source localization and has wide applicability with regard to source type, acoustic environment, and time waveform.

  15. Binary optimization for source localization in the inverse problem of ECG.

    PubMed

    Potyagaylo, Danila; Cortés, Elisenda Gil; Schulze, Walther H W; Dössel, Olaf

    2014-09-01

    The goal of ECG imaging (ECGI) is to reconstruct heart electrical activity from body surface potential maps. The problem is ill-posed, which means that it is extremely sensitive to measurement and modeling errors. The most commonly used method to tackle this obstacle is Tikhonov regularization, which consists in converting the original problem into a well-posed one by adding a penalty term. Despite all its practical advantages, the method has a serious drawback: the obtained solution is often over-smoothed, which can hinder precise clinical diagnosis and treatment planning. In this paper, we apply a binary optimization approach to the transmembrane voltage (TMV)-based problem, assuming the TMV takes one of two possible values according to the heart abnormality under consideration. We investigate the localization of simulated ischemic areas and ectopic foci and one clinical infarction case. The application affects only the choice of the binary values, while the core of the algorithms remains the same, making the approach easily adjustable to application needs. Two methods were tested: a hybrid metaheuristic approach and the difference of convex functions (DC) algorithm. For this purpose, we performed realistic heart simulations for a complex thorax model and applied the proposed techniques to the obtained ECG signals. Both methods enabled localization of the areas of interest, showing their potential for application in ECGI. For the metaheuristic algorithm, it was necessary to subdivide the heart into regions in order to obtain a stable solution unsusceptible to errors, while the analytical DC scheme can be efficiently applied to higher-dimensional problems. With the DC method, we also successfully reconstructed the activation pattern and origin of a simulated extrasystole. In addition, the DC algorithm enables iterative adjustment of the binary values, ensuring robust performance.
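
    The Tikhonov baseline whose over-smoothing motivates the binary formulation can be written in a few lines (a generic underdetermined operator for illustration, not the ECGI forward model):

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.normal(size=(40, 100))           # hypothetical ill-posed forward operator
x_true = np.zeros(100)
x_true[30:40] = 1.0                      # two-level "activation": exactly 0 or 1
y = A @ x_true + 0.01 * rng.normal(size=40)

# Zeroth-order Tikhonov: minimize ||A x - y||^2 + lam * ||x||^2
lam = 0.1
x_tik = np.linalg.solve(A.T @ A + lam * np.eye(100), A.T @ y)

# The over-smoothing drawback: recovered values spread between the two levels
# and leak outside the true support, which a binary (two-valued) model avoids.
leak = float(np.abs(x_tik[x_true == 0]).max())
```

    Constraining each entry of x to one of two admissible values, as in the paper, turns this quadratic problem into the combinatorial one addressed by the metaheuristic and DC algorithms.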

  16. MTSAT: Full Disk - NOAA GOES Geostationary Satellite Server

    Science.gov Websites

  17. Structural impact response for assessing railway vibration induced on buildings

    NASA Astrophysics Data System (ADS)

    Kouroussis, Georges; Mouzakis, Harris P.; Vogiatzis, Konstantinos E.

    2018-03-01

    Over the years, the rapid growth in railway infrastructure has led to numerous environmental challenges. One significant issue, particularly in urban areas, is ground-borne vibration. A common source of ground-borne vibration is local defects (e.g. rail joints, switches, turnouts, etc.) that generate large-amplitude excitations at isolated locations. Modelling these excitation sources is particularly challenging and requires complex and extensive computational effort. In some situations, the use of experiments and measured data offers a rapid way to estimate the effect of such defects and to evaluate railway vibration levels using a scoping approach. In this paper, the problem of railway-induced ground vibrations is presented along with experimental studies to assess ground vibration and ground-borne noise levels, with a particular focus on the structural response of sensitive buildings. The behaviour of particular building foundations is evaluated through experimental data collected in the Brussels Region, by presenting the expected frequency responses for various types of buildings, taking into account both the soil-structure interaction and the tramway track response. A second study is dedicated to the Athens metro, where transmissibility functions are used to analyse the response of various Athenian buildings facing the metro network through comprehensive measurement campaigns. This allows the verification of appropriate vibration mitigation measures. These benchmark applications based on experimental results have proved efficient for treating a complex problem encountered in practice in urban areas, where the urban rail network interacts with significant local defects and where the rise of railway ground vibration problems has been clearly identified.

  18. Ellipsoidal head model for fetal magnetoencephalography: forward and inverse solutions

    NASA Astrophysics Data System (ADS)

    Gutiérrez, David; Nehorai, Arye; Preissl, Hubert

    2005-05-01

    Fetal magnetoencephalography (fMEG) is a non-invasive technique where measurements of the magnetic field outside the maternal abdomen are used to infer the source location and signals of the fetus' neural activity. There are a number of aspects related to fMEG modelling that must be addressed, such as the conductor volume, fetal position and orientation, gestation period, etc. We propose a solution to the forward problem of fMEG based on an ellipsoidal head geometry. This model has the advantage of highlighting special characteristics of the field that are inherent to the anisotropy of the human head, such as the spread and orientation of the field in relation to the location and position of the fetal head. Our forward solution is presented in the form of a kernel matrix that facilitates the solution of the inverse problem through decoupling of the dipole localization parameters from the source signals. Then, we use this model and the maximum likelihood technique to solve the inverse problem assuming the availability of measurements from multiple trials. The applicability and performance of our methods are illustrated through numerical examples based on a real 151-channel SQUID fMEG measurement system (SARA). SARA is an MEG system especially designed for fetal assessment and is currently used for heart and brain studies. Finally, since our model requires knowledge of the best-fitting ellipsoid's centre location and semiaxis lengths, we propose a method for estimating these parameters through a least-squares fit on anatomical information obtained from three-dimensional ultrasound images.

  19. Multicriteria optimization approach to design and operation of district heating supply system over its life cycle

    NASA Astrophysics Data System (ADS)

    Hirsch, Piotr; Duzinkiewicz, Kazimierz; Grochowski, Michał

    2017-11-01

    District Heating (DH) systems are commonly supplied from local heat sources. Nowadays, modern insulation materials allow effective and economically viable heat transportation over long distances (over 20 km). In this paper a method for the optimized selection of design and operating parameters of a long-distance Heat Transportation System (HTS) is proposed. The method allows evaluation of the feasibility and effectiveness of heat transportation from the considered heat sources. The optimized selection is formulated as a multicriteria decision-making problem. The constraints for this problem include a static HTS model, allowing consideration of the system life cycle, time variability and spatial topology. Thereby, variation of heat demand and ground temperature within the DH area, insulation and pipe aging, and the terrain elevation profile are taken into account in the decision-making process. The HTS construction costs, pumping power, and heat losses are considered as objective functions. Inner pipe diameter, insulation thickness, temperatures and pumping station locations are optimized during the decision-making process. Moreover, variants of pipe-laying (e.g. one pipeline with a larger diameter or two with smaller diameters) may be considered during the optimization. The resulting optimization problem is multicriteria, hybrid and nonlinear; because of these properties, a genetic solver was applied.

  20. Deblending of simultaneous-source data using iterative seislet frame thresholding based on a robust slope estimation

    NASA Astrophysics Data System (ADS)

    Zhou, Yatong; Han, Chunying; Chi, Yue

    2018-06-01

    In a simultaneous-source survey, no limitation is imposed on the shot scheduling of nearby sources, so a huge gain in acquisition efficiency can be obtained, but at the cost of contaminating the recorded seismic data with strong blending interference. In this paper, we propose a multi-dip seislet-frame-based sparse inversion algorithm to iteratively separate simultaneous sources. We overcome two inherent drawbacks of the traditional seislet transform. For the multi-dip problem, we propose a multi-dip seislet frame thresholding strategy instead of the traditional seislet transform for deblending simultaneous-source data that contain multiple dips, e.g., multiple reflections. The multi-dip seislet frame strategy resolves the conflicting-dip problem that degrades the performance of the traditional seislet transform. For the noise issue, we propose a robust dip estimation algorithm based on a velocity-slope transformation. Instead of calculating the local slope directly using the plane-wave destruction (PWD) based method, we first apply NMO-based velocity analysis and obtain NMO velocities for multi-dip components corresponding to multiples of different orders; a fairly accurate slope estimate can then be obtained using the velocity-slope conversion equation. An iterative deblending framework is given and validated through a comprehensive analysis of both numerical synthetic and field data examples.

  1. Numerical convergence and validation of the DIMP inverse particle transport model

    DOE PAGES

    Nelson, Noel; Azmy, Yousry

    2017-09-01

    The data integration with modeled predictions (DIMP) model is a promising inverse radiation transport method for solving the special nuclear material (SNM) holdup problem. Unlike previous methods, DIMP is a completely passive nondestructive assay technique that requires no initial assumptions regarding the source distribution or active measurement time. DIMP predicts the most probable source location and distribution through Bayesian inference and quasi-Newtonian optimization of predicted detector responses (using the adjoint transport solution) with measured responses. DIMP performs well with forward hemispherical collimation and unshielded measurements, but several considerations are required when using narrow-view collimated detectors. DIMP converged well to the correct source distribution as the number of synthetic responses increased. DIMP also performed well for the first experimental validation exercise after applying a collimation factor and sufficiently reducing the source search volume's extent to prevent the optimizer from getting stuck in local minima. DIMP's simple point detector response function (DRF) is being improved to address coplanar false positive/negative responses, and an angular DRF is being considered for integration with the next version of DIMP to account for highly collimated responses. Overall, DIMP shows promise for solving the SNM holdup inverse problem, especially once an improved optimization algorithm is implemented.

  2. A numerical solution method for acoustic radiation from axisymmetric bodies

    NASA Technical Reports Server (NTRS)

    Caruthers, John E.; Raviprakash, G. K.

    1995-01-01

    A new and very efficient numerical method for solving equations of the Helmholtz type is specialized for problems having axisymmetric geometry. It is then demonstrated by application to the classical problem of acoustic radiation from a vibrating piston set in a stationary infinite plane. The method utilizes 'Green's Function Discretization' to obtain an accurate resolution of the waves using only 2-3 points per wave. Locally valid free-space Green's functions, used in the discretization step, are obtained by quadrature. Results are computed for a range of grid spacing/piston radius ratios at a frequency parameter, omega R/c(sub 0), of 2 pi. In this case, the minimum required grid resolution appears to be fixed by the need to resolve a step boundary condition at the piston edge rather than by the length scale imposed by the wavelength of the acoustic radiation. It is also demonstrated that a local near-field radiation boundary procedure allows the domain to be truncated very near the radiating source with little effect on the solution.

  3. Trends in characteristics of children served by the Children's Mental Health Initiative: 1994-2007.

    PubMed

    Walrath, Christine; Garraza, Lucas Godoy; Stephens, Robert; Azur, Melissa; Miech, Richard; Leaf, Philip

    2009-11-01

    Data from 14 years of the national evaluation of the Comprehensive Community Mental Health Services for Children and Their Families Program were used to understand the trends in the emotional and behavioral problems and demographic characteristics of children entering services. The data for this study were derived from information collected at intake into services at 90 sites that received their initial federal funding between 1993 and 2004. The findings from this study suggest that children entering services later in a site's funding cycle had lower levels of behavioral problems, and children served in sites funded later in the 14-year period had higher levels of behavioral problems. Females have consistently entered services with more severe problems, and children referred from non-mental health sources, younger children, and those from non-white racial/ethnic backgrounds have entered system of care services with less severe problems. The policy and programming implications, as well as implications for local system of care program development and implementation, are discussed.

  4. A large eddy simulation scheme for turbulent reacting flows

    NASA Technical Reports Server (NTRS)

    Gao, Feng

    1993-01-01

    The recent development of the dynamic subgrid-scale (SGS) model has provided a consistent method for generating localized turbulent mixing models and has opened up great possibilities for applying the large eddy simulation (LES) technique to real world problems. Given that direct numerical simulation (DNS) cannot solve engineering flow problems in the foreseeable future (Reynolds 1989), LES is certainly an attractive alternative. It seems only natural to bring this new development in SGS modeling to bear on reacting flows. The major stumbling block for introducing LES to reacting flow problems has been the proper modeling of the reaction source terms. Various models have been proposed, but none of them has a wide range of applicability. For example, some of the models in combustion have been based on the flamelet assumption, which is only valid for relatively fast reactions. Some other models have neglected the effects of chemical reactions on the turbulent mixing time scale, which is certainly not valid for fast and non-isothermal reactions. The probability density function (PDF) method can be usefully employed to deal with the modeling of the reaction source terms. In order to fit into the framework of LES, a new PDF, the large eddy PDF (LEPDF), is introduced. This PDF provides an accurate representation for the filtered chemical source terms and can be readily calculated in the simulations. The details of this scheme are described.

  5. Discriminative Transfer Subspace Learning via Low-Rank and Sparse Representation.

    PubMed

    Xu, Yong; Fang, Xiaozhao; Wu, Jian; Li, Xuelong; Zhang, David

    2016-02-01

    In this paper, we address the problem of unsupervised domain transfer learning in which no labels are available in the target domain. We use a transformation matrix to transfer both the source and target data to a common subspace, where each target sample can be represented by a combination of source samples such that the samples from different domains can be well interlaced. In this way, the discrepancy of the source and target domains is reduced. By imposing joint low-rank and sparse constraints on the reconstruction coefficient matrix, the global and local structures of data can be preserved. To enlarge the margins between different classes as much as possible and provide more freedom to diminish the discrepancy, a flexible linear classifier (projection) is obtained by learning a non-negative label relaxation matrix that allows the strict binary label matrix to relax into a slack variable matrix. Our method can avoid a potentially negative transfer by using a sparse matrix to model the noise and, thus, is more robust to different types of noise. We formulate our problem as a constrained low-rankness and sparsity minimization problem and solve it by the inexact augmented Lagrange multiplier method. Extensive experiments on various visual domain adaptation tasks show the superiority of the proposed method over the state-of-the-art methods. The MATLAB code of our method will be publicly available at http://www.yongxu.org/lunwen.html.
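The inexact augmented Lagrange multiplier method mentioned above alternates closed-form proximal updates; the update for the sparse (noise) term is elementwise soft-thresholding. A minimal sketch of that operator on a flat list (an illustration of the standard prox step, not the authors' MATLAB code):

```python
import math

def soft_threshold(x, tau):
    """Elementwise soft-thresholding, the proximal operator of tau*||.||_1,
    used for the sparse-error update in inexact ALM iterations."""
    return [math.copysign(max(abs(v) - tau, 0.0), v) for v in x]
```

The companion low-rank update applies the same shrinkage to the singular values of the coefficient matrix (singular value thresholding).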

  6. Far-field DOA estimation and source localization for different scenarios in a distributed sensor network

    NASA Astrophysics Data System (ADS)

    Asgari, Shadnaz

    Recent developments in integrated circuits and wireless communications not only open up many possibilities but also introduce challenging issues for the collaborative processing of signals for source localization and beamforming in an energy-constrained distributed sensor network. In signal processing, various sensor array processing algorithms and concepts have been adopted, but they must be further tailored to match the communication and computational constraints. Sometimes the constraints are such that none of the existing algorithms would be an efficient option for the defined problem, and as a result the development of a new algorithm becomes necessary. In this dissertation, we present the theoretical and practical issues of Direction-Of-Arrival (DOA) estimation and source localization using the Approximate-Maximum-Likelihood (AML) algorithm for different scenarios. We first investigate a robust algorithm design for coherent source DOA estimation in a limited reverberant environment. Then, we provide a least-squares (LS) solution for source localization based on our newly proposed virtual array model. In another scenario, we consider the determination of the location of a disturbance source which emits both wideband acoustic and seismic signals. We devise an enhanced AML algorithm to process the data collected at the acoustic sensors. For processing the seismic signals, two distinct algorithms are investigated to determine the DOAs. Then, we consider a basic algorithm for fusion of the results yielded by the acoustic and seismic arrays. We also investigate the theoretical and practical issues of DOA estimation in a three-dimensional (3D) scenario. We show that the performance of the proposed 3D AML algorithm converges to the Cramer-Rao Bound. We use the concept of an isotropic array to reduce the complexity of the proposed algorithm by advocating a decoupled 3D version. We also explore a modified version of the decoupled 3D AML algorithm which can be used for DOA estimation with non-isotropic arrays. In this dissertation, for each scenario, efficient numerical implementations of the corresponding AML algorithm are derived and applied to a real-time sensor network testbed. Extensive simulations as well as experimental results are presented to verify the effectiveness of the proposed algorithms.
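The AML estimator itself is too involved for a short sketch, but the geometric principle it rests on (inferring direction from inter-sensor delays) can be illustrated for a two-microphone far-field case with a plain cross-correlation TDOA estimator. This is an illustrative simplification, not the AML algorithm; all names and parameters are hypothetical:

```python
import math

def estimate_doa(s1, s2, max_lag, fs, d, c=343.0):
    """Far-field DOA from two sensors d metres apart: find the sample lag
    maximising the cross-correlation, then convert delay to an angle."""
    def xcorr(lag):  # correlation of s1 shifted by `lag` samples against s2
        return sum(s1[n - lag] * s2[n] for n in range(max_lag, len(s1) - max_lag))
    best = max(range(-max_lag, max_lag + 1), key=xcorr)
    sin_theta = max(-1.0, min(1.0, best * c / (fs * d)))
    return math.degrees(math.asin(sin_theta))
```

With coherent or multiple sources this naive estimator degrades quickly, which is one motivation for the maximum-likelihood treatment the dissertation develops.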

  7. ON THE LAMPPOST MODEL OF ACCRETING BLACK HOLES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Niedźwiecki, Andrzej; Szanecki, Michał; Zdziarski, Andrzej A.

    2016-04-10

    We study the lamppost model, in which the X-ray source in accreting black hole (BH) systems is located on the rotation axis close to the horizon. We point out a number of inconsistencies in the widely used lamppost model relxilllp, e.g., neglecting the redshift of the photons emitted by the lamppost that are directly observed. They appear to invalidate those model fitting results for which the source distances from the horizon are within several gravitational radii. Furthermore, if those results were correct, most of the photons produced in the lamppost would be trapped by the BH, and the luminosity generated in the source as measured at infinity would be much larger than that observed. This appears to be in conflict with the observed smooth state transitions between the hard and soft states of X-ray binaries. The required increase of the accretion rate and the associated efficiency reduction also present a problem for active galactic nuclei. Then, those models imply the luminosity measured in the local frame is much higher than that produced in the source and measured at infinity, due to the additional effects of time dilation and redshift, and the electron temperature is significantly higher than that observed. We show that these conditions imply that the fitted sources would be out of the e{sup ±} pair equilibrium. On the other hand, the above issues pose relatively minor problems for sources at large distances from the BH, where relxilllp can still be used.

  8. Evaluation of the site effect with Heuristic Methods

    NASA Astrophysics Data System (ADS)

    Torres, N. N.; Ortiz-Aleman, C.

    2017-12-01

    The seismic site response in an area depends mainly on the local geological and topographical conditions. Estimation of variations in ground motion can contribute significantly to seismic hazard assessment, in order to reduce human and economic losses. Site response estimation can be posed as a parameterized inversion approach which allows separating source and path effects. The generalized inversion (Field and Jacob, 1995) represents one of the alternative methods to estimate the local seismic response, which involves solving a strongly non-linear multiparametric problem. In this work, local seismic response was estimated using global optimization methods (Genetic Algorithms and Simulated Annealing), which allowed us to increase the range of explored solutions in a nonlinear search, as compared to other conventional linear methods. Using the VEOX Network velocity records collected from August 2007 to March 2009, the source, path and site parameters corresponding to the amplitude spectra of the S wave of the velocity seismic records are estimated. We can establish that the inverted parameters resulting from this simultaneous inversion approach show excellent agreement, not only in terms of the adjustment between observed and calculated spectra, but also when compared to previous work from several authors.
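As a toy illustration of the global-search idea behind such heuristic inversions (not the authors' actual parameterization), a bare-bones simulated annealing loop over a single parameter might look like this; the cooling schedule, step size, and objective are all illustrative choices:

```python
import math, random

def anneal(cost, x0, steps=2000, t0=1.0, seed=42):
    """Minimal simulated annealing: Gaussian perturbations accepted with the
    Metropolis rule under a geometrically cooling temperature."""
    rng = random.Random(seed)
    x, c = x0, cost(x0)
    best_x, best_c = x, c
    for i in range(steps):
        t = t0 * 0.995 ** i                    # cooling schedule
        xn = x + rng.gauss(0.0, 0.5)           # propose a neighbour
        cn = cost(xn)
        if cn < c or rng.random() < math.exp((c - cn) / t):
            x, c = xn, cn                      # accept (always if downhill)
        if c < best_c:
            best_x, best_c = x, c
    return best_x, best_c
```

The occasional uphill acceptances are what let the search escape the local minima that defeat purely linear or gradient-based inversions.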

  9. Autonomous Modelling of X-ray Spectra Using Robust Global Optimization Methods

    NASA Astrophysics Data System (ADS)

    Rogers, Adam; Safi-Harb, Samar; Fiege, Jason

    2015-08-01

    The standard approach to model fitting in X-ray astronomy is by means of local optimization methods. However, these local optimizers suffer from a number of problems, such as a tendency for the fit parameters to become trapped in local minima, and can require an involved process of detailed user intervention to guide them through the optimization process. In this work we introduce a general GUI-driven global optimization method for fitting models to X-ray data, written in MATLAB, which searches for optimal models with minimal user interaction. We directly interface with the commonly used XSPEC libraries to access the full complement of pre-existing spectral models that describe a wide range of physics appropriate for modelling astrophysical sources, including supernova remnants and compact objects. Our algorithm is powered by the Ferret genetic algorithm and Locust particle swarm optimizer from the Qubist Global Optimization Toolbox, which are robust at finding families of solutions and identifying degeneracies. This technique will be particularly instrumental for multi-parameter models and high-fidelity data. In this presentation, we provide details of the code and use our techniques to analyze X-ray data obtained from a variety of astrophysical sources.

  10. Image fusion method based on regional feature and improved bidimensional empirical mode decomposition

    NASA Astrophysics Data System (ADS)

    Qin, Xinqiang; Hu, Gang; Hu, Kai

    2018-01-01

    The decomposition of multiple source images using bidimensional empirical mode decomposition (BEMD) often produces mismatched bidimensional intrinsic mode functions, either by their number or their frequency, making image fusion difficult. A solution to this problem is proposed using a fixed number of iterations and a union operation in the sifting process. By combining the local regional features of the images, an image fusion method has been developed. First, the source images are decomposed using the proposed BEMD to produce the first intrinsic mode function (IMF) and residue component. Second, for the IMF component, a selection and weighted average strategy based on local area energy is used to obtain a high-frequency fusion component. Third, for the residue component, a selection and weighted average strategy based on local average gray difference is used to obtain a low-frequency fusion component. Finally, the fused image is obtained by applying the inverse BEMD transform. Experimental results show that the proposed algorithm provides superior performance over methods based on wavelet transform, line and column-based EMD, and complex empirical mode decomposition, both in terms of visual quality and objective evaluation criteria.
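The high-frequency fusion rule described above (keep, per pixel, the IMF coefficient whose local neighbourhood carries more energy) can be sketched in a few lines. This is an illustrative reimplementation on plain nested lists, not the authors' code; the window size is an arbitrary choice:

```python
def local_energy(img, i, j, w=1):
    """Sum of squared coefficients in a (2w+1)x(2w+1) window, clipped at borders."""
    rows, cols = len(img), len(img[0])
    return sum(img[r][c] ** 2
               for r in range(max(0, i - w), min(rows, i + w + 1))
               for c in range(max(0, j - w), min(cols, j + w + 1)))

def fuse_high_freq(imf_a, imf_b, w=1):
    """Per-pixel selection: keep the coefficient whose neighbourhood has more energy."""
    rows, cols = len(imf_a), len(imf_a[0])
    return [[imf_a[i][j]
             if local_energy(imf_a, i, j, w) >= local_energy(imf_b, i, j, w)
             else imf_b[i][j]
             for j in range(cols)]
            for i in range(rows)]
```

The low-frequency residue would be fused analogously, with local average gray difference in place of local energy.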

  11. Directional Emission from Dielectric Leaky-Wave Nanoantennas

    NASA Astrophysics Data System (ADS)

    Peter, Manuel; Hildebrandt, Andre; Schlickriede, Christian; Gharib, Kimia; Zentgraf, Thomas; Förstner, Jens; Linden, Stefan

    2017-07-01

    An important source of innovation in nanophotonics is the idea to scale down known radio wave technologies to the optical regime. One thoroughly investigated example of this approach is metallic nanoantennas, which employ plasmonic resonances to couple localized emitters to selected far-field modes. While metals can be treated as perfect conductors in the microwave regime, their response becomes Drude-like at optical frequencies. Thus, plasmonic nanoantennas are inherently lossy. Moreover, their resonant nature requires precise control of the antenna geometry. A promising way to circumvent these problems is the use of broadband nanoantennas made from low-loss dielectric materials. Here, we report on highly directional emission from active dielectric leaky-wave nanoantennas made of hafnium dioxide. Colloidal semiconductor quantum dots deposited in the nanoantenna feed gap serve as a local light source. The emission patterns of active nanoantennas with different sizes are measured by Fourier imaging. We find for all antenna sizes a highly directional emission, underlining the broadband operation of our design.

  12. Modelling of a spread of hazardous substances in a Floreon+ system

    NASA Astrophysics Data System (ADS)

    Ronovsky, Ales; Brzobohaty, Tomas; Kuchar, Stepan; Vojtek, David

    2017-07-01

    This paper is focused on a module for automated numerical modelling of the spread of hazardous substances, developed for the Floreon+ system on demand of the Fire Brigade of the Moravian-Silesian Region. The main purpose of the module is to provide more accurate prediction for smog situations, which are a frequent problem in the region. It can be operated by a non-scientific user through the Floreon+ client and can be used as a short-term prediction model of the evolution of concentrations of dangerous substances (SO2, PMx) from stable sources, such as heavy industry factories, local furnaces or highways, or as a fast prediction of the spread of hazardous substances in the case of a crash of a mobile source of contamination (transport of dangerous substances) or of a leakage in a local chemical factory. The process of automatic gathering of atmospheric data, the connection of the Floreon+ system with the HPC infrastructure necessary for computing such a model, and the model itself are described below.

  13. Integral representations of solutions of the wave equation based on relativistic wavelets

    NASA Astrophysics Data System (ADS)

    Perel, Maria; Gorodnitskiy, Evgeny

    2012-09-01

    A representation of solutions of the wave equation with two spatial coordinates in terms of localized elementary ones is presented. Elementary solutions are constructed from four solutions with the help of transformations of the affine Poincaré group, i.e. with the help of translations, dilations in space and time and Lorentz transformations. The representation can be interpreted in terms of the initial-boundary value problem for the wave equation in a half-plane. It gives the solution as an integral representation of two types of solutions: propagating localized solutions running away from the boundary under different angles and packet-like surface waves running along the boundary and exponentially decreasing away from the boundary. Properties of elementary solutions are discussed. A numerical investigation of coefficients of the decomposition is carried out. An example of the decomposition of the field created by sources moving along a line with different speeds is considered, and the dependence of coefficients on speeds of sources is discussed.

  14. Study of atmospheric dynamics and pollution in the coastal area of English Channel using clustering technique

    NASA Astrophysics Data System (ADS)

    Sokolov, Anton; Dmitriev, Egor; Delbarre, Hervé; Augustin, Patrick; Gengembre, Cyril; Fourmenten, Marc

    2016-04-01

    The problem of atmospheric contamination by principal air pollutants was considered in the industrialized coastal region of the English Channel in Dunkirk, influenced by north European metropolitan areas. MESO-NH nested models were used for the simulation of the local atmospheric dynamics and the online calculation of Lagrangian backward trajectories with 15-minute temporal resolution and a horizontal resolution down to 500 m. The one-month mesoscale numerical simulation was coupled with local pollution measurements of volatile organic compounds, particulate matter, ozone, sulphur dioxide and nitrogen oxides. Principal atmospheric pathways were determined by a clustering technique applied to the simulated backward trajectories. Six clusters were obtained that describe the local atmospheric dynamics: four correspond to winds blowing through the English Channel, one to winds coming from the south, and the largest cluster to small wind speeds; this last cluster includes mostly sea breeze events. The analysis of meteorological data and pollution measurements allows the principal atmospheric pathways to be related to local air contamination events. It was shown that contamination events are mostly connected with a channelling of pollution from local sources and low-turbulent states of the local atmosphere.
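Clustering backward trajectories typically means flattening each trajectory's sampled coordinates into one vector and running an ordinary k-means over those vectors. A minimal pure-Python sketch of that step (an illustration of the general technique, not the study's exact procedure):

```python
import random

def kmeans(points, k, iters=50, seed=0):
    """Plain k-means on fixed-length vectors (e.g. flattened back-trajectories)."""
    rng = random.Random(seed)
    centers = [list(p) for p in rng.sample(points, k)]

    def nearest(v):
        return min(range(k),
                   key=lambda c: sum((a - b) ** 2 for a, b in zip(v, centers[c])))

    labels = [0] * len(points)
    for _ in range(iters):
        labels = [nearest(v) for v in points]
        for c in range(k):
            members = [v for v, lab in zip(points, labels) if lab == c]
            if members:  # keep the old centre if a cluster empties out
                centers[c] = [sum(col) / len(members) for col in zip(*members)]
    return labels, centers
```

Each resulting centre can then be read back as a mean pathway, which is how cluster labels such as "winds through the Channel" or "sea breeze" arise.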

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Marliana, Ana, E-mail: na-cwith22@yahoo.co.id; Fitriani, Eka; Ramadhan, Fauzan

    Waste fish bones are a problem stemming from activities in the field of fisheries, and they have not been used optimally. Fish bones contain calcium, a natural source that can be used to synthesize hydroxyapatite (HA). In this research, HA was synthesized from waste fish bones as local wisdom in Semarang. The goals are to produce HA with cheaper production costs and to reduce the environmental problems caused by waste bones. The novelty of this study is the use of local fish bone as a source of calcium and a simple method of synthesis. The synthesis of HA can be done through a maceration process with firing temperatures of 1000°C, or followed by a sol-gel method with firing at 550°C. The results are analyzed using FTIR (Fourier Transform Infrared), XRD (X-Ray Diffraction) and SEM-EDX (Scanning Electron Microscopy-Energy Dispersive X-Ray). FTIR spectra showed absorption of phosphate and OH groups belonging to HA, as evidenced by the results of XRD. The average grain sizes of the macerated and sol-gel-synthesized products are not significantly different, at about 69 nm. The Ca/P ratio of HA by maceration is 0.89, and increases to 1.41 after the sol-gel process. The morphology of HA by maceration shows regular and uniform particle growth, while the morphology of HA after the sol-gel process is irregular and agglomerated.

  16. Localizing the sources of two independent noises: Role of time varying amplitude differences

    PubMed Central

    Yost, William A.; Brown, Christopher A.

    2013-01-01

    Listeners localized the free-field sources of either one or two simultaneous and independently generated noise bursts. Listeners' localization performance was better when localizing one rather than two sound sources. With two sound sources, localization performance was better when the listener was provided prior information about the location of one of them. Listeners also localized two simultaneous noise bursts that had sinusoidal amplitude modulation (AM) applied, in which the modulation envelope was in-phase across the two source locations or was 180° out-of-phase. The AM was employed to investigate a hypothesis as to what process listeners might use to localize multiple sound sources. The results supported the hypothesis that localization of two sound sources might be based on temporal-spectral regions of the combined waveform in which the sound from one source was more intense than that from the other source. The interaural information extracted from such temporal-spectral regions might provide reliable estimates of the sound source location that produced the more intense sound in that temporal-spectral region. PMID:23556597

  17. Localizing the sources of two independent noises: role of time varying amplitude differences.

    PubMed

    Yost, William A; Brown, Christopher A

    2013-04-01

    Listeners localized the free-field sources of either one or two simultaneous and independently generated noise bursts. Listeners' localization performance was better when localizing one rather than two sound sources. With two sound sources, localization performance was better when the listener was provided prior information about the location of one of them. Listeners also localized two simultaneous noise bursts that had sinusoidal amplitude modulation (AM) applied, in which the modulation envelope was in-phase across the two source locations or was 180° out-of-phase. The AM was employed to investigate a hypothesis as to what process listeners might use to localize multiple sound sources. The results supported the hypothesis that localization of two sound sources might be based on temporal-spectral regions of the combined waveform in which the sound from one source was more intense than that from the other source. The interaural information extracted from such temporal-spectral regions might provide reliable estimates of the sound source location that produced the more intense sound in that temporal-spectral region.
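The hypothesis above — that localization draws on temporal-spectral regions where one source is more intense than the other — can be illustrated with a crude time-domain version: split the two sources into short segments and flag the segments where one source's level dominates. Purely illustrative; segment length and dominance margin are arbitrary choices, not values from the study:

```python
import math

def dominant_segments(s1, s2, seg=32, margin_db=3.0):
    """Start indices of segments where source 1's local RMS level exceeds
    source 2's by at least margin_db decibels."""
    def rms(x):
        return math.sqrt(sum(v * v for v in x) / len(x)) or 1e-12
    picked = []
    for start in range(0, min(len(s1), len(s2)) - seg + 1, seg):
        r1 = rms(s1[start:start + seg])
        r2 = rms(s2[start:start + seg])
        if 20.0 * math.log10(r1 / r2) >= margin_db:
            picked.append(start)
    return picked
```

Out-of-phase amplitude modulation maximizes the number of such dominance regions for each source, which is why it was the natural manipulation for testing the hypothesis.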

  18. Improving MEG source localizations: an automated method for complete artifact removal based on independent component analysis.

    PubMed

    Mantini, D; Franciotti, R; Romani, G L; Pizzella, V

    2008-03-01

    The major limitation for the acquisition of high-quality magnetoencephalography (MEG) recordings is the presence of disturbances of physiological and technical origins: eye movements, cardiac signals, muscular contractions, and environmental noise are serious problems for MEG signal analysis. In recent years, multi-channel MEG systems have undergone rapid technological developments in terms of noise reduction, and many processing methods have been proposed for artifact rejection. Independent component analysis (ICA) has already been shown to be an effective and generally applicable technique for concurrently removing artifacts and noise from MEG recordings. However, no standardized automated system based on ICA has become available so far, because of the intrinsic difficulty in the reliable categorization of the source signals obtained with this technique. In this work, approximate entropy (ApEn), a measure of data regularity, is successfully used for the classification of the signals produced by ICA, allowing for an automated artifact rejection. The proposed method has been tested using MEG data sets collected during somatosensory, auditory and visual stimulation. It was demonstrated to be effective in attenuating both biological artifacts and environmental noise, in order to reconstruct clear signals that can be used for improving brain source localizations.
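Approximate entropy, the regularity measure used above to classify ICA components, is straightforward to compute. A minimal sketch of the standard ApEn(m, r) definition (window length m and tolerance r here are illustrative defaults, not the paper's settings):

```python
import math

def apen(series, m=2, r=0.2):
    """Approximate entropy (Pincus): lower for regular signals, higher for noise."""
    n = len(series)

    def phi(mm):
        windows = [series[i:i + mm] for i in range(n - mm + 1)]
        total = 0.0
        for w1 in windows:
            # fraction of windows within tolerance r of w1 (includes the self-match)
            c = sum(1 for w2 in windows
                    if max(abs(a - b) for a, b in zip(w1, w2)) <= r)
            total += math.log(c / len(windows))
        return total / len(windows)

    return phi(m) - phi(m + 1)
```

Regular artifact components such as cardiac signals score low, while genuine brain activity and noise score higher, which is what makes ApEn usable as an automatic classifier.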

  19. Guineaworm infection in the Wa district of north-western Ghana.

    PubMed

    Lyons, G R

    1972-01-01

    The Ghana-5 schistosomiasis project is situated in an exclusively rural area of north-western Ghana. Since the inhabitants rely for the most part on natural sources of drinking water the transmission of both urinary schistosomiasis and guineaworm infection must often occur at the same sites, and the epidemiology and the problems of control of these diseases might be expected to have features in common. An epidemiological survey of 8 300 people in 1967-68 showed that guineaworm had a scattered distribution, 35 of 43 villages having an annual incidence of less than 10%. Intensive study of 5 of the most seriously affected villages over a period of 3 years has shown that there is a delicate balance between the parasite and its human host in this area, largely as a result of the impermanent nature of the principal transmission sites, i.e., ponds and the smaller riverine pools. The timing, duration, and intensity of transmission have been shown to vary widely from one locality to another, as well as from year to year. These characteristics are determined by the type and extent of the local source of drinking water, the availability of alternative sources, and the monthly pattern of rainfall.

  20. On the application of ENO scheme with subcell resolution to conservation laws with stiff source terms

    NASA Technical Reports Server (NTRS)

    Chang, Shih-Hung

    1991-01-01

    Two approaches are used to extend the essentially non-oscillatory (ENO) schemes to treat conservation laws with stiff source terms. One approach is the application of the Strang time-splitting method. Here the basic ENO scheme and the Harten modification using subcell resolution (SR), ENO/SR scheme, are extended this way. The other approach is a direct method and a modification of the ENO/SR. Here the technique of ENO reconstruction with subcell resolution is used to locate the discontinuity within a cell and the time evolution is then accomplished by solving the differential equation along characteristics locally and advancing in the characteristic direction. This scheme is denoted ENO/SRCD (subcell resolution - characteristic direction). All the schemes are tested on the equation of LeVeque and Yee (NASA-TM-100075, 1988) modeling reacting flow problems. Numerical results show that these schemes handle this intriguing model problem very well, especially with ENO/SRCD which produces perfect resolution at the discontinuity.
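The Strang time-splitting approach mentioned above alternates a half-step of the stiff source term, a full advection step, and another source half-step. A minimal sketch for the linear model u_t + a u_x = -k u with an exact source solve and periodic first-order upwind advection (the ENO/SR machinery of the paper is not reproduced here):

```python
import math

def strang_step(u, dt, dx, a, k):
    """One Strang-split step for u_t + a u_x = -k u with a >= 0:
    exact source half-step, periodic upwind advection, source half-step."""
    decay = math.exp(-0.5 * k * dt)
    u = [v * decay for v in u]                                   # source, dt/2
    c = a * dt / dx                                              # CFL number
    u = [u[i] - c * (u[i] - u[i - 1]) for i in range(len(u))]    # upwind advection
    return [v * decay for v in u]                                # source, dt/2
```

Because the source sub-step is solved exactly, the splitting stays stable even when k·dt is large, which is the essence of treating stiff source terms separately from the transport.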

  1. Acquisition of ICU data: concepts and demands.

    PubMed

    Imhoff, M

    1992-12-01

    As data overload is a problem in critical care today, it is of utmost importance to improve acquisition, storage, integration, and presentation of medical data, which appears only feasible with the help of bedside computers. The data originates from four major sources: (1) the bedside medical devices, (2) the local area network (LAN) of the ICU, (3) the hospital information system (HIS), and (4) manual input. All sources differ markedly in quality and quantity of data and in the demands of the interfaces between the source of data and the patient database. The demands for data acquisition from bedside medical devices, ICU-LAN and HIS concentrate on technical problems, such as computational power, storage capacity, real-time processing, interfacing with different devices and networks, and the unmistakable assignment of data to the individual patient. The main problem of manual data acquisition is the definition and configuration of the user interface, which must allow the inexperienced user to interact with the computer intuitively. Emphasis must be put on the construction of a pleasant, logical and easy-to-handle graphical user interface (GUI). Short response times will require high graphical processing capacity. Moreover, high computational resources are necessary in the future for additional interfacing devices such as speech recognition and 3D-GUI. Therefore, in an ICU environment the demands for computational power are enormous. These problems are complicated by the urgent need for friendly and easy-to-handle user interfaces. Both facts place ICU bedside computing at the vanguard of present and future workstation development leaving no room for solutions based on traditional concepts of personal computers.(ABSTRACT TRUNCATED AT 250 WORDS)

  2. Time-Domain Filtering for Spatial Large-Eddy Simulation

    NASA Technical Reports Server (NTRS)

    Pruett, C. David

    1997-01-01

    An approach to large-eddy simulation (LES) is developed whose subgrid-scale model incorporates filtering in the time domain, in contrast to conventional approaches, which exploit spatial filtering. The method is demonstrated in the simulation of a heated, compressible, axisymmetric jet, and results are compared with those obtained from fully resolved direct numerical simulation. The present approach was, in fact, motivated by the jet-flow problem and the desire to manipulate the flow by localized (point) sources for the purposes of noise suppression. Time-domain filtering appears to be more consistent with the modeling of point sources; moreover, time-domain filtering may resolve some fundamental inconsistencies associated with conventional space-filtered LES approaches.
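A common minimal model of causal time-domain filtering is a one-pole exponential filter, d(ū)/dt = (u − ū)/τ. The sketch below is a generic illustration of that idea, not Pruett's actual subgrid-scale filter; the time constant is an arbitrary choice:

```python
def time_filter(u, tau, dt):
    """Causal one-pole exponential filter: discretised d(ubar)/dt = (u - ubar)/tau."""
    alpha = dt / (tau + dt)
    out, acc = [], u[0]
    for v in u:
        acc += alpha * (v - acc)  # exponential moving average of the samples
        out.append(acc)
    return out
```

Unlike a spatial filter, this operator needs only the local time history at each grid point, which is what makes it compatible with localized (point) sources.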

  3. Stanley Corrsin Award Talk: Fluid Mechanics of Fungi and Slime

    NASA Astrophysics Data System (ADS)

    Brenner, Michael

    2013-11-01

    There are interesting fluid mechanics problems everywhere, even in the most lowly and hidden corners of forest floors. Here I discuss some questions we have been working on in recent years involving fungi and slime. A critical issue for the ecology of fungi and slime is nutrient availability: nutrient sources are highly heterogeneous, and strategies are necessary to find food when it runs out. In the fungal phylum Ascomycota, spore dispersal is the primary mechanism for finding new food sources. The defining feature of this phylum is the ascus, a fluid-filled sac from which spores are ejected through a build-up in osmotic pressure. We outline the (largely fluid mechanical) design constraints on this ejection strategy, and demonstrate how it provides strong constraints for the diverse morphologies of spores and asci found in nature. The core of the argument revisits a classical problem in elastohydrodynamic lubrication from a different perspective. A completely different strategy for finding new nutrients is found in slime molds and fungi that stretch out, as a single organism, over enormous areas (up to hectares) of forest floor. As a model problem we study the slime mold Physarum polycephalum, which forages with a large network of connected tubes on the forest floor. Localized regions in the network find nutrient sources and then pump the nutrients throughout the entire organism. We discuss fluid mechanical mechanisms for coordinating this transport, which generalize peristalsis to pumping in a heterogeneous network. We give a preliminary discussion of how Physarum can detect a nutrient source and pump the nutrient throughout the organism.

  4. The inverse electroencephalography pipeline

    NASA Astrophysics Data System (ADS)

    Weinstein, David Michael

    The inverse electroencephalography (EEG) problem is defined as determining which regions of the brain are active based on remote measurements recorded with scalp EEG electrodes. An accurate solution to this problem would benefit both fundamental neuroscience research and clinical neuroscience applications. However, constructing accurate patient-specific inverse EEG solutions requires complex modeling, simulation, and visualization algorithms, and to date only a few systems have been developed that provide such capabilities. In this dissertation, a computational system for generating and investigating patient-specific inverse EEG solutions is introduced, and the requirements for each stage of this Inverse EEG Pipeline are defined and discussed. While the requirements of many of the stages are satisfied with existing algorithms, others have motivated research into novel modeling and simulation methods. The principal technical results of this work include novel surface-based volume modeling techniques, an efficient construction for the EEG lead field, and the Open Source release of the Inverse EEG Pipeline software for use by the bioelectric field research community. In this work, the Inverse EEG Pipeline is applied to three research problems in neurology: comparing focal and distributed source imaging algorithms; separating measurements into independent activation components for multifocal epilepsy; and localizing the cortical activity that produces the P300 effect in schizophrenia.

  5. Gender differences in the processing of standard emotional visual stimuli: integrating ERP and fMRI results

    NASA Astrophysics Data System (ADS)

    Yang, Lei; Tian, Jie; Wang, Xiaoxiang; Hu, Jin

    2005-04-01

    Comprehensive understanding of human emotion processing requires consideration of both the spatial distribution and the temporal sequencing of neural activity. The aim of our work is to identify brain regions involved in emotion recognition and to follow the time course with millisecond resolution. The effect of activation by visual stimuli from the International Affective Picture System (IAPS) was examined in participants of both genders. Hemodynamic and electrophysiological responses were measured in the same subjects: both fMRI and ERP data were acquired in an event-related design. fMRI data were obtained with a 3.0 T Siemens Magnetom whole-body MRI scanner, and 128-channel ERP data were recorded using an EGI system. ERP is sensitive to millisecond changes in mental activity, but its source localization and timing are limited by the ill-posed inverse problem. We therefore investigate ERP source reconstruction using an fMRI constraint, choosing ICA as a pre-processing step to exclude artifacts and to provide a prior estimate of the number of dipoles. The results indicate that males and females show differences in the neural mechanisms engaged during emotional visual stimulation.

  6. 25 CFR 47.10 - How is the local educational financial plan developed?

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... notes any problem with the plan, he or she must: (i) Notify the local board and local supervisor of the problem within two weeks of receiving the plan; (ii) Make arrangements to assist the local school supervisor and board to correct the problem; and (iii) Refer the problem to the Director of the Office of...

  7. Combined clinical and home rehabilitation: case report of an integrated knowledge-to-action study in a Dutch rehabilitation stroke unit.

    PubMed

    Nanninga, Christa S; Postema, Klaas; Schönherr, Marleen C; van Twillert, Sacha; Lettinga, Ant T

    2015-04-01

    There is growing awareness that the poor uptake of evidence in health care is not a knowledge-transfer problem but rather one of knowledge production. This issue calls for re-examination of the evidence produced and assumptions that underpin existing knowledge-to-action (KTA) activities. Accordingly, it has been advocated that KTA studies should treat research knowledge and local practical knowledge with analytical impartiality. The purpose of this case report is to illustrate the complexities in an evidence-informed improvement process of organized stroke care in a local rehabilitation setting. A participatory action approach was used to co-create knowledge and engage local therapists in a 2-way knowledge translation and multidirectional learning process. Evidence regarding rehabilitation stroke units was applied in a straightforward manner, as the setting met the criteria articulated in stroke unit reviews. Evidence on early supported discharge (ESD) could not be directly applied because of differences in target group and implementation environment between the local and reviewed settings. Early supported discharge was tailored to the needs of patients severely affected by stroke admitted to the local rehabilitation stroke unit by combining clinical and home rehabilitation (CCHR). Local therapists welcomed CCHR because it helped them make their task-specific training truly context specific. Key barriers to implementation were travel time, logistical problems, partitioning walls between financing streams, and legislative procedures. Improving local settings with available evidence is not a straightforward application process but rather a matter of searching, logical reasoning, and creatively working with heterogeneous knowledge sources in partnership with different stakeholders. Multiple organizational levels need to be addressed rather than focusing on therapists as sole site of change. © 2015 American Physical Therapy Association.

  8. An exact noniterative linear method for locating sources based on measuring receiver arrival times.

    PubMed

    Militello, C; Buenafuente, S R

    2007-06-01

    In this paper an exact, linear solution to the source localization problem based on the times of arrival at the receivers is presented. The method is unique in that the source's position can be obtained by solving a system of linear equations: three for a plane and four for a volume. This simplification comes at the cost of one receiver beyond the mathematical minimum (3+1 in two dimensions and 4+1 in three dimensions). The equations are easily worked out for any receiver configuration, and their geometrical interpretation is straightforward. Unlike other methods, the system of reference used to describe the receivers' positions is completely arbitrary. The relationship between this method and previously published ones is discussed, showing how the present, more general, method overcomes nonlinearity and unknown-dependency issues.
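    The paper's exact construction is not reproduced here, but the standard way such linearizations work can be sketched: subtracting the squared range equation of a reference receiver from each of the others cancels the quadratic terms in both the source position and the emission time, leaving a linear system in those unknowns, which is why one extra receiver is needed. A minimal sketch with made-up receiver positions:

```python
import numpy as np

def locate_source(receivers, arrival_times, c=343.0):
    """Linearized arrival-time localization.

    For each receiver i >= 1, subtracting the squared range equation of
    reference receiver 0 cancels |x|^2 and c^2*t_e^2, leaving
      -2 (r_i - r_0).x + 2 c^2 (t_i - t_0) t_e
          = c^2 (t_i^2 - t_0^2) - (|r_i|^2 - |r_0|^2),
    a system linear in the source position x and emission time t_e.
    """
    r = np.asarray(receivers, dtype=float)     # (m, d) positions
    t = np.asarray(arrival_times, dtype=float) # (m,) arrival times
    r0, t0 = r[0], t[0]
    A = np.hstack([-2.0 * (r[1:] - r0),
                   2.0 * c**2 * (t[1:] - t0)[:, None]])
    b = c**2 * (t[1:]**2 - t0**2) - (np.sum(r[1:]**2, axis=1) - r0 @ r0)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    return sol[:-1], sol[-1]   # estimated position, emission time
```

In 2D the unknowns are (x, y, t_e), so three difference equations, and hence four receivers, suffice, matching the 3+1 count in the abstract.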

  9. Review of Hybrid (Deterministic/Monte Carlo) Radiation Transport Methods, Codes, and Applications at Oak Ridge National Laboratory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wagner, John C; Peplow, Douglas E.; Mosher, Scott W

    2011-01-01

    This paper provides a review of the hybrid (Monte Carlo/deterministic) radiation transport methods and codes used at the Oak Ridge National Laboratory and examples of their application for increasing the efficiency of real-world, fixed-source Monte Carlo analyses. The two principal hybrid methods are (1) Consistent Adjoint Driven Importance Sampling (CADIS), for optimization of a localized detector (tally) region (e.g., flux, dose, or reaction rate at a particular location), and (2) Forward-Weighted CADIS (FW-CADIS), for optimizing distributions (e.g., mesh tallies over all or part of the problem space) or multiple localized detector regions (e.g., simultaneous optimization of two or more localized tally regions). The two methods have been implemented and automated in both the MAVRIC sequence of SCALE 6 and ADVANTG, a code that works with the MCNP code. As implemented, the methods utilize the results of approximate, fast-running 3-D discrete ordinates transport calculations (with the Denovo code) to generate consistent space- and energy-dependent source and transport (weight-window) biasing parameters. These methods and codes have been applied to many relevant and challenging problems, including calculations of PWR ex-core thermal detector response, dose rates throughout an entire PWR facility, site-boundary dose from arrays of commercial spent fuel storage casks, radiation fields for criticality accident alarm system placement, and detector response for special nuclear material detection scenarios and nuclear well-logging tools. Substantial computational speed-ups, generally of two to four orders of magnitude, have been realized for all applications to date. This paper provides a brief review of the methods, their implementation, results of their application, and current development activities, as well as a considerable list of references for readers seeking more information about the methods and/or their applications.

  10. Complementary feeding recommendations based on locally available foods in Indonesia.

    PubMed

    Fahmida, Umi; Santika, Otte; Kolopaking, Risatianti; Ferguson, Elaine

    2014-12-01

    Affordable, locally contextual complementary feeding recommendations (CFRs) that take into account cultural diversity and differences in food availability will be more likely to result in long-term improvements in complementary feeding practices than general recommendations. More objective approaches, such as linear programming (LP), have been recommended to identify optimal CFRs that meet nutrient requirements given local food availability, food patterns, food portions, and cost. To present results of our previous studies in which we developed CFRs using LP and to provide an example of how these CFRs can be put into practice in a community intervention trial in Indonesia. Dietary data were obtained using a single 24-hour dietary recall or a 1-day weighed diet record combined with a 1-day 24-hour recall and a 5-day food intake tally. With the LP approach, nutrient intakes were optimized while ensuring that a realistic diet was selected, using constraints such as the diet's energy content, food patterns, food portions, and cost. The price per 100 g of edible portion was obtained from market surveys in two or three local markets in each study area. LP analysis was performed using Super Solver in MS Excel or Optifood software. Iron, zinc, calcium, and niacin were problem nutrients in all age groups of children (6 to 8, 9 to 11, and 12 to 23 months) in both rural and periurban areas, except among children of higher socioeconomic status in urban areas. Thiamin and folate were also problem nutrients in some settings. Animal-source foods (meat, fish, poultry, and eggs [MFPE]) and fortified foods were the nutrient-dense foods identified by LP to fill the gaps for these problem nutrients. Iron, calcium, zinc, niacin, and potentially folate and thiamin are typical "problem nutrients" in the complementary diets of Indonesian children, although the extent of dietary inadequacy varies across age groups, areas, and socioeconomic levels. MFPE and fortified foods can improve micronutrient adequacy in complementary feeding diets and should be promoted in CFRs.
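    The study itself used Super Solver and Optifood; purely as a hedged sketch of the LP formulation described above (minimize diet cost subject to nutrient, energy, and portion constraints), one might write the following with SciPy. All food names, compositions, and prices here are invented for illustration and are not taken from the study:

```python
from scipy.optimize import linprog

# Hypothetical foods (values per 100 g serving): rice porridge, egg,
# iron-fortified infant cereal. None of these numbers come from the study.
cost   = [0.05, 0.30, 0.20]   # price per serving (arbitrary currency)
iron   = [0.4,  1.8,  5.0]    # mg per serving
zinc   = [0.6,  1.1,  3.0]    # mg per serving
energy = [130., 155., 380.]   # kcal per serving

# Minimize cost subject to: iron >= 4 mg, zinc >= 3 mg,
# energy within a 550-650 kcal band, at most 3 servings of any food.
res = linprog(
    c=cost,
    A_ub=[[-v for v in iron],     # -iron . x <= -4   (iron >= 4)
          [-v for v in zinc],     # -zinc . x <= -3   (zinc >= 3)
          energy,                 #  energy . x <= 650
          [-v for v in energy]],  # -energy . x <= -550
    b_ub=[-4.0, -3.0, 650.0, -550.0],
    bounds=[(0, 3)] * 3,
    method="highs",
)
servings = res.x  # optimal servings of each food
```

The "problem nutrient" diagnosis above corresponds to an LP like this becoming infeasible, or only feasible at an unrealistic cost, without nutrient-dense foods such as MFPE or fortified products.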

  11. Mitigating Local Natural Disaster through Social Aware Preparedness Using Complexity Approach

    NASA Astrophysics Data System (ADS)

    Supadli, Irwan; Saputri, Andini; Mawengkang, Herman

    2018-01-01

    During and after a natural disaster such as a volcanic eruption, many people have to abandon their homes for temporary shelters, and an eruption may recur several times. This happened, for example, at the Sinabung volcano, located in the Karo district of North Sumatera Province, Indonesia, where people in the disaster area have become indifferent. From a societal point of view, such a local natural disaster is a complex societal problem. This research seeks what should be done to raise the social awareness of these communities so that, having experienced a serious natural disaster, they can again live normally and sustainably, as before. A societal complexity approach is used to address the problem. The social studies in this activity analyze the social impacts arising from the implementation of the relocation itself. The scope of the social impact assessment includes: the social impact of the relocation development program, covering the impact of construction activities and the long-term impact of construction activity, particularly related to sources and use of clean water, sewerage, drainage, and solid waste management; and the social impacts associated with occupancy of the relocation sites and the availability of infrastructure (public facilities, including worship, health, and education facilities) in the pre-existing local environment. The social analysis draws on field findings, a study of related documents, and observations of the existing social environment of the Siosar settlements.

  12. Sound source localization method in an environment with flow based on Amiet-IMACS

    NASA Astrophysics Data System (ADS)

    Wei, Long; Li, Min; Qin, Sheng; Fu, Qiang; Yang, Debin

    2017-05-01

    A sound source localization method is proposed to localize and analyze sound sources in an environment with airflow. It combines the improved mapping of acoustic correlated sources (IMACS) method and Amiet's method, and is called Amiet-IMACS. It can localize uncorrelated and correlated sound sources in the presence of airflow. To implement this approach, Amiet's method is used to correct the sound propagation path in 3D, which improves the accuracy of the array manifold matrix and decreases the position error of the localized source. Then, the mapping of acoustic correlated sources (MACS) method, a high-resolution sound source localization algorithm, is improved by self-adjusting the constraint parameter at each iteration to increase convergence speed. A sound source localization experiment using a pair of loudspeakers in an anechoic wind tunnel under different flow speeds was conducted. The experiment exhibits the advantage of Amiet-IMACS in localizing the sound source position more accurately than IMACS alone in an environment with flow. Moreover, the aerodynamic noise produced by a NASA EPPLER 862 STRUT airfoil model in airflow with a velocity of 80 m/s is localized using the proposed method, which further proves its effectiveness in a flow environment. Finally, the relationship between the source position of this airfoil model and its frequency, along with its generation mechanism, is determined and interpreted.

  13. A national program for injury prevention in children and adolescents: the injury free coalition for kids.

    PubMed

    Pressley, Joyce C; Barlow, Barbara; Durkin, Maureen; Jacko, Sally A; Dominguez, DiLenny Roca; Johnson, Lenita

    2005-09-01

    Injury is the leading cause of death and a major source of preventable disability in children. Mechanisms of injury are rooted in a complex web of social, economic, environmental, criminal, and behavioral factors that necessitate a multifaceted, systematic injury prevention approach. This article describes the injury burden and the way physicians, community coalitions, and a private foundation teamed to impact the problem first in an urban minority community and then through a national program. Through our injury prevention work in a resource-limited neighborhood, a national model evolved that provides a systematic framework through which education and other interventions are implemented. Interventions are aimed at changing the community and home environments physically (safe play areas and elimination of community and home hazards) and socially (education and supervised extracurricular activities with mentors). This program, based on physician-community partnerships and private foundation financial support, expanded to 40 sites in 37 cities, representing all 10 US trauma regions. Each site is a local adaptation of the Injury Free Coalition model also referred to as the ABC's of injury prevention: A, "analyze injury data through local injury surveillance"; B, "build a local coalition"; C, "communicate the problem and raise awareness that injuries are a preventable public health problem"; D, "develop interventions and injury prevention activities to create safer environments and activities for children"; and E, "evaluate the interventions with ongoing surveillance." It is feasible to develop a comprehensive injury prevention program of national scope using a voluntary coalition of trauma centers, private foundation financial and technical support, and a local injury prevention model with a well-established record of reducing and sustaining lower injury rates for inner-city children and adolescents.

  14. Interictal High Frequency Oscillations Detected with Simultaneous Magnetoencephalography and Electroencephalography as Biomarker of Pediatric Epilepsy

    PubMed Central

    Papadelis, Christos; Tamilia, Eleonora; Stufflebeam, Steven; Grant, Patricia E.; Madsen, Joseph R.; Pearl, Phillip L.; Tanaka, Naoaki

    2016-01-01

    Crucial to the success of epilepsy surgery is the availability of a robust biomarker that identifies the epileptogenic zone (EZ). High-frequency oscillations (HFOs) have emerged as potential presurgical biomarkers for the identification of the EZ in addition to interictal epileptiform discharges (IEDs) and ictal activity. Although they are promising for localizing the EZ, they are not yet suited for the diagnosis or monitoring of epilepsy in clinical practice. Primary barriers remain: the lack of a formal and global definition for HFOs; the consequent heterogeneity of methodological approaches used for their study; and the practical difficulty of detecting and localizing them noninvasively from scalp recordings. Here, we present a methodology for the recording, detection, and localization of interictal HFOs in pediatric patients with refractory epilepsy. We report representative data of HFOs detected noninvasively from interictal scalp EEG and MEG in two children undergoing surgery. The underlying generators of the HFOs were localized by solving the inverse problem, and their localization was compared to the seizure onset zone (SOZ) as defined by the epileptologists. For both patients, IEDs and HFOs were localized with source imaging at concordant locations. For one patient, intracranial EEG (iEEG) data were also available; for this patient, the HFO localization was concordant between the noninvasive and invasive methods, and the comparison with iEEG served to validate the scalp findings. To the best of our knowledge, this is the first study to present source localization of scalp HFOs from simultaneous EEG and MEG recordings and to compare the results with invasive recordings. These findings suggest that HFOs can be reliably detected and localized noninvasively with scalp EEG and MEG. We conclude that the noninvasive localization of interictal HFOs could significantly improve the presurgical evaluation of pediatric patients with epilepsy. PMID:28060325
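    The abstract does not specify the detection algorithm; as a generic, hedged sketch of the usual approach (band-pass filtering in a ripple band followed by envelope thresholding), one might write the following, where the band edges, threshold, and minimum duration are illustrative assumptions rather than the paper's parameters:

```python
import numpy as np

def _envelope(x):
    """Amplitude envelope via the analytic signal (FFT-based Hilbert)."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.abs(np.fft.ifft(X * h))

def detect_hfo(eeg, fs, band=(80.0, 250.0), n_sd=3.0, min_cycles=4):
    """Toy HFO detector: FFT band-pass, envelope, amplitude threshold.

    Returns (start, stop) sample indices of candidate events lasting at
    least `min_cycles` cycles of the lower band edge.
    """
    X = np.fft.rfft(eeg)
    f = np.fft.rfftfreq(len(eeg), 1.0 / fs)
    X[(f < band[0]) | (f > band[1])] = 0.0       # brick-wall band-pass
    env = _envelope(np.fft.irfft(X, len(eeg)))
    above = env > env.mean() + n_sd * env.std()
    min_len = int(min_cycles * fs / band[0])
    events, start = [], None
    for i, flag in enumerate(above):
        if flag and start is None:
            start = i
        elif not flag and start is not None:
            if i - start >= min_len:
                events.append((start, i))
            start = None
    if start is not None and len(above) - start >= min_len:
        events.append((start, len(above)))
    return events
```

Clinical detectors add artifact rejection and channel-wise baselines; this sketch only shows why a duration criterion is needed to separate oscillatory events from isolated filtered spikes.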

  15. Application of Thin-Film Thermocouples to Localized Heat Transfer Measurements

    NASA Technical Reports Server (NTRS)

    Lepicovsky, J.; Bruckner, R. J.; Smith, F. A.

    1995-01-01

    The paper describes a proof-of-concept experiment on thin-film thermocouples used for localized heat transfer measurements applicable to experiments on hot parts of turbine engines. The paper has three main parts. The first part describes the thin-film sensors and manufacturing procedures; attention is paid to the connections between thin-film thermocouples and lead wires, which have been a source of problems in the past. The second part addresses the test arrangement and facility used for the heat transfer measurements, modeling the conditions for upcoming warm turbine tests at NASA LeRC. The paper stresses the advantages of a modular approach to the test rig design. Finally, we present the results of bulk and local heat flow rate measurements, as well as overall heat transfer coefficients obtained from measurements in a narrow passage with an aspect ratio of 11.8. The comparison of bulk and local heat flow rates confirms the applicability of thin-film thermocouples to the upcoming warm turbine tests.

  16. Spatial and Temporal Dust Source Variability in Northern China Identified Using Advanced Remote Sensing Analysis

    NASA Technical Reports Server (NTRS)

    Taramelli, A.; Pasqui, M.; Barbour, J.; Kirschbaum, D.; Bottai, L.; Busillo, C.; Calastrini, F.; Guarnieri, F.; Small, C.

    2013-01-01

    The aim of this research is to provide a detailed characterization of spatial patterns and temporal trends in the regional and local dust source areas within the desert of the Alashan Prefecture (Inner Mongolia, China). This problem was approached through multi-scale remote sensing analysis of vegetation changes. The primary requirements for this regional analysis are high spatial and spectral resolution data, accurate spectral calibration, and good temporal resolution with a suitable temporal baseline. Landsat analysis and field validation, along with lower-spatial-resolution classifications from MODIS and AVHRR, are combined to provide a reliable characterization of the different potential dust-producing sources. The representation of intra-annual and inter-annual Normalized Difference Vegetation Index (NDVI) trends, used for land cover discrimination when mapping potential dust sources with MODIS and AVHRR at the larger scale, is enhanced by Landsat Spectral Mixture Analysis (SMA). The combined methodology determines the extent to which Landsat can distinguish important soil types, in order to better understand how soil reflectance behaves at seasonal and inter-annual timescales. As a final result, the mapping of soil surface properties using SMA is representative of the responses of the different land and soil covers previously identified by the NDVI trends. The results could be used in dust emission models, even though they do not capture aggregate formation, soil stability, or particle coatings, which are critical for accurately representing dust sources over different regional and local emitting areas.
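    As a minimal sketch of the two ingredients named above, the snippet below computes NDVI and performs linear spectral unmixing by least squares with a sum-to-one constraint; the endmember reflectances in the example test are invented, not the study's:

```python
import numpy as np

def ndvi(red, nir):
    """Normalized Difference Vegetation Index."""
    return (nir - red) / (nir + red)

def unmix(pixel, endmembers, weight=100.0):
    """Linear spectral mixture analysis: find endmember fractions f
    minimizing ||E f - pixel||, with the sum-to-one constraint added
    as an extra, heavily weighted equation.

    endmembers : (n_bands, n_endmembers) reflectance matrix E.
    """
    A = np.vstack([endmembers, weight * np.ones(endmembers.shape[1])])
    b = np.append(pixel, weight)
    fractions, *_ = np.linalg.lstsq(A, b, rcond=None)
    return fractions
```

A fully constrained SMA would also enforce non-negative fractions (e.g. with a non-negative least-squares solver); the weighted-equation trick above handles only the sum-to-one constraint.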

  17. Quantum Theory of Superresolution for Incoherent Optical Imaging

    NASA Astrophysics Data System (ADS)

    Tsang, Mankei

    Rayleigh's criterion for resolving two incoherent point sources has been the most influential measure of optical imaging resolution for over a century. In the context of statistical image processing, violation of the criterion is especially detrimental to the estimation of the separation between the sources, and modern far-field superresolution techniques rely on suppressing the emission of close sources to enhance the localization precision. Using quantum optics, quantum metrology, and statistical analysis, here we show that, even if two close incoherent sources emit simultaneously, measurements with linear optics and photon counting can estimate their separation from the far field almost as precisely as conventional methods do for isolated sources, rendering Rayleigh's criterion irrelevant to the problem. Our results demonstrate that superresolution can be achieved not only for fluorophores but also for stars. Recent progress in generalizing our theory for multiple sources and spectroscopy will also be discussed. This work is supported by the Singapore National Research Foundation under NRF Grant No. NRF-NRFF2011-07 and the Singapore Ministry of Education Academic Research Fund Tier 1 Project R-263-000-C06-112.

  18. Multiple-generator errors are unavoidable under model misspecification.

    PubMed

    Jewett, D L; Zhang, Z

    1995-08-01

    Model misspecification poses a major problem for dipole source localization (DSL) because it causes insidious multiple-generator errors (MulGenErrs) to occur in the fitted dipole parameters. This paper describes how and why this occurs, based upon simple algebraic considerations. MulGenErrs must occur, to some degree, in any DSL analysis of real data because there is model misspecification and mathematically the equations used for the simultaneously active generators must be of a different form than the equations for each generator active alone.

  19. A search for soft X-ray emission from red-giant coronae

    NASA Technical Reports Server (NTRS)

    Margon, B.; Mason, K. O.; Sanford, P. W.

    1974-01-01

    Hills has pointed out that if red-giant coronae are weak sources of soft X-rays, then the problems of the identification of the local component of the soft X-ray background and the observed lack of gas in globular clusters may be simultaneously resolved. Using instrumentation aboard OAO Copernicus, we have searched unsuccessfully for emission in the 10-100 A band from four nearby red giants. In all cases, our upper limits are of the order of the minimum theoretically predicted fluxes.

  20. Analysis of the Performance of Heat Pipes and Phase-Change Materials with Multiple Localized Heat Sources for Space Applications

    DTIC Science & Technology

    1989-05-01

    4.1 Numerical Analysis of Stefan Problems for Generalized Multi-Dimensional Phase-Change Structures Using the Enthalpy Transforming Model. (Only nomenclature fragments of this DTIC record survive extraction, e.g. the Stefan number St = c_s(T_m - T_w)/H or c_s(T_m - T_i)/H, the circumferential distance coordinate s, and a dimensionless time based on t*alpha/L^2.)

  1. Gambling harms and gambling help-seeking amongst indigenous Australians.

    PubMed

    Hing, Nerilee; Breen, Helen; Gordon, Ashley; Russell, Alex

    2014-09-01

    This paper aimed to analyze the harms arising from gambling and gambling-related help-seeking behaviour within a large sample of Indigenous Australians. A self-selected sample of 1,259 Indigenous Australian adults completed a gambling survey at three Indigenous sports and cultural events, in several communities and online. Based on responses to the problem gambling severity index (PGSI), the proportions of the sample in the moderate risk and problem gambler groups were higher than those for the population of New South Wales. Many in our sample appeared to face higher risks with their gambling and experience severe gambling harms. From PGSI responses, notable harms include financial difficulties and feelings of guilt and regret about gambling. Further harms, including personal, relationship, family, community, legal and housing impacts, were shown to be significantly higher for problem gamblers than for the other PGSI groups. Most problem gamblers relied on family, extended family and friends for financial help or went without due to gambling losses. Nearly half the sample did not think they had a problem with gambling but the results show that the majority (57.7 %) faced some risk with their gambling. Of those who sought gambling help, family, extended family, friends and respected community members were consulted, demonstrating the reciprocal obligations underpinning traditional Aboriginal culture. The strength of this finding is that these people are potentially the greatest source of gambling help, but need knowledge and resources to provide that help effectively. Local Aboriginal services were preferred as the main sources of professional help for gambling-related problems.

  2. Is extinction forever?

    PubMed Central

    Bridge, Eli S.; Crawford, Priscilla H. C.; Hough, Daniel J.; Kelly, Jeffrey F.; Patten, Michael A.

    2015-01-01

    Mistrust of science has seeped into public perception of the most fundamental aspect of conservation—extinction. The term ought to be straightforward, and yet, there is a disconnect between scientific discussion and public views. This is not a mere semantic issue, rather one of communication. Within a population dynamics context, we say that a species went locally extinct, later to document its return. Conveying our findings matters, for when we use local extinction, an essentially nonsensical phrase, rather than extirpation, which is what is meant, then we contribute to, if not create outright, a problem for public understanding of conservation, particularly as local extinction is often shortened to extinction in media sources. The public that receives the message of our research void of context and modifiers comes away with the idea that extinction is not forever or, worse for conservation as a whole, that an extinction crisis has been invented. PMID:25711479

  3. 3-D time-domain induced polarization tomography: a new approach based on a source current density formulation

    NASA Astrophysics Data System (ADS)

    Soueid Ahmed, A.; Revil, A.

    2018-04-01

    Induced polarization (IP) of porous rocks can be associated with a secondary source current density, which is proportional to both the intrinsic chargeability and the primary (applied) current density. This gives the possibility of reformulating the time-domain induced polarization (TDIP) problem as a time-dependent self-potential-type problem. This new approach implies a change of strategy regarding data acquisition and inversion, allowing major time savings for both. For inverting TDIP data, we first retrieve the electrical resistivity distribution. Then, we use this electrical resistivity distribution to reconstruct the primary current density during the injection/retrieval of the (primary) current between the current electrodes A and B. The time-lapse secondary source current density distribution is determined given the primary source current density and a distribution of chargeability (forward modelling step). The inverse problem is linear between the secondary voltages (measured at all the electrodes) and the computed secondary source current density. A kernel matrix relating the observed secondary voltages to the source current density model is computed once (using the electrical conductivity distribution) and then used throughout the inversion process. The recovered source current density model is in turn used to estimate the time-dependent chargeability (normalized voltages) in each cell of the domain of interest. Assuming a Cole-Cole model for simplicity, we can reconstruct the 3-D distributions of the relaxation time τ and the Cole-Cole exponent c by fitting the intrinsic chargeability decay curve to a Cole-Cole relaxation model for each cell. Two simple cases are studied in detail to explain this new approach. In the first case, we estimate the Cole-Cole parameters as well as the source current density field from a synthetic TDIP data set. Our approach successfully reveals the presence of the anomaly and inverts its Cole-Cole parameters. In the second case, we perform a laboratory sandbox experiment in which we mix a volume of burning coal and sand. The algorithm is able to localize the burning coal both in terms of electrical conductivity and chargeability.
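    Because the kernel matrix makes the inverse step linear, it can be solved with a standard damped least-squares (Tikhonov) scheme. The sketch below is generic, with a random stand-in for the physical sensitivity kernel:

```python
import numpy as np

def damped_least_squares(K, d, alpha):
    """Solve min_m ||K m - d||^2 + alpha^2 ||m||^2 via the normal
    equations (K^T K + alpha^2 I) m = K^T d.

    K : kernel matrix (n_data x n_cells) relating the model (here, a
        source current density per cell) to the observed data.
    d : observed data vector (here, secondary voltages).
    alpha : damping parameter trading data fit against model norm.
    """
    n = K.shape[1]
    return np.linalg.solve(K.T @ K + alpha**2 * np.eye(n), K.T @ d)
```

In a real TDIP inversion the damping (and often a smoothing operator in place of the identity) is tuned to the noise level; with clean synthetic data and small damping the model is recovered almost exactly.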

  4. Research for applications of remote sensing to state and local governments (ARSIG)

    NASA Technical Reports Server (NTRS)

    Foster, K. E.; Johnson, J. D.

    1973-01-01

    Remote sensing and its application to problems confronted by local and state planners are reported. The added dimension of remote sensing as a data gathering tool has been explored identifying pertinent land use factors associated with urban growth such as soil associations, soil capability, vegetation distribution, and flood prone areas. Remote sensing within rural agricultural setting has also been utilized to determine irrigation runoff volumes, cropping patterns, and land use. A variety of data sources including U-2 70 mm multispectral black and white photography, RB-57 9-inch color IR, HyAC panoramic color IR and ERTS-1 imagery have been used over selected areas of Arizona including Tucson, Arizona (NASA Test Site #30) and the Sulphur Springs Valley.

  5. Controlling the state of polarization via optical nanoantenna feeding with surface plasmon polaritons

    NASA Astrophysics Data System (ADS)

    Xie, Yu-Bo; Liu, Zheng-Yang; Wang, Qian-Jin; Sun, Guang-Hou; Zhang, Xue-Jin; Zhu, Yong-Yuan

    2016-03-01

    Optical nanoantennas, usually metal structures supporting localized surface plasmon resonances, can efficiently convert confined optical energy into free-space light, and vice versa. However, the confined visible-light energy is difficult to manipulate because of its nanoscale spatial extent. Here, a simple method is proposed to solve this problem by controlling surface plasmon polaritons so as to manipulate the localized plasmons indirectly. As a proof of principle, we demonstrate an optical rotation device consisting of a grating with a central circular-polarization optical nanoantenna. It realizes arbitrary optical rotation of linearly polarized light by controlling the retardation between the two surface plasmon polariton sources launched from the grating structures on either side. Furthermore, we use a two-parameter theoretical model to explain the experimental results.

  6. Selenium in Paleozoic stone coal (carbonaceous shale) as a significant source of environmental contamination in rural southern China

    NASA Astrophysics Data System (ADS)

    Belkin, H. E.; Luo, K.

    2012-04-01

    Selenium occurs in high concentrations (typically > 10 and up to 700 ppm) in organic-rich Paleozoic shales and cherts (called "stone coal" - shíméi) in southern China. Stone coals are black shales that formed in anoxic to euxinic environments; they typically contain high concentrations of organic carbon, are enriched in various metals such as V, Mo, Pb, As, Cr, Ni, and Se, and are distinguished from "humic" coal in the Chinese literature. We have examined stone coal from Shaanxi, Hubei, and Guizhou Provinces, People's Republic of China, and have focused our study on the mode of occurrence of Se and other elements (e.g., As and Pb) hazardous to human health. Scanning electron microscopy, energy-dispersive analysis, and electron microprobe wavelength-dispersive spectroscopy were used to identify and determine the composition of host phases observed in the stone coals. Native selenium, Se-bearing pyrite, and other sulfides are the hosts for Se, although we cannot preclude an organic or clay-mineral association. Stone coals are an important source of fuel (reserves over 1 billion tonnes), both domestically and in small industry, in some rural parts of southern China and present significant environmental problems for the indigenous population. The stone coals create three main environmental problems related to Se pollution. First, the residual soils formed on stone coal are enriched in Se and other metals contained in the stone coals and, depending on the speciation and bioavailability of the metals, may enrich crops and vegetation grown on them. Second, weathering and leaching of the stone coal contaminate the local ground water and/or surface waters with Se and other metals. Third, the local population uses the stone coal as a source of fuel, which releases the more volatile elements (Se and As) into the atmosphere in homes; the ash is extremely enriched in the remaining heavy-metal suite. Disposal of the ash on agricultural lands or near water supplies will contaminate both. Human and animal selenosis has been observed in economically and geographically isolated rural communities in areas underlain by stone coal; however, local public health officials have adequately dealt with these cases of local selenium poisoning. In Enshi, Hubei Province, Se-contaminated farmland has been replanted with tea, and the Se-enriched tea has been marketed nationally.

  7. An improved chaotic fruit fly optimization based on a mutation strategy for simultaneous feature selection and parameter optimization for SVM and its applications.

    PubMed

    Ye, Fei; Lou, Xin Yuan; Sun, Lin Fu

    2017-01-01

    This paper proposes a new support vector machine (SVM) optimization scheme based on an improved chaotic fruit fly optimization algorithm (FOA) with a mutation strategy to simultaneously perform parameter tuning for the SVM and feature selection. In the improved FOA, the chaotic particle initializes the fruit fly swarm location and replaces the expression of the distance by which the fruit fly finds the food source. In addition, the proposed mutation strategy uses two distinct generative mechanisms for new food sources at the osphresis phase, allowing the algorithm to search for the optimal solution both in the whole solution space and within the local solution space containing the fruit fly swarm location. In an evaluation based on a group of ten benchmark problems, the proposed algorithm's performance is compared with that of other well-known algorithms, and the results support the superiority of the proposed algorithm. Moreover, this algorithm is successfully applied in an SVM to perform both parameter tuning and feature selection to solve real-world classification problems. This method, called the improved chaotic fruit fly optimization algorithm (CIFOA)-SVM, has been shown to be a more robust and effective optimization method than other well-known methods, particularly for the medical diagnosis problem and the credit card problem.
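
    The chaotic initialization described above can be sketched minimally as follows. This is an illustrative reconstruction (a logistic map in place of uniform random draws), not the authors' implementation; all function names and parameter values are hypothetical.

```python
import numpy as np

def logistic_map_sequence(x0, n, r=4.0):
    """Generate a chaotic sequence in (0, 1) with the logistic map."""
    seq = np.empty(n)
    x = x0
    for i in range(n):
        x = r * x * (1.0 - x)  # fully chaotic regime at r = 4
        seq[i] = x
    return seq

def chaotic_init(n_flies, dim, lo, hi, x0=0.7):
    """Initialize a fruit fly swarm from a chaotic sequence instead of
    uniform random numbers, to spread flies over the search space."""
    seq = logistic_map_sequence(x0, n_flies * dim).reshape(n_flies, dim)
    return lo + seq * (hi - lo)
```

    The same sequence can also replace the random step that moves each fly toward a candidate food source, which is the second role the abstract assigns to the chaotic component.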

  8. An improved chaotic fruit fly optimization based on a mutation strategy for simultaneous feature selection and parameter optimization for SVM and its applications

    PubMed Central

    Lou, Xin Yuan; Sun, Lin Fu

    2017-01-01

    This paper proposes a new support vector machine (SVM) optimization scheme based on an improved chaotic fruit fly optimization algorithm (FOA) with a mutation strategy to simultaneously perform parameter tuning for the SVM and feature selection. In the improved FOA, the chaotic particle initializes the fruit fly swarm location and replaces the expression of the distance by which the fruit fly finds the food source. In addition, the proposed mutation strategy uses two distinct generative mechanisms for new food sources at the osphresis phase, allowing the algorithm to search for the optimal solution both in the whole solution space and within the local solution space containing the fruit fly swarm location. In an evaluation based on a group of ten benchmark problems, the proposed algorithm's performance is compared with that of other well-known algorithms, and the results support the superiority of the proposed algorithm. Moreover, this algorithm is successfully applied in an SVM to perform both parameter tuning and feature selection to solve real-world classification problems. This method, called the improved chaotic fruit fly optimization algorithm (CIFOA)-SVM, has been shown to be a more robust and effective optimization method than other well-known methods, particularly for the medical diagnosis problem and the credit card problem. PMID:28369096

  9. Multiscale modeling of lithium ion batteries: thermal aspects

    PubMed Central

    Zausch, Jochen

    2015-01-01

    The thermal behavior of lithium-ion batteries has a huge impact on their lifetime and on the initiation of degradation processes. The development of hot spots or large local overpotentials leading, e.g., to lithium metal deposition depends on material properties as well as on the nano- and microstructure of the electrodes. In recent years a theoretical framework has emerged that opens the possibility of establishing a systematic modeling strategy from the atomistic to the continuum scale, capturing and coupling the relevant phenomena on each scale. We outline the building blocks for such a systematic approach and discuss in detail a rigorous approach for the continuum scale based on rational thermodynamics and homogenization theories. Our focus is on the development of a systematic, thermodynamically consistent theory for thermal phenomena in batteries at the microstructure scale and at the cell scale. We discuss the importance of carefully defining the continuum fields for being able to compare seemingly different phenomenological theories and for obtaining rules to determine unknown parameters of the theory by experiments or lower-scale theories. The resulting continuum models for the microscopic and the cell scale are numerically solved in full 3D resolution. The complex, highly localized distributions of heat sources in a battery microstructure, and the problem of mapping these localized sources onto an averaged porous-electrode model, are discussed by comparing detailed 3D microstructure-resolved simulations of the heat distribution with the results of the upscaled porous-electrode model. It is shown that not all heat sources that exist on the microstructure scale are represented in the averaged theory, due to subtle cancellation effects between interface and bulk heat sources. Nevertheless, we find that in special cases the averaged thermal behavior can be captured very well by porous electrode theory. PMID:25977870

  10. Cortical Hierarchies Perform Bayesian Causal Inference in Multisensory Perception

    PubMed Central

    Rohe, Tim; Noppeney, Uta

    2015-01-01

    To form a veridical percept of the environment, the brain needs to integrate sensory signals from a common source but segregate those from independent sources. Thus, perception inherently relies on solving the “causal inference problem.” Behaviorally, humans solve this problem optimally as predicted by Bayesian Causal Inference; yet, the underlying neural mechanisms are unexplored. Combining psychophysics, Bayesian modeling, functional magnetic resonance imaging (fMRI), and multivariate decoding in an audiovisual spatial localization task, we demonstrate that Bayesian Causal Inference is performed by a hierarchy of multisensory processes in the human brain. At the bottom of the hierarchy, in auditory and visual areas, location is represented on the basis that the two signals are generated by independent sources (= segregation). At the next stage, in posterior intraparietal sulcus, location is estimated under the assumption that the two signals are from a common source (= forced fusion). Only at the top of the hierarchy, in anterior intraparietal sulcus, the uncertainty about the causal structure of the world is taken into account and sensory signals are combined as predicted by Bayesian Causal Inference. Characterizing the computational operations of signal interactions reveals the hierarchical nature of multisensory perception in human neocortex. It unravels how the brain accomplishes Bayesian Causal Inference, a statistical computation fundamental for perception and cognition. Our results demonstrate how the brain combines information in the face of uncertainty about the underlying causal structure of the world. PMID:25710328
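
    The hierarchy of estimates described above (segregation, forced fusion, and full Bayesian Causal Inference) follows the standard two-cue causal-inference model. A minimal numerical sketch, assuming Gaussian likelihoods, a zero-mean Gaussian spatial prior, and model averaging; all parameter values are illustrative, not taken from the study:

```python
import numpy as np

def bci_estimate(x_a, x_v, sigma_a, sigma_v, sigma_p=10.0, p_common=0.5):
    """Model-averaged auditory location estimate under Bayesian Causal
    Inference for one auditory (x_a) and one visual (x_v) measurement."""
    va, vv, vp = sigma_a**2, sigma_v**2, sigma_p**2
    # Forced fusion (common source, C=1): combine both cues and the prior
    s_fused = (x_a / va + x_v / vv) / (1 / va + 1 / vv + 1 / vp)
    # Segregation (independent sources, C=2): auditory cue with prior only
    s_seg = (x_a / va) / (1 / va + 1 / vp)
    # Marginal likelihood of the measurements under each causal structure
    var1 = va * vv + va * vp + vv * vp
    like1 = np.exp(-0.5 * ((x_a - x_v)**2 * vp + x_a**2 * vv + x_v**2 * va)
                   / var1) / (2 * np.pi * np.sqrt(var1))
    like2 = (np.exp(-0.5 * x_a**2 / (va + vp)) / np.sqrt(2 * np.pi * (va + vp))
             * np.exp(-0.5 * x_v**2 / (vv + vp)) / np.sqrt(2 * np.pi * (vv + vp)))
    post_c1 = like1 * p_common / (like1 * p_common + like2 * (1 - p_common))
    # Model averaging: weight the two estimates by the posterior over C
    return post_c1 * s_fused + (1 - post_c1) * s_seg, post_c1
```

    Nearby signals yield a high posterior probability of a common cause (fusion dominates); widely discrepant signals drive the estimate toward segregation.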

  11. Occupational hazards in hospitals: accidents, radiation, exposure to noxious chemicals, drug addiction and psychic problems, and assault.

    PubMed Central

    Gestal, J J

    1987-01-01

    Except for infectious diseases all the main occupational hazards affecting health workers are reviewed: accidents (explosions, fires, electrical accidents, and other sources of injury); radiation (stochastic and non-stochastic effects, protective measures, and personnel most at risk); exposure to noxious chemicals, whose effects may be either local (allergic eczema) or generalised (cancer, mutations), particular attention being paid to the hazards presented by formol, ethylene oxide, cytostatics, and anaesthetic gases; drug addiction (which is more common among health workers than the general population) and psychic problems associated with promotion, shift work, and emotional stress; and assault (various types of assault suffered by health workers, its causes, and the characterisation of the most aggressive patients). PMID:3307896

  12. A Review of the Anaerobic Digestion of Fruit and Vegetable Waste.

    PubMed

    Ji, Chao; Kong, Chui-Xue; Mei, Zi-Li; Li, Jiang

    2017-11-01

    Fruit and vegetable waste is an ever-growing global problem. Anaerobic digestion techniques have been developed that facilitate turning such waste into potential sources of energy and fertilizer while simultaneously helping to reduce environmental pollution. However, various problems are encountered in applying these techniques. The purpose of this study is to review local and overseas studies that focus on the use of anaerobic digestion to dispose of fruit and vegetable wastes, discuss the acidification problems and solutions encountered in applying anaerobic digestion to fruit and vegetable wastes, and investigate reactor design (comparing single-phase with two-phase) and thermal pre-treatment for processing raw wastes. Furthermore, it analyses the dominant microorganisms involved at different stages of digestion and suggests a focus for future studies.

  13. Accurate finite difference methods for time-harmonic wave propagation

    NASA Technical Reports Server (NTRS)

    Harari, Isaac; Turkel, Eli

    1994-01-01

    Finite difference methods for solving problems of time-harmonic acoustics are developed and analyzed. Multidimensional inhomogeneous problems with variable, possibly discontinuous, coefficients are considered, accounting for the effects of employing nonuniform grids. A weighted-average representation is less sensitive to transitions in wave resolution (due to variable wave numbers or nonuniform grids) than the standard pointwise representation. Further enhancement in method performance is obtained by basing the stencils on generalizations of Padé approximation, or generalized definitions of the derivative, reducing spurious dispersion, anisotropy and reflection, and by improving the representation of source terms. The resulting schemes have fourth-order accurate local truncation error on uniform grids and third order in the nonuniform case. Guidelines for discretization pertaining to grid orientation and resolution are presented.
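
    As a point of reference for the schemes discussed, the standard second-order pointwise stencil for a one-dimensional Helmholtz problem can be sketched as follows. This baseline (not the paper's fourth-order Padé-based schemes) is an illustrative sketch; the function name and boundary setup are hypothetical.

```python
import numpy as np

def helmholtz_1d(k, n, f, u0=0.0, u1=0.0):
    """Solve u'' + k^2 u = f on [0, 1] with Dirichlet data u(0)=u0,
    u(1)=u1, using the standard second-order centered stencil."""
    h = 1.0 / (n + 1)
    x = np.linspace(h, 1.0 - h, n)           # interior grid points
    main = np.full(n, -2.0 / h**2 + k**2)     # diagonal of the operator
    off = np.full(n - 1, 1.0 / h**2)          # off-diagonals
    A = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
    rhs = f(x)
    rhs[0] -= u0 / h**2                       # fold boundary data into RHS
    rhs[-1] -= u1 / h**2
    return x, np.linalg.solve(A, rhs)
```

    Checking against the manufactured solution u = sin(pi x) with f = (k^2 - pi^2) sin(pi x) confirms the expected second-order convergence; the paper's weighted-average and Padé-based stencils reduce the dispersion error of this pointwise baseline.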

  14. On a full Bayesian inference for force reconstruction problems

    NASA Astrophysics Data System (ADS)

    Aucejo, M.; De Smet, O.

    2018-05-01

    In a previous paper, the authors introduced a flexible methodology for reconstructing mechanical sources in the frequency domain from prior local information on both their nature and location over a linear and time-invariant structure. The proposed approach was derived from Bayesian statistics, because of its ability to mathematically account for the experimenter's prior knowledge. However, since only the Maximum a Posteriori estimate was computed, the posterior uncertainty about the regularized solution given the measured vibration field, the mechanical model and the regularization parameter was not assessed. To address this limitation, this paper fully exploits the Bayesian framework to provide, from a Markov Chain Monte Carlo algorithm, credible intervals and other statistical measures (mean, median, mode) for all the parameters of the force reconstruction problem.
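
    The Markov Chain Monte Carlo machinery referred to above can be illustrated on a toy scalar problem. This random-walk Metropolis sketch is not the authors' algorithm; the measurement model and all parameter values are hypothetical.

```python
import numpy as np

def metropolis(log_post, x0, n_samples=20000, step=0.5, seed=0):
    """Random-walk Metropolis sampler; returns posterior samples."""
    rng = np.random.default_rng(seed)
    x, lp = x0, log_post(x0)
    out = np.empty(n_samples)
    for i in range(n_samples):
        prop = x + step * rng.standard_normal()
        lp_prop = log_post(prop)
        if np.log(rng.random()) < lp_prop - lp:  # accept/reject
            x, lp = prop, lp_prop
        out[i] = x
    return out

# Toy force reconstruction: scalar force F observed through a known
# gain g with Gaussian noise, and a broad Gaussian prior on F.
g, noise_sd, prior_sd = 2.0, 0.5, 10.0
y = 3.0  # measured response
log_post = lambda F: (-0.5 * ((y - g * F) / noise_sd)**2
                      - 0.5 * (F / prior_sd)**2)
samples = metropolis(log_post, x0=0.0)[5000:]      # discard burn-in
ci = np.percentile(samples, [2.5, 50.0, 97.5])     # credible interval, median
```

    From the samples one reads off exactly the quantities the paper targets: credible intervals, mean, median, and mode of each reconstruction parameter.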

  15. Public health applications of remote sensing of the environment, an evaluation

    NASA Technical Reports Server (NTRS)

    1972-01-01

    The available techniques in the field of remote sensing (including aerial photography, infrared detection, radar, etc.) were examined, and applications to a number of problems in the wide field of public health were determined. The specific areas of public health examined included: air pollution, water pollution, communicable disease, and the combined problems of urban growth and the effect of disasters on human communities. The assessment of the possible applications of remote sensing to these problems was made primarily by examination of the available literature in each field, and by interviews with health authorities, physicists, biologists, and other interested workers. Three types of programs employing remote sensors were outlined in the air pollution field: (1) proving the ability of sensors to monitor pollutants at three levels of interest - point source, ambient levels in cities, and global patterns; (2) detection of effects of pollutants on the environment at local and global levels; and (3) routine monitoring.

  16. A Metadata Action Language

    NASA Technical Reports Server (NTRS)

    Golden, Keith; Clancy, Dan (Technical Monitor)

    2001-01-01

    The data management problem comprises data processing and data tracking. Data processing is the creation of new data based on existing data sources. Data tracking consists of storing metadata descriptions of available data. This paper addresses the data management problem by casting it as an AI planning problem. Actions are data-processing commands, plans are dataflow programs, and goals are metadata descriptions of desired data products. Data manipulation is simply plan generation and execution, and a key component of data tracking is inferring the effects of an observed plan. We introduce a new action language for data management domains, called ADILM. We discuss the connection between data processing and information integration and show how a language for the latter must be modified to support the former. The paper also discusses information gathering within a data-processing framework and shows how ADILM metadata expressions are a generalization of Local Completeness.

  17. Ductile failure X-prize.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cox, James V.; Wellman, Gerald William; Emery, John M.

    2011-09-01

    Fracture or tearing of ductile metals is a pervasive engineering concern, yet accurate prediction of the critical conditions of fracture remains elusive. Sandia National Laboratories has been developing and implementing several new modeling methodologies to address problems in fracture, including both new physical models and new numerical schemes. The present study provides a double-blind quantitative assessment of several computational capabilities including tearing parameters embedded in a conventional finite element code, localization elements, extended finite elements (XFEM), and peridynamics. For this assessment, each of four teams reported blind predictions for three challenge problems spanning crack initiation and crack propagation. After predictions had been reported, the predictions were compared to experimentally observed behavior. The metal alloys for these three problems were aluminum alloy 2024-T3 and precipitation-hardened stainless steel PH13-8Mo H950. The predictive accuracies of the various methods are demonstrated, and the potential sources of error are discussed.

  18. Biclustering as a method for RNA local multiple sequence alignment.

    PubMed

    Wang, Shu; Gutell, Robin R; Miranker, Daniel P

    2007-12-15

    Biclustering is a clustering method that simultaneously clusters both the domain and range of a relation. A challenge in multiple sequence alignment (MSA) is that the alignment of sequences is often intended to reveal groups of conserved functional subsequences. Simultaneously, the grouping of the sequences can impact the alignment; precisely the kind of dual situation biclustering is intended to address. We define a representation of the MSA problem enabling the application of biclustering algorithms. We develop a computer program for local MSA, BlockMSA, that combines biclustering with divide-and-conquer. BlockMSA simultaneously finds groups of similar sequences and locally aligns subsequences within them. Further alignment is accomplished by dividing both the set of sequences and their contents. The net result is both a multiple sequence alignment and a hierarchical clustering of the sequences. BlockMSA was tested on the subsets of the BRAliBase 2.1 benchmark suite that display high variability and on an extension of that suite to larger problem sizes. Alignments of two large datasets of current biological interest, T box sequences and Group IC1 introns, were also evaluated. The results were compared with alignments computed by the ClustalW, MAFFT, MUSCLE, and PROBCONS alignment programs using the Sum of Pairs (SPS) and Consensus Count scores. Results for the benchmark suite are sensitive to problem size. On problems of 15 or more sequences, BlockMSA is consistently the best. On none of the problems in the test suite are there appreciable differences in scores among BlockMSA, MAFFT, and PROBCONS. On the T box sequences, BlockMSA does the most faithful job of reproducing known annotations; MAFFT and PROBCONS do not. On the intron sequences, BlockMSA, MAFFT, and MUSCLE are comparable at identifying conserved regions. BlockMSA is implemented in Java. Source code and supplementary datasets are available at http://aug.csres.utexas.edu/msa/

  19. A modular Space Station/Base electrical power system - Requirements and design study.

    NASA Technical Reports Server (NTRS)

    Eliason, J. T.; Adkisson, W. B.

    1972-01-01

    The requirements and procedures necessary for definition and specification of an electrical power system (EPS) for the future space station are discussed herein. The considered space station EPS consists of a replaceable main power module with self-contained auxiliary power, guidance, control, and communication subsystems. This independent power source may 'plug into' a space station module which has its own electrical distribution, control, power conditioning, and auxiliary power subsystems. Integration problems are discussed, and a transmission system selected with local floor-by-floor power conditioning and distribution in the station module. This technique eliminates the need for an immediate long range decision on the ultimate space base power sources by providing capability for almost any currently considered option.

  20. Geometric charges in theories of elasticity and plasticity

    NASA Astrophysics Data System (ADS)

    Moshe, Michael

    The mechanics of many natural systems is governed by localized sources of stress. Examples include "plastic events" that occur in amorphous solids under external stress, defect formation in crystalline materials, and force dipoles applied by cells adhered to an elastic substrate. Recent developments in a geometric formulation of elasticity theory paved the way for a unifying mathematical description of such singular sources of stress as "elastic charges". In this talk I will review basic results in this emerging field, focusing on the geometry and mechanics of elastic charges in two-dimensional solid bodies. I will demonstrate the applicability of this new approach in three different problems: failure of an amorphous solid under load, the mechanics of kirigami, and wrinkle patterns in geometrically incompatible elastic sheets.

  1. A precedence effect resolves phantom sound source illusions in the parasitoid fly Ormia ochracea

    PubMed Central

    Lee, Norman; Elias, Damian O.; Mason, Andrew C.

    2009-01-01

    Localizing individual sound sources under reverberant environmental conditions can be a challenge when the original source and its acoustic reflections arrive at the ears simultaneously from different paths that convey ambiguous directional information. The acoustic parasitoid fly Ormia ochracea (Diptera: Tachinidae) relies on a pair of ears exquisitely sensitive to sound direction to localize the 5-kHz tone pulsatile calling song of their host crickets. In nature, flies are expected to encounter a complex sound field with multiple sources and their reflections from acoustic clutter potentially masking temporal information relevant to source recognition and localization. In field experiments, O. ochracea were lured onto a test arena and subjected to small random acoustic asymmetries between 2 simultaneous sources. Most flies successfully localize a single source but some localize a ‘phantom’ source that is a summed effect of both source locations. Such misdirected phonotaxis can be elicited reliably in laboratory experiments that present symmetric acoustic stimulation. By varying onset delay between 2 sources, we test whether hyperacute directional hearing in O. ochracea can function to exploit small time differences to determine source location. Selective localization depends on both the relative timing and location of competing sources. Flies preferred phonotaxis to a forward source. With small onset disparities within a 10-ms temporal window of attention, flies selectively localize the leading source while the lagging source has minimal influence on orientation. These results demonstrate the precedence effect as a mechanism to overcome phantom source illusions that arise from acoustic reflections or competing sources. PMID:19332794

  2. USGS California Water Science Center water programs in California

    USGS Publications Warehouse

    Shulters, Michael V.

    2005-01-01

    California is threatened by many natural hazards—fire, floods, landslides, earthquakes. The State is also threatened by longer-term problems, such as hydrologic effects of climate change, and human-induced problems, such as overuse of ground water and degradation of water quality. The threats and problems are intensified by increases in population, which has risen to nearly 36.8 million. For the USGS California Water Science Center, providing scientific information to help address hazards, threats, and hydrologic issues is a top priority. To meet the demands of a growing California, USGS scientific investigations are helping State and local governments improve emergency management, optimize resources, collect contaminant-source and -mobility information, and improve surface- and ground-water quality. USGS hydrologic studies and data collection throughout the State give water managers quantifiable and detailed scientific information that can be used to plan for development and to protect and more efficiently manage resources. The USGS, in cooperation with state, local, and tribal agencies, operates more than 500 instrument stations, which monitor streamflow, ground-water levels, and surface- and ground-water constituents to help protect water supplies and predict the threats of natural hazards. The following are some of the programs implemented by the USGS, in cooperation with other agencies, to obtain and analyze information needed to preserve California's environment and resources.

  3. Wireless Power Transfer for Distributed Estimation in Sensor Networks

    NASA Astrophysics Data System (ADS)

    Mai, Vien V.; Shin, Won-Yong; Ishibashi, Koji

    2017-04-01

    This paper studies power allocation for distributed estimation of an unknown scalar random source in sensor networks with a multiple-antenna fusion center (FC), where wireless sensors are equipped with radio-frequency-based energy harvesting technology. The sensors' observations are locally processed using an uncoded amplify-and-forward scheme. The processed signals are then sent to the FC, where they are coherently combined and the best linear unbiased estimator (BLUE) is adopted for reliable estimation. We aim to solve the following two power allocation problems: 1) minimizing distortion under various power constraints; and 2) minimizing total transmit power under distortion constraints, where the distortion is measured in terms of the mean-squared error of the BLUE. Two iterative algorithms are developed to solve these non-convex problems, converging at least to a local optimum. In particular, the algorithms jointly optimize the amplification coefficients, energy beamforming, and receive filtering. For each problem, a suboptimal design, a single-antenna FC scenario, and a common harvester deployment for colocated sensors are also studied. Using the powerful semidefinite relaxation framework, our result is shown to be valid for any number of sensors, each with different noise power, and for an arbitrary number of antennas at the FC.
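
    The BLUE combination at the fusion center has a simple closed form for a scalar source. A minimal sketch under a diagonal-noise assumption (the function name and setup are illustrative, not from the paper):

```python
import numpy as np

def blue_estimate(y, h, noise_var):
    """Best linear unbiased estimator of a scalar source theta from
    observations y = h * theta + noise, with independent noise of
    variance noise_var per element. Returns (estimate, its variance)."""
    h = np.asarray(h, dtype=float)
    noise_var = np.asarray(noise_var, dtype=float)
    w = h / noise_var                 # inverse-variance weighted gains
    return w @ y / (w @ h), 1.0 / (w @ h)
```

    The returned variance 1/(h^T C^-1 h) is the mean-squared-error distortion the paper's power allocation problems minimize or constrain: allocating more power to a sensor raises its effective gain h_i and lowers this variance.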

  4. New Boundary Constraints for Elliptic Systems used in Grid Generation Problems

    NASA Technical Reports Server (NTRS)

    Kaul, Upender K.; Clancy, Daniel (Technical Monitor)

    2002-01-01

    This paper discusses new boundary constraints for elliptic partial differential equations as used in grid generation problems in generalized curvilinear coordinate systems. These constraints, based on the principle of local conservation of thermal energy in the vicinity of the boundaries, are derived using Green's Theorem. They uniquely determine the so-called decay parameters in the source terms of these elliptic systems. These constraints are designed for boundary-clustered grids where large gradients in physical quantities need to be resolved adequately. It is observed that the present formulation also works satisfactorily for mild clustering. Therefore, a closure for the decay parameter specification for elliptic grid generation problems has been provided, resulting in a fully automated elliptic grid generation technique. Thus, there is no need for a parametric study of these decay parameters, since the new constraints fix them uniquely. It is also shown that for Neumann-type boundary conditions, these boundary constraints uniquely determine the solution to the internal elliptic problem, thus eliminating the non-uniqueness of the solution of an internal Neumann boundary value grid generation problem.

  5. Generic HRTFs May be Good Enough in Virtual Reality. Improving Source Localization through Cross-Modal Plasticity.

    PubMed

    Berger, Christopher C; Gonzalez-Franco, Mar; Tajadura-Jiménez, Ana; Florencio, Dinei; Zhang, Zhengyou

    2018-01-01

    Auditory spatial localization in humans is performed using a combination of interaural time differences, interaural level differences, as well as spectral cues provided by the geometry of the ear. To render spatialized sounds within a virtual reality (VR) headset, either individualized or generic Head Related Transfer Functions (HRTFs) are usually employed. The former require arduous calibrations, but enable accurate auditory source localization, which may lead to a heightened sense of presence within VR. The latter obviate the need for individualized calibrations, but result in less accurate auditory source localization. Previous research on auditory source localization in the real world suggests that our representation of acoustic space is highly plastic. In light of these findings, we investigated whether auditory source localization could be improved for users of generic HRTFs via cross-modal learning. The results show that pairing a dynamic auditory stimulus, with a spatio-temporally aligned visual counterpart, enabled users of generic HRTFs to improve subsequent auditory source localization. Exposure to the auditory stimulus alone or to asynchronous audiovisual stimuli did not improve auditory source localization. These findings have important implications for human perception as well as the development of VR systems as they indicate that generic HRTFs may be enough to enable good auditory source localization in VR.

  6. Local sources of black walnut recommended for planting in Maryland

    Treesearch

    Silas Little; Calvin F. Bey; Daniel McConaughy

    1974-01-01

    After 5 years, local black walnut seedlings were taller than those of 12 out-of-state sources in a Maryland planting. Seedlings from south-of-local sources outgrew trees from northern sources. Genetic influence on height was expressed early--with little change in ranking of sources after the third year.

  7. Transported vs. local contributions from secondary and biomass burning sources to PM2.5

    NASA Astrophysics Data System (ADS)

    Kim, Bong Mann; Seo, Jihoon; Kim, Jin Young; Lee, Ji Yi; Kim, Yumi

    2016-11-01

    The concentration of fine particulates in Seoul, Korea has decreased over the past 10 years as a result of the city's environmental control measures. Yet the particulate concentration in Seoul remains high compared with other urban areas globally. To further improve fine particulate air quality in the Korean region and to design a more effective control strategy, a better understanding of the sources and contributions of fine particulates, along with their chemical compositions, is necessary. In particular, the relative contributions of local and transported sources to Seoul need to be established, as the city is strongly influenced by sources in upwind geographic areas. In this study, PM2.5 monitoring was conducted in Seoul from October 2012 to September 2013. PM2.5 mass concentrations, ions, metals, organic carbon (OC), elemental carbon (EC), water-soluble OC (WSOC), humic-like substances of carbon (HULIS-C), and 85 organic compounds were analyzed chemically. The multivariate receptor model SMP was applied to the PM2.5 data; it identified nine sources and estimated their source compositions and contributions. Prior studies have identified and quantified transported and local sources, but none have split the contribution of an individual source into its transported and locally produced parts. We differentiated transported secondary and biomass burning sources from locally produced secondary and biomass burning sources, supported by potential source contribution function (PSCF) analysis. Of the total secondary source contribution, 32% was attributed to transported secondary sources and 68% to locally formed secondary sources. The contribution of the transported biomass burning source was 59% of the total biomass burning contribution, 1.5 times that of the local biomass burning source. Four-season average source contributions from the transported and local sources were 28% and 72%, respectively.
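The potential source contribution function (PSCF) used to support this attribution has a simple core: each upwind grid cell is scored by the fraction of back-trajectory endpoints falling in it that coincide with receptor concentrations above a threshold. A minimal sketch under assumed inputs (the grid, extent, and threshold below are illustrative, not the authors' configuration):

```python
import numpy as np

def pscf(endpoints, concentrations, threshold, grid_shape=(10, 10),
         extent=(0.0, 1.0, 0.0, 1.0)):
    """PSCF[i, j] = m_ij / n_ij: of the n_ij trajectory endpoints in grid
    cell (i, j), the fraction m_ij associated with receptor concentrations
    above `threshold`. Cells with no endpoints score 0."""
    x0, x1, y0, y1 = extent
    nx, ny = grid_shape
    n = np.zeros(grid_shape)  # all endpoints per cell
    m = np.zeros(grid_shape)  # endpoints coinciding with high concentration
    for (x, y), c in zip(endpoints, concentrations):
        i = min(int((x - x0) / (x1 - x0) * nx), nx - 1)
        j = min(int((y - y0) / (y1 - y0) * ny), ny - 1)
        n[i, j] += 1
        if c > threshold:
            m[i, j] += 1
    with np.errstate(invalid="ignore"):
        return np.where(n > 0, m / n, 0.0)
```

PSCF values near 1 flag cells whose trajectories are systematically associated with polluted days at the receptor; in practice a weighting by n_ij is usually added to damp cells visited by few trajectories.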

  8. Biochemical transport modeling, estimation, and detection in realistic environments

    NASA Astrophysics Data System (ADS)

    Ortner, Mathias; Nehorai, Arye

    2006-05-01

    Early detection and estimation of the spread of a biochemical contaminant are major issues for homeland security applications. We present an integrated approach that combines the measurements given by an array of biochemical sensors with a physical model of the dispersion and statistical analysis to solve these problems and provide system performance measures. We approximate the dispersion model of the contaminant in a realistic environment through numerical simulations of reflected stochastic diffusions describing the microscopic transport phenomena due to wind and chemical diffusion, using the Feynman-Kac formula. We consider arbitrarily complex geometries and account for wind turbulence. Localizing the dispersive sources is useful for decontamination purposes and for estimation of the cloud evolution. To solve the associated inverse problem, we propose a Bayesian framework based on a random field that is particularly powerful for localizing multiple sources from small numbers of measurements. We also develop a sequential detector based on the proposed numerical transport model. Sequential detection allows on-line analysis and the detection of whether a change has occurred. We first focus on the formulation of a suitable sequential detector that overcomes the presence of unknown parameters (e.g., release time, intensity and location). We compute a bound on the expected delay before false detection in order to set the threshold of the test. For a fixed false-alarm rate, we obtain the detection probability of a substance release as a function of its location and initial concentration. Numerical examples are presented for two real-world scenarios: an urban area and an indoor ventilation duct.
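The sequential detector in the paper must cope with unknown release time, intensity, and location; the trade-off it tunes, detection delay versus time between false alarms, is the same one governed by the threshold in a textbook CUSUM test. A minimal sketch for a known Gaussian mean shift (all parameters hypothetical; this is not the paper's detector):

```python
def cusum(samples, mu0=0.0, mu1=1.0, sigma=1.0, threshold=5.0):
    """Page's CUSUM test for a shift in mean from mu0 to mu1.
    Returns the index of the first alarm, or None. Raising `threshold`
    lengthens the mean time between false alarms at the cost of a
    longer expected detection delay."""
    s = 0.0
    for k, x in enumerate(samples):
        # log-likelihood ratio of one sample under N(mu1, sigma) vs N(mu0, sigma)
        llr = (mu1 - mu0) * (x - 0.5 * (mu0 + mu1)) / sigma ** 2
        s = max(0.0, s + llr)  # resetting at zero keeps the statistic recursive
        if s > threshold:
            return k
    return None
```

Fed a pre-change stream, the statistic hovers at zero; after a shift it climbs roughly linearly until it crosses the threshold.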

  9. High-resolution imaging-guided electroencephalography source localization: temporal effect regularization incorporation in LORETA inverse solution

    NASA Astrophysics Data System (ADS)

    Boughariou, Jihene; Zouch, Wassim; Slima, Mohamed Ben; Kammoun, Ines; Hamida, Ahmed Ben

    2015-11-01

    Electroencephalography (EEG) and magnetic resonance imaging (MRI) are noninvasive neuroimaging modalities. They are widely used and complementary, and their fusion may benefit emerging research fields targeting a better exploration of brain activity. Such research has attracted many scientific investigators, particularly toward providing convenient, advanced clinical-aid tools for neurological exploration. Our work addresses the EEG inverse problem and investigates an advanced estimation methodology for localizing cerebral activity. Our focus is on integrating temporal priors into the low-resolution brain electromagnetic tomography (LORETA) formalism for solving the EEG inverse problem. The main idea of the proposed method is to integrate a temporal projection matrix within the LORETA weighting matrix. A hyperparameter governs this temporal integration, and its importance becomes obvious when a regularized, smooth solution is obtained. Our experimental results clearly confirm the impact of the optimization procedure adopted for the temporal regularization parameter in comparison with the standard LORETA method.
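LORETA belongs to the family of weighted minimum-norm inverse solutions, in which a prior-encoding weighting matrix and a regularization hyperparameter trade data fit against smoothness. A generic sketch of that family (not the authors' temporal-projection construction; `W` and `alpha` are placeholders for whichever prior and hyperparameter are chosen):

```python
import numpy as np

def weighted_min_norm(L, phi, W, alpha=1e-2):
    """Weighted minimum-norm inverse solution:
        argmin_j ||phi - L j||^2 + alpha ||W j||^2
    L is the (sensors x sources) lead-field matrix, W encodes the spatial
    prior (and, in a temporally regularized variant, temporal structure),
    and alpha is the regularization hyperparameter."""
    A = L.T @ L + alpha * (W.T @ W)
    return np.linalg.solve(A, L.T @ phi)
```

With W equal to the identity this reduces to classical Tikhonov regularization; LORETA's weighting involves a discrete spatial Laplacian, and the paper's method would additionally fold a temporal projection matrix into W.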

  10. Influence of local parameters on the dispersion of traffic-related pollutants within street canyons

    NASA Astrophysics Data System (ADS)

    Karra, Styliani; Malki-Epshtein, Liora; Martin Hyde Collaboration

    2011-11-01

    Ventilation within urban cities and street canyons, and the associated air quality, has been a problem of increasing interest in recent decades. It is important to minimise the exposure of the population to traffic-related pollutants at street level. The residence time of pollutants within street canyons depends on meteorological conditions such as wind speed and direction, on the geometric layout, and on local parameters such as the position of the traffic lane within the street. An experimental study was carried out to investigate the influence of traffic-lane position on the dispersion of traffic-related pollutants within different street canyon geometries: symmetrical (equal building heights on both sides of the street), non-symmetrical (uniform building heights but lower on one side of the street) and heterogeneous (non-uniform building heights on both sides of the street), under constant meteorological conditions. Laboratory experiments were carried out in a water channel, with simultaneous measurements of the velocity field and concentration levels within and above the street canyons made using PIV and PLIF techniques. Traffic-related emissions were simulated using a line emission source. Two positions were examined for all street geometries: the line emission source placed in the centre of the street canyon, and the line emission source placed off the centre of the street.

  11. MR-based source localization for MR-guided HDR brachytherapy

    NASA Astrophysics Data System (ADS)

    Beld, E.; Moerland, M. A.; Zijlstra, F.; Viergever, M. A.; Lagendijk, J. J. W.; Seevinck, P. R.

    2018-04-01

    For the purpose of MR-guided high-dose-rate (HDR) brachytherapy, a method for real-time localization of an HDR brachytherapy source was developed, which requires high spatial and temporal resolutions. MR-based localization of an HDR source serves two main aims. First, it enables real-time treatment verification by determination of the HDR source positions during treatment. Second, when using a dummy source, MR-based source localization provides an automatic detection of the source dwell positions after catheter insertion, allowing elimination of the catheter reconstruction procedure. Localization of the HDR source was conducted by simulation of the MR artifacts, followed by a phase correlation localization algorithm applied to the MR images and the simulated images, to determine the position of the HDR source in the MR images. To increase the temporal resolution of the MR acquisition, the spatial resolution was decreased, and a subpixel localization operation was introduced. Furthermore, parallel imaging (sensitivity encoding) was applied to further decrease the MR scan time. The localization method was validated by a comparison with CT, and the accuracy and precision were investigated. The results demonstrated that the described method could be used to determine the HDR source position with a high accuracy (0.4–0.6 mm) and a high precision (⩽0.1 mm), at high temporal resolutions (0.15–1.2 s per slice). This would enable real-time treatment verification as well as an automatic detection of the source dwell positions.
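The phase correlation step at the heart of this localization method can be illustrated in a few lines: whitening the cross-power spectrum of two images keeps only phase information, and its inverse FFT peaks at the translation between them. This sketch recovers integer-pixel circular shifts only; the paper's artifact simulation and subpixel refinement are not reproduced here:

```python
import numpy as np

def phase_correlation_shift(ref, img):
    """Estimate the integer circular shift of `img` relative to `ref`.
    The normalized cross-power spectrum has unit magnitude everywhere;
    its inverse FFT is (ideally) a delta function at the shift."""
    F1, F2 = np.fft.fft2(ref), np.fft.fft2(img)
    cross = F2 * np.conj(F1)
    cross /= np.abs(cross) + 1e-12          # whiten: keep phase only
    corr = np.real(np.fft.ifft2(cross))
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # map shifts in the upper half of the range to negative values
    if dy > ref.shape[0] // 2:
        dy -= ref.shape[0]
    if dx > ref.shape[1] // 2:
        dx -= ref.shape[1]
    return int(dy), int(dx)
```

Subpixel accuracy, as needed for the 0.4-0.6 mm figures quoted above, is typically obtained by interpolating the correlation peak or by fitting the phase slope rather than taking the integer argmax.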

  12. Characterization of dynamic changes of current source localization based on spatiotemporal fMRI constrained EEG source imaging

    NASA Astrophysics Data System (ADS)

    Nguyen, Thinh; Potter, Thomas; Grossman, Robert; Zhang, Yingchun

    2018-06-01

    Objective. Neuroimaging has been employed as a promising approach to advance our understanding of brain networks in both basic and clinical neuroscience. Electroencephalography (EEG) and functional magnetic resonance imaging (fMRI) represent two neuroimaging modalities with complementary features; EEG has high temporal resolution and low spatial resolution while fMRI has high spatial resolution and low temporal resolution. Multimodal EEG inverse methods have attempted to capitalize on these properties but have been subject to localization error. The dynamic brain transition network (DBTN) approach, a spatiotemporal fMRI constrained EEG source imaging method, has recently been developed to address these issues by solving the EEG inverse problem in a Bayesian framework, utilizing fMRI priors in a spatially and temporally variant manner. This paper presents a computer simulation study to provide a detailed characterization of the spatial and temporal accuracy of the DBTN method. Approach. Synthetic EEG data were generated in a series of computer simulations, designed to represent realistic and complex brain activity at superficial and deep sources with highly dynamical activity time-courses. The source reconstruction performance of the DBTN method was tested against the fMRI-constrained minimum norm estimates algorithm (fMRIMNE). The performances of the two inverse methods were evaluated in terms of both spatial and temporal accuracy. Main results. In comparison with the commonly used fMRIMNE method, results showed that the DBTN method produces results with increased spatial and temporal accuracy. The DBTN method also demonstrated the capability to reduce crosstalk in the reconstructed cortical time-course(s) induced by neighboring regions, mitigate depth bias and improve overall localization accuracy. Significance. The improved spatiotemporal accuracy of the reconstruction allows for an improved characterization of complex neural activity.
This improvement can be extended to any subsequent brain connectivity analyses used to construct the associated dynamic brain networks.

  13. Feeding ducks, bacterial chemotaxis, and the Gini index

    NASA Astrophysics Data System (ADS)

    Peaudecerf, François J.; Goldstein, Raymond E.

    2015-08-01

    Classic experiments on the distribution of ducks around separated food sources found consistency with the "ideal free" distribution, in which the local population is proportional to the local supply rate. Motivated by this experiment and others, we examine the analogous problem in the microbial world: the distribution of chemotactic bacteria around multiple nearby food sources. In contrast to the optimization of uptake rate that may hold at the level of a single cell in a spatially varying nutrient field, nutrient consumption by a population of chemotactic cells will modify the nutrient field, and the uptake rate will generally vary throughout the population. Through a simple model we study the distribution of resource uptake in the presence of chemotaxis, consumption, and diffusion of both bacteria and nutrients. Borrowing from the field of theoretical economics, we explore how the Gini index can be used to quantify the inequality of uptake. The redistributive effect of chemotaxis can lead to a phenomenon we term "chemotactic levelling," and the influence of these results on population fitness is briefly considered.
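The Gini index used to quantify uptake inequality can be computed directly from a sample of per-cell uptake rates; a standard formulation (generic, not specific to this paper) uses the sorted-sample identity:

```python
import numpy as np

def gini(x):
    """Gini index of a non-negative sample (0 = perfectly equal uptake,
    approaching 1 = all uptake concentrated in one cell), via the
    sorted-sample identity G = 2*sum(i*x_i)/(n*sum(x)) - (n+1)/n."""
    x = np.sort(np.asarray(x, dtype=float))
    n = x.size
    i = np.arange(1, n + 1)  # ranks 1..n of the sorted sample
    return 2.0 * np.sum(i * x) / (n * np.sum(x)) - (n + 1.0) / n
```

For example, equal uptake across cells gives G = 0, while a single cell taking everything among n cells gives G = (n - 1)/n; "chemotactic levelling" would show up as a reduction of G relative to the non-chemotactic case.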

  14. [Dilemmas of health financing].

    PubMed

    Herrera Zárate, M; González Torres, R

    1989-01-01

    The economic crisis has had a profound effect on the finances of health services in Mexico. Expenditure on health has decreased, both in absolute terms and relative to the gross national product. Funding problems have been aggravated by inequities in budget distribution: social security institutions have been favored; the geographical distribution of resources is concentrated in the central areas of the country and in the more developed states; and curative health care has prevailed over preventive medicine. Administrative inefficiency further hinders the appropriate utilization of resources. Diversification of funding sources has been proposed, through external debt, local funding, and a specific health tax, but these proposals are questionable. The high cost of debt service has reduced international credit as a source of financing. Resource concentration at the federal level and the various commitments related to the economic solidarity pact have also diminished the potential of local state financing. On the other hand, a special health tax is not viable within the current fiscal framework. The alternatives are better budget planning, a change in the institutional and regional distribution of resources, and improvement in the administrative mechanisms of funding.

  15. Collective odor source estimation and search in time-variant airflow environments using mobile robots.

    PubMed

    Meng, Qing-Hao; Yang, Wei-Xing; Wang, Yang; Zeng, Ming

    2011-01-01

    This paper addresses the collective odor source localization (OSL) problem in a time-varying airflow environment using mobile robots. A novel OSL methodology which combines odor-source probability estimation and multiple robots' search is proposed. The estimation phase consists of two steps: firstly, the separate probability-distribution map of odor source is estimated via Bayesian rules and fuzzy inference based on a single robot's detection events; secondly, the separate maps estimated by different robots at different times are fused into a combined map by way of distance based superposition. The multi-robot search behaviors are coordinated via a particle swarm optimization algorithm, where the estimated odor-source probability distribution is used to express the fitness functions. In the process of OSL, the estimation phase provides the prior knowledge for the searching while the searching verifies the estimation results, and both phases are implemented iteratively. The results of simulations for large-scale advection-diffusion plume environments and experiments using real robots in an indoor airflow environment validate the feasibility and robustness of the proposed OSL method.
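The particle swarm optimization used to coordinate the robots' search can be sketched generically; in the paper the fitness would be the fused odor-source probability map, whereas here it is an arbitrary callable, and all coefficients are conventional PSO defaults rather than the authors' settings:

```python
import random

def pso_search(fitness, bounds, n_particles=20, iters=100,
               w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimization maximizing `fitness` over a
    2-D box. Each particle blends inertia (w), attraction to its own
    best position (c1), and attraction to the swarm's best (c2)."""
    rng = random.Random(seed)
    (xlo, xhi), (ylo, yhi) = bounds
    pos = [[rng.uniform(xlo, xhi), rng.uniform(ylo, yhi)] for _ in range(n_particles)]
    vel = [[0.0, 0.0] for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_f = [fitness(*p) for p in pos]
    g = max(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(2):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            pos[i][0] = min(max(pos[i][0], xlo), xhi)  # clamp to the box
            pos[i][1] = min(max(pos[i][1], ylo), yhi)
            f = fitness(*pos[i])
            if f > pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], f
                if f > gbest_f:
                    gbest, gbest_f = pos[i][:], f
    return gbest, gbest_f
```

In the robotic setting, each "particle" is a robot whose movement cost is real, so the iteration count and velocity updates are additionally constrained by kinematics and by the evolving probability map.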

  16. Collective Odor Source Estimation and Search in Time-Variant Airflow Environments Using Mobile Robots

    PubMed Central

    Meng, Qing-Hao; Yang, Wei-Xing; Wang, Yang; Zeng, Ming

    2011-01-01

    This paper addresses the collective odor source localization (OSL) problem in a time-varying airflow environment using mobile robots. A novel OSL methodology which combines odor-source probability estimation and multiple robots’ search is proposed. The estimation phase consists of two steps: firstly, the separate probability-distribution map of odor source is estimated via Bayesian rules and fuzzy inference based on a single robot’s detection events; secondly, the separate maps estimated by different robots at different times are fused into a combined map by way of distance based superposition. The multi-robot search behaviors are coordinated via a particle swarm optimization algorithm, where the estimated odor-source probability distribution is used to express the fitness functions. In the process of OSL, the estimation phase provides the prior knowledge for the searching while the searching verifies the estimation results, and both phases are implemented iteratively. The results of simulations for large-scale advection–diffusion plume environments and experiments using real robots in an indoor airflow environment validate the feasibility and robustness of the proposed OSL method. PMID:22346650

  17. A Bayesian approach to earthquake source studies

    NASA Astrophysics Data System (ADS)

    Minson, Sarah

    Bayesian sampling has several advantages over conventional optimization approaches to solving inverse problems. It produces the distribution of all possible models sampled proportionally to how much each model is consistent with the data and the specified prior information, and thus images the entire solution space, revealing the uncertainties and trade-offs in the model. Bayesian sampling is applicable to both linear and non-linear modeling, and the values of the model parameters being sampled can be constrained based on the physics of the process being studied and do not have to be regularized. However, these methods are computationally challenging for high-dimensional problems. Until now the computational expense of Bayesian sampling has been too great for it to be practicable for most geophysical problems. I present a new parallel sampling algorithm called CATMIP for Cascading Adaptive Tempered Metropolis In Parallel. This technique, based on Transitional Markov chain Monte Carlo, makes it possible to sample distributions in many hundreds of dimensions, if the forward model is fast, or to sample computationally expensive forward models in smaller numbers of dimensions. The design of the algorithm is independent of the model being sampled, so CATMIP can be applied to many areas of research. I use CATMIP to produce a finite fault source model for the 2007 Mw 7.7 Tocopilla, Chile earthquake. Surface displacements from the earthquake were recorded by six interferograms and twelve local high-rate GPS stations. Because of the wealth of near-fault data, the source process is well-constrained. I find that the near-field high-rate GPS data have significant resolving power above and beyond the slip distribution determined from static displacements. The location and magnitude of the maximum displacement are resolved. The rupture almost certainly propagated at sub-shear velocities. 
    The full posterior distribution can be used not only to calculate source parameters but also to determine their uncertainties. So while kinematic source modeling and the estimation of source parameters are not new, with CATMIP I am able to use Bayesian sampling to determine which parts of the source process are well-constrained and which are not.
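The Bayesian sampling idea, drawing models in proportion to their posterior probability rather than optimizing for a single best model, reduces in its simplest form to a Metropolis random walk. CATMIP layers tempering, parallel chains, and resampling on top of a kernel like this; the target below is a toy one-dimensional log-posterior, purely for illustration:

```python
import math
import random

def metropolis(log_post, x0, n_samples=20000, step=1.0, seed=1):
    """Plain Metropolis sampler: returns samples drawn proportionally to
    exp(log_post), imaging the whole solution space instead of a single
    optimum. A Gaussian random-walk proposal of scale `step` is used."""
    rng = random.Random(seed)
    x, lp = x0, log_post(x0)
    out = []
    for _ in range(n_samples):
        xp = x + rng.gauss(0.0, step)
        lpp = log_post(xp)
        if math.log(rng.random()) < lpp - lp:  # accept with prob min(1, ratio)
            x, lp = xp, lpp
        out.append(x)
    return out
```

The spread of the resulting samples directly quantifies parameter uncertainty, which is exactly the property exploited above to decide which parts of the source process are well-constrained.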

  18. Access to health care and community social capital.

    PubMed

    Hendryx, Michael S; Ahern, Melissa M; Lovrich, Nicholas P; McCurdy, Arthur H

    2002-02-01

    To test the hypothesis that variation in reported access to health care is positively related to the level of social capital present in a community. The 1996 Household Survey of the Community Tracking Study, drawn from 22 metropolitan statistical areas across the United States (n = 19,672). Additional data for the 22 communities are from a 1996 multicity broadcast media marketing database, including key social capital indicators, the 1997 National Profile of Local Health Departments survey, and Interstudy, American Hospital Association, and American Medical Association sources. The design is cross-sectional. Self-reported access to care problems is the dependent variable. Independent variables include individual sociodemographic variables, community-level health sector variables, and social capital variables. Data are merged from the various sources and weighted to be population representative and are analyzed using hierarchical categorical modeling. Persons who live in metropolitan statistical areas featuring higher levels of social capital report fewer problems accessing health care. A higher HMO penetration rate in a metropolitan statistical area was also associated with fewer access problems. Other health sector variables were not related to health care access. The results observed for 22 major U.S. cities are consistent with the hypothesis that community social capital enables better access to care, perhaps through improving community accountability mechanisms.

  19. "Closing the Loop": Overcoming barriers to locally sourcing food in Fort Collins, Colorado

    NASA Astrophysics Data System (ADS)

    DeMets, C. M.

    2012-12-01

    Environmental sustainability has become a focal point for many communities in recent years, and restaurants are seeking creative ways to become more sustainable. As many chefs realize, sourcing food locally is an important step towards sustainability and towards building a healthy, resilient community. Review of literature on sustainability in restaurants and the local food movement revealed that chefs face many barriers to sourcing their food locally, but that there are also many solutions for overcoming these barriers that chefs are in the early stages of exploring. Therefore, the purpose of this research is to identify barriers to local sourcing and investigate how some restaurants are working to overcome those barriers in the city of Fort Collins, Colorado. To do this, interviews were conducted with four subjects who guide purchasing decisions for restaurants in Fort Collins. Two of these restaurants have created successful solutions and are able to source most of their food locally. The other two are interested in and working towards sourcing locally but have not yet been able to overcome barriers, and therefore only source a few local items. Findings show that there are four barriers and nine solutions commonly identified by each of the subjects. The research found differences between those who source most of their food locally and those who have not made as much progress in local sourcing. Based on these results, two solution flowcharts were created, one for primary barriers and one for secondary barriers, for restaurants to assess where they are in the local food chain and how they can more successfully source food locally. As there are few explicit connections between this research question and climate change, it is important to consider the implicit connections that motivate and justify this research. 
The question of whether or not greenhouse gas emissions are lower for locally sourced food is a topic of much debate, and while there are major developments for quantitatively determining a generalized answer, it is "currently impossible to state categorically whether or not local food systems emit fewer greenhouse gases than non-local food systems" (Edwards-Jones et al, 2008). Even so, numerous researchers have shown that "83 percent of emissions occur before food even leaves the farm gate" (Weber and Matthews, Garnett, cited in DeWeerdt, 2011); while this doesn't provide any information in terms of local vs. non-local, it is significant when viewed in light of the fact that local farmers tend to have much greater transparency and accountability in their agricultural practices. In other words, "a farmer who sells in the local food economy might be more likely to adopt or continue sustainable practices in order to meet…customer demand" (DeWeerdt, 2011), among other reasons such as environmental concern and desire to support the local economy (DeWeerdt, 2009). In identifying solutions to barriers to locally sourcing food, this research will enable restaurants to overcome these barriers and source their food locally, thereby supporting farmers and their ability to maintain sustainable practices.

  20. Membrane stabilization activity as anti-inflammatory mechanisms of Vernonia amygdalina leaves extracts

    NASA Astrophysics Data System (ADS)

    Nuryanto, MK; Paramita, S.; Iskandar, A.

    2018-04-01

    Inflammation is a normal process in the human body, a response to injury that is part of the healing process. Chronic inflammation, however, causes new health problems for patients. The anti-inflammatory agents generally used for these conditions have several side effects. The aim of this research was to find alternative anti-inflammatory agents, especially from natural sources. Vernonia amygdalina, known locally as "daun afrika" and belonging to the family Asteraceae, is one such potential natural source of alternative anti-inflammatory agents. The plant is known as a traditional medicine in East Kalimantan for health problems caused by muscle stiffness and was used as the material in this research. Anti-inflammatory activity was measured experimentally using a membrane stabilization assay of V. amygdalina leaf extracts. The results showed significant differences in EC50 (p < 0.05) between indomethacin as the positive control (26.39 ± 2.91 µg/mL) and V. amygdalina leaf extracts at concentrations of 1% (131.81 ± 2.95 µg/mL) and 10% (62.54 ± 2.05 µg/mL). The EC50 values of V. amygdalina leaf extracts indicate potential anti-inflammatory activity. It can be concluded that V. amygdalina leaf extracts have anti-inflammatory activity and could be further developed as a new natural source of anti-inflammatory agents.

  1. Learning from Data with Heterogeneous Noise using SGD

    PubMed Central

    Song, Shuang; Chaudhuri, Kamalika; Sarwate, Anand D.

    2015-01-01

    We consider learning from data of variable quality that may be obtained from different heterogeneous sources. Addressing learning from heterogeneous data in its full generality is a challenging problem. In this paper, we adopt instead a model in which data is observed through heterogeneous noise, where the noise level reflects the quality of the data source. We study how to use stochastic gradient algorithms to learn in this model. Our study is motivated by two concrete examples where this problem arises naturally: learning with local differential privacy based on data from multiple sources with different privacy requirements, and learning from data with labels of variable quality. The main contribution of this paper is to identify how heterogeneous noise impacts performance. We show that given two datasets with heterogeneous noise, the order in which to use them in standard SGD depends on the learning rate. We propose a method for changing the learning rate as a function of the heterogeneity, and prove new regret bounds for our method in two cases of interest. Experiments on real data show that our method performs better than using a single learning rate and using only the less noisy of the two datasets when the noise level is low to moderate. PMID:26705435
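The paper's central observation, that dataset ordering interacts with the learning rate in SGD, can be seen even in a toy example. With a constant step (chosen here purely for transparency; the paper analyzes decaying rates and proves regret bounds), recent samples carry more weight, so presenting the cleaner data last gives a better estimate:

```python
def sgd_mean(stream, eta=0.1, w0=0.0):
    """Constant-step SGD on the quadratic loss 0.5*(w - x)^2, i.e. mean
    estimation. With a constant step the iterate is an exponentially
    weighted average that favors recent samples, so the order in which
    datasets of different quality are fed matters."""
    w = w0
    for x in stream:
        w -= eta * (w - x)  # gradient step toward sample x
    return w

clean = [1.0] * 50        # noiseless samples of the target value 1.0
noisy = [5.0, -3.0] * 25  # mean 1.0 but very high variance
err_noisy_last = abs(sgd_mean(clean + noisy) - 1.0)
err_clean_last = abs(sgd_mean(noisy + clean) - 1.0)
```

Here `err_clean_last` is far smaller than `err_noisy_last`: the final clean samples wash out the noisy phase, while ending on the noisy data leaves the iterate oscillating around the target.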

  2. Single photon emission computed tomography-guided Cerenkov luminescence tomography

    NASA Astrophysics Data System (ADS)

    Hu, Zhenhua; Chen, Xueli; Liang, Jimin; Qu, Xiaochao; Chen, Duofang; Yang, Weidong; Wang, Jing; Cao, Feng; Tian, Jie

    2012-07-01

    Cerenkov luminescence tomography (CLT) has become a valuable tool for preclinical imaging because of its ability to reconstruct the three-dimensional distribution and activity of radiopharmaceuticals. However, it is still far from a mature technology and suffers from relatively low spatial resolution due to the ill-posed inverse problem of the tomographic reconstruction. In this paper, we present a single photon emission computed tomography (SPECT)-guided reconstruction method for CLT, in which a priori information on the permissible source region (PSR) from SPECT imaging results is incorporated to effectively reduce the ill-posedness of the inverse reconstruction problem. The performance of the method was first validated with experimental reconstructions of an adult athymic nude mouse implanted with a Na131I radioactive source and of an adult athymic nude mouse that received an intravenous tail injection of Na131I. A tissue-mimicking phantom experiment was then conducted to illustrate the ability of the proposed method to resolve double sources. Compared with the traditional PSR strategy, in which the PSR is determined from the surface flux distribution, the proposed method obtained much more accurate and encouraging localization and resolution results. Preliminary results showed that the proposed SPECT-guided reconstruction method was insensitive to the choice of regularization method and to the heterogeneity of tissues, which makes it possible to avoid the organ segmentation procedure.
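The permissible-source-region idea can be sketched independently of the optical model: restricting the unknowns to the SPECT-derived region amounts to dropping columns of the system matrix, which shrinks and stabilizes the inverse problem. A toy version with a generic linear forward model (an assumption for illustration, not the paper's diffusion-based model):

```python
import numpy as np

def psr_reconstruct(A, y, psr_mask, reg=1e-6):
    """Least-squares source reconstruction restricted to a permissible
    source region (PSR): columns of the system matrix outside the mask
    are dropped, the reduced problem is solved with a small Tikhonov
    term for stability, and zeros are reinserted elsewhere."""
    idx = np.flatnonzero(psr_mask)
    Ar = A[:, idx]                      # keep only permissible sources
    x_r = np.linalg.solve(Ar.T @ Ar + reg * np.eye(idx.size), Ar.T @ y)
    x = np.zeros(A.shape[1])
    x[idx] = x_r
    return x
```

Even when the full problem is underdetermined, a sufficiently tight PSR can make the restricted problem well-posed, which is the sense in which the SPECT prior "reduces the ill-posedness" above.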

  3. A Comparative Study of Behavior Problems among Left-Behind Children, Migrant Children and Local Children.

    PubMed

    Hu, Hongwei; Gao, Jiamin; Jiang, Haochen; Jiang, Haixia; Guo, Shaoyun; Chen, Kun; Jin, Kaili; Qi, Yingying

    2018-04-01

    This study aims to estimate the prevalence of behavioral problems among left-behind children, migrant children and local children in China, and to compare the risks of behavioral problems among the three types of children. Data on 4479 children aged 6-16 used in this study were from a survey conducted in China in 2017. The school-age version of the Child Behavior Checklist was used to measure children's behavioral problems. Descriptive analysis, correlation analysis, and logistic regressions were conducted. The prevalence of behavioral problems was 18.80% and 13.59% for left-behind children and migrant children, respectively, both of which were higher than that of local children. Logistic regression analysis showed that after adjustments for individual and environmental variables, the likelihood of total, internalizing and externalizing behavior problems for left-behind children and migrant children were higher than those for local children; left-behind children had a higher likelihood of internalizing problems than externalizing problems, while migrant children had a higher prevalence of externalizing problems. Left-behind children had a higher prevalence of each specific syndrome than migrant and local children. Both individual and environmental factors were associated with child behavioral problems, and family migration may contribute to the increased risks. Left-behind and migrant children were more vulnerable than local children to behavioral problems.

  4. A Comparative Study of Behavior Problems among Left-Behind Children, Migrant Children and Local Children

    PubMed Central

    Hu, Hongwei; Gao, Jiamin; Jiang, Haochen; Jiang, Haixia; Guo, Shaoyun; Chen, Kun; Jin, Kaili; Qi, Yingying

    2018-01-01

    This study aims to estimate the prevalence of behavioral problems among left-behind children, migrant children and local children in China, and to compare the risks of behavioral problems among the three types of children. Data on 4479 children aged 6–16 used in this study were from a survey conducted in China in 2017. The school-age version of the Child Behavior Checklist was used to measure children’s behavioral problems. Descriptive analysis, correlation analysis, and logistic regressions were conducted. The prevalence of behavioral problems was 18.80% and 13.59% for left-behind children and migrant children, respectively, both of which were higher than that of local children. Logistic regression analysis showed that after adjustments for individual and environmental variables, the likelihood of total, internalizing and externalizing behavior problems for left-behind children and migrant children were higher than those for local children; left-behind children had a higher likelihood of internalizing problems than externalizing problems, while migrant children had a higher prevalence of externalizing problems. Left-behind children had a higher prevalence of each specific syndrome than migrant and local children. Both individual and environmental factors were associated with child behavioral problems, and family migration may contribute to the increased risks. Left-behind and migrant children were more vulnerable than local children to behavioral problems. PMID:29614783

  5. 45 CFR 2551.92 - What are project funding requirements?

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... local funding sources during the first three years of operations; or (2) An economic downturn, the... sources of local funding support; or (3) The unexpected discontinuation of local support from one or more... local funding sources during the first three years of operations; (ii) An economic downturn, the...

  6. 45 CFR 2552.92 - What are project funding requirements?

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... local funding sources during the first three years of operations; or (2) An economic downturn, the... sources of local funding support; or (3) The unexpected discontinuation of local support from one or more... the development of local funding sources during the first three years of operations; or (ii) An...

  7. [Current treatment situation and progress on bone defect of collapsed tibial plateau fractures].

    PubMed

    Luo, Chang-qi; Fang, Yue; Tu, Chong-qi; Yang, Tian-fu

    2016-02-01

    The characteristics of collapsed tibial plateau fractures determine that the articular surface must be anatomically reduced, the mechanical axis of the tibia must be restored, and internal fixation must be strong. However, while restoring articular surface smoothness, surgeons face many problems in dealing with the bone defect under the joint surface. Current materials used to treat bone defects fall into three categories: autologous bone, allograft bone and bone substitutes. Some scholars think that autologous bone grafts have a number of drawbacks, such as added trauma, prolonged operation time, a limited supply, bleeding at the donor site, continuous pain, local infection and anesthesia risks, but most scholars believe that the autologous cancellous bone graft is still the gold standard. Allograft bone has osteoconductive capacity, but immune responses, the possibility of viral infection, and the limited supply of allograft mean it cannot meet clinical demands. Likewise, bone substitutes have the problem that the rate of osteogenesis does not match the rate of degradation. Clinicians can select a bone graft to meet the patient's needs according to the patient's own situation and economic conditions.

  8. HPSLPred: An Ensemble Multi-Label Classifier for Human Protein Subcellular Location Prediction with Imbalanced Source.

    PubMed

    Wan, Shixiang; Duan, Yucong; Zou, Quan

    2017-09-01

    Predicting the subcellular localization of proteins is an important and challenging problem. Traditional experimental approaches are often expensive and time-consuming. Consequently, a growing number of research efforts employ machine learning approaches to predict the subcellular location of proteins. There are two main challenges among the state-of-the-art prediction methods. First, most existing techniques are designed to deal with multi-class rather than multi-label classification, which ignores the connections between multiple labels. In reality, the multiple locations of particular proteins carry vital and unique biological significance that deserves special focus and cannot be ignored. Second, techniques for handling imbalanced data in multi-label classification problems are necessary but have never been employed. To solve these two issues, we developed an ensemble multi-label classifier called HPSLPred, which can be applied to multi-label classification with an imbalanced protein source. For convenience, a user-friendly webserver has been established at http://server.malab.cn/HPSLPred. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  9. Palestinian mothers' perceptions of child mental health problems and services

    PubMed Central

    THABET, ABDEL AZIZ; EL GAMMAL, HOSSAM; VOSTANIS, PANOS

    2006-01-01

    The aim of this study was to explore Palestinian mothers' perceptions of child mental health problems and their understanding of their causes; to determine Palestinian mothers' awareness of existing services and sources of help and support; to identify professionals in the community whom Palestinian mothers would consult if their child had mental health problems; and to establish their views on ways of increasing awareness of child mental health issues and services. Checklists exploring the above issues were completed by 249 Palestinian mothers living in refugee camps in the Gaza Strip. Palestinian mothers equally perceived emotional, behavioural and psychotic symptoms as suggestive of mental ill health in childhood. Mothers perceived multiple causes of child mental health problems, including family problems, parental psychiatric illness and social adversity. A substantial proportion (42.6%) had knowledge of local child mental health care services. Overall, mothers preferred Western over traditional types of treatment, and were keen to increase mental health awareness within their society. Despite a different cultural tradition, Palestinian mothers appear open to a range of services and interventions for child mental health problems. As in other non-Western societies, child mental health service provision should be integrated with existing primary health care, schools, and community structures. PMID:16946953

  10. The FieldTrip-SimBio pipeline for EEG forward solutions.

    PubMed

    Vorwerk, Johannes; Oostenveld, Robert; Piastra, Maria Carla; Magyari, Lilla; Wolters, Carsten H

    2018-03-27

    Accurately solving the electroencephalography (EEG) forward problem is crucial for precise EEG source analysis. Previous studies have shown that the use of multicompartment head models in combination with the finite element method (FEM) can yield high accuracies both numerically and with regard to the geometrical approximation of the human head. However, the workload for the generation of multicompartment head models has often been too high and the use of publicly available FEM implementations too complicated for a wider application of FEM in research studies. In this paper, we present a MATLAB-based pipeline that aims to resolve this lack of easy-to-use integrated software solutions. The presented pipeline allows for the easy application of five-compartment head models with the FEM within the FieldTrip toolbox for EEG source analysis. The FEM from the SimBio toolbox, more specifically the St. Venant approach, was integrated into the FieldTrip toolbox. We give a short sketch of the implementation and its application, and we perform a source localization of somatosensory evoked potentials (SEPs) using this pipeline. We then evaluate the accuracy that can be achieved using the automatically generated five-compartment hexahedral head model [skin, skull, cerebrospinal fluid (CSF), gray matter, white matter] in comparison to a highly accurate tetrahedral head model that was generated on the basis of a semiautomatic segmentation with very careful and time-consuming manual corrections. The source analysis of the SEP data correctly localizes the P20 component and achieves a high goodness of fit. The subsequent comparison to the highly detailed tetrahedral head model shows that the automatically generated five-compartment head model performs about as well as a highly detailed four-compartment head model (skin, skull, CSF, brain). 
    This is a significant improvement over the three-compartment head models frequently used in practice, since the importance of modeling the CSF compartment has been demonstrated in a variety of studies. The presented pipeline facilitates the use of five-compartment head models with the FEM for EEG source analysis. The accuracy with which the EEG forward problem can thereby be solved is increased compared to the commonly used three-compartment head models, and more reliable EEG source reconstruction results can be obtained.

  11. Health impacts of coal and coal use: Possible solutions

    USGS Publications Warehouse

    Finkelman, R.B.; Orem, W.; Castranova, V.; Tatu, C.A.; Belkin, H.E.; Zheng, B.; Lerch, H.E.; Maharaj, S.V.; Bates, A.L.

    2002-01-01

    Coal will be a dominant energy source in both developed and developing countries through at least the first half of the 21st century. Environmental problems associated with coal before mining, during mining, in storage, during combustion, and in postcombustion waste products are well known and are being addressed by ongoing research. The connection between potential environmental problems and human health is a fairly new field and requires the cooperation of both the geoscience and medical disciplines. Three research programs that illustrate this collaboration are described and used to present a range of human health problems that are potentially caused by coal. Domestic combustion of coal in China has, in some cases, severely affected human health. On both local and regional scales, human health has been adversely affected by coals containing arsenic, fluorine, selenium, and possibly mercury. Balkan endemic nephropathy (BEN), an irreversible kidney disease of unknown origin, has been related to the proximity of Pliocene lignite deposits. The working hypothesis is that groundwater leaches toxic organic compounds as it passes through the lignites, and that these organics are then ingested by the local population, contributing to this health problem. Human disease associated with coal mining results mainly from inhalation of particulate matter during the mining process. The disease, Coal Worker's Pneumoconiosis, the coal worker's "black lung disease", is characterized by coal dust-induced lesions in the gas exchange regions of the lung. © 2002 Elsevier Science B.V. All rights reserved.

  12. An iterative method for the localization of a neutron source in a large box (container)

    NASA Astrophysics Data System (ADS)

    Dubinski, S.; Presler, O.; Alfassi, Z. B.

    2007-12-01

    The localization of an unknown neutron source in a bulky box was studied. This can be used for the inspection of cargo, to prevent the smuggling of neutron and α emitters. It is important to localize the source from the outside for safety reasons, and source localization is necessary in order to determine its activity. A previous study showed that, by using six detectors, three on each of two parallel faces of the box (460×420×200 mm³), the location of the source can be found with an average distance of 4.73 cm between the real source position and the calculated one and a maximum distance of about 9 cm. Accuracy was improved in this work by applying an iteration method based on four fixed detectors and successive repositioning of an external calibrating source. The initial position of the calibrating source is in the plane of detectors 1 and 2. This method finds the unknown source location with an average distance of 0.78 cm between the real source position and the calculated one and a maximum distance of 3.66 cm for the same box. For larger boxes, localization without iterations requires an increase in the number of detectors, while localization with iterations requires only an increase in the number of iteration steps. In addition to source localization, two methods for determining the activity of the unknown source were also studied.

  13. Methodes entropiques appliquees au probleme inverse en magnetoencephalographie

    NASA Astrophysics Data System (ADS)

    Lapalme, Ervig

    2005-07-01

    This thesis is devoted to biomagnetic source localization using magnetoencephalography. This problem is known to have an infinite number of solutions, so methods are required to take anatomical and functional information into account in the solution. The work presented in this thesis uses the maximum entropy on the mean method to constrain the solution. This method originates from statistical mechanics and information theory. The thesis is divided into two main parts of three chapters each. The first part reviews the magnetoencephalographic inverse problem: the theory needed to understand its context and the hypotheses used to simplify the problem. The last chapter of this first part presents the maximum entropy on the mean method: its origins and how it is applied to our problem. The second part presents the original work of this thesis in three articles, one already published and two submitted for publication. In the first article, a biomagnetic source model is developed and applied in a theoretical context, demonstrating the efficiency of the method. In the second article, we go one step further towards a realistic model of cerebral activation: the main priors are estimated from the magnetoencephalographic data themselves. This method proved very efficient in realistic simulations. In the third article, the previous method is extended to deal with time signals, thus exploiting the excellent time resolution offered by magnetoencephalography. The temporal method is applied to real magnetoencephalographic data from a somatotopy experiment, and the results agree with prior physiological knowledge of this kind of cognitive process.

  14. Sound source localization identification accuracy: Envelope dependencies.

    PubMed

    Yost, William A

    2017-07-01

    Sound source localization accuracy as measured in an identification procedure in a front azimuth sound field was studied for click trains, modulated noises, and a modulated tonal carrier. Sound source localization accuracy was determined as a function of the number of clicks in a 64 Hz click train and click rate for a 500 ms duration click train. The clicks were either broadband or high-pass filtered. Sound source localization accuracy was also measured for a single broadband filtered click and compared to a similar broadband filtered, short-duration noise. Sound source localization accuracy was determined as a function of sinusoidal amplitude modulation and the "transposed" process of modulation of filtered noises and a 4 kHz tone. Different rates (16 to 512 Hz) of modulation (including unmodulated conditions) were used. Providing modulation for filtered click stimuli, filtered noises, and the 4 kHz tone had, at most, a very small effect on sound source localization accuracy. These data suggest that amplitude modulation, while providing information about interaural time differences in headphone studies, does not have much influence on sound source localization accuracy in a sound field.

  15. Characterization of Ground Displacement Sources from Variational Bayesian Independent Component Analysis of Space Geodetic Time Series

    NASA Astrophysics Data System (ADS)

    Gualandi, Adriano; Serpelloni, Enrico; Elina Belardinelli, Maria; Bonafede, Maurizio; Pezzo, Giuseppe; Tolomei, Cristiano

    2015-04-01

    A critical point in the analysis of ground displacement time series, such as those measured by modern space geodetic techniques (primarily continuous GPS/GNSS and InSAR), is the development of data-driven methods that make it possible to discern and characterize the different sources that generate the observed displacements. A widely used multivariate statistical technique is Principal Component Analysis (PCA), which reduces the dimensionality of the data space while explaining most of the variance of the dataset. It reproduces the original data using a limited number of Principal Components, but it also shows some deficiencies: PCA does not perform well in solving the so-called Blind Source Separation (BSS) problem. Recovering and separating the different sources that generate the observed ground deformation is fundamental for attaching a physical meaning to them. PCA fails at the BSS problem because it looks for a new Euclidean space where the projected data are merely uncorrelated. The uncorrelatedness condition is usually not strong enough, and it has been proven that the BSS problem can be tackled by requiring the components to be independent. Independent Component Analysis (ICA) is another popular technique adopted for this problem, and it can be used in all fields where PCA is also applied. An ICA approach enables us to explain the displacement time series while imposing fewer constraints on the model, and to reveal anomalies in the data such as transient deformation signals. However, the independence condition is not easy to impose, and it is often necessary to introduce approximations. To work around this problem, we use a variational Bayesian ICA (vbICA) method, which models the probability density function (pdf) of each source signal using a mixture of Gaussian distributions. This technique allows more flexibility in describing the pdf of the sources, giving a more reliable estimate of them. Here we introduce the vbICA technique and present its application to synthetic data simulating a GPS network recording ground deformation in a tectonically active region, with synthetic time series containing interseismic, coseismic, and postseismic deformation, plus seasonal deformation and white and coloured noise. We study the ability of the algorithm to recover the original (known) sources of deformation, and then apply it to a real scenario: the Emilia seismic sequence (2012, northern Italy), an example of a seismic sequence occurring in a slowly converging tectonic setting characterized by several local to regional anthropogenic or natural sources of deformation, mainly subsidence due to fluid withdrawal and sediment compaction. We apply both PCA and vbICA to displacement time series recorded by continuous GPS and InSAR (Pezzo et al., EGU2015-8950).
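
    The contrast between decorrelation (PCA) and independence (ICA) can be illustrated on a toy blind-source-separation problem. The sketch below is a minimal rotation-search ICA on synthetic data, not the vbICA algorithm of the abstract; the sources and mixing matrix are made up:

```python
import math, random

# Two independent non-Gaussian sources are mixed; whitening (a PCA step)
# only decorrelates the mixtures, while an ICA-style search over rotations
# maximizing non-Gaussianity (excess kurtosis) actually separates them.
random.seed(0)
n = 4000
s1 = [random.uniform(-1, 1) for _ in range(n)]       # sub-Gaussian source
s2 = [random.choice((-1.0, 1.0)) for _ in range(n)]  # bimodal source
A = ((1.0, 0.6), (0.4, 1.0))                         # mixing matrix
x1 = [A[0][0] * a + A[0][1] * b for a, b in zip(s1, s2)]
x2 = [A[1][0] * a + A[1][1] * b for a, b in zip(s1, s2)]

def cov(u, v):
    mu, mv = sum(u) / n, sum(v) / n
    return sum((a - mu) * (b - mv) for a, b in zip(u, v)) / n

# Whitening: eigendecompose the 2x2 covariance and rescale each component.
c11, c22, c12 = cov(x1, x1), cov(x2, x2), cov(x1, x2)
th = 0.5 * math.atan2(2 * c12, c11 - c22)
ct, st = math.cos(th), math.sin(th)
l1 = c11 * ct * ct + 2 * c12 * ct * st + c22 * st * st
l2 = c11 * st * st - 2 * c12 * ct * st + c22 * ct * ct
z1 = [(ct * a + st * b) / math.sqrt(l1) for a, b in zip(x1, x2)]
z2 = [(-st * a + ct * b) / math.sqrt(l2) for a, b in zip(x1, x2)]

def kurt(u):
    m = sum(u) / n
    v = sum((a - m) ** 2 for a in u) / n
    return sum((a - m) ** 4 for a in u) / n / v ** 2 - 3.0

# ICA step: among rotations of the whitened data, pick the one whose
# components are maximally non-Gaussian.
def rotated_kurt(phi):
    c, s = math.cos(phi), math.sin(phi)
    y1 = [c * a + s * b for a, b in zip(z1, z2)]
    y2 = [-s * a + c * b for a, b in zip(z1, z2)]
    return kurt(y1) ** 2 + kurt(y2) ** 2, y1, y2

best_phi = max((rotated_kurt(k * math.pi / 180)[0], k * math.pi / 180) for k in range(90))[1]
_, y1, y2 = rotated_kurt(best_phi)

def abscorr(u, v):
    return abs(cov(u, v)) / math.sqrt(cov(u, u) * cov(v, v))

# Up to permutation and sign, the rotated components match the true sources.
match = max(min(abscorr(y1, s1), abscorr(y2, s2)), min(abscorr(y1, s2), abscorr(y2, s1)))
print(round(match, 3))
```

    The whitened components are uncorrelated for any rotation angle, which is why decorrelation alone cannot pin down the sources; the kurtosis criterion breaks that rotational ambiguity.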

  16. COMBINED DELAY AND GRAPH EMBEDDING OF EPILEPTIC DISCHARGES IN EEG REVEALS COMPLEX AND RECURRENT NONLINEAR DYNAMICS.

    PubMed

    Erem, B; Hyde, D E; Peters, J M; Duffy, F H; Brooks, D H; Warfield, S K

    2015-04-01

    The dynamical structure of the brain's electrical signals contains valuable information about its physiology. Here we combine techniques for nonlinear dynamical analysis and manifold identification to reveal complex and recurrent dynamics in interictal epileptiform discharges (IEDs). Our results suggest that recurrent IEDs exhibit some consistent dynamics, which may only last briefly, and so individual IED dynamics may need to be considered in order to understand their genesis. This could potentially serve to constrain the dynamics of the inverse source localization problem.
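
    The delay-embedding step that underlies this kind of nonlinear dynamical analysis can be sketched in a few lines; the series and parameters below are illustrative:

```python
import math

# Lift a scalar series v(t) to vectors [v(t), v(t+tau), ..., v(t+(m-1)tau)],
# whose geometry (by Takens' theorem) reconstructs the underlying attractor.
def delay_embed(series, m, tau):
    """Return the list of m-dimensional delay vectors of `series`."""
    n = len(series) - (m - 1) * tau
    return [[series[i + k * tau] for k in range(m)] for i in range(n)]

# Example: a noiseless sine wave embeds in 2D as a closed loop (an ellipse).
v = [math.sin(0.1 * t) for t in range(500)]
points = delay_embed(v, m=2, tau=15)
print(len(points), len(points[0]))  # 485 2
```

    Combining such delay coordinates with a graph/manifold-identification step, as the abstract describes, then exposes recurrences as revisits to the same neighborhood of the embedded cloud.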

  17. Image Understanding Workshop. Proceedings of a Workshop Held in San Diego, California on January 26-29, 1992

    DTIC Science & Technology

    1992-01-01

    studied . shows the B-spline fit on the grouped curves and the local symmetries detected (their axes) (output of steps 1 and 4 OBJECT RECOGNITION 2.a...positioned so that the specular Our primary study (Krumm and Shafer) has been on lobes of each light source do not intersect. The four lights the...segregation with a 3D representation is a con- The problem of dot clustering can also be studied from sequence of grouping processes. A 3D

  18. Analysis of the energy efficiency of the implementation power electric generated modules in the CHS

    NASA Astrophysics Data System (ADS)

    Sukhikh, A. A.; Milyutin, V. A.; Lvova, A. M.

    2017-11-01

    The use of local electricity generation at a central heat source (CHS) is aimed primarily at covering the station's own electric power needs. This not only makes CHS operation independent of external electrical networks, but also prevents interruption of heat supply to consumers and freezing of the heating networks in the event of electrical-network failures caused by natural or anthropogenic factors. It also opens up prospects for supplying electric power to stand-alone facilities, such as commercial or industrial sites within a particular neighborhood.

  19. Hyperedge bundling: A practical solution to spurious interactions in MEG/EEG source connectivity analyses.

    PubMed

    Wang, Sheng H; Lobier, Muriel; Siebenhühner, Felix; Puoliväli, Tuomas; Palva, Satu; Palva, J Matias

    2018-06-01

    Inter-areal functional connectivity (FC), neuronal synchronization in particular, is thought to constitute a key systems-level mechanism for coordination of neuronal processing and communication between brain regions. Evidence to support this hypothesis has been gained largely using invasive electrophysiological approaches. In humans, neuronal activity can be non-invasively recorded only with magneto- and electroencephalography (MEG/EEG), which have been used to assess FC networks with high temporal resolution and whole-scalp coverage. However, even in source-reconstructed MEG/EEG data, signal mixing, or "source leakage", is a significant confounder for FC analyses and network localization. Signal mixing leads to two distinct kinds of false-positive observations: artificial interactions (AI) caused directly by mixing and spurious interactions (SI) arising indirectly from the spread of signals from true interacting sources to nearby false loci. To date, several interaction metrics have been developed to solve the AI problem, but the SI problem has remained largely intractable in MEG/EEG all-to-all source connectivity studies. Here, we advance a novel approach for correcting SIs in FC analyses using source-reconstructed MEG/EEG data. Our approach is to bundle observed FC connections into hyperedges by their adjacency in signal mixing. Using realistic simulations, we show here that bundling yields hyperedges with good separability of true positives and little loss in the true positive rate. Hyperedge bundling thus significantly decreases graph noise by minimizing the false-positive to true-positive ratio. Finally, we demonstrate the advantage of edge bundling in the visualization of large-scale cortical networks with real MEG data. We propose that hypergraphs yielded by bundling represent well the set of true cortical interactions that are detectable and dissociable in MEG/EEG connectivity analysis. Copyright © 2018 The Authors. Published by Elsevier Inc. 
All rights reserved.

  20. Mashup Scheme Design of Map Tiles Using Lightweight Open Source Webgis Platform

    NASA Astrophysics Data System (ADS)

    Hu, T.; Fan, J.; He, H.; Qin, L.; Li, G.

    2018-04-01

    To address the difficulty of integrating multi-source image data on existing commercial Geographic Information System platforms, this research proposes loading multi-source local tile data based on CesiumJS and examines the tile data organization mechanisms and spatial reference differences of the CesiumJS platform, as well as various tile data sources, such as Google Maps, Map World, and Bing Maps. Two tile-loading schemes were designed for the mashup of tiles: a single-data-source scheme and a multi-data-source scheme. The digital map tiles used in this paper cover two different mainstream spatial references, the WGS84 coordinate system and the Web Mercator coordinate system. According to the experimental results, both the single-data-source scheme and the multi-data-source scheme with a common spatial coordinate system showed favorable visualization effects; however, the multi-data-source scheme was prone to tile image deformation when loading tile data with different spatial references. The resulting method provides a low-cost and highly flexible solution for small- and medium-scale GIS programs and has practical application potential. The deformation that occurs during transitions between different spatial references is an important topic for further research.

  1. A Space-Time-Frequency Dictionary for Sparse Cortical Source Localization.

    PubMed

    Korats, Gundars; Le Cam, Steven; Ranta, Radu; Louis-Dorr, Valerie

    2016-09-01

    Cortical source imaging aims at identifying activated cortical areas on the surface of the cortex from raw electroencephalogram (EEG) data. This problem is ill-posed, the number of channels being very low compared to the number of possible source positions. In some realistic physiological situations, the active areas are sparse in space and of short duration, and the amount of spatio-temporal data available to carry out the inversion is then limited. In this study, we propose an original data-driven space-time-frequency (STF) dictionary which accounts simultaneously for both spatial and time-frequency sparseness while preserving smoothness in the time-frequency plane (i.e., nonstationary smooth time courses at sparse locations). Based on these assumptions, we take advantage of the matching pursuit (MP) framework to select the most relevant atoms in this highly redundant dictionary. We apply two recent MP algorithms, single best replacement (SBR) and source deflated matching pursuit, and we compare the results using a purely spatial dictionary and the proposed STF dictionary to demonstrate the improvements of our multidimensional approach. We also provide comparisons with well-established inversion methods, FOCUSS and RAP-MUSIC, analyzing performance under different degrees of nonstationarity and signal-to-noise ratio. Our STF dictionary combined with the SBR approach provides robust performance on realistic simulations. From a computational point of view, the algorithm is embedded in the wavelet domain, ensuring high efficiency in terms of computation time. The proposed approach ensures fast and accurate sparse cortical localizations on highly nonstationary and noisy data.
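
    The greedy core of the matching pursuit framework mentioned above can be sketched as follows; the dictionary and signal are synthetic, and the SBR and source-deflated variants of the paper add refinements beyond this basic loop:

```python
import math, random

# Matching pursuit: repeatedly pick the dictionary atom most correlated with
# the residual, subtract its contribution, and record it. The signal here is
# built from two known atoms, so the loop should find indices 7 and 42.
random.seed(2)
dim, n_atoms = 32, 80

def normalize(v):
    s = math.sqrt(sum(a * a for a in v))
    return [a / s for a in v]

atoms = [normalize([random.gauss(0, 1) for _ in range(dim)]) for _ in range(n_atoms)]
signal = [3.0 * a + 1.5 * b for a, b in zip(atoms[7], atoms[42])]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

residual = list(signal)
picked = []
for _ in range(5):  # greedy iterations
    k = max(range(n_atoms), key=lambda j: abs(dot(residual, atoms[j])))
    c = dot(residual, atoms[k])
    residual = [r - c * a for r, a in zip(residual, atoms[k])]
    picked.append(k)

res_norm = math.sqrt(sum(r * r for r in residual))
print(sorted(set(picked)), round(res_norm, 3))
```

    Because the dictionary is redundant (80 atoms in 32 dimensions), plain MP may revisit atoms to clean up cross-correlations; SBR-style algorithms address exactly this by allowing atom removal as well as insertion.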

  2. Source apportionment of fine particle number and volume concentration during severe haze pollution in Beijing in January 2013.

    PubMed

    Liu, Zirui; Wang, Yuesi; Hu, Bo; Ji, Dongsheng; Zhang, Junke; Wu, Fangkun; Wan, Xin; Wang, Yonghong

    2016-04-01

    Extreme haze episodes repeatedly shrouded Beijing during the winter of 2012-2013, causing major environmental and health problems. To better understand these extreme events, particle number size distribution (PNSD) and particle chemical composition (PCC) data collected during an intensive winter campaign at an urban site in Beijing were used to investigate the sources of ambient fine particles. Positive matrix factorization (PMF) analysis resolved a total of eight factors: two traffic factors, combustion factors, secondary aerosol, two accumulation-mode aerosol factors, road dust, and long-range transported (LRT) dust. Traffic emissions (54%) and combustion aerosol (27%) were found to be the most important sources of particle number concentration, whereas combustion aerosol (33%) and accumulation-mode aerosol (37%) dominated particle volume concentration. The chemical composition and sources of fine particles changed dynamically during the haze episodes. An enhanced role of secondary inorganic species was observed in the formation of haze pollution. Regional transport played an important role in the high particle loadings, contributing on average 24-49% during the haze episodes. Secondary aerosols from the urban background made the largest contribution (45%) to the rapid increase of fine particles in the most severe haze episode. In addition, the invasion of LRT dust aerosols further elevated fine particle levels during the extreme haze episode. Our results show a clear impact of regional transport on local air pollution, suggesting the importance of regional-scale emission control measures in the local air quality management of Beijing.
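
    The factorization idea behind PMF can be sketched with plain nonnegative matrix factorization using Lee-Seung multiplicative updates (real PMF additionally weights residuals by measurement uncertainties); all data below are synthetic:

```python
import random

# Synthetic "samples x species" data built from 2 nonnegative source
# profiles; NMF recovers nonnegative contributions W and profiles H with
# V ~ W @ H, which is the structural core of receptor models like PMF.
random.seed(3)
n_samp, n_spec, n_src = 6, 5, 2

profiles = [[random.uniform(0.1, 1.0) for _ in range(n_spec)] for _ in range(n_src)]
contribs = [[random.uniform(0.1, 2.0) for _ in range(n_src)] for _ in range(n_samp)]
V = [[sum(contribs[i][k] * profiles[k][j] for k in range(n_src)) for j in range(n_spec)]
     for i in range(n_samp)]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B))) for j in range(len(B[0]))]
            for i in range(len(A))]

def T(A):
    return [list(r) for r in zip(*A)]

W = [[random.uniform(0.1, 1.0) for _ in range(n_src)] for _ in range(n_samp)]
H = [[random.uniform(0.1, 1.0) for _ in range(n_spec)] for _ in range(n_src)]

def frob_err():
    R = matmul(W, H)
    return sum((V[i][j] - R[i][j]) ** 2 for i in range(n_samp) for j in range(n_spec))

err0 = frob_err()
for _ in range(300):  # multiplicative updates keep W and H nonnegative
    WtV, WtWH = matmul(T(W), V), matmul(T(W), matmul(W, H))
    H = [[H[k][j] * WtV[k][j] / (WtWH[k][j] + 1e-12) for j in range(n_spec)]
         for k in range(n_src)]
    VHt, WHHt = matmul(V, T(H)), matmul(W, matmul(H, T(H)))
    W = [[W[i][k] * VHt[i][k] / (WHHt[i][k] + 1e-12) for k in range(n_src)]
         for i in range(n_samp)]
err1 = frob_err()
print(round(err0, 4), round(err1, 6))
```

    The nonnegativity constraint is what makes the recovered factors interpretable as physical source profiles and contributions, which an unconstrained factorization such as PCA does not guarantee.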

  3. Alternate Sources for Propellant Ingredients.

    DTIC Science & Technology

    1976-07-07

    ...problems exist for a variety of reasons; (3) sole source; (4) medical/OSHA/EPA problems; (5) dependent on foreign imports; and (6) specification problems... regulations of OSHA or EPA affect production or use of the product; 5. Plant capacity, when demand increases faster than predictions; 6. Supply

  4. Source localization of rhythmic ictal EEG activity: a study of diagnostic accuracy following STARD criteria.

    PubMed

    Beniczky, Sándor; Lantz, Göran; Rosenzweig, Ivana; Åkeson, Per; Pedersen, Birthe; Pinborg, Lars H; Ziebell, Morten; Jespersen, Bo; Fuglsang-Frederiksen, Anders

    2013-10-01

    Although precise identification of the seizure-onset zone is an essential element of presurgical evaluation, source localization of ictal electroencephalography (EEG) signals has received little attention. The aim of our study was to estimate the accuracy of source localization of rhythmic ictal EEG activity using a distributed source model. Source localization of rhythmic ictal scalp EEG activity was performed in 42 consecutive cases fulfilling inclusion criteria. The study was designed according to recommendations for studies on diagnostic accuracy (STARD). The initial ictal EEG signals were selected using a standardized method, based on frequency analysis and voltage distribution of the ictal activity. A distributed source model, local autoregressive average (LAURA), was used for the source localization. Sensitivity, specificity, and measurement of agreement (kappa) were determined based on the reference standard: the consensus conclusion of the multidisciplinary epilepsy surgery team. Predictive values were calculated from the surgical outcome of the operated patients. To estimate the clinical value of the ictal source analysis, we compared the likelihood ratios of concordant and discordant results. Source localization was performed blinded to the clinical data, and before the surgical decision. The reference standard was available for 33 patients. The ictal source localization had a sensitivity of 70% and a specificity of 76%. The mean measurement of agreement (kappa) was 0.61, corresponding to substantial agreement (95% confidence interval (CI) 0.38-0.84). Twenty patients underwent resective surgery. The positive predictive value (PPV) for seizure freedom was 92% and the negative predictive value (NPV) was 43%. The likelihood ratio was nine times higher for concordant results than for discordant ones. Source localization of rhythmic ictal activity using a distributed source model (LAURA) for the ictal EEG signals selected with a standardized method is feasible in clinical practice and has good diagnostic accuracy. Our findings encourage clinical neurophysiologists assessing ictal EEGs to include this method in their armamentarium. Wiley Periodicals, Inc. © 2013 International League Against Epilepsy.
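
    The accuracy metrics reported above all derive from a 2x2 contingency table. The counts below are hypothetical, chosen only so that sensitivity and specificity reproduce the reported 70% and 76%; the study's actual table (and hence its kappa of 0.61) is not given here:

```python
# Standard diagnostic-accuracy metrics from hypothetical 2x2 counts.
tp, fn, fp, tn = 21, 9, 6, 19

sensitivity = tp / (tp + fn)              # 21/30 = 0.70
specificity = tn / (tn + fp)              # 19/25 = 0.76
ppv = tp / (tp + fp)                      # positive predictive value
npv = tn / (tn + fn)                      # negative predictive value
lr_pos = sensitivity / (1 - specificity)  # positive likelihood ratio

# Cohen's kappa: observed agreement corrected for chance agreement.
total = tp + fn + fp + tn
p_obs = (tp + tn) / total
p_chance = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / total ** 2
kappa = (p_obs - p_chance) / (1 - p_chance)
print(round(sensitivity, 2), round(specificity, 2), round(kappa, 2), round(lr_pos, 2))
```

    Note that PPV and NPV, unlike sensitivity and specificity, depend on the prevalence implied by the table, which is why the abstract computes them only on the operated subgroup.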

  5. Passive Sensor Integration for Vehicle Self-Localization in Urban Traffic Environment †

    PubMed Central

    Gu, Yanlei; Hsu, Li-Ta; Kamijo, Shunsuke

    2015-01-01

    This research proposes an accurate vehicular positioning system which can achieve lane-level performance in urban canyons. Multiple passive sensors, which include Global Navigation Satellite System (GNSS) receivers, onboard cameras and inertial sensors, are integrated in the proposed system. As the main source for the localization, the GNSS technique suffers from Non-Line-Of-Sight (NLOS) propagation and multipath effects in urban canyons. This paper proposes to employ a novel GNSS positioning technique in the integration. The employed GNSS technique reduces the multipath and NLOS effects by using the 3D building map. In addition, the inertial sensor can describe the vehicle motion, but has a drift problem as time increases. This paper develops vision-based lane detection, which is firstly used for controlling the drift of the inertial sensor. Moreover, the lane keeping and changing behaviors are extracted from the lane detection function, and further reduce the lateral positioning error in the proposed localization system. We evaluate the integrated localization system in the challenging city urban scenario. The experiments demonstrate the proposed method has sub-meter accuracy with respect to mean positioning error. PMID:26633420
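
    The drift-correction idea, an absolute measurement periodically resetting an integrating sensor, can be sketched in one dimension; the bias, noise levels, and correction gain below are illustrative, not the paper's filter:

```python
import random

# Dead-reckoning (integrating a biased velocity, like an inertial sensor)
# drifts without bound; fusing it with occasional absolute fixes (standing
# in for the lane-detection measurements) keeps the error bounded.
random.seed(4)
dt, steps = 0.1, 600
true_v, bias = 10.0, 0.3             # m/s; a constant sensor bias causes drift

truth = dr = fused = 0.0
dr_err = fused_err = 0.0
for t in range(steps):
    truth += true_v * dt
    meas_v = true_v + bias + random.gauss(0, 0.2)
    dr += meas_v * dt                 # pure dead reckoning
    fused += meas_v * dt              # same propagation...
    if t % 50 == 49:                  # ...plus an absolute fix every 5 s
        fix = truth + random.gauss(0, 0.5)
        fused += 0.8 * (fix - fused)  # simple complementary-filter correction
    dr_err = max(dr_err, abs(dr - truth))
    fused_err = max(fused_err, abs(fused - truth))
print(round(dr_err, 2), round(fused_err, 2))
```

    A Kalman filter would choose the correction gain from the modeled noise statistics instead of the fixed 0.8 used here, but the qualitative effect, bounded error instead of linear drift, is the same.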

  6. Increasing the supply of kidneys for transplantation by making living donors the preferred source of donor kidneys.

    PubMed

    Testa, Giuliano; Siegler, Mark

    2014-12-01

    At the present time, increasing the use of living donors offers the best solution to the organ shortage problem. The clinical questions raised when the first living donor kidney transplant was performed, involving donor risk, informed consent, donor protection, and organ quality, have been largely answered. We strongly encourage a wider utilization of living donation and recommend that living donation, rather than deceased donation, become the first choice for kidney transplantation. We believe that it is ethically sound to have living kidney donation as the primary source for organs when the mortality and morbidity risks to the donor are known and kept extremely low, when the donor is properly informed and protected from coercion, and when accepted national and local guidelines for living donation are followed.

  7. Improving the frequency precision of oscillators by synchronization.

    PubMed

    Cross, M C

    2012-04-01

    Improving the frequency precision by synchronizing a lattice of N oscillators with disparate frequencies is studied in the phase reduction limit. In the general case where the coupling is not purely dissipative the synchronized state consists of targetlike waves radiating from a local source, which is a region of higher-frequency oscillators. In this state the improvement of the frequency precision is shown to be independent of N for large N, but instead depends on the disorder and reflects the dependence of the frequency of the synchronized state on just those oscillators in the source region of the waves. These results are obtained by a mapping of the nonlinear phase dynamics onto the linear Anderson problem of the quantum mechanics of electrons on a random lattice in the tight-binding approximation.
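
    The collapse of the frequency spread under synchronization can be sketched with the simplest phase-reduction model, mean-field Kuramoto oscillators with purely dissipative coupling (the abstract's more general case adds reactive coupling and a lattice geometry); all parameters are illustrative:

```python
import math, random

# Euler-integrate d(theta_i)/dt = omega_i + (K/N) sum_j sin(theta_j - theta_i)
# and compare the spread of observed frequencies with and without coupling.
random.seed(5)
N, dt, steps = 20, 0.05, 2000
omega = [random.uniform(-0.5, 0.5) for _ in range(N)]  # disparate frequencies

def run(K):
    theta = [random.uniform(0, 2 * math.pi) for _ in range(N)]
    snapshot = None
    for step in range(steps):
        coup = [K / N * sum(math.sin(theta[j] - theta[i]) for j in range(N))
                for i in range(N)]
        theta = [theta[i] + dt * (omega[i] + coup[i]) for i in range(N)]
        if step == steps // 2:
            snapshot = list(theta)
    elapsed = dt * (steps - 1 - steps // 2)
    # unwrapped phase advance over the second half = each oscillator's
    # effective (observed) frequency
    return [(theta[i] - snapshot[i]) / elapsed for i in range(N)]

def spread(freqs):
    m = sum(freqs) / N
    return math.sqrt(sum((f - m) ** 2 for f in freqs) / N)

uncoupled = spread(run(0.0))
coupled = spread(run(2.0))
print(round(uncoupled, 3), round(coupled, 4))
```

    Above the locking threshold every oscillator runs at the common synchronized frequency, so the spread of effective frequencies collapses to essentially zero; the abstract's point is that *which* frequency the locked state picks is set by the source-region oscillators.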

  8. 3D source localization of interictal spikes in epilepsy patients with MRI lesions

    NASA Astrophysics Data System (ADS)

    Ding, Lei; Worrell, Gregory A.; Lagerlund, Terrence D.; He, Bin

    2006-08-01

    The present study aims to accurately localize epileptogenic regions which are responsible for epileptic activities in epilepsy patients by means of a new subspace source localization approach, i.e. first principle vectors (FINE), using scalp EEG recordings. Computer simulations were first performed to assess the source localization accuracy of FINE in the clinical electrode set-up. The source localization results from FINE were compared with the results from a classic subspace source localization approach, i.e. MUSIC, and their differences were tested statistically using the paired t-test. Other factors influencing the source localization accuracy were assessed statistically by ANOVA. The interictal epileptiform spike data from three adult epilepsy patients with medically intractable partial epilepsy and well-defined symptomatic MRI lesions were then studied using both FINE and MUSIC. The comparison between the electrical sources estimated by the subspace source localization approaches and the MRI lesions was made through coregistration between the EEG recordings and MRI scans. The accuracy of the estimates made by FINE and MUSIC was also evaluated and compared by the R2 statistic, which was used to indicate the goodness-of-fit of the estimated sources to the scalp EEG recordings. A three-concentric-spheres head volume conductor model was built for each patient, with three spheres of different radii taking individual head size and skull thickness into consideration. The results from computer simulations indicate that the improvement in source spatial resolvability and localization accuracy of FINE, as compared with MUSIC, is significant when simulated sources are closely spaced or deep, or when the signal-to-noise ratio is low, in a clinical electrode set-up. The interictal electrical generators estimated by FINE and MUSIC are in concordance with the patients' structural abnormality, i.e. MRI lesions, in all three patients.
    The higher R2 values achieved by FINE compared with MUSIC indicate that FINE provides a more satisfactory fit to the scalp potential measurements in all patients. The present results suggest that FINE provides a useful brain source imaging technique, based on clinical EEG recordings, for identifying and localizing epileptogenic regions in epilepsy patients with focal partial seizures. The present study may lead to the establishment of a high-resolution source localization technique from scalp-recorded EEGs to aid presurgical planning in epilepsy patients.
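    The MUSIC step that both subspace methods build on can be sketched for a generic sensor array (FINE's refined subspace projection is not reproduced here, and a real EEG application would scan lead-field vectors rather than far-field steering vectors). The uniform linear array, the two source directions, and the noise level below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
M, T = 8, 400                                # sensors, time snapshots
true_deg = np.array([20.0, 60.0])            # assumed source directions

def steering(theta_rad):
    # half-wavelength-spaced uniform linear array response
    return np.exp(1j * np.pi * np.arange(M) * np.sin(theta_rad))

A = np.stack([steering(t) for t in np.deg2rad(true_deg)], axis=1)
S = rng.standard_normal((2, T)) + 1j * rng.standard_normal((2, T))
noise = 0.1 * (rng.standard_normal((M, T)) + 1j * rng.standard_normal((M, T)))
X = A @ S + noise

R = X @ X.conj().T / T                       # spatial correlation matrix
_, V = np.linalg.eigh(R)                     # eigenvalues/eigenvectors, ascending
En = V[:, :-2]                               # noise subspace (model order 2 assumed)

grid = np.deg2rad(np.linspace(-90.0, 90.0, 721))
P = np.array([1.0 / np.linalg.norm(En.conj().T @ steering(t)) ** 2 for t in grid])

# the two largest local maxima of the pseudospectrum are the direction estimates
idx = np.where((P[1:-1] > P[:-2]) & (P[1:-1] > P[2:]))[0] + 1
est = np.sort(np.rad2deg(grid[idx[np.argsort(P[idx])[-2:]]]))
print(est)   # close to [20, 60]
```

    The pseudospectrum peaks where steering vectors are nearly orthogonal to the noise subspace; the abstract's point is that FINE sharpens exactly this projection when sources are close, deep, or noisy.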

  9. Numerical Study of Nonlinear Structures of Locally Excited Marangoni Convection in the Long-Wave Approximation

    NASA Astrophysics Data System (ADS)

    Wertgeim, Igor I.

    2018-02-01

    We investigate stationary and non-stationary solutions of the nonlinear equations of the long-wave approximation for Marangoni convection caused by a localized source of heat or of a surface-active impurity (surfactant) in a thin horizontal layer of a viscous incompressible fluid with a free surface. The distribution of the heat or concentration flux is determined by a uniform vertical gradient of temperature or impurity concentration, distorted by the imposition of slightly inhomogeneous heating or surfactant, localized in the horizontal plane. The lower boundary of the layer is considered thermally insulated or impermeable, whereas the upper boundary is free and deformable. The equations obtained in the long-wave approximation are formulated in terms of the amplitudes of the temperature distribution or impurity concentration, the deformation of the surface, and the vorticity. To simplify the problem, a sequence of nonlinear equations is derived, which in the simplest form reduces to a nonlinear Schrödinger equation with a localized potential. The basic state of the system, its dependence on the parameters, and its stability are investigated. For stationary solutions localized in the region of the surface tension inhomogeneity, domains of parameters corresponding to different spatial patterns are delineated.

  10. EEG source localization: Sensor density and head surface coverage.

    PubMed

    Song, Jasmine; Davey, Colin; Poulsen, Catherine; Luu, Phan; Turovets, Sergei; Anderson, Erik; Li, Kai; Tucker, Don

    2015-12-30

    The accuracy of EEG source localization depends on a sufficient sampling of the surface potential field, an accurate conducting volume estimation (head model), and a suitable and well-understood inverse technique. The goal of the present study is to examine the effect of sampling density and coverage on the ability to accurately localize sources, using common linear inverse weight techniques, at different depths. Several inverse methods are examined, using typical head conductivity values. Simulation studies were employed to examine the effect of spatial sampling of the potential field at the head surface, in terms of sensor density and coverage of the inferior and superior head regions. In addition, the effects of sensor density and coverage are investigated in the source localization of epileptiform EEG. Greater sensor density improves source localization accuracy. Moreover, across all sampling densities and inverse methods, adding samples on the inferior surface improves the accuracy of source estimates at all depths. More accurate source localization of EEG data can be achieved with high spatial sampling of the head surface electrodes. The most accurate source localization is obtained when the voltage surface is densely sampled over both the superior and inferior surfaces. Copyright © 2015 The Authors. Published by Elsevier B.V. All rights reserved.
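    One of the common linear inverse weight techniques the study refers to is the L2 minimum-norm estimate, which can be sketched as follows. The random matrix standing in for the lead field, the dimensions, and the regularization choice are illustrative assumptions, not the head model or weights used in the paper.

```python
import numpy as np

rng = np.random.default_rng(2)
n_sensors, n_sources = 64, 200

L = rng.standard_normal((n_sensors, n_sources))   # random stand-in for a lead field
s_true = np.zeros(n_sources)
s_true[50] = 1.0                                  # a single active source
y = L @ s_true + 0.01 * rng.standard_normal(n_sensors)

# L2 minimum-norm estimate: s_hat = L^T (L L^T + lam*I)^(-1) y
lam = 1e-3 * np.trace(L @ L.T) / n_sensors        # simple regularization choice
s_hat = L.T @ np.linalg.solve(L @ L.T + lam * np.eye(n_sensors), y)

residual = np.linalg.norm(L @ s_hat - y) / np.linalg.norm(y)
print(residual)   # small: the estimate explains the measurements
```

    The estimate is the smallest-norm source distribution consistent with the sensor data; denser sampling of the surface potential (more rows in the lead field) constrains this inverse more tightly, which is the effect the study quantifies.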

  11. The factor of local cultural specificity and process of globalization.

    PubMed

    Rudnev, Viacheslav

    2012-12-01

    Cultural polymorphism is a complex phenomenon with a multiform influence on the life of society. Society's active interest in local folk knowledge of life-support activities and the use of Nature is one of the distinctive marks of modern times. This interest coincides with a period of active transformation of the environment resulting from industrial society's pressure on Nature, and with the emergence of new approaches to the study of Nature and human activity based on the "technologies" of wild life. Humankind's success in creating artificial surroundings has greatly improved the quality of people's lives, but it has also created problems with renewable resources and human health, and has changed the ecology for the worse. In 1992 the United Nations Conference on Environment and Development (Rio de Janeiro, Brazil) set fixed standards defining global violations of the environment. 'Agenda 21', adopted at this Conference, focused on the need for new solutions to the problems of the relationship between Nature and Society, cited interdisciplinary research as a promising way to address them, and set as its goal a 'balance of Nature, Society and Humans'. Pre-industrial society had a different experience of using Nature and solving the problems of life support under a regime of sparing Nature. Experience has shown that folk knowledge and folk technology can, in a number of instances, actually assist in solving serious problems caused by human impact on the environment, e.g., farming methods, and, as a result, offer a sounder and at the same time effective basis for long-term sustainable production at the local level. The traditional agricultural cultures of Eurasia acquired unique experience in maintaining soil fertility and technologies that limited their impact on the environment.
    The value of folk heritage in exploiting the environment, especially the farming traditions of small-scale, non-industrial societies, rests not only on technologies that are "friendly" to Nature, but also, and first of all, on the perception that soil (earth) is the source of all life. This perception was particularly widespread among peoples of pre-industrial societies. The search for ways to increase long-term productivity in food grain production is a complicated problem, of global importance today and for the future. Modern society's active interest in folk experience of using Nature to achieve sustainable economies is yet to come, but we have much to learn from these small-scale, non-industrial societies. Food production needs to be increased; at the same time, the fertility of the soil must be maintained. Achieving a balance between these two necessities is the problem. Changing the modern human outlook from its egocentric position to one that understands and respects the natural environment, based on ideas of "ecological ethics", looks especially complex and is directly connected with the problem of forming a new culture. Indeed, the global ecological crisis and related ecological problems take priority, and the transition to a new model of thinking promises to be accelerated. In this context, drawing on folk heritage, folk knowledge, and experience in observing and using Nature to achieve harmonious interrelations in the "Nature - Society" system, and to elaborate a change of attitudes, is quite important if modern society is to achieve sustainability at the global level. As Seneca maintained, Nature can be commanded only by being obeyed. The modern epoch of globalization in the economy and financial systems creates high risks for humankind at the global level.
    Special attention to local factors (local experience in using Nature, local folk experience in life-support activity) is therefore important in the context of globalization problems. Glocalization can assist the adaptation process of harmonizing local and global needs on the way to sustainability; it brings globalization problems down to the human scale. The age of globalization has made the problem of cultural dialogue more pressing than ever; without it, humankind has little chance to survive. Glocalization is the process of creating harmony in the Nature-Society-Humans system in the context of sustainability.

  12. Precipitation Recycling and the Vertical Distribution of Local and Remote Sources of Water for Precipitation

    NASA Technical Reports Server (NTRS)

    Bosilovich, Michael G.; Atlas, Robert (Technical Monitor)

    2002-01-01

    Precipitation recycling is defined as the amount of water that evaporates from a region and precipitates within the same region. This is also interpreted as the local source of water for precipitation. In this study, the local and remote sources of water for precipitation have been diagnosed through the use of passive constituent tracers that represent regional evaporative sources along with their transport and precipitation. We discuss the differences between this method and the simpler bulk diagnostic approach to precipitation recycling. A summer seasonal simulation has been analyzed for the regional sources of precipitation in the United States Great Plains. While the tropical Atlantic Ocean (including the Gulf of Mexico) and the local continental sources of precipitation are dominant, the vertically integrated column of water contains substantial water content originating from the Northern Pacific Ocean, which is not precipitated. The vertical profiles of regional water sources indicate that the local Great Plains source of water dominates the lower troposphere, predominantly in the PBL. However, the Pacific Ocean source is dominant over a large portion of the middle to upper troposphere. The influence of the tropical Atlantic Ocean is reasonably uniform throughout the column. While the results are not unexpected given the formulation of the model's convective parameterization, the analysis provides a quantitative assessment of the impact of local evaporation on the occurrence of convective precipitation in the GCM. Further, these results suggest that the local source of water is not well mixed throughout the vertical column.
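    The tagged-tracer bookkeeping described above reduces, for a single region and time slice, to simple arithmetic. All numbers below are hypothetical and chosen only to illustrate the abstract's point: a source can dominate the column water content while contributing little to precipitation.

```python
# toy tagged-water budget for one region (all numbers hypothetical, in mm)
column_mm = {"local": 12.0, "tropical_atlantic": 20.0, "north_pacific": 15.0}
# fraction of each tagged reservoir that actually precipitates in the region
precip_fraction = {"local": 0.5, "tropical_atlantic": 0.5, "north_pacific": 0.1}

precip_mm = {k: column_mm[k] * precip_fraction[k] for k in column_mm}
total_precip = sum(precip_mm.values())

# recycling ratio: share of precipitation that evaporated within the region
recycling_ratio = precip_mm["local"] / total_precip
print(round(recycling_ratio, 3))   # 0.343
```

    Here the North Pacific tag holds a large share of the column (15 of 47 mm) but supplies only 1.5 of 17.5 mm of precipitation, mirroring the unmixed vertical structure the study reports.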

  13. Efficient linear criterion for witnessing Einstein-Podolsky-Rosen nonlocality under many-setting local measurements

    NASA Astrophysics Data System (ADS)

    Zheng, Yu-Lin; Zhen, Yi-Zheng; Chen, Zeng-Bing; Liu, Nai-Le; Chen, Kai; Pan, Jian-Wei

    2017-01-01

    The striking and distinctive nonlocal features of quantum mechanics, beyond classical physics, were discovered by Einstein, Podolsky, and Rosen (EPR). At the core of the EPR argument is the notion of "steering", which Schrödinger introduced in 1935. Besides its fundamental significance, quantum steering opens up novel applications for quantum communication. Recent work has precisely characterized its properties; however, witnessing EPR nonlocality remains a big challenge under arbitrary local measurements. Here we present an alternative linear criterion, complementing existing results, to efficiently witness steering for high-dimensional systems in practice. By developing a novel analytical method to tackle the maximization problem in deriving the bound of a steering criterion, we show how observed correlations can powerfully reveal EPR nonlocality in an easily accessible manner. Although the criterion is not both necessary and sufficient, it recovers some known results under a few settings of local measurements and remains applicable even when the dimension of the system or the number of measurement settings is large. Remarkably, a deep connection is explicitly established between steering and the amount of entanglement. The results promise viable paths toward secure communication with an untrusted source, providing optional loophole-free tests of EPR nonlocality for high-dimensional states, as well as motivating solutions to other related problems in quantum information theory.

  14. Local residue coupling strategies by neural network for InSAR phase unwrapping

    NASA Astrophysics Data System (ADS)

    Refice, Alberto; Satalino, Giuseppe; Chiaradia, Maria T.

    1997-12-01

    Phase unwrapping is one of the toughest problems in interferometric SAR processing. The main difficulties arise from the presence of point-like error sources, called residues, which occur mainly in close pairs due to phase noise. We present an assessment of a local approach to the resolution of these problems by means of a neural network. Using a multi-layer perceptron, trained with the back-propagation scheme on a series of simulated phase images, we learn the best pairing strategies for close residue pairs. Results show that good efficiencies and accuracies can be obtained, provided a sufficient number of training examples are supplied. The technique is also tested on real SAR ERS-1/2 tandem interferometric images of the Matera test site, showing a good reduction of the residue density. The better results obtained by the neural network when local criteria are adopted appear justified by the probabilistic nature of the noise process on SAR interferometric phase fields, and suggest a specifically tailored implementation of the neural network approach as a very fast pre-processing step, intended to decrease the residue density and yield sufficiently clean images for further processing by more conventional techniques.
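    The residues discussed above can be computed directly from a wrapped phase image: a residue is an elementary 2x2 loop around which the wrapped phase gradients fail to sum to zero. A minimal sketch (the synthetic ramp and vortex are illustrative, not SAR data):

```python
import numpy as np

def wrap(d):
    # wrap phase differences into [-pi, pi)
    return (d + np.pi) % (2.0 * np.pi) - np.pi

def residues(phi):
    """Charge (+1, -1 or 0) of each elementary 2x2 loop of a wrapped phase image."""
    d1 = wrap(phi[:-1, 1:] - phi[:-1, :-1])   # top edge, left to right
    d2 = wrap(phi[1:, 1:] - phi[:-1, 1:])     # right edge, top to bottom
    d3 = wrap(phi[1:, :-1] - phi[1:, 1:])     # bottom edge, right to left
    d4 = wrap(phi[:-1, :-1] - phi[1:, :-1])   # left edge, bottom to top
    return np.rint((d1 + d2 + d3 + d4) / (2.0 * np.pi)).astype(int)

# a cleanly wrapped ramp carries no residues ...
x = np.linspace(0.0, 8.0 * np.pi, 64)
ramp = np.tile(np.angle(np.exp(1j * x)), (16, 1))

# ... while a single phase vortex carries exactly one
yy, xx = np.mgrid[0:16, 0:16]
vortex = np.angle((xx - 7.5) + 1j * (yy - 7.5))

print(np.abs(residues(ramp)).sum(), np.abs(residues(vortex)).sum())   # 0 1
```

    Phase noise creates such charges in close positive/negative pairs; pairing and cancelling them correctly is exactly the local decision problem the perceptron is trained on.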

  15. Localization of transient gravitational wave sources: beyond triangulation

    NASA Astrophysics Data System (ADS)

    Fairhurst, Stephen

    2018-05-01

    Rapid, accurate localization of gravitational wave transient events has proved critical to successful electromagnetic follow-up. In previous papers we have shown that localization estimates can be obtained through triangulation based on timing information at the detector sites. In practice, detailed parameter estimation routines use additional information and provide better localization than is possible from timing information alone. In this paper, we extend the timing-based localization approximation to incorporate consistency of the observed signals with two gravitational wave polarizations, and an astrophysically motivated distribution of sources. Both of these provide significant improvements to source localization, allowing many sources to be restricted to a single sky region, with an area 40% smaller than predicted by timing information alone. Furthermore, we show that the vast majority of sources will be reconstructed as circularly polarized or, equivalently, indistinguishable from face-on.
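    The timing-triangulation baseline that the paper improves upon can be sketched as a brute-force sky search over arrival-time differences. The detector coordinates below are rough, illustrative Earth-centred values (not precise site coordinates), and the sign convention for the delays is arbitrary but used consistently.

```python
import numpy as np

C = 299792458.0   # speed of light, m/s
# rough detector locations in an Earth-centred frame (metres; illustrative)
DET = {
    "H": np.array([-2.161e6, -3.835e6, 4.600e6]),
    "L": np.array([-0.074e6, -5.496e6, 3.224e6]),
    "V": np.array([ 4.546e6,  0.843e6, 4.379e6]),
}

def delays(n_hat):
    """Arrival-time differences (L-H, V-H) for a plane wave from direction n_hat."""
    n_hat = np.asarray(n_hat)
    return np.stack([n_hat @ (DET[k] - DET["H"]) / C for k in ("L", "V")], axis=-1)

def localize(observed, n=180):
    """Brute-force grid search over the sky for the best timing match."""
    theta = np.linspace(0.0, np.pi, n)
    phi = np.linspace(0.0, 2.0 * np.pi, 2 * n, endpoint=False)
    T, P = np.meshgrid(theta, phi, indexing="ij")
    n_hat = np.stack([np.sin(T) * np.cos(P),
                      np.sin(T) * np.sin(P),
                      np.cos(T)], axis=-1)          # shape (n, 2n, 3)
    err = ((delays(n_hat) - observed) ** 2).sum(axis=-1)
    i, j = np.unravel_index(np.argmin(err), err.shape)
    return n_hat[i, j]

true_n = np.array([0.3, -0.5, 0.81])
true_n /= np.linalg.norm(true_n)
est = localize(delays(true_n))
print(est)
```

    With only three detectors the timing solution has a mirror degeneracy across the detector plane, so the recovered direction may be the reflected image; this is one reason the extra information the paper adds (polarization consistency and an astrophysical source distribution) shrinks the sky area.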

  16. Geometry and mechanics of two-dimensional defects in amorphous materials

    PubMed Central

    Moshe, Michael; Levin, Ido; Aharoni, Hillel; Kupferman, Raz; Sharon, Eran

    2015-01-01

    We study the geometry of defects in amorphous materials and their elastic interactions. Defects are defined and characterized by deviations of the material’s intrinsic metric from a Euclidian metric. This characterization makes possible the identification of localized defects in amorphous materials, the formulation of a corresponding elastic problem, and its solution in various cases of physical interest. We present a multipole expansion that covers a large family of localized 2D defects. The dipole term, which represents a dislocation, is studied analytically and experimentally. Quadrupoles and higher multipoles correspond to fundamental strain-carrying entities. The interactions between those entities, as well as their interaction with external stress fields, are fundamental to the inelastic behavior of solids. We develop analytical tools to study those interactions. The model, methods, and results presented in this work are all relevant to the study of systems that involve a distribution of localized sources of strain. Examples are plasticity in amorphous materials and mechanical interactions between cells on a flexible substrate. PMID:26261331

  17. Three-dimensional inversion of multisource array electromagnetic data

    NASA Astrophysics Data System (ADS)

    Tartaras, Efthimios

    Three-dimensional (3-D) inversion is increasingly important for the correct interpretation of geophysical data sets in complex environments. To this effect, several approximate solutions have been developed that allow the construction of relatively fast inversion schemes. One such method that is fast and provides satisfactory accuracy is the quasi-linear (QL) approximation. It has, however, the drawback that it is source-dependent and, therefore, impractical in situations where multiple transmitters in different positions are employed. I have, therefore, developed a localized form of the QL approximation that is source-independent. This so-called localized quasi-linear (LQL) approximation can have a scalar, a diagonal, or a full tensor form. Numerical examples of its comparison with the full integral equation solution, the Born approximation, and the original QL approximation are given. The objective behind developing this approximation is to use it in a fast 3-D inversion scheme appropriate for multisource array data such as those collected in airborne surveys, cross-well logging, and other similar geophysical applications. I have developed such an inversion scheme using the scalar and diagonal LQL approximation. It reduces the original nonlinear inverse electromagnetic (EM) problem to three linear inverse problems. The first of these problems is solved using a weighted regularized linear conjugate gradient method, whereas the last two are solved in the least squares sense. The algorithm I developed provides the option of obtaining either smooth or focused inversion images. I have applied the 3-D LQL inversion to synthetic 3-D EM data that simulate a helicopter-borne survey over different earth models. The results demonstrate the stability and efficiency of the method and show that the LQL approximation can be a practical solution to the problem of 3-D inversion of multisource array frequency-domain EM data. 
I have also applied the method to helicopter-borne EM data collected by INCO Exploration over the Voisey's Bay area in Labrador, Canada. The results of the 3-D inversion successfully delineate the shallow massive sulfides and show that the method can produce reasonable results even in areas of complex geology and large resistivity contrasts.

  18. Spontaneous collapse: A solution to the measurement problem and a source of the decay in mesonic systems

    NASA Astrophysics Data System (ADS)

    Simonov, Kyrylo; Hiesmayr, Beatrix C.

    2016-11-01

    Dynamical reduction models propose a solution to the measurement problem in quantum mechanics: the collapse of the wave function becomes a physical process. We compute the predictions for decaying and flavor-oscillating neutral mesons for the two most promising collapse models, the QMUPL (quantum mechanics with universal position localization) model and the mass-proportional CSL (continuous spontaneous localization) model. Our results show (i) a strong sensitivity to the assumptions about the noise field underlying these two collapse models and (ii) that, under particular assumptions, the CSL case even allows one to recover the decay dynamics. This in turn allows one to predict the effective collapse rates solely from the measured oscillation parameters (mass differences) and the measured decay constants. The four types of neutral mesons (K meson, D meson, Bd meson, and Bs meson) lead, surprisingly, to ranges comparable to those put forward by Adler [J. Phys. A: Math. Theor. 40, 2935 (2007), 10.1088/1751-8113/40/12/S03] and Ghirardi, Rimini, and Weber [Phys. Rev. D 34, 470 (1986), 10.1103/PhysRevD.34.470]. Our results show that these systems at high energies are very sensitive to possible modifications of standard quantum theory, making them a very powerful laboratory for ruling out certain collapse scenarios and studying the detailed physical processes solving the measurement problem.

  19. Brain correlates of the orientation of auditory spatial attention onto speaker location in a "cocktail-party" situation.

    PubMed

    Lewald, Jörg; Hanenberg, Christina; Getzmann, Stephan

    2016-10-01

    Successful speech perception in complex auditory scenes with multiple competing speakers requires spatial segregation of auditory streams into perceptually distinct and coherent auditory objects and focusing of attention toward the speaker of interest. Here, we focused on the neural basis of this remarkable capacity of the human auditory system and investigated the spatiotemporal sequence of neural activity within the cortical network engaged in solving the "cocktail-party" problem. Twenty-eight subjects localized a target word in the presence of three competing sound sources. The analysis of the ERPs revealed an anterior contralateral subcomponent of the N2 (N2ac), computed as the difference waveform for targets to the left minus targets to the right. The N2ac peaked at about 500 ms after stimulus onset, and its amplitude was correlated with better localization performance. Cortical source localization for the contrast of left versus right targets at the time of the N2ac revealed a maximum in the region around left superior frontal sulcus and frontal eye field, both of which are known to be involved in processing of auditory spatial information. In addition, a posterior-contralateral late positive subcomponent (LPCpc) occurred at a latency of about 700 ms. Both these subcomponents are potential correlates of allocation of spatial attention to the target under cocktail-party conditions. © 2016 Society for Psychophysiological Research.

  20. Centralized Authorization Using a Direct Service, Part II

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wachsmann, A

    Authorization is the process of deciding whether entity X is allowed to have access to resource Y. Determining the identity of X is the job of the authentication process. One task of authorization in computer networks is to define and determine which user has access to which computers in the network. On Linux, the tendency is to create a local account for each user who should be allowed to log on to a computer. This is typically done because a user needs not only login privileges on a computer but also additional resources, such as a home directory, to actually do some work; creating a local account on every computer takes care of all this. The problem with this approach is that these local accounts can become inconsistent with each other: the same user name could have a different user ID and/or group ID on different computers. Even more problematic is when two different accounts share the same user ID and group ID on different computers. User joe on computer1 could have user ID 1234 and group ID 56, while user jane on computer2 has the same user ID 1234 and group ID 56. This is a big security risk when shared resources such as NFS are used: the two accounts look identical to an NFS server, so these users can wipe out each other's files. The solution to this inconsistency problem is to have only one central, authoritative data source for this kind of information, and a means of providing all your computers with access to this central source. This is what a "Directory Service" is. The two directory services most widely used for centralizing authorization data are the Network Information Service (NIS, formerly known as Yellow Pages or YP) and the Lightweight Directory Access Protocol (LDAP).
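    The joe/jane collision described above is easy to detect mechanically once the per-host account tables are gathered (e.g. from each host's passwd map). The `host_accounts` data below is hypothetical, mirroring the article's example; a real audit would read `/etc/passwd` or query the directory service instead.

```python
# toy passwd tables from two hosts: (user name, uid, gid); hypothetical data
host_accounts = {
    "computer1": [("joe", 1234, 56), ("alice", 1001, 100)],
    "computer2": [("jane", 1234, 56), ("alice", 1001, 100)],
}

def uid_collisions(tables):
    """Report UIDs mapped to different user names on different hosts."""
    seen = {}   # uid -> set of user names observed with that uid
    for host, entries in tables.items():
        for user, uid, gid in entries:
            seen.setdefault(uid, set()).add(user)
    return {uid: names for uid, names in seen.items() if len(names) > 1}

print(uid_collisions(host_accounts))   # UID 1234 is claimed by both joe and jane
```

    A centralized directory service (NIS or LDAP) makes this check unnecessary by construction: there is only one authoritative uid-to-name mapping for every host to consume.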

  1. Review of Hybrid (Deterministic/Monte Carlo) Radiation Transport Methods, Codes, and Applications at Oak Ridge National Laboratory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wagner, John C; Peplow, Douglas E.; Mosher, Scott W

    2010-01-01

    This paper provides a review of the hybrid (Monte Carlo/deterministic) radiation transport methods and codes used at the Oak Ridge National Laboratory and examples of their application for increasing the efficiency of real-world, fixed-source Monte Carlo analyses. The two principal hybrid methods are (1) Consistent Adjoint Driven Importance Sampling (CADIS) for optimization of a localized detector (tally) region (e.g., flux, dose, or reaction rate at a particular location) and (2) Forward Weighted CADIS (FW-CADIS) for optimizing distributions (e.g., mesh tallies over all or part of the problem space) or multiple localized detector regions (e.g., simultaneous optimization of two or more localized tally regions). The two methods have been implemented and automated in both the MAVRIC sequence of SCALE 6 and ADVANTG, a code that works with the MCNP code. As implemented, the methods utilize the results of approximate, fast-running 3-D discrete ordinates transport calculations (with the Denovo code) to generate consistent space- and energy-dependent source and transport (weight windows) biasing parameters. These methods and codes have been applied to many relevant and challenging problems, including calculations of PWR ex-core thermal detector response, dose rates throughout an entire PWR facility, site boundary dose from arrays of commercial spent fuel storage casks, radiation fields for criticality accident alarm system placement, and detector response for special nuclear material detection scenarios and nuclear well-logging tools. Substantial computational speed-ups, generally O(10{sup 2-4}), have been realized for all applications to date. This paper provides a brief review of the methods, their implementation, results of their application, and current development activities, as well as a considerable list of references for readers seeking more information about the methods and/or their applications.
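    The idea behind adjoint-driven source biasing can be seen in a one-dimensional toy: sample source particles from a distribution proportional to the adjoint importance and give each a compensating weight, and the detector estimate keeps the same mean while its variance collapses. Everything below (the exponential attenuation kernel, the numbers) is an illustrative assumption, not the Denovo/MAVRIC machinery; in this toy the importance is known exactly, so the biased estimator is zero-variance.

```python
import numpy as np

rng = np.random.default_rng(3)

# toy 1-D "transport": source particles born uniformly on [0, 1] contribute
# exp(-10*(1-x)) to a detector at x = 1 (pure attenuation)
def contribution(x):
    return np.exp(-10.0 * (1.0 - x))

N = 100_000
Z = (1.0 - np.exp(-10.0)) / 10.0          # exact detector response

# analog Monte Carlo: sample the true (uniform) source
analog = contribution(rng.uniform(0.0, 1.0, N))

# CADIS-style biasing: sample the source from a pdf proportional to the
# adjoint importance (here identical to the contribution itself), and give
# each particle the compensating weight p_true / p_biased
u = rng.uniform(0.0, 1.0, N)
xb = 1.0 + np.log(u * (1.0 - np.exp(-10.0)) + np.exp(-10.0)) / 10.0
w = Z / contribution(xb)
biased = w * contribution(xb)

print(analog.mean(), analog.std())        # noisy estimate of Z ~= 0.09995
print(biased.mean(), biased.std())        # exactly Z, (near-)zero variance
```

    Real problems only have an approximate adjoint (from the fast discrete ordinates solve), so the variance is reduced rather than eliminated; the reported O(10^2-4) speed-ups reflect how good that approximation is.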

  2. The effect of brain lesions on sound localization in complex acoustic environments.

    PubMed

    Zündorf, Ida C; Karnath, Hans-Otto; Lewald, Jörg

    2014-05-01

    Localizing sound sources of interest in cluttered acoustic environments--as in the 'cocktail-party' situation--is one of the most demanding challenges to the human auditory system in everyday life. In this study, stroke patients' ability to localize acoustic targets was directly compared between a single-source and a multi-source setup in the free sound field. Subsequent voxel-based lesion-behaviour mapping analyses were computed to uncover the brain areas associated with a deficit in localization in the presence of multiple distracter sound sources, rather than localization of individually presented sound sources. Analyses revealed a fundamental role of the right planum temporale in this task. The results from the left hemisphere were less straightforward, but suggested an involvement of inferior frontal and pre- and postcentral areas. These areas appear to be particularly involved in the spectrotemporal analyses crucial for effective segregation of multiple sound streams from various locations, beyond the currently known network for localization of isolated sound sources in otherwise silent surroundings.

  3. Potentials and problems of building detailed dust records using peat archives: An example from Store Mosse (the "Great Bog"), Sweden

    NASA Astrophysics Data System (ADS)

    Kylander, Malin E.; Martínez-Cortizas, Antonio; Bindler, Richard; Greenwood, Sarah L.; Mörth, Carl-Magnus; Rauch, Sebastien

    2016-10-01

    Mineral dust deposition is a process often overlooked in the northern mid-latitudes, despite its potential effects on ecosystems. These areas are often peat-rich, providing ample material for the reconstruction of past changes in atmospheric deposition. The highly organic (up to 99% in some cases) matrix of atmospherically fed mires, however, makes studying the actual dust particles (grain size, mineralogy) challenging. Here we explore some of the potentials and problems of using geochemical data from conservative, lithogenic elements (Al, Ga, Rb, Sc, Y, Zr, Th, Ti and REE) to build detailed dust records, using as an example the 8900-yr peat sequence from Store Mosse (the "Great Bog"), the largest mire complex in the boreo-nemoral region of southern Sweden. The four dust events recorded at this site were elementally distinct, suggesting different dominant mineral hosts. The oldest and longest event (6385-5300 cal yr BP) shows a clear signal of clay input but with increasing contributions of mica, feldspar and middle-REE-rich phosphate minerals over time. These clays were likely transported from a long-distance source (>100 km). While dust deposition was reduced during the second event (5300-4370 cal yr BP), this event is the most distinct in terms of its source character, with [Eu/Eu*]UCC revealing the input of plagioclase feldspar from a local source, possibly active during this stormier period. The third (2380-2200 cal yr BP) and fourth (1275-1080 cal yr BP) events are much shorter in duration, and the presence of clays and heavy minerals is inferred. Elemental mass accumulation rates reflect these changes in mineralogy, and the relative importance of the four dust events varies by element. The broad changes in major mineral hosts, grain size, source location and approximated net dust deposition rates observed in the earlier, longer dust events agree well with paleoclimatic changes observed in northern Europe.
The two most recent dust events are much shorter in duration, which in combination with evidence of their local and regional character, may explain why they have not been seen elsewhere.

  4. Developing a European urban health indicator system: results of EURO-URHIS 1.

    PubMed

    Patterson, Lesley; Heller, Richard; Robinson, Jude; Birt, Christopher A; van Ameijden, Erik; Bocsan, Ioan; White, Chris; Skalkidis, Yannis; Bothra, Vinay; Onyia, Ifeoma; Hellmeier, Wolfgang; Lyshol, Heidi; Gemmell, Isla; Spencer, Angela; Klumbiene, Jurate; Krampac, Igor; Rajnicova, Iveta; Macherianakis, Alexis; Bourke, Michael; Harrison, Annie; Verma, Arpana

    2017-05-01

    More than half of the world's population now live in cities, including over 70% in Europe. Cities bring opportunities but can be unhealthy places to live. The poorest urban dwellers live in the worst environments and are at the greatest risk of poor health outcomes. EURO-URHIS 1 set out to compile a cross-EU inventory of member states' use of measures of urban health in order to support policymakers and improve public health policy. Following a literature review to define terms and find an appropriate model to guide urban health research, EURO-URHIS Urban Areas (EUAs) in all EU member states except Luxembourg, as well as Croatia, Turkey, Macedonia, Iceland and Norway, were defined and selected in collaboration with project partners. Following piloting of the survey tool, the EURO-URHIS 45 data collection tool was sent out to contacts in all countries with identified EUAs, asking for data on 45 Urban Health Indicators (UHIs) and 10 other indicators. Sixty questionnaires were received from 30 countries, giving information on local health indicator availability, definitions and sources. Telephone interviews were also conducted with 14 respondents about their knowledge of sources of urban health data and the barriers or problems experienced when collecting the data. Most participants had little problem identifying the sources of data, though some found that data were not always routinely recorded, were held by diverse sources, or were not available at the local level. Some participants found the data collection instrument not to be user-friendly, and UHI definitions were sometimes unclear. However, the work has demonstrated that urban health and its measurement are of major relevance and importance for public health across Europe. The current study has constructed an initial system of European UHIs to meet the objectives of the project, but has also clearly demonstrated that further development work is required.
The importance and value of examining UHIs has been confirmed, and the scene has been set for further studies on this topic. © The Author 2015. Published by Oxford University Press on behalf of the European Public Health Association. All rights reserved.

  5. Using Simple Science to Influence Corporate Responsibility—A Lesson from Mercury (Hg)

    NASA Astrophysics Data System (ADS)

    Filippelli, G. M.

    2016-12-01

    Mercury (Hg) is a powerful neurotoxin with wide environmental distribution. Typical population exposure to Hg comes from fish consumption, with fish being the final ecological endpoint of Hg magnification after a series of biogeochemical processes. The emission of Hg from coal-fired power plants has been strongly implicated as a key source of environmental Hg, and thus has been the target of various public policy initiatives in the US and abroad. We conducted a study of Hg distribution in surface soils over a broad area of central Indiana (US) to understand the major sources of Hg to local fish, and to assess the potential role of policy compliance in reducing Hg. We found a plume-like distribution pattern for soil Hg, with values exceeding 400 ppb Hg in the heart of the plume, falling to a background concentration of about 30 ppb outside the plume. The plume covered hundreds of square kilometers, was centered directly over the downtown area of Indianapolis (a city of roughly 1 million inhabitants), and could be roughly backtracked to a source in the southwest corner of the city, coincident with a large coal-fired utility plant that has the highest reported Hg emissions in the area. Evidence of this link between a local Hg source and net Hg deposition, with related implications for Hg runoff to local streams, biomagnification in fish, and fish consumption advisories, was reported in regional newspapers and eventually published in scientific journals. Importantly, these findings were used by an NGO (the Beyond Coal campaign of the Indiana branch of the Sierra Club) at a critical time to influence the power plant owner's decision on whether to comply with the Hg policy rule by adding advanced scrubbing technology to the plant or by converting the plant to natural gas as the fuel source (a costlier choice upfront). 
The utility chose the latter option, and with the permanent elimination of Hg emissions, the net measurable effects should be lower soil Hg values and lower levels of Hg in fish. This simple science approach creates a local benefit to what is commonly considered a global, and thus seemingly intractable, problem.

  6. Matrix Completion Optimization for Localization in Wireless Sensor Networks for Intelligent IoT

    PubMed Central

    Nguyen, Thu L. N.; Shin, Yoan

    2016-01-01

    Localization in wireless sensor networks (WSNs) is one of the primary functions of the intelligent Internet of Things (IoT), which offers automatically discoverable services, and localization accuracy is a key measure of the quality of those services. In this paper, we develop a framework to solve the Euclidean distance matrix completion problem, an important technical problem for distance-based localization in WSNs. The sensor network localization problem is described as a low-dimensional Euclidean distance matrix completion problem with known nodes. The task is to find the sensor locations through recovery of the missing entries of a squared distance matrix when the dimension of the data is small compared to the number of data points. We solve a relaxed optimization problem using a modification of Newton’s method, where the cost function depends on the squared distance matrix. The solution obtained by our scheme has lower complexity and can perform better when used as an initial guess for an iterative local search in other, higher-precision localization schemes. Simulation results show the effectiveness of our approach. PMID:27213378
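    The recovery step above rests on the classical relationship between a squared Euclidean distance matrix and point coordinates. A minimal sketch of that underlying step (classical MDS on a complete, noiseless matrix; the paper's Newton-type relaxation for missing entries is not reproduced here):

```python
import numpy as np

def classical_mds(D2, dim=2):
    """Recover point coordinates (up to rotation/translation) from a
    complete squared Euclidean distance matrix via classical MDS."""
    n = D2.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n      # centering matrix
    B = -0.5 * J @ D2 @ J                    # double-centered Gram matrix
    w, V = np.linalg.eigh(B)
    idx = np.argsort(w)[::-1][:dim]          # keep the 'dim' largest eigenvalues
    return V[:, idx] * np.sqrt(np.maximum(w[idx], 0))

# Hypothetical sensor layout: the recovered geometry reproduces the distances.
X = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
Y = classical_mds(D2)
D2_rec = ((Y[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
print(np.allclose(D2, D2_rec))  # True
```

    With missing entries, a low-rank completion step (the focus of the paper) must first fill the matrix before this coordinate recovery can be applied.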

  7. Microseismic source locations with deconvolution migration

    NASA Astrophysics Data System (ADS)

    Wu, Shaojiang; Wang, Yibo; Zheng, Yikang; Chang, Xu

    2018-03-01

    Identifying and locating microseismic events are critical problems in hydraulic fracturing monitoring for unconventional resource exploration. In contrast to active seismic data, microseismic data are usually recorded with unknown source excitation time and source location. In this study, we introduce deconvolution migration by combining deconvolution interferometry with interferometric cross-correlation migration (CCM). This method avoids the need for the source excitation time and enhances both spatial resolution and robustness by eliminating the squared source-wavelet term from CCM. The proposed algorithm consists of the following three steps: (1) generate the virtual gathers by deconvolving the master trace from all other traces in the microseismic gather to remove the unknown excitation time; (2) migrate each virtual gather to obtain a single image of the source location; and (3) stack all of these images together to obtain the final estimated image of the source location. We test the proposed method on complex synthetic and field data sets from surface hydraulic fracturing monitoring, and compare the results with those obtained by interferometric CCM. The results demonstrate that the proposed method obtains a source-location image with 50 per cent higher spatial resolution and a more robust estimate with smaller localization errors, especially in the presence of velocity model errors. This method is also beneficial for source mechanism inversion and global seismology applications.
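    Step (1) of the workflow can be sketched as a regularized spectral division of each trace by the master trace; the wavelet, trace layout, and regularization constant below are illustrative assumptions, not the authors' parameters:

```python
import numpy as np

def deconvolve_gather(gather, master_idx=0, eps=1e-3):
    """Deconvolve the master trace from every trace via regularized
    spectral division, removing the common (unknown) source excitation
    time and wavelet while preserving relative traveltime differences."""
    G = np.fft.rfft(gather, axis=1)
    M = G[master_idx]
    denom = np.abs(M) ** 2 + eps * np.max(np.abs(M) ** 2)
    virtual = G * np.conj(M) / denom
    return np.fft.irfft(virtual, n=gather.shape[1], axis=1)

# Two traces sharing one wavelet, the second delayed by 5 samples: after
# deconvolution the relative delay survives, the absolute time does not.
n = 256
w = np.zeros(n); w[20] = 1.0; w[21] = 0.5      # simple assumed source wavelet
tr0 = w
tr1 = np.roll(w, 5)
virt = deconvolve_gather(np.vstack([tr0, tr1]))
print(np.argmax(virt[1]))  # peak at lag 5 -> relative delay preserved
```

    The migration and stacking steps (2) and (3) then operate on these virtual gathers in place of the raw recordings.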

  8. Finding common ground in large carnivore conservation: mapping contending perspectives

    USGS Publications Warehouse

    Mattson, D.J.; Byrd, K.L.; Rutherford, M.B.; Brown, S.R.; Clark, T.W.

    2006-01-01

    Reducing current conflict over large carnivore conservation and designing effective strategies that enjoy broad public support depend on a better understanding of the values, beliefs, and demands of those who are involved or affected. We conducted a workshop attended by diverse participants involved in conservation of large carnivores in the northern U.S. Rocky Mountains, and used Q methodology to elucidate participant perspectives regarding "problems" and "solutions". Q methodology employs qualitative and quantitative techniques to reveal the subjectivity in any situation. We identified four general perspectives for both problems and solutions, three of which (Carnivore Advocates, Devolution Advocates, and Process Reformers) were shared by participants across domains. Agency Empathizers (problems) and Economic Pragmatists (solutions) were not clearly linked. Carnivore and Devolution Advocates expressed diametrically opposed perspectives that legitimized different sources of policy-relevant information ("science" for Carnivore Advocates and "local knowledge" for Devolution Advocates). Despite differences, we identified potential common ground focused on respectful, persuasive, and creative processes that would build understanding and tolerance. © 2006 Elsevier Ltd. All rights reserved.

  9. [Quality of data on deaths from external causes in a medium-sized city in Minas Gerais State, Brazil].

    PubMed

    Melo, Cristiane Magalhães de; Bevilacqua, Paula Dias; Barletto, Marisa; França, Elisabeth Barboza

    2014-09-01

    This study aimed to assess the quality of data on deaths from external causes in Viçosa, Minas Gerais State, Brazil, from 2000 to 2009, and the completeness of the Mortality Information System (SIM). The data were obtained from the SIM of the Municipal Health Department, municipal police enquiries, and local newspaper articles, resulting in a databank with 495 deaths from external causes. The results showed a high proportion of deaths with indeterminate intent (21%) in the SIM, suggesting problems with quality of information. Comparison of data from the SIM and police department detected problems with coverage in the SIM (21%) and thus in the official statistics on mortality from accidents and violence. The results emphasize the importance of searches in other data sources to upgrade the SIM and expand its coverage, and especially the need for studies to identify and analyze problems faced by small and medium-sized cities in the production of mortality data.

  10. Link-prediction to tackle the boundary specification problem in social network surveys

    PubMed Central

    De Wilde, Philippe; Buarque de Lima-Neto, Fernando

    2017-01-01

    Diffusion processes in social networks often cause the emergence of global phenomena from individual behavior within a society. The study of those global phenomena and the simulation of those diffusion processes frequently require a good model of the global network. However, survey data and data from online sources are often restricted to single social groups or features, such as age groups, single schools, companies, or interest groups. Hence, a modeling approach is required that extrapolates the locally restricted data to a global network model. We tackle this Missing Data Problem using Link-Prediction techniques from social network research, network generation techniques from the area of Social Simulation, as well as a combination of both. We found that techniques employing less information may be more adequate to solve this problem, especially when data granularity is an issue. We validated the network models created with our techniques on a number of real-world networks, investigating degree distributions as well as the likelihood of links given the geographical distance between two nodes. PMID:28426826
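    As an illustration of the link-prediction building block mentioned above, here is a minimal sketch using the Jaccard similarity index on a toy adjacency structure (the index choice is generic; the study evaluates several techniques and does not prescribe this one):

```python
# Score every non-adjacent node pair by the Jaccard coefficient of their
# neighborhoods: |N(u) & N(v)| / |N(u) | N(v)|. High-scoring pairs are the
# candidate links used to extrapolate a locally restricted sample.
def jaccard_scores(adj):
    scores = {}
    nodes = sorted(adj)
    for i, u in enumerate(nodes):
        for v in nodes[i + 1:]:
            if v in adj[u]:
                continue  # edge already exists
            union = adj[u] | adj[v]
            if union:
                scores[(u, v)] = len(adj[u] & adj[v]) / len(union)
    return scores

# Toy network: a and b share both of their neighbors, so (a, b) is the
# only (and therefore top) candidate link.
adj = {"a": {"c", "d"}, "b": {"c", "d"},
       "c": {"a", "b", "d"}, "d": {"a", "b", "c"}}
scores = jaccard_scores(adj)
print(max(scores, key=scores.get))  # ('a', 'b')
```

    In the survey-extrapolation setting, such scores would be computed on the observed partial network and thresholded to propose the missing cross-group edges.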

  11. Clinical effectiveness of the obturator externus muscle injection in chronic pelvic pain patients.

    PubMed

    Kim, Shin Hyung; Kim, Do Hyeong; Yoon, Duck Mi; Yoon, Kyung Bong

    2015-01-01

    Because of its anatomical location and function, the obturator externus (OE) muscle can be a source of pain; however, this muscle is understudied as a possible target for therapeutic intervention in pain practice. In this retrospective observational study, we evaluated the clinical effectiveness of OE muscle injection with a local anesthetic in chronic pelvic pain patients with suspected OE muscle problems. Twenty-three patients with localized tenderness on the inferolateral side of the pubic tubercle accompanied by pain in the groin, anteromedial thigh, or hip were studied. After identifying the OE with contrast dye under fluoroscopic guidance, 5 to 8 mL of 0.3% lidocaine was injected. Pain scores were assessed before and after injection; patient satisfaction was also assessed. Mean pain score decreased by 44.7% (6.6 ± 1.8 to 3.5 ± 0.9, P < 0.001) 2 weeks after OE muscle injection as compared with the pain score before injection. In addition, 82% of patients (19 of 23) reported excellent or good satisfaction in the 2 weeks after injection. No patients reported complications from OE muscle injection. Fluoroscopy-guided injection of the OE muscle with local anesthetic reduced pain scores and led to a high level of satisfaction at short-term follow-up in patients with suspected OE muscle problems. The results of this study suggest that OE muscle injection may be a valuable therapeutic option for a select group of chronic pelvic pain patients who present with localized tenderness in the OE muscle accompanied by groin, anteromedial thigh, or hip pain. © 2013 World Institute of Pain.

  12. Effective trauma center partnerships to address firearm injury: a new paradigm.

    PubMed

    Richmond, Therese S; Schwab, C William; Riely, Jeaneen; Branas, Charles C; Cheney, Rose; Dunfey, Maura

    2004-06-01

    Firearm violence is the second leading cause of injury-related death. This study examined the use of local trauma centers as lead organizations in their communities to address firearm injury. Three trauma centers in cities with populations less than 100,000 were linked with a university-based firearm injury research center. A trauma surgeon director and coordinator partnered with communities, recruited and directed advisory boards, established a local firearm injury surveillance system, and informed communities using community-specific profiles. Primary process and outcome measures included completeness of data, development of community-specific profiles, number of data-driven consumer media pieces, number of meetings to inform policy makers, and an analysis of problems encountered. Local trauma centers in smaller communities implemented a firearm injury surveillance system, produced community-specific injury profiles, and engaged community leaders and policy makers to address firearm injury. Community-specific profiles demonstrated consistent firearm suicide rates (6.58-6.82 per 100,000) but variation in firearm homicide rates (1.08-12.5 per 100,000) across sites. There were 63 data-driven media pieces and 18 forums to inform community leaders and policy makers. Completeness of data elements ranged from 57.1% to 100%. Problems experienced were disconnected data sources, multiple data owners, potential for political fallout, limited trauma center data, skills sets of medical professionals, and sustainability. Trauma centers, when provided resources and support, with the model described, can function as lead organizations in partnering with the community to acquire and use community-specific data for local firearm injury prevention.

  13. A source study of atmospheric polycyclic aromatic hydrocarbons in Shenzhen, South China.

    PubMed

    Liu, Guoqing; Tong, Yongpeng; Luong, John H T; Zhang, Hong; Sun, Huibin

    2010-04-01

    Air pollution has become a serious problem in the Pearl River Delta, South China, particularly in winter due to the local micrometeorology. In this study, atmospheric polycyclic aromatic hydrocarbons (PAHs) were monitored weekly in Shenzhen during the winter of 2006. Results indicated that the detected PAHs were mainly vapor-phase compounds, with phenanthrene dominant. The average vapor-phase and particle-phase PAH concentrations in Shenzhen were 101.3 and 26.7 ng m⁻³, respectively. Meteorological conditions had a strong effect on PAH concentrations. The higher PAH concentrations observed during haze episodes might result from the accumulation of pollutants under a lowered boundary layer, slower wind speeds, and prolonged dry conditions. The sources of PAHs in the air were estimated by principal component analysis in combination with diagnostic ratios. Vehicle exhaust was the major PAH source in Shenzhen, accounting for 50.0% of total PAH emissions, whereas coal combustion and solid waste incineration contributed 29.4% and 20.6% of the total PAH concentration, respectively. The results clearly indicate that the growing number of solid waste incinerators has become an important new PAH source in this region.

  14. Machine Learning Seismic Wave Discrimination: Application to Earthquake Early Warning

    NASA Astrophysics Data System (ADS)

    Li, Zefeng; Meier, Men-Andrin; Hauksson, Egill; Zhan, Zhongwen; Andrews, Jennifer

    2018-05-01

    Performance of earthquake early warning systems suffers from false alerts caused by local impulsive noise from natural or anthropogenic sources. To mitigate this problem, we train a generative adversarial network (GAN) to learn the characteristics of first-arrival earthquake P waves, using 300,000 waveforms recorded in southern California and Japan. We apply the GAN critic as an automatic feature extractor and train a Random Forest classifier with about 700,000 earthquake and noise waveforms. We show that the discriminator can recognize 99.2% of the earthquake P waves and 98.4% of the noise signals. This state-of-the-art performance is expected to reduce significantly the number of false triggers from local impulsive noise. Our study demonstrates that GANs can discover a compact and effective representation of seismic waves, which has the potential for wide applications in seismology.

  15. Step scaling and the Yang-Mills gradient flow

    NASA Astrophysics Data System (ADS)

    Lüscher, Martin

    2014-06-01

    The use of the Yang-Mills gradient flow in step-scaling studies of lattice QCD is expected to lead to results of unprecedented precision. Step scaling is usually based on the Schrödinger functional, where time ranges over an interval [0 , T] and all fields satisfy Dirichlet boundary conditions at time 0 and T. In these calculations, potentially important sources of systematic errors are boundary lattice effects and the infamous topology-freezing problem. The latter is here shown to be absent if Neumann instead of Dirichlet boundary conditions are imposed on the gauge field at time 0. Moreover, the expectation values of gauge-invariant local fields at positive flow time (and of other well localized observables) that reside in the center of the space-time volume are found to be largely insensitive to the boundary lattice effects.

  16. Associating Fast Radio Bursts with Extragalactic Radio Sources: General Methodology and a Search for a Counterpart to FRB 170107

    NASA Astrophysics Data System (ADS)

    Eftekhari, T.; Berger, E.; Williams, P. K. G.; Blanchard, P. K.

    2018-06-01

    The discovery of a repeating fast radio burst (FRB) has led to the first precise localization, an association with a dwarf galaxy, and the identification of a coincident persistent radio source. However, further localizations are required to determine the nature of FRBs, the sources powering them, and the possibility of multiple populations. Here we investigate the use of associated persistent radio sources to establish FRB counterparts, taking into account the localization area and the source flux density. Due to the lower areal number density of radio sources compared to faint optical sources, robust associations can be achieved for less precise localizations as compared to direct optical host galaxy associations. For generally larger localizations that preclude robust associations, the number of candidate hosts can be reduced based on the ratio of radio-to-optical brightness. We find that confident associations with sources having a flux density of ∼0.01–1 mJy, comparable to the luminosity of the persistent source associated with FRB 121102 over the redshift range z ≈ 0.1–1, require FRB localizations of ≲20″. We demonstrate that even in the absence of a robust association, constraints can be placed on the luminosity of an associated radio source as a function of localization and dispersion measure (DM). For DM ≈ 1000 pc cm⁻³, an upper limit comparable to the luminosity of the FRB 121102 persistent source can be placed if the localization is ≲10″. We apply our analysis to the case of the ASKAP FRB 170107, using optical and radio observations of the localization region. We identify two candidate hosts based on a radio-to-optical brightness ratio of ≳100. We find that if one of these is indeed associated with FRB 170107, the resulting radio luminosity (10²⁹ to 4 × 10³⁰ erg s⁻¹ Hz⁻¹, as constrained from the DM value) is comparable to the luminosity of the FRB 121102 persistent source.
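    The association logic above rests on a chance-coincidence estimate: the expected number of unrelated sources inside a localization region of radius r is N = σ π r², where σ is the areal density of sources above the flux limit. A minimal sketch with an assumed placeholder density (not the paper's measured value):

```python
import math

def expected_background(sigma_per_sq_arcsec, radius_arcsec):
    """Expected number of unrelated sources in a circular localization
    region: N = sigma * pi * r^2. N << 1 permits a confident association."""
    return sigma_per_sq_arcsec * math.pi * radius_arcsec ** 2

# With an assumed density of ~1e-5 sources per square arcsecond, a 10"
# localization yields N well below one chance interloper:
print(expected_background(1e-5, 10.0) < 1)  # True
```

    The same expression explains why radio counterparts tolerate coarser localizations than optical ones: the lower areal density σ of radio sources keeps N small at larger r.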

  17. Genetic Local Search for Optimum Multiuser Detection Problem in DS-CDMA Systems

    NASA Astrophysics Data System (ADS)

    Wang, Shaowei; Ji, Xiaoyong

    Optimum multiuser detection (OMD) in direct-sequence code-division multiple access (DS-CDMA) systems is an NP-complete problem. In this paper, we present a genetic local search (GLS) algorithm, which consists of an evolution strategy framework and a local improvement procedure. The evolution strategy searches the space of feasible, locally optimal solutions only. A fast iterated local search algorithm, which exploits the particular structure of the OMD problem, produces local optima with great efficiency. Computer simulations show that the bit error rate (BER) performance of the GLS surpasses that of other multiuser detectors in all cases discussed. The computation time is polynomial in the number of users.
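    The local improvement procedure can be sketched as greedy bit-flipping on a standard form of the OMD likelihood metric, omega(b) = 2 bᵀy − bᵀRb (unit amplitudes assumed); the random spreading codes and the noiseless setup below are illustrative assumptions, and the evolution-strategy wrapper is omitted:

```python
import numpy as np

def local_search(y, R, b):
    """Greedy single-bit-flip ascent on omega(b) = 2 b^T y - b^T R b,
    where y are matched-filter outputs and R is the signature correlation
    matrix. Returns a single-flip local optimum."""
    b = b.copy()
    improved = True
    while improved:
        improved = False
        for k in range(len(b)):
            # Exact change in omega from flipping bit k
            delta = -4 * b[k] * (y[k] - R[k] @ b + R[k, k] * b[k])
            if delta > 0:
                b[k] = -b[k]
                improved = True
    return b

rng = np.random.default_rng(0)
b_true = rng.choice([-1.0, 1.0], size=8)          # transmitted bits
S = rng.standard_normal((32, 8)) / np.sqrt(32)    # random spreading codes
R = S.T @ S
y = R @ b_true                                    # noiseless matched-filter output
b_hat = local_search(y, R, np.ones(8))            # single-flip local optimum
```

    In the GLS of the paper, many such local optima are generated and recombined by the evolution strategy, which restricts its search to this locally optimal subspace.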

  18. An impact source localization technique for a nuclear power plant by using sensors of different types.

    PubMed

    Choi, Young-Chul; Park, Jin-Ho; Choi, Kyoung-Sik

    2011-01-01

    In a nuclear power plant, a loose part monitoring system (LPMS) provides information on the location and the mass of a loosened or detached metal part impacted onto the inner surface of the primary pressure boundary. Typically, accelerometers are mounted on the surface of a reactor vessel to localize the impact location caused by metallic substances striking the reactor system. However, in some cases, the number of accelerometers is not sufficient to estimate the impact location precisely. In such a case, one useful approach is to utilize other types of sensors that can measure the vibration of the reactor structure. For example, acoustic emission (AE) sensors are installed on the reactor structure to detect leakage or cracks in the primary pressure boundary. However, accelerometers and AE sensors have different frequency ranges; the frequency of interest of AE sensors is higher than that of accelerometers. In this paper, we propose a method of impact source localization that uses accelerometer signals and AE signals simultaneously. The main concept of impact location estimation is based on the arrival time difference of the impact stress wave between different sensor locations. However, it is difficult to find the arrival time difference between these sensors because their primary frequency ranges differ. To overcome this problem, we use the phase delays of the envelope of the impact signals, because the impact signals from the accelerometer and the AE sensor are similar in overall shape (envelope). To verify the proposed method, we performed experiments on a reactor mock-up model and a real nuclear power plant. The experimental results demonstrate that the method enhances the reliability and precision of the impact source localization. Therefore, if the proposed method is applied to a nuclear power plant, it provides the effect of additional installed sensors. Crown Copyright © 2010. Published by Elsevier Ltd. All rights reserved.
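    The envelope-based delay estimate described above can be sketched with an FFT-based analytic signal; the synthetic burst and carrier frequencies below are stand-ins for real accelerometer and AE waveforms, not plant data:

```python
import numpy as np

def envelope(x):
    """Signal envelope |analytic signal|, with the Hilbert transform
    implemented by zeroing negative frequencies in the FFT."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1
    h[1:(n + 1) // 2] = 2
    if n % 2 == 0:
        h[n // 2] = 1
    return np.abs(np.fft.ifft(X * h))

def envelope_delay(x, y):
    """Delay (in samples) by which y's envelope lags x's envelope,
    found at the peak of the envelopes' cross-correlation."""
    ex = envelope(x) - envelope(x).mean()
    ey = envelope(y) - envelope(y).mean()
    c = np.correlate(ey, ex, mode="full")
    return int(np.argmax(c)) - (len(x) - 1)

# Same impact envelope on two sensors with disjoint carrier bands
# ("accelerometer" low, "AE" high), the second arriving 40 samples later:
t = np.arange(1024)
burst = np.exp(-0.5 * ((t - 200) / 30.0) ** 2)
x = burst * np.sin(2 * np.pi * 0.10 * t)
y = np.roll(burst, 40) * np.sin(2 * np.pi * 0.25 * t)
print(envelope_delay(x, y))  # ~40 despite the different carriers
```

    This is the key property exploited above: the carriers differ between sensor types, but the envelopes share the arrival-time structure needed for localization.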

  19. Near-Field Noise Source Localization in the Presence of Interference

    NASA Astrophysics Data System (ADS)

    Liang, Guolong; Han, Bo

    In order to suppress the influence of interference sources on noise source localization in the near field, near-field broadband source localization in the presence of interference is studied. An oblique projection is constructed from the array measurements and the steering manifold of the interference sources, and is used to filter the interference signals out. The 2D-MUSIC algorithm is applied to the data at each frequency, and the per-frequency results are then averaged to locate the broadband noise sources. Simulations show that this method suppresses the interference sources effectively and is capable of locating a source that lies in the same direction as an interference source.
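    The interference-nulling step can be sketched directly from the textbook oblique-projection operator E_AB = A (Aᴴ P_B⊥ A)⁻¹ Aᴴ P_B⊥, which passes signals in range(A) while exactly nulling range(B); the random subspaces below are generic stand-ins for near-field steering manifolds:

```python
import numpy as np

def oblique_projection(A, B):
    """Oblique projector onto range(A) along range(B):
    E = A (A^H Pb_perp A)^{-1} A^H Pb_perp, with Pb_perp = I - B B^+."""
    PB = B @ np.linalg.pinv(B)               # orthogonal projector onto range(B)
    PBp = np.eye(B.shape[0]) - PB            # its orthogonal complement
    return A @ np.linalg.inv(A.conj().T @ PBp @ A) @ A.conj().T @ PBp

rng = np.random.default_rng(1)
A = rng.standard_normal((16, 2))             # source steering subspace
B = rng.standard_normal((16, 3))             # interference steering subspace
E = oblique_projection(A, B)
print(np.allclose(E @ A, A), np.allclose(E @ B, 0))  # True True
```

    After applying E to the array snapshots, the interference contribution is removed and 2D-MUSIC can be run per frequency on the filtered data.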

  20. Critical Source Area Delineation: The representation of hydrology in effective erosion modeling.

    NASA Astrophysics Data System (ADS)

    Fowler, A.; Boll, J.; Brooks, E. S.; Boylan, R. D.

    2017-12-01

    Despite decades of conservation and millions of conservation dollars, nonpoint source sediment loading associated with agricultural disturbance continues to be a significant problem in many parts of the world. Local and national conservation organizations are interested in targeting critical source areas for control strategy implementation. Currently, conservation practices are selected and located based on Revised Universal Soil Loss Equation (RUSLE) hillslope erosion modeling, and the Natural Resources Conservation Service will soon be transitioning to the Water Erosion Prediction Project (WEPP) model for the same purpose. We present an assessment of critical source areas targeted with RUSLE, WEPP, and a regionally validated hydrology model, the Soil Moisture Routing (SMR) model, to compare the location of critical areas for sediment loading and the effectiveness of control strategies. The three models are compared for the Palouse dryland cropping region of the Inland Northwest, with uncalibrated analyses of the Kamiache watershed using publicly available soils, land-use, and long-term simulated climate data. Critical source areas were mapped, and the side-by-side comparison exposes the differences in the location and timing of runoff and erosion predictions. RUSLE results appear most sensitive to slope-driven processes associated with infiltration excess. SMR captured saturation-excess-driven runoff events located at the toe-slope position, while WEPP was able to capture both infiltration excess and saturation excess processes depending on soil type and management. A methodology is presented for down-scaling basin-level screening to the hillslope management scale for local control strategies. Information on the location of runoff and erosion, broken down by runoff mechanism, is critical for effective treatment and conservation.
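    For reference, RUSLE's hillslope estimate is the product of five factors, A = R · K · LS · C · P. A minimal sketch with purely hypothetical factor values (not Palouse calibrations):

```python
# RUSLE average annual soil loss: A = R * K * LS * C * P
def rusle(R, K, LS, C, P):
    """R: rainfall erosivity, K: soil erodibility, LS: slope length/steepness,
    C: cover-management, P: support practice. A takes the units implied
    by the R and K factors used."""
    return R * K * LS * C * P

# Hypothetical hillslope, illustrative values only:
A = rusle(R=20.0, K=0.35, LS=1.8, C=0.2, P=0.9)
print(round(A, 3))  # 2.268
```

    The comparison above hinges on what this product cannot represent: RUSLE has no explicit runoff-generation mechanism, which is why SMR and WEPP locate saturation-excess critical areas that RUSLE misses.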

  1. Needs and perspectives of air quality improvement in Cracow

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wertz, J.

    1995-12-31

    In the 1970s and 80s the Cracow province belonged to the regions with the highest concentrations of air pollutants in Europe. The majority of inhabitants, alarmed by the continuously worsening condition of the environment, were of the opinion that this situation was caused by the industrial plants located within the Cracow area (town and/or province) as well as by the advection of pollutants from the neighboring Katowice province, the most industrialized region of Poland. The results of two large measurement series carried out in Cracow in 1984 and 1986 were surprising for the majority of the people. It appeared that 40% of the pollution came from local coal-fired boiler houses and household coal-fired stoves. These emission sources, situated at relatively low altitude above ground level, were called low emission sources. The quantity of such sources was estimated: the number of local boiler houses was close to 1,600, while the total number of household tile stoves reached 200,000. A full inventory of these sources drawn up in 1989-90 confirmed the number of existing boiler houses, while the verified total number of tile stoves was 130,000. In 1986, the elimination of low emission sources was adopted as one of the strategic directions of action in the field of air quality protection. The following two solutions to this problem were accepted for implementation: (1) boiler house elimination by means of an administrative, compulsory decision, and (2) co-financing, or even complete financing from the environmental protection fund, of the capital investment related to the elimination of a boiler house or its conversion to another mode of heating (gas, fuel oil, or connection to the municipal district heating loop). These two solutions are discussed.

  2. Improving the Nulling Beamformer Using Subspace Suppression.

    PubMed

    Rana, Kunjan D; Hämäläinen, Matti S; Vaina, Lucia M

    2018-01-01

    Magnetoencephalography (MEG) captures the magnetic fields generated by neuronal current sources with sensors outside the head. In MEG analysis these current sources are estimated from the measured data to identify the locations and time courses of neural activity. Since there is no unique solution to this so-called inverse problem, multiple source estimation techniques have been developed. The nulling beamformer (NB), a modified form of the linearly constrained minimum variance (LCMV) beamformer, is specifically used in the process of inferring interregional interactions and is designed to eliminate shared signal contributions, or cross-talk, between regions of interest (ROIs) that would otherwise interfere with the connectivity analyses. The nulling beamformer applies the truncated singular value decomposition (TSVD) to remove small signal contributions from a ROI to the sensor signals. However, ROIs with strong crosstalk will have high separating power in the weaker components, which may be removed by the TSVD operation. To address this issue we propose a new method, the nulling beamformer with subspace suppression (NBSS). This method, controlled by a tuning parameter, reweights the singular values of the gain matrix mapping from source to sensor space such that components with high overlap are reduced. By doing so, we are able to measure signals between nearby source locations with limited cross-talk interference, allowing for reliable cortical connectivity analysis between them. In two simulations, we demonstrated that NBSS reduces cross-talk while retaining ROIs' signal power, and has higher separating power than both the minimum norm estimate (MNE) and the nulling beamformer without subspace suppression. We also showed that NBSS successfully localized the auditory M100 event-related field in primary auditory cortex, measured from a subject undergoing an auditory localizer task, and suppressed cross-talk in a nearby region in the superior temporal sulcus.
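    The TSVD operation and the reweighting idea can be contrasted on a toy gain matrix; the soft Tikhonov-style weight below is one plausible reading of the subspace-suppression reweighting, controlled by a tuning parameter lam, and is not the paper's exact formula:

```python
import numpy as np

def tsvd(G, k):
    """Plain truncated SVD: keep the k largest singular values, zero the rest
    (the nulling beamformer's original crosstalk-removal step)."""
    U, s, Vt = np.linalg.svd(G, full_matrices=False)
    return (U[:, :k] * s[:k]) @ Vt[:k]

def soft_reweight(G, lam):
    """Assumed NBSS-style alternative: shrink weak (highly overlapping)
    components smoothly instead of truncating them outright."""
    U, s, Vt = np.linalg.svd(G, full_matrices=False)
    w = s**2 / (s**2 + lam * s.max()**2)     # assumed Tikhonov-style weights
    return (U * (s * w)) @ Vt

# Toy gain matrix with one weak, nearly shared component (last diagonal):
G = np.diag([1.0, 0.5, 0.01])
print(np.diag(tsvd(G, 2)).round(3))              # weak component removed
print(np.diag(soft_reweight(G, 1e-3)).round(3))  # weak component shrunk
```

    The motivation stated above is visible here: hard truncation discards the weak component entirely, even though for strongly overlapping ROIs that component may carry most of the separating power.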

  3. [Meeting of calorie and protein requirements in developing countries].

    PubMed

    Cremer, H D

    1976-03-01

    While we have definite ideas regarding the requirements for energy and proteins, we have to rely on statistical data to judge whether these requirements are actually met. In spite of the common unreliability of these data, they serve as a useful indicator of the general situation. The supply of nutritional energy covers the average energy requirement for the populations of most developing countries. However, the insufficient supply of high-grade protein remains the main nutritional problem of most of these countries. To solve these problems, the following possibilities exist: ensuring a sufficient calorie supply so that valuable protein is not wasted for the production of energy; improving the supply of protein-rich staple foods; supplementing food with local protein-rich products; introducing livestock only when the available foodstuff cannot be used directly for human consumption; improving staple foods by introducing higher-grade strains of cereals; producing protein-rich vegetable mixtures from local foodstuffs; and utilizing synthetic amino acids and new protein sources. Meeting the requirements for calories, and especially for high-grade protein, is possible only through international and interdisciplinary efforts of all experts concerned with nutrition in any way.

  4. Community perceptions of mental distress in a post-conflict setting: a qualitative study in Burundi.

    PubMed

    Familiar, Itziar; Sharma, Sonali; Ndayisaba, Herman; Munyentwari, Norbert; Sibomana, Seleus; Bass, Judith K

    2013-01-01

    There is scant documentation of the mental health characteristics of low-income communities recovering from armed conflict. To prepare for quantitative health surveys and health service planning in Burundi, we implemented a qualitative study to explore concepts related to mental distress and coping among adults. Mental distress was defined as problems related to feelings, thinking, behaviour and physical stress. Using free listing and key informant interviews with a range of community members, we triangulated data to identify salient issues. Thirty-eight free list respondents and 23 key informants were interviewed in 5 rural communities in Burundi using 2 interview guides from the WHO Toolkit for Mental Health Assessment in Humanitarian Settings. Based on these interviews, we identified four locally defined idioms/terms relating to mental distress: ihahamuka (anxiety spectrum illnesses), ukutiyemera (a mix of depression and anxiety-like syndrome), akabonge (depression/grief-like syndrome) and kwamana ubwoba burengeje (anxiety-like syndrome). Mental distress terms were perceived as important problems impacting community development. Affected individuals sought help from several sources within the community, including community leaders and traditional healers. We discuss how local expressions of distress can be used to tailor health research and service integration from the bottom up.

  5. MIAQuant, a novel system for automatic segmentation, measurement, and localization comparison of different biomarkers from serialized histological slices.

    PubMed

    Casiraghi, Elena; Cossa, Mara; Huber, Veronica; Rivoltini, Licia; Tozzi, Matteo; Villa, Antonello; Vergani, Barbara

    2017-11-02

    In clinical practice, automatic image analysis methods that quantify histological results objectively and reproducibly are becoming increasingly necessary and widespread. Although several commercial software products are available for this task, they are inflexible and are provided as black boxes without modifiable source code. To overcome these problems, we used the widely adopted MATLAB platform to develop an automatic method, MIAQuant, for the analysis of histochemical and immunohistochemical images stained with various methods and acquired by different tools. It automatically extracts and quantifies markers characterized by various colors and shapes; furthermore, it aligns contiguous tissue slices stained with different markers and overlaps them in differing colors for visual comparison of their localization. Application of MIAQuant in clinical research fields, such as oncology and cardiovascular disease studies, has proven its efficacy, robustness and flexibility with respect to various problems; we highlight that this flexibility makes MIAQuant an important tool for basic research, where needs are constantly changing. The MIAQuant software and its user manual are freely available for clinical studies, pathological research, and diagnosis.

  6. Do Local Contributions Affect the Efficacy of Public Primary Schools?

    ERIC Educational Resources Information Center

    Jimenez, Emmanuel; Paqueo, Vicente

    1996-01-01

    Uses cost, financial sources, and student achievement data from Philippine primary schools (financed primarily from central sources) to discover if financial decentralization leads to more efficient schools. Schools that rely more heavily on local sources (contributions from local school boards, municipal government, parent-teacher associations,…

  7. Axisymmetric charge-conservative electromagnetic particle simulation algorithm on unstructured grids: Application to microwave vacuum electronic devices

    NASA Astrophysics Data System (ADS)

    Na, Dong-Yeop; Omelchenko, Yuri A.; Moon, Haksu; Borges, Ben-Hur V.; Teixeira, Fernando L.

    2017-10-01

    We present a charge-conservative electromagnetic particle-in-cell (EM-PIC) algorithm optimized for the analysis of vacuum electronic devices (VEDs) with cylindrical symmetry (axisymmetry). We exploit the axisymmetry present in the device geometry, fields, and sources to reduce the dimensionality of the problem from 3D to 2D. Further, we employ 'transformation optics' principles to map the original problem in polar coordinates with metric tensor diag(1, ρ², 1) to an equivalent problem on a Cartesian metric tensor diag(1, 1, 1), with an effective (artificial) inhomogeneous medium introduced. The resulting problem in the meridian (ρz) plane is discretized on an unstructured 2D mesh considering TEϕ-polarized fields. Electromagnetic field and source variables (node-based charges and edge-based currents) are expressed as differential forms of various degrees and discretized using Whitney forms. Using leapfrog time integration, we obtain a mixed E-B finite-element time-domain scheme for the fully discrete Maxwell's equations. We achieve a local and explicit time update for the field equations by employing the sparse approximate inverse (SPAI) algorithm. Interpolating field values to particle positions for solving the Newton-Lorentz equations of motion is also done via Whitney forms. Particles are advanced using the Boris algorithm with relativistic correction. A recently introduced charge-conserving scatter scheme tailored for 2D unstructured grids is used in the scatter step. The algorithm is validated on cylindrical cavity and space-charge-limited cylindrical diode problems. We use the algorithm to investigate the physical performance of VEDs designed to harness particle bunching effects arising from coherent (resonance) Cerenkov electron beam interactions within micro-machined slow wave structures.
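
The particle push mentioned above can be illustrated with the standard relativistic Boris step. This is a generic textbook sketch, not the paper's axisymmetric implementation; the Whitney-form field gather and the scatter step are omitted.

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def boris_push(u, E, B, q, m, dt):
    """One relativistic Boris step for the normalized momentum u = gamma*v.
    E and B are the fields already interpolated to the particle position."""
    u_minus = u + (q * dt / (2.0 * m)) * E                # first half electric kick
    gamma = np.sqrt(1.0 + np.dot(u_minus, u_minus) / C**2)
    t = (q * dt / (2.0 * m * gamma)) * B                  # rotation vector
    u_prime = u_minus + np.cross(u_minus, t)
    s = 2.0 * t / (1.0 + np.dot(t, t))
    u_plus = u_minus + np.cross(u_prime, s)               # magnetic rotation
    return u_plus + (q * dt / (2.0 * m)) * E              # second half electric kick
```

With E = 0 the step is a pure rotation, so |u| (and hence the particle energy) is conserved exactly; this property is what makes the Boris scheme attractive for long PIC runs.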

  8. What Rural Women Want the Public Health Community to Know About Access to Healthful Food: A Qualitative Study, 2011.

    PubMed

    Carnahan, Leslie R; Zimmermann, Kristine; Peacock, Nadine R

    2016-04-28

    Living in a rural food desert has been linked to poor dietary habits. Understanding community perspectives about available resources and feasible solutions may inform strategies to improve food access in rural food deserts. The objective of our study was to identify resources and solutions to the food access problems of women in rural, southernmost Illinois. Fourteen focus groups with women (n = 110 participants) in 4 age groups were conducted in a 7-county region as part of a community assessment focused on women's health. We used content analysis with inductive and deductive approaches to explore food access barriers and facilitators. Similar to participants in previous studies, participants in our study reported insufficient local food sources, which they believe contributed to poor dietary habits, high food prices, and the need to travel for healthful food. Participants identified existing local activities and resources that help to increase access, such as home and community gardens, food pantries, and public transportation, as well as local solutions, such as improving nutrition education and public transportation options. Multilevel and collaborative strategies and policies are needed to address food access barriers in rural communities. At the individual level, education may help residents navigate geographic and economic barriers. Community solutions include collaborative strategies to increase availability of healthful foods through traditional and nontraditional food sources. Policy change is needed to promote local agriculture and distribution of privately grown food. Understanding needs and strengths in rural communities will ensure responsive and effective strategies to improve the rural food environment.

  9. What Rural Women Want the Public Health Community to Know About Access to Healthful Food: A Qualitative Study, 2011

    PubMed Central

    Zimmermann, Kristine; Peacock, Nadine R.

    2016-01-01

    Introduction Living in a rural food desert has been linked to poor dietary habits. Understanding community perspectives about available resources and feasible solutions may inform strategies to improve food access in rural food deserts. The objective of our study was to identify resources and solutions to the food access problems of women in rural, southernmost Illinois. Methods Fourteen focus groups with women (n = 110 participants) in 4 age groups were conducted in a 7-county region as part of a community assessment focused on women’s health. We used content analysis with inductive and deductive approaches to explore food access barriers and facilitators. Results Similar to participants in previous studies, participants in our study reported insufficient local food sources, which they believe contributed to poor dietary habits, high food prices, and the need to travel for healthful food. Participants identified existing local activities and resources that help to increase access, such as home and community gardens, food pantries, and public transportation, as well as local solutions, such as improving nutrition education and public transportation options. Conclusion Multilevel and collaborative strategies and policies are needed to address food access barriers in rural communities. At the individual level, education may help residents navigate geographic and economic barriers. Community solutions include collaborative strategies to increase availability of healthful foods through traditional and nontraditional food sources. Policy change is needed to promote local agriculture and distribution of privately grown food. Understanding needs and strengths in rural communities will ensure responsive and effective strategies to improve the rural food environment. PMID:27126555

  10. SoundCompass: A Distributed MEMS Microphone Array-Based Sensor for Sound Source Localization

    PubMed Central

    Tiete, Jelmer; Domínguez, Federico; da Silva, Bruno; Segers, Laurent; Steenhaut, Kris; Touhafi, Abdellah

    2014-01-01

    Sound source localization is a well-researched subject with applications ranging from localizing sniper fire in urban battlefields to cataloging wildlife in rural areas. One critical application is the localization of noise pollution sources in urban environments, due to an increasing body of evidence linking noise pollution to adverse effects on human health. Current noise mapping techniques often fail to accurately identify noise pollution sources, because they rely on the interpolation of a limited number of scattered sound sensors. Aiming to produce accurate noise pollution maps, we developed the SoundCompass, a low-cost sound sensor capable of measuring local noise levels and sound field directionality. Our first prototype is composed of a sensor array of 52 microelectromechanical-system (MEMS) microphones, an inertial measurement unit and a low-power field-programmable gate array (FPGA). This article presents the SoundCompass’s hardware and firmware design together with a data fusion technique that exploits the sensing capabilities of the SoundCompass in a wireless sensor network to localize noise pollution sources. Live tests produced a sound source localization accuracy of a few centimeters in a 25-m2 anechoic chamber, while simulation results accurately located up to five broadband sound sources in a 10,000-m2 open field. PMID:24463431
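
The directional scan such a microphone array performs can be sketched as a frequency-domain delay-and-sum search over azimuth. This is a simplified far-field sketch with an assumed four-microphone geometry; the SoundCompass's actual 52-microphone layout and FPGA beamformer are not reproduced here.

```python
import numpy as np

def delay_and_sum_doa(signals, mic_xy, fs, c=343.0, n_angles=360):
    """Scan candidate azimuths: for each, undo the plane-wave delays implied
    by the microphone geometry, sum the aligned channels, and return the
    azimuth (radians) that maximizes the stacked output power."""
    n_mics, n_samp = signals.shape
    freqs = np.fft.rfftfreq(n_samp, 1.0 / fs)
    S = np.fft.rfft(signals, axis=1)
    angles = np.linspace(0.0, 2.0 * np.pi, n_angles, endpoint=False)
    powers = np.empty(n_angles)
    for i, th in enumerate(angles):
        u = np.array([np.cos(th), np.sin(th)])   # unit vector toward candidate source
        tau = mic_xy @ u / c                     # per-microphone arrival advance
        aligned = S * np.exp(-2j * np.pi * freqs[None, :] * tau[:, None])
        powers[i] = np.sum(np.abs(aligned.sum(axis=0)) ** 2)
    return angles[int(np.argmax(powers))]
```

At the true azimuth the channels add coherently, so the stacked power is maximal there; a small array trades main-lobe width for size, which is why the prototype uses 52 microphones rather than 4.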

  11. How to Manual: How to Update and Enhance Your Local Source Water Protection Assessments

    EPA Pesticide Factsheets

    Describes opportunities for improving source water assessments performed under Section 1453 of the Safe Drinking Water Act. It includes local delineations, potential contaminant source inventories, and susceptibility determinations for source water assessments.

  12. Chemical Source Inversion using Assimilated Constituent Observations in an Idealized Two-dimensional System

    NASA Technical Reports Server (NTRS)

    Tangborn, Andrew; Cooper, Robert; Pawson, Steven; Sun, Zhibin

    2009-01-01

    We present a source inversion technique for chemical constituents that uses assimilated constituent observations rather than the observations themselves. The method is tested on a simple model problem: a two-dimensional Fourier-Galerkin transport model combined with a Kalman filter for data assimilation. Inversion is carried out using a Green's function method, and observations are simulated from a true state with added Gaussian noise. The forecast state uses the same spectral model but differs by an unbiased Gaussian model error and by emission models with constant errors. The numerical experiments employ both simulated in situ and satellite observation networks. Source inversion was carried out either by direct use of the synthetically generated observations with added noise, or by first assimilating the observations and using the resulting analyses. We conducted 20 identical-twin experiments for each set of source and observation configurations, and find that in the limiting cases of very few localized observations, or of an extremely large observation network, there is little advantage to carrying out assimilation first. At intermediate observation densities, however, the source inversion error standard deviation decreases by 50% to 95% when the Kalman filter algorithm is followed by Green's function inversion.
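
Once each source's unit response has been propagated through the transport model, the Green's function step reduces to a linear least-squares problem. A minimal identical-twin sketch follows, with invented dimensions and noise level rather than the paper's Fourier-Galerkin system.

```python
import numpy as np

def greens_inversion(G, y):
    """Least-squares source estimate for y = G s + noise, where column j of
    G holds the observations produced by a unit emission from source j,
    i.e. the Green's function of the transport model."""
    s_hat, *_ = np.linalg.lstsq(G, y, rcond=None)
    return s_hat

# Identical-twin experiment: recover three source strengths from noisy observations.
rng = np.random.default_rng(0)
G = rng.standard_normal((200, 3))            # 200 observations, 3 sources
s_true = np.array([1.0, -2.0, 0.5])
y = G @ s_true + 0.01 * rng.standard_normal(200)
s_hat = greens_inversion(G, y)
```

Feeding `y` through an assimilation step before inversion, as the paper does, amounts to replacing `y` here with the Kalman filter analyses.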

  13. Adaptive near-field beamforming techniques for sound source imaging.

    PubMed

    Cho, Yong Thung; Roan, Michael J

    2009-02-01

    Phased-array signal processing techniques such as beamforming have a long history in applications such as sonar for the detection and localization of far-field sound sources. Two sometimes competing challenges arise in any type of spatial processing: minimizing contributions from directions other than the look direction, and minimizing the width of the main lobe. To tackle this problem, a large body of work has been devoted to the development of adaptive procedures that attempt to minimize side lobe contributions to the spatial processor output. In this paper, two adaptive beamforming procedures, minimum variance distortionless response and weight optimization to minimize maximum side lobes, are modified for use in source visualization applications to estimate beamforming pressure and intensity using near-field pressure measurements. These adaptive techniques are compared to a fixed near-field focusing technique (both use near-field beamforming weightings focused at source locations estimated from spherical-wave array manifold vectors with spatial windows). The sound source resolution accuracies of near-field imaging procedures with different weighting strategies are compared using numerical simulations in both anechoic and reverberant environments with random measurement noise. Experimental results are also given for near-field sound pressure measurements of an enclosed loudspeaker.
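
The MVDR weights referred to above have a closed form: minimize the output power w^H R w subject to the distortionless constraint w^H d = 1. A minimal sketch follows; the diagonal loading term is a common practical stabilizer for the sample-covariance inverse and is an addition not discussed in the abstract.

```python
import numpy as np

def mvdr_weights(R, d, loading=1e-3):
    """Minimum variance distortionless response weights:
    w = R^{-1} d / (d^H R^{-1} d), so that w^H d = 1 while the
    output power w^H R w from all other directions is minimized.
    Diagonal loading regularizes a poorly estimated covariance R."""
    n = R.shape[0]
    Rl = R + loading * (np.trace(R).real / n) * np.eye(n)
    Ri_d = np.linalg.solve(Rl, d)
    return Ri_d / (d.conj() @ Ri_d)
```

The same formula applies in the paper's near-field setting by replacing the plane-wave steering vector `d` with a spherical-wave array manifold vector focused at the candidate source point.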

  14. On the Wave Equation with Hyperbolic Dynamical Boundary Conditions, Interior and Boundary Damping and Source

    NASA Astrophysics Data System (ADS)

    Vitillaro, Enzo

    2017-03-01

    The aim of this paper is to study the problem

    u_tt - Δu + P(x, u_t) = f(x, u)   in (0,∞) × Ω,
    u = 0   on (0,∞) × Γ_0,
    u_tt + ∂_ν u - Δ_Γ u + Q(x, u_t) = g(x, u)   on (0,∞) × Γ_1,
    u(0, x) = u_0(x),  u_t(0, x) = u_1(x)   in Ω̄,

    where Ω is an open bounded subset of R^N with C^1 boundary (N ≥ 2), Γ = ∂Ω, (Γ_0, Γ_1) is a measurable partition of Γ, Δ_Γ denotes the Laplace-Beltrami operator on Γ, ν is the outward normal to Ω, the terms P and Q represent nonlinear damping, and f and g are nonlinear subcritical perturbations. The paper gives a local Hadamard well-posedness result for initial data in the natural energy space associated with the problem. Moreover, when Ω is C^2 and Γ̄_0 ∩ Γ̄_1 = ∅, the regularity of solutions is studied. Next, a blow-up theorem is given when P and Q are linear and f and g are superlinear sources. Finally, a dynamical system is generated when the source parts of f and g are at most linear at infinity, or when they are dominated by the damping terms.

  15. Equivalent charge source model based iterative maximum neighbor weight for sparse EEG source localization.

    PubMed

    Xu, Peng; Tian, Yin; Lei, Xu; Hu, Xiao; Yao, Dezhong

    2008-12-01

    How to localize neural electric activity within the brain effectively and precisely from scalp electroencephalogram (EEG) recordings is a critical issue in clinical neurology and cognitive neuroscience. In this paper, based on the charge source model and an iterative re-weighting strategy, we propose a new maximum-neighbor-weight-based iterative sparse source imaging method, termed CMOSS (Charge source model based Maximum neighbOr weight Sparse Solution). Unlike the weight used in the focal underdetermined system solver (FOCUSS), where the weight for each point in the discrete solution space is updated independently across iterations, the newly designed weight for each point in each iteration is determined by the previous iteration's source solution at both the point and its neighbors. With this new weight, the next iteration has a better chance of rectifying the local source-location bias present in the previous solution. Simulation studies comparing CMOSS with FOCUSS and LORETA for various source configurations were conducted on a realistic 3-shell head model, and the results confirmed the validity of CMOSS for sparse EEG source localization. Finally, CMOSS was applied to localize sources elicited in a visual stimulus experiment, and the result was consistent with the source areas involved in visual processing reported in previous studies.
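
The plain FOCUSS iteration that CMOSS modifies can be sketched as follows. This shows the baseline only: each point's weight is its own previous magnitude, whereas CMOSS's defining neighbor-weighted update is not reproduced here.

```python
import numpy as np

def focuss(G, y, n_iter=15, eps=1e-8):
    """Basic FOCUSS: repeatedly solve a weighted minimum-norm problem
    x = W (G W)^+ y with W = diag(|x_prev|), so the estimate sharpens
    toward a sparse source configuration as iterations proceed."""
    x = np.ones(G.shape[1])
    for _ in range(n_iter):
        W = np.diag(np.abs(x) + eps)      # re-weight by previous magnitudes
        x = W @ np.linalg.pinv(G @ W) @ y
    return x
```

In CMOSS each diagonal entry of `W` would additionally depend on the previous solution at the point's spatial neighbors, which is what reduces the local location bias discussed above.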

  16. Tracking plastics in the Mediterranean: 2D Lagrangian model.

    PubMed

    Liubartseva, S; Coppini, G; Lecci, R; Clementi, E

    2018-04-01

    Drift of floating debris is studied with a 2D Lagrangian model with stochastic beaching and sedimentation of plastics. An ensemble of >10^10 virtual particles is tracked from anthropogenic sources (coastal human populations, rivers, shipping lanes) to environmental destinations (sea surface, coastlines, seabed). Daily analyses of ocean currents and waves provided by CMEMS at a horizontal resolution of 1/16° are used to force the plastics. High spatio-temporal variability in sea-surface plastic concentrations without any stable long-term accumulations is found. Substantial accumulation of plastics is detected on coastlines and the sea bottom. The most contaminated areas are in the Cilician subbasin, Catalan Sea, and near the Po River Delta. Also, highly polluted local patches in the vicinity of sources with limited circulation are identified. An inverse problem solution, used to quantify the origins of plastics, shows that plastic pollution of every Mediterranean country is caused primarily by its own terrestrial sources. Copyright © 2018 Elsevier Ltd. All rights reserved.
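
The stochastic-beaching idea can be illustrated with a toy 2-D tracker. Everything here is invented for illustration (a uniform eastward current, a fixed diffusion amplitude, a straight coastline at x = 10); it is not the CMEMS-forced Mediterranean configuration.

```python
import numpy as np

def track_plastics(n_particles, n_steps, dt, beach_prob, seed=0):
    """Minimal 2-D Lagrangian tracker: a uniform mean current plus
    random-walk diffusion; a particle reaching the coast at x = 10
    either beaches permanently (probability beach_prob) or reflects."""
    rng = np.random.default_rng(seed)
    pos = np.zeros((n_particles, 2))
    beached = np.zeros(n_particles, dtype=bool)
    for _ in range(n_steps):
        active = ~beached
        pos[active] += dt * np.array([0.5, 0.0])                           # mean current
        pos[active] += np.sqrt(dt) * 0.2 * rng.standard_normal((active.sum(), 2))
        hit = active & (pos[:, 0] >= 10.0)                                 # reached the coast
        stick = hit & (rng.uniform(size=n_particles) < beach_prob)
        beached |= stick                                                   # beached particles freeze
        bounce = hit & ~stick
        pos[bounce, 0] = 20.0 - pos[bounce, 0]                             # reflect off coast
    return pos, beached
```

Scaling this idea up to >10^10 particles with realistic daily current and wave forcing, many coastline segments, and a sedimentation channel gives the structure of the model described in the record.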

  17. Acoustic emission source localization based on distance domain signal representation

    NASA Astrophysics Data System (ADS)

    Gawronski, M.; Grabowski, K.; Russek, P.; Staszewski, W. J.; Uhl, T.; Packo, P.

    2016-04-01

    Acoustic emission (AE) is a vital non-destructive testing technique and is widely used in industry for damage detection, localisation and characterisation. The latter two aspects are particularly challenging, as AE data are typically noisy. Moreover, elastic waves generated by an AE event propagate along a structural path and are significantly distorted. This effect is particularly prominent in thin elastic plates, where the dispersion phenomenon causes severe localisation and characterisation issues. Traditional time-difference-of-arrival localisation methods typically fail when signals are highly dispersive; hence, algorithms capable of dispersion compensation are sought. This paper presents a method based on the time-distance domain transform for accurate AE event localisation. The source location is found by solving a minimization problem. The proposed technique transforms the time signal into the distance-domain response that would be recorded at the source. Only basic elastic material properties and the plate thickness are used in the approach, avoiding arbitrary parameter tuning.
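
As a baseline for comparison, the classic time-difference-of-arrival approach that the paper improves on can be sketched as a grid-search minimization. Note the non-dispersive, constant-velocity assumption baked into this sketch, which is exactly what dispersion breaks in thin plates.

```python
import numpy as np

def tdoa_locate(sensors, tdiffs, c, grid):
    """Grid search: pick the candidate source position whose predicted
    arrival-time differences (relative to sensor 0) best match the
    measured ones in the least-squares sense. Assumes a single,
    dispersion-free propagation speed c."""
    best, best_cost = None, np.inf
    for p in grid:
        d = np.linalg.norm(sensors - p, axis=1)
        pred = (d - d[0]) / c
        cost = np.sum((pred[1:] - tdiffs) ** 2)
        if cost < best_cost:
            best, best_cost = p, cost
    return best
```

The paper's time-distance domain transform replaces the scalar `c` with the full dispersion relation of the plate, so that each trace is mapped back to the source before the misfit is evaluated.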

  18. Local public health agency funding: money begets money.

    PubMed

    Bernet, Patrick Michael

    2007-01-01

    Local public health agencies are funded by federal, state, and local revenue sources. There is a common belief that increases from one source will be offset by decreases in others, as when a local agency decides it must raise taxes in response to lowered federal or state funding. This study tests this belief in a cross-sectional study using data from Missouri local public health agencies and finds, instead, that money begets money: local agencies that receive more from federal and state sources also raise more at the local level. Given the particular effectiveness of local funding in improving agency performance, the finding that nonlocal revenues are amplified at the local level helps make the case for higher public health funding from the federal and state levels.

  19. Bio-inspired UAV routing, source localization, and acoustic signature classification for persistent surveillance

    NASA Astrophysics Data System (ADS)

    Burman, Jerry; Hespanha, Joao; Madhow, Upamanyu; Pham, Tien

    2011-06-01

    A team consisting of Teledyne Scientific Company, the University of California at Santa Barbara and the Army Research Laboratory* is developing technologies in support of automated data exfiltration from heterogeneous battlefield sensor networks to enhance situational awareness for dismounts and command echelons. Unmanned aerial vehicles (UAV) provide an effective means to autonomously collect data from a sparse network of unattended ground sensors (UGSs) that cannot communicate with each other. UAVs are used to reduce the system reaction time by generating autonomous collection routes that are data-driven. Bio-inspired techniques for search provide a novel strategy to detect, capture and fuse data. A fast and accurate method has been developed to localize an event by fusing data from a sparse number of UGSs. This technique uses a bio-inspired algorithm based on chemotaxis or the motion of bacteria seeking nutrients in their environment. A unique acoustic event classification algorithm was also developed based on using swarm optimization. Additional studies addressed the problem of routing multiple UAVs, optimally placing sensors in the field and locating the source of gunfire at helicopters. A field test was conducted in November of 2009 at Camp Roberts, CA. The field test results showed that a system controlled by bio-inspired software algorithms can autonomously detect and locate the source of an acoustic event with very high accuracy and visually verify the event. In nine independent test runs of a UAV, the system autonomously located the position of an explosion nine times with an average accuracy of 3 meters. The time required to perform source localization using the UAV was on the order of a few minutes based on UAV flight times. 
In June 2011, additional field tests of the system will be performed and will include multiple acoustic events, optimal sensor placement based on acoustic phenomenology and the use of the International Technology Alliance (ITA) Sensor Network Fabric (IBM).
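
A minimal run-and-tumble (chemotaxis-style) search of the kind described can be sketched as follows. This is a toy with an invented Gaussian intensity field standing in for the fused UGS readings; the team's actual algorithm is not public in this abstract.

```python
import numpy as np

def chemotaxis_search(field, start, step=0.5, n_steps=400, seed=0):
    """Run-and-tumble search, after bacteria seeking nutrients: keep the
    current heading while the sensed intensity improves, tumble to a
    random new heading when it does not."""
    rng = np.random.default_rng(seed)
    pos = np.array(start, dtype=float)
    heading = rng.uniform(0.0, 2.0 * np.pi)
    last = field(pos)
    for _ in range(n_steps):
        trial = pos + step * np.array([np.cos(heading), np.sin(heading)])
        val = field(trial)
        if val > last:
            pos, last = trial, val                  # run: keep going
        else:
            heading = rng.uniform(0.0, 2.0 * np.pi) # tumble: pick a new direction
    return pos
```

Because the sensed value is only ever replaced by a larger one, the searcher climbs toward the intensity peak without needing an explicit gradient, which suits sparse, noisy sensor fields.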

  20. L1-norm locally linear representation regularization multi-source adaptation learning.

    PubMed

    Tao, Jianwen; Wen, Shiting; Hu, Wenjun

    2015-09-01

    In most supervised domain adaptation learning (DAL) tasks, one has access to only a small number of labeled examples from the target domain. The success of supervised DAL in this "small sample" regime therefore requires effective use of the large amounts of unlabeled data to extract information that is useful for generalization. Toward this end, we use the geometric intuition of the manifold assumption to extend the frameworks established in existing model-based DAL methods for function learning, incorporating additional information about the geometric structure of the target marginal distribution. We would like to ensure that the solution is smooth with respect to both the ambient space and the target marginal distribution. To this end, we propose a novel L1-norm locally linear representation regularization multi-source adaptation learning framework that exploits the geometry of the probability distribution through two techniques. First, an L1-norm locally linear representation method is presented for robust graph construction, replacing the L2-norm reconstruction measure in LLE with an L1-norm one; this is termed L1-LLR for short. Second, for robust graph regularization, we replace the traditional graph Laplacian regularization with our new L1-LLR graph Laplacian regularization, thereby constructing a new graph-based semi-supervised learning framework with a multi-source adaptation constraint, coined the L1-MSAL method. Moreover, to deal with nonlinear learning problems, we generalize the L1-MSAL method by mapping the input data points from the input space to a high-dimensional reproducing kernel Hilbert space (RKHS) via a nonlinear mapping. Promising experimental results have been obtained on several real-world datasets covering faces, video and objects. Copyright © 2015 Elsevier Ltd. All rights reserved.

  1. Development of food-based complementary feeding recommendations for 9- to 11-month-old peri-urban Indonesian infants using linear programming.

    PubMed

    Santika, Otte; Fahmida, Umi; Ferguson, Elaine L

    2009-01-01

    Effective population-specific, food-based complementary feeding recommendations (CFR) are required to combat micronutrient deficiencies. To facilitate their formulation, a modeling approach was recently developed. However, it has not yet been used in practice. This study therefore aimed to use this approach to develop CFR for 9- to 11-mo-old Indonesian infants and to identify nutrients that will likely remain low in their diets. The CFR were developed using a 4-phase approach based on linear and goal programming. Model parameters were defined using dietary data collected in a cross-sectional survey of 9- to 11-mo-old infants (n = 100) living in the Bogor District, West-Java, Indonesia and a market survey of 3 local markets. Results showed theoretical iron requirements could not be achieved using local food sources (highest level achievable, 63% of recommendations) and adequate levels of iron, niacin, zinc, and calcium were difficult to achieve. Fortified foods, meatballs, chicken liver, eggs, tempe-tofu, banana, and spinach were the best local food sources to improve dietary quality. The final CFR were: breast-feed on demand, provide 3 meals/d, of which 1 is a fortified infant cereal; > or = 5 servings/wk of tempe/tofu; > or = 3 servings/wk of animal-source foods, of which 2 servings/wk are chicken liver; vegetables, daily; snacks, 2 times/d, including > or = 2 servings/wk of banana; and > or = 4 servings/wk of fortified-biscuits. Results showed that the approach can be used to objectively formulate population-specific CFR and identify key problem nutrients to strengthen nutrition program planning and policy decisions. Before recommending these CFR, their long-term acceptability, affordability, and effectiveness should be assessed.
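
The modeling idea, searching over serving patterns for the best achievable nutrient intake under constraints, can be illustrated with a brute-force miniature. All per-serving numbers below are invented for illustration; the paper's actual four-phase approach used linear and goal programming over survey-derived food lists and full micronutrient profiles.

```python
import itertools

# Illustrative (made-up) per-serving values: (iron mg, energy kcal).
foods = {
    "fortified cereal": (4.0, 90.0),
    "chicken liver":    (3.6, 60.0),
    "tempe/tofu":       (1.8, 80.0),
    "spinach":          (1.0, 25.0),
}

def best_pattern(max_servings=7, energy_cap=700.0):
    """Exhaustive search over weekly serving counts: maximize iron intake
    subject to an energy cap. A brute-force stand-in for the LP phases,
    feasible here because the toy food list is tiny."""
    names = list(foods)
    best, best_iron = None, -1.0
    for counts in itertools.product(range(max_servings + 1), repeat=len(names)):
        iron = sum(c * foods[n][0] for c, n in zip(counts, names))
        kcal = sum(c * foods[n][1] for c, n in zip(counts, names))
        if kcal <= energy_cap and iron > best_iron:
            best, best_iron = dict(zip(names, counts)), iron
    return best, best_iron
```

Comparing the maximized intake against the theoretical requirement is precisely how the study concluded that iron could reach at most 63% of recommendations from local food sources; a real implementation would use an LP solver rather than enumeration.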

  2. [Volatile organic compounds of the tap water in the Watarase, Tone and Edo River system].

    PubMed

    Ohmichi, Kimihide; Ohmichi, Masayoshi; Machida, Kazuhiko

    2004-01-01

    The chlorination of river water in purification plants is known to produce carcinogens such as trihalomethanes (THMs). We studied the river system of the Watarase, Tone, and Edo Rivers with regard to the formation of THMs. This river system starts at the base of the Ashio copper mine and ends at Tokyo Bay. Along the rivers there are 14 local municipalities in Gunma, Saitama, Ibaragi and Chiba Prefectures, as well as Tokyo. This area is the center of the Kanto plain and includes the main sources of water pollution from human activities. We also analyzed various chemicals in river water and tap water to clarify the status of the water environment, and we outline the problems of the water environment in the research area (Fig. 1). Water samples were taken from 18 river sites and 42 water faucets at public facilities in 14 local municipalities. We analyzed the samples for volatile organic compounds such as THMs by gas chromatography-mass spectrometry (GC-MS), and evaluations of chemical oxygen demand (COD) were made with reference to Japanese drinking water quality standards. Concentrations of THMs in the downstream tap water samples were higher than those in samples from upstream. This tendency was similar to that of the COD of the river water samples, but no correlation was found between the concentration of THMs in tap water and the COD in tap water sources. In the tap water of local government C, trichloroethylene was detected. The current findings suggest that present water filtration plant procedures are not sufficient to remove some hazardous chemicals from the source water. Moreover, it was confirmed that water filtration produced THMs. Trichloroethylene was also detected in the water environment of the research area, suggesting that pollution of the water environment continues.

  3. Tsunami Early Warning System in Italy and involvement of local communities

    NASA Astrophysics Data System (ADS)

    Tinti, Stefano; Armigliato, Alberto; Zaniboni, Filippo

    2010-05-01

    Italy is characterized by a great coastal extension and by a series of possible tsunamigenic sources: many active faults, onshore and offshore, including some near the shoreline and in shallow water; active volcanoes (Etna, Stromboli and Campi Flegrei, for example); and continental margins where landslides can occur. All these threats justify the establishment of a tsunami early warning system (TEWS), especially in Southern Italy, where most of the sources capable of large disastrous tsunamis are located. One of the main characteristics of such sources, which is however common to other countries and not only in the Mediterranean, is their vicinity to the coast, which means that in several cases the tsunami lead time before the coastal system is struck is expected to be within 10-15 minutes. This time constraint imposes specific plans for quick tsunami detection and alert dissemination, since the TEWS alert must obviously precede, not follow, the tsunami's first arrival. The need for speed introduces the specific problem of uncertainty, which is inherent to any forecast system but becomes a very serious issue when the available time is short, since crucial decisions have to be taken with incomplete data and incomplete processing. This is the central problem that a system like a TEWS in Italy has to face. Uncertainties can be reduced by increasing the capabilities of the tsunami monitoring system, densifying the traditional instrumental networks (e.g., strengthening seismic and especially coastal and offshore sea-level observation systems) in the identified tsunamigenic source areas.
However, uncertainties, though expected to decrease as time passes after tsunami initiation, cannot be eliminated and have to be dealt with appropriately: they lead to under- and overestimation of the tsunami size and arrival times, and to missed or false alerts; in other terms, they degrade the performance of the tsunami predictors. The role of the local communities in defining strategies for dealing with uncertain data is essential: only the involvement of such communities from the beginning of the planning and implementation phase of the TEWS, as well as in the definition of a decision-making matrix, can ensure an appropriate response in case of emergency and, most importantly, the acceptance of the system in the long run. The efforts to implement the Tsunami Warning System in Italy should take the above-mentioned aspects into proper account. The involvement of local communities should be realized primarily through the local components of the Civil Protection Agency, which is responsible for implementing the system over the Italian territory. A pilot project is being conducted in cooperation between the Civil Protection Service of Sicily and the University of Bologna (UNIBO); it includes strengthening the local sea-level monitoring system (TSUNET) and specific vulnerability and risk analyses, also exploiting results of national and European research projects (e.g. TRANSFER and SCHEMA) in which UNIBO had a primary role.

  4. Joint probabilistic determination of earthquake location and velocity structure: application to local and regional events

    NASA Astrophysics Data System (ADS)

    Beucler, E.; Haugmard, M.; Mocquet, A.

    2016-12-01

    The most widely used inversion schemes for locating earthquakes are based on iterative linearized least-squares algorithms and on a priori knowledge of the propagation medium. When only a small number of observations is available, for instance for moderate events, these methods may exhibit large trade-offs between the outputs and both the velocity model and the initial set of hypocentral parameters. We present a joint structure-source determination approach using Bayesian inference. Markov chain Monte Carlo sampling generates models within a broad range of parameters, distributed according to the unknown posterior distributions. The non-linear exploration of both the seismic structure (velocity and thickness) and the source parameters relies on a fast forward problem using 1-D travel-time computations. The a posteriori covariances between parameters (hypocentre depth, origin time and seismic structure, among others) are computed and explicitly documented. This method decreases the influence of the surrounding seismic network geometry (sparse and/or azimuthally inhomogeneous) and of an overly constrained velocity structure by inferring realistic distributions of the hypocentral parameters. Our algorithm is successfully used to accurately locate events in the Armorican Massif (western France), which is characterized by moderate and apparently diffuse local seismicity.
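The joint sampling idea can be sketched in miniature: a Metropolis random walk over depth and origin time, with a fast straight-ray 1-D travel-time forward model. The station geometry, velocity, noise level, and fixed epicentre below are illustrative assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: four surface stations, an epicentre fixed at the
# origin, a uniform crustal P velocity, and straight-ray travel times.
stations = np.array([[0.0, 0.0], [30.0, 0.0], [0.0, 40.0], [25.0, 25.0]])  # km
vp = 6.0                                   # km/s (assumed)
true_depth, true_t0 = 8.0, 2.0             # km, s

def travel_time(depth):
    d = np.sqrt(stations[:, 0] ** 2 + stations[:, 1] ** 2 + depth ** 2)
    return d / vp

obs = true_t0 + travel_time(true_depth) + rng.normal(0.0, 0.05, len(stations))

def log_post(depth, t0, sigma=0.05):
    # Gaussian likelihood with a flat prior on 0 < depth < 40 km.
    if not 0.0 < depth < 40.0:
        return -np.inf
    r = obs - (t0 + travel_time(depth))
    return -0.5 * np.sum((r / sigma) ** 2)

# Metropolis random walk over (depth, origin time).
depth, t0 = 15.0, 0.0
lp = log_post(depth, t0)
samples = []
for _ in range(20000):
    d_new, t_new = depth + rng.normal(0.0, 0.5), t0 + rng.normal(0.0, 0.1)
    lp_new = log_post(d_new, t_new)
    if np.log(rng.random()) < lp_new - lp:
        depth, t0, lp = d_new, t_new, lp_new
    samples.append((depth, t0))

post = np.array(samples[5000:])            # discard burn-in
depth_mean, t0_mean = post.mean(axis=0)
```

The retained samples approximate the joint posterior, so the depth/origin-time trade-off the abstract mentions shows up directly as the sample covariance rather than being hidden by a linearized fit.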

  5. Joint Inversion of Source Location and Source Mechanism of Induced Microseismics

    NASA Astrophysics Data System (ADS)

    Liang, C.

    2014-12-01

    The seismic source mechanism is a useful indicator of source physics and of stress and strain distribution at regional, local, and micro scales. In this study we jointly invert source mechanisms and locations for microseismic events induced by hydraulic fracturing treatments in the oil and gas industry. For events large enough to show clear waveforms, quite a few techniques can be applied to invert the source mechanism, including waveform inversion, first-motion polarity inversion, and many variants of these methods. However, for events too small to identify in individual seismic traces, such as microseismic events induced by hydraulic fracturing, a source scanning algorithm (SSA) with waveform stacking is usually applied. A joint inversion of location and source mechanism is also possible, but at a high computational cost; the resulting algorithm is called the Source Location and Mechanism Scanning Algorithm (SLMSA). For a given velocity structure, all possible combinations of source location (X, Y, Z) and source mechanism (strike, dip, rake) are used to compute travel times and waveform polarities. After correcting moveout times and polarities and stacking all waveforms, the (X, Y, Z, strike, dip, rake) combination that gives the strongest stacked waveform is identified as the solution. To address the high computational cost, CPU-GPU programming is applied. Numerical datasets are used to test the algorithm. The SLMSA has also been applied to a hydraulic fracturing dataset and shows several advantages over the location-only method: (1) for shear sources, a location-only program can hardly locate events because positive and negative polarized traces cancel out, but the SLMSA can successfully pick up those events; (2) microseismic locations alone may not be enough to indicate the directionality of micro-fractures, whereas the statistics of source mechanisms can certainly provide more knowledge of fracture orientation; (3) in our practice, the joint inversion almost always yields more events than the location-only method, and for events also picked by the SSA, the stacking power of the SLMSA is consistently higher than that obtained with the SSA.
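A toy version of this scan illustrates point (1): a plain stack cancels a shear source's mixed polarities, while scanning polarity patterns together with location recovers it. The geometry, velocity, and candidate polarity patterns below are hypothetical (a real SLMSA scans strike/dip/rake rather than raw polarity patterns).

```python
import numpy as np

rng = np.random.default_rng(1)
v, dt, nt = 3.0, 0.002, 800                 # km/s, s, samples (assumed)
geophones = np.linspace(0.0, 4.0, 8)        # receiver x-positions (km)
true_x = 2.0
true_pol = np.array([1, 1, 1, 1, -1, -1, -1, -1])   # shear-like polarity flip

traces = rng.normal(0.0, 0.05, (len(geophones), nt))
for i, gx in enumerate(geophones):
    idx = int(round(abs(gx - true_x) / v / dt))
    traces[i, idx] += true_pol[i]           # polarized impulse arrival

def stack_power(x, pol):
    # Moveout-correct each trace for candidate location x, flip by the
    # candidate polarity, stack, and return the peak stacked amplitude.
    out = np.zeros(nt)
    for i, gx in enumerate(geophones):
        shift = int(round(abs(gx - x) / v / dt))
        out[: nt - shift] += pol[i] * traces[i, shift:]
    return np.max(np.abs(out))

candidates = np.linspace(0.0, 4.0, 41)
patterns = [np.ones(8, dtype=int), true_pol, -true_pol]   # tiny mechanism scan
best_power, best_x = max((stack_power(x, p), x)
                         for x in candidates for p in patterns)
plain_power = max(stack_power(x, np.ones(8, dtype=int)) for x in candidates)
```

With polarity in the scan the stack at the true location is coherent (all eight impulses add), while the location-only stack cancels almost completely, which is exactly why a shear event is invisible to the plain SSA.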

  6. [Use of blood lead data to evaluate and prevent childhood lead poisoning in Latin America].

    PubMed

    Romieu, Isabelle

    2003-01-01

    Exposure to lead is a widespread and serious threat to the health of children in Latin America. Health officials should monitor sources of exposure and health outcomes to design, implement, and evaluate prevention and control activities. To evaluate the magnitude of lead as a public health problem, three key elements must be defined: 1) the potential sources of exposure, 2) the indicators to evaluate health effects and environmental exposure, and 3) the sampling methods for the population at risk. Several strategies can be used to select the study population depending on the study objectives, the time limitations, and the available resources. If the objective is to evaluate the magnitude and sources of the problem, the following sampling methods can be used: 1) population-based random sampling; 2) facility-based random sampling within hospitals, daycare centers, or schools; 3) target sampling of high-risk groups; 4) convenience sampling of volunteers; and 5) case reporting (which can lead to the identification of populations at risk and sources of exposure). For all sampling methods, information gathering should include the use of a questionnaire to collect general information on the participants and on potential local sources of exposure, as well as the collection of biological samples. In interpreting data, one should consider the type of sampling used and the non-response rates, as well as factors that might influence blood lead measurements, such as age and seasonal variability. Blood lead measurements should be integrated into an overall strategy to prevent lead toxicity in children. The English version of this paper is available at: http://www.insp.mx/salud/index.html.

  7. Acoustic emission based damage localization in composites structures using Bayesian identification

    NASA Astrophysics Data System (ADS)

    Kundu, A.; Eaton, M. J.; Al-Jumali, S.; Sikdar, S.; Pullin, R.

    2017-05-01

    Acoustic emission based damage detection in composite structures relies on detecting ultra-high-frequency packets of acoustic waves emitted from damage sources (such as fibre breakage and fatigue fracture, amongst others) with a network of distributed sensors. This non-destructive monitoring scheme requires solving an inverse problem in which the measured signals are linked back to the location of the source, which in turn enables rapid deployment of mitigative measures. The significant uncertainty associated with the operating conditions and measurements makes the damage identification problem quite challenging. The uncertainties stem from the fact that the measured signals are affected by irregular geometries, manufacturing imprecision, imperfect boundary conditions, and existing damage or structural degradation, amongst others. This work aims to tackle these uncertainties within a framework of automated probabilistic damage detection. The method trains a probabilistic model of the parametrized input and output of the acoustic emission system with experimental data to give probabilistic descriptors of damage locations. A response surface modelling the acoustic emission as a function of parametrized damage signals collected from the sensors is calibrated with a training dataset using Bayesian inference and is then used to deduce damage locations in the online monitoring phase. During online monitoring, the spatially correlated time data are utilized in conjunction with the calibrated acoustic emission model to infer a probabilistic description of the acoustic emission source within a hierarchical Bayesian inference framework. The methodology is tested on a composite structure consisting of a carbon fibre panel with stiffeners, with damage source behaviour experimentally simulated using standard H-N sources. 
The methodology presented in this study would be applicable in the current form to structural damage detection under varying operational loads and would be investigated in future studies.
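A minimal concrete instance of such a probabilistic damage-location description is a grid posterior over source position given noisy arrival-time differences. The sensor square, wave speed, and Gaussian timing noise below are illustrative assumptions, far simpler than the paper's hierarchical response-surface model.

```python
import numpy as np

rng = np.random.default_rng(2)

sensors = np.array([[0.0, 0.0], [0.5, 0.0], [0.0, 0.5], [0.5, 0.5]])  # m
c = 1500.0                      # m/s, assumed plate wave speed
src = np.array([0.35, 0.2])     # hypothetical true source position

toa = np.linalg.norm(sensors - src, axis=1) / c
sigma_t = 2e-6                  # assumed timing noise (s)
dt_obs = toa[1:] - toa[0] + rng.normal(0.0, sigma_t, 3)

# Grid posterior: Gaussian likelihood on arrival-time differences,
# flat prior over the plate.
xs = np.linspace(0.0, 0.5, 101)
X, Y = np.meshgrid(xs, xs, indexing="ij")
pts = np.stack([X.ravel(), Y.ravel()], axis=1)
d = np.linalg.norm(pts[:, None, :] - sensors[None, :, :], axis=2) / c
pred = d[:, 1:] - d[:, :1]
loglik = -0.5 * np.sum(((pred - dt_obs) / sigma_t) ** 2, axis=1)
post = np.exp(loglik - loglik.max())
post /= post.sum()

map_xy = pts[np.argmax(post)]   # MAP location
mean_xy = post @ pts            # posterior mean location
```

The normalized `post` array is the probabilistic descriptor: instead of a single triangulated point, every grid cell carries a credibility that can be thresholded or propagated to mitigation decisions.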

  8. Informed Source Separation: A Bayesian Tutorial

    NASA Technical Reports Server (NTRS)

    Knuth, Kevin H.

    2005-01-01

    Source separation problems are ubiquitous in the physical sciences; any situation where signals are superimposed calls for source separation to estimate the original signals. In this tutorial I will discuss the Bayesian approach to the source separation problem. This approach has a specific advantage in that it requires the designer to explicitly describe the signal model in addition to any other information or assumptions that go into the problem description. This leads naturally to the idea of informed source separation, where the algorithm design incorporates relevant information about the specific problem. This approach promises to enable researchers to design their own high-quality algorithms that are specifically tailored to the problem at hand.
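A minimal worked instance of the "informed" idea: once the mixing model and Gaussian priors are written down explicitly, the MAP separation is closed-form regularized least squares. The mixing matrix, source waveforms, and variances below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)
A = np.array([[1.0, 0.6],
              [0.4, 1.0]])                 # assumed known mixing matrix
t = np.linspace(0.0, 1.0, 500)
s_true = np.stack([np.sin(2 * np.pi * 5 * t),
                   np.sign(np.sin(2 * np.pi * 3 * t))])
sigma_n, sigma_s = 0.1, 1.0                # assumed noise / prior std
x = A @ s_true + rng.normal(0.0, sigma_n, (2, t.size))

# MAP under x = A s + n, n ~ N(0, sn^2 I), s ~ N(0, ss^2 I):
#   s_map = (A^T A / sn^2 + I / ss^2)^(-1) A^T x / sn^2
G = A.T @ A / sigma_n**2 + np.eye(2) / sigma_s**2
s_map = np.linalg.solve(G, A.T @ x / sigma_n**2)
rms_err = float(np.sqrt(np.mean((s_map - s_true) ** 2)))
```

Every quantity in the estimator traces back to a stated modeling assumption (the mixing matrix, the noise level, the prior width), which is exactly the designer obligation the tutorial emphasizes; swapping in a different prior changes the algorithm in a transparent way.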

  9. Linking Primary Care Information Systems and Public Health Vertical Programs in the Philippines: An Open-source Experience

    PubMed Central

    Tolentino, Herman; Marcelo, Alvin; Marcelo, Portia; Maramba, Inocencio

    2005-01-01

    Community-based primary care information systems are one of the building blocks for national health information systems. In the Philippines, after the devolution of health care to local governments, we observed “health information system islands” connected to national vertical programs being implemented in devolved health units. These structures lead to a huge amount of “information work” in the transformation of health information at the community level. This paper describes work done to develop and implement the open-source Community Based Health Information Tracking System (CHITS) Project, which was implemented to address this information management problem and its outcomes. Several lessons learned from the field as well as software development strategies are highlighted in building community level information systems that link to national level health information systems. PMID:16779052

  10. The solar spectral irradiances from x ray to radio wavelengths

    NASA Technical Reports Server (NTRS)

    White, O. R.

    1993-01-01

    Sources of new measurements of the solar EUV, UV, and visible spectrum are presented together with discussion of formation of the solar spectrum as a problem in stellar atmospheres. Agreement between the data and a modern synthetic spectrum shows that observed radiative variability is a minor perturbation on a photosphere in radiative equilibrium and local thermodynamic equilibrium (LTE). Newly observed solar variability in 1992 defines a magnetic episode on the Sun closely associated with changes in both spectral irradiances and the total irradiance. This episode offers the opportunity to track the relationship between radiation and magnetic flux evolution.

  11. Constraining National Health Care Expenditures. Achieving Quality Care at an Affordable Cost.

    DTIC Science & Technology

    1985-09-30

    [OCR fragments of tables from the report survive: a breakdown of U.S. health expenditure shares (Medicaid, federal/state, 10.8; other state/local government programs, 5.1; other federal programs, 5.4; philanthropy and industrial in-plant, 1.2; source: U.S...); a note that projections of future outlays and income for the Medicare Trust Fund indicate serious financing problems by the mid to late 1990's; and an international comparison of health expenditures, with two percentage figures and a change figure per country (... 7.1 0; France 6.4, 7.9, 23; West Germany 6.4, 9.2, 44; Italy 6.1, 6.4, 5; Netherlands 6.3, 8.2, 30; Sweden 7.4, 9.8, 32; Switzerland n/a, 6.9; United Kingdom 4.3, 5.2).]

  12. Control of Groundwater Remediation Process as Distributed Parameter System

    NASA Astrophysics Data System (ADS)

    Mendel, M.; Kovács, T.; Hulkó, G.

    2014-12-01

    Pollution of groundwater requires the implementation of appropriate remediation solutions, which may have to be deployed for several years. Local groundwater contamination and its subsequent spread may result in contamination of drinking water sources or other disasters. This publication aims to design and demonstrate control of pumping wells for a model groundwater remediation task. The task consists of an appropriately discretized soil domain with input parameters, pumping wells and a control system. The model of the controlled system is built in MODFLOW using the finite-difference method, treated as a distributed parameter system. The control problem is solved with the DPS Blockset for MATLAB & Simulink.

  13. 47 CFR 11.18 - EAS Designations.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... Designations. (a) National Primary (NP) is a source of EAS Presidential messages. (b) Local Primary (LP) is a... as specified in its EAS Local Area Plan. If it is unable to carry out this function, other LP sources... broadcast stations in the Local Area. (c) State Primary (SP) is a source of EAS State messages. These...

  14. 47 CFR 11.18 - EAS Designations.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... Designations. (a) National Primary (NP) is a source of EAS Presidential messages. (b) Local Primary (LP) is a... as specified in its EAS Local Area Plan. If it is unable to carry out this function, other LP sources... broadcast stations in the Local Area. (c) State Primary (SP) is a source of EAS State messages. These...

  15. Identifying sources of heterogeneity in capture probabilities: An example using the Great Tit Parus major

    USGS Publications Warehouse

    Senar, J.C.; Conroy, M.J.; Carrascal, L.M.; Domenech, J.; Mozetich, I.; Uribe, F.

    1999-01-01

    Heterogeneous capture probabilities are a common problem in many capture-recapture studies. Several methods of detecting the presence of such heterogeneity are currently available, and stratification of data has been suggested as the standard method to avoid its effects. However, few studies have tried to identify sources of heterogeneity, or whether there are interactions among sources. The aim of this paper is to suggest an analytical procedure to identify sources of capture heterogeneity. We use data on the sex and age of Great Tits captured in baited funnel traps at two localities differing in average temperature. We additionally use 'recapture' data obtained by videotaping at a feeder (with no associated trap), where tits ringed with different colours were recorded. This allowed us to test whether individuals in different classes (age, sex and condition) avoid the traps because of trap shyness or because of a reduced use of the bait. We used logistic regression analysis of the capture probabilities to test for the effects of age, sex, condition, location and 'recapture' method. The results showed a higher recapture probability in the colder locality. Yearling birds (either males or females) had the highest recapture probabilities, followed by adult males, while adult females had the lowest recapture probabilities. There was no effect of the method of 'recapture' (trap or videotape), which suggests that adult females are captured in traps less often not because of trap-shyness but because of less dependence on supplementary food. The potential use of this methodological approach in other studies is discussed.
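The kind of analysis described, a logistic regression of recapture probability on age, sex and locality, can be sketched on simulated data. The effect sizes below are hypothetical, not the paper's estimates, and the fit uses plain Newton/IRLS in NumPy.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 2000
age_adult = rng.integers(0, 2, n)       # 1 = adult, 0 = yearling
sex_female = rng.integers(0, 2, n)      # 1 = female
cold_site = rng.integers(0, 2, n)       # 1 = colder locality

# Assumed true log-odds: yearlings and the colder site recapture more,
# adult females least of all (an age x sex interaction).
logit = 0.5 - 0.8 * age_adult - 0.4 * age_adult * sex_female + 0.6 * cold_site
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit))).astype(float)

# Design matrix: intercept, age, age*sex interaction, locality.
X = np.column_stack([np.ones(n), age_adult, age_adult * sex_female, cold_site])
beta = np.zeros(X.shape[1])
for _ in range(25):                      # Newton/IRLS iterations
    p = 1.0 / (1.0 + np.exp(-X @ beta))
    W = p * (1.0 - p)
    H = X.T @ (X * W[:, None])           # Fisher information
    beta += np.linalg.solve(H, X.T @ (y - p))
```

With a couple thousand captures the fitted coefficients recover the simulated ordering (yearlings highest, adult females lowest, colder site higher), which is the same qualitative pattern the paper reports.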

  16. jInv: A Modular and Scalable Framework for Electromagnetic Inverse Problems

    NASA Astrophysics Data System (ADS)

    Belliveau, P. T.; Haber, E.

    2016-12-01

    Inversion is a key tool in the interpretation of geophysical electromagnetic (EM) data. Three-dimensional (3D) EM inversion is very computationally expensive and practical software for inverting large 3D EM surveys must be able to take advantage of high performance computing (HPC) resources. It has traditionally been difficult to achieve those goals in a high level dynamic programming environment that allows rapid development and testing of new algorithms, which is important in a research setting. With those goals in mind, we have developed jInv, a framework for PDE constrained parameter estimation problems. jInv provides optimization and regularization routines, a framework for user defined forward problems, and interfaces to several direct and iterative solvers for sparse linear systems. The forward modeling framework provides finite volume discretizations of differential operators on rectangular tensor product meshes and tetrahedral unstructured meshes that can be used to easily construct forward modeling and sensitivity routines for forward problems described by partial differential equations. jInv is written in the emerging programming language Julia. Julia is a dynamic language targeted at the computational science community with a focus on high performance and native support for parallel programming. We have developed frequency and time-domain EM forward modeling and sensitivity routines for jInv. We will illustrate its capabilities and performance with two synthetic time-domain EM inversion examples. First, in airborne surveys, which use many sources, we achieve distributed memory parallelism by decoupling the forward and inverse meshes and performing forward modeling for each source on small, locally refined meshes. Secondly, we invert grounded source time-domain data from a gradient array style induced polarization survey using a novel time-stepping technique that allows us to compute data from different time-steps in parallel. 
These examples both show that it is possible to invert large scale 3D time-domain EM datasets within a modular, extensible framework written in a high-level, easy to use programming language.
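At its core, each inversion iteration of the kind jInv performs solves a regularized least-squares step. A stand-in with a linear forward operator (a smoothing kernel chosen purely for illustration, not one of jInv's PDE forward problems, and in Python rather than Julia) shows the pattern of misfit plus roughness regularization:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 80
x = np.linspace(0.0, 1.0, n)
m_true = np.exp(-((x - 0.4) / 0.08) ** 2)          # "true" model

# Forward operator: Gaussian smoothing matrix (ill-posed to invert directly).
F = np.exp(-((x[:, None] - x[None, :]) / 0.05) ** 2)
F /= F.sum(axis=1, keepdims=True)
d = F @ m_true + rng.normal(0.0, 0.01, n)          # noisy data

# Tikhonov step with a first-difference roughness penalty.
L = np.diff(np.eye(n), axis=0)                     # (n-1) x n difference matrix
alpha = 1e-3                                       # regularization weight
m_est = np.linalg.solve(F.T @ F + alpha * L.T @ L, F.T @ d)

rms = float(np.sqrt(np.mean((m_est - m_true) ** 2)))
naive = float(np.sqrt(np.mean((np.linalg.solve(F, d) - m_true) ** 2)))
```

The unregularized solve amplifies the noise catastrophically while the regularized step stays close to the true model; jInv's contribution is organizing this loop (forward solves, sensitivities, regularization, optimization) so each piece can be swapped out and distributed across meshes and sources.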

  17. Perception and Selection of Information Sources by Undergraduate Students: Effects of Avoidant Style, Confidence, and Personal Control in Problem-Solving

    ERIC Educational Resources Information Center

    Kim, Kyung-Sun; Sin, Sei-Ching Joanna

    2007-01-01

    A survey of undergraduate students examined how students' beliefs about their problem-solving styles and abilities (including avoidant style, confidence, and personal control in problem-solving) influenced their perception and selection of sources, as reflected in (1) perceived characteristics of sources, (2) source characteristics considered…

  18. Progress in the development of PDF turbulence models for combustion

    NASA Technical Reports Server (NTRS)

    Hsu, Andrew T.

    1991-01-01

    A combined Monte Carlo-computational fluid dynamics (CFD) algorithm was developed recently at Lewis Research Center (LeRC) for turbulent reacting flows. In this algorithm, conventional CFD schemes are employed to obtain the velocity field and other velocity-related turbulent quantities, and a Monte Carlo scheme is used to solve the evolution equation for the probability density function (pdf) of species mass fraction and temperature. In combustion computations, the predictions of chemical reaction rates (the source terms in the species conservation equation) are poor if conventional turbulence models are used. The main difficulty lies in the fact that the reaction rate is highly nonlinear, so the use of an averaged temperature produces excessively large errors. Moment closure models for the source terms have attained only limited success. The probability density function (pdf) method seems to be the only alternative at the present time that uses local instantaneous values of the temperature, density, etc., in predicting chemical reaction rates, and thus may be the only viable approach to more accurate turbulent combustion calculations. Assumed pdf's are useful in simple problems; however, for more general combustion problems, the solution of an evolution equation for the pdf is necessary.
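The central point about averaging can be checked numerically: for an Arrhenius rate k(T) = A exp(-Ta/T), evaluating the rate at the mean temperature badly underestimates the mean of the rate when T fluctuates, whereas averaging k over temperature samples (the pdf-method answer) captures it. The Arrhenius parameters and the Gaussian temperature pdf below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(6)

A, Ta = 1.0e9, 1.5e4            # pre-exponential factor, activation temp. (K)
T = rng.normal(1500.0, 200.0, 200000)   # fluctuating temperature samples (K)

def rate(temp):
    # Arrhenius rate k(T) = A exp(-Ta / T).
    return A * np.exp(-Ta / temp)

mean_of_rate = float(rate(T).mean())    # pdf method: average k over samples
rate_of_mean = float(rate(T.mean()))    # shortcut: k at the mean temperature
ratio = mean_of_rate / rate_of_mean
```

Because the exponential is convex in -Ta/T, hot excursions dominate the average, and the ratio is well above one even for modest fluctuations; this Jensen-gap effect is exactly why source terms computed from averaged temperatures fail.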

  19. Energy and Environmental Issues in Eastern Europe and Central Asia: An Annotated Guide to Information Resources

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gant, K.S.

    2000-10-09

    Energy and environmental problems undermine the potential for sustained economic development and contribute to political and economic instability in the strategically important region surrounding the Caspian and Black Seas. Many organizations supporting efforts to resolve problems in this region have found that consensus building--a prerequisite for action--is a difficult process. Reaching agreement on priorities for investment, technical collaboration, and policy incentives depends upon informed decision-making by governments and local stakeholders. And while vast quantities of data and numerous analyses and reports are more accessible than ever, wading through the many potential sources in search of timely and relevant data is a formidable task. To facilitate more successful data searches and retrieval, this document provides annotated references to over 200 specific information sources, and over twenty primary search engines and data retrieval services, that provide relevant and timely information related to the environment, energy, and economic development around the Caspian and Black Seas. This document is an advance copy of the content that Oak Ridge National Laboratory (ORNL) plans to transfer to the web in HTML format to facilitate interactive search and retrieval of information using standard web-browser software.

  20. Knowledge-based control for robot self-localization

    NASA Technical Reports Server (NTRS)

    Bennett, Bonnie Kathleen Holte

    1993-01-01

    Autonomous robot systems are being proposed for a variety of missions including the Mars rover/sample return mission. Prior to any other mission objectives being met, an autonomous robot must be able to determine its own location. This will be especially challenging because location sensors like GPS, which are available on Earth, will not be useful, nor will INS sensors because their drift is too large. Another approach to self-localization is required. In this paper, we describe a novel approach to localization by applying a problem solving methodology. The term 'problem solving' implies a computational technique based on logical representational and control steps. In this research, these steps are derived from observing experts solving localization problems. The objective is not specifically to simulate human expertise but rather to apply its techniques where appropriate for computational systems. In doing this, we describe a model for solving the problem and a system built on that model, called localization control and logic expert (LOCALE), which is a demonstration of concept for the approach and the model. The results of this work represent the first successful solution to high-level control aspects of the localization problem.

  1. Linking Deep Astrometric Standards to the ICRF

    NASA Astrophysics Data System (ADS)

    Frey, S.; Platais, I.; Fey, A. L.

    2007-07-01

    The next-generation large-aperture and large field-of-view telescopes will address fundamental questions of astrophysics and cosmology such as the nature of dark matter and dark energy. For a variety of applications, the CCD mosaic detectors in the focal plane arrays require astrometric calibration at the milli-arcsecond (mas) level. The existing optical reference frames are insufficient to support such calibrations. To address this problem, deep optical astrometric fields are being established near the Galactic plane. In order to achieve a 5-10-mas or better positional accuracy for the Deep Astrometric Standards (DAS), and to obtain absolute stellar proper motions for the study of Galactic structure, it is crucial to link these fields to the International Celestial Reference Frame (ICRF). To this end, we selected 15 candidate compact extragalactic radio sources in the Gemini-Orion-Taurus (GOT) field. These sources were observed with the European VLBI Network (EVN) at 5 GHz in phase-reference mode. The bright compact calibrator source J0603+2159 and seven other sources were detected and imaged at an angular resolution of ~1.5-8 mas. Relative astrometric positions were derived for these sources at a milli-arcsecond accuracy level. The detection of the optical counterparts of these extragalactic radio sources will allow us to establish a direct link to the ICRF locally in the GOT field.

  2. Source to point of use drinking water changes and knowledge, attitude and practices in Katsina State, Northern Nigeria

    NASA Astrophysics Data System (ADS)

    Onabolu, B.; Jimoh, O. D.; Igboro, S. B.; Sridhar, M. K. C.; Onyilo, G.; Gege, A.; Ilya, R.

    In many Sub-Saharan countries such as Nigeria, inadequate access to safe drinking water is a serious problem, with 37% of the region and 58% of rural Nigeria using unimproved sources. The global challenge of measuring household water quality as a determinant of safety is further compounded in Nigeria by the possibility of deterioration between source and point of use. This is associated with the use of decentralised water supply systems in rural areas which are not fully reticulated to household taps, creating a need for an integrated water quality monitoring system. As an initial step towards establishing such a system in the north west and north central zones of Nigeria, the Katsina State Rural Water and Sanitation Agency, responsible for ensuring access to safe water and adequate sanitation for about 6 million people, carried out a three-pronged study with the support of UNICEF Nigeria. Part I was an assessment of the legislative and policy framework, institutional arrangements and capacity for drinking water quality monitoring, through desktop reviews and Key Informant Interviews (KII), to ascertain the institutional capacity requirements for developing the water quality monitoring system. Part II was a water quality study in 700 households of 23 communities in four local government areas. The objectives were to assess the safety of drinking water, compare safety at source and at household level, and assess the possible contributory role of end users' knowledge, attitudes and practices. These were achieved through water analysis, household water quality tracking, KII and questionnaires. Part III was the production of a visual documentary as an advocacy tool to increase policy makers' awareness of the linkages between source management, treatment and end-user water quality. The results indicate that, except for pH, conductivity and manganese, the improved water sources were safe at source. 
However, there was a deterioration in water quality between source and point of use in 18%, 12.5%, 27% and 50% of hand-pump-fitted boreholes, motorised boreholes, hand-dug wells and streams, respectively. Although no statistical correlation could be drawn between water management practices and water quality deterioration, the survey of the study households gave an indication of the possible contributory role of their knowledge, attitudes and practices in water contamination after provision. Some of the potential water-related sources of contamination were poor source protection and location, use of unimproved water sources, poor knowledge and practice of household water treatment methods, and poor hand-washing practices in terms of the percentage who wash hands and use soap. Consequently, 34 WASH departments have been created at the local government level towards the establishment of a community-based monitoring system, and piloting has begun in Kaita local government area.

  3. Three-Dimensional Passive-Source Reverse-Time Migration of Converted Waves: The Method

    NASA Astrophysics Data System (ADS)

    Li, Jiahang; Shen, Yang; Zhang, Wei

    2018-02-01

    At seismic discontinuities in the crust and mantle, part of the compressional wave energy converts to shear wave, and vice versa. These converted waves have been widely used in receiver function (RF) studies to image discontinuity structures in the Earth. While generally successful, the conventional RF method has its limitations and is suited mostly to flat or gently dipping structures. Among the efforts to overcome the limitations of the conventional RF method is the development of the wave-theory-based, passive-source reverse-time migration (PS-RTM) for imaging complex seismic discontinuities and scatters. To date, PS-RTM has been implemented only in 2D in the Cartesian coordinate for local problems and thus has limited applicability. In this paper, we introduce a 3D PS-RTM approach in the spherical coordinate, which is better suited for regional and global problems. New computational procedures are developed to reduce artifacts and enhance migrated images, including back-propagating the main arrival and the coda containing the converted waves separately, using a modified Helmholtz decomposition operator to separate the P and S modes in the back-propagated wavefields, and applying an imaging condition that maintains a consistent polarity for a given velocity contrast. Our new approach allows us to use migration velocity models with realistic velocity discontinuities, improving accuracy of the migrated images. We present several synthetic experiments to demonstrate the method, using regional and teleseismic sources. The results show that both regional and teleseismic sources can illuminate complex structures and this method is well suited for imaging dipping interfaces and sharp lateral changes in discontinuity structures.
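The P/S mode separation step can be illustrated in flat 2-D Cartesian geometry (the paper's operator is a modified form in spherical coordinates): the divergence of the wavefield extracts the curl-free P part, and the curl extracts the divergence-free S part. The grid and the two Gaussian potentials below are synthetic.

```python
import numpy as np

n = 128
x = np.linspace(-1.0, 1.0, n)
X, Z = np.meshgrid(x, x, indexing="ij")
h = x[1] - x[0]

phi = np.exp(-((X - 0.2) ** 2 + Z**2) / 0.05)   # P (scalar) potential
psi = np.exp(-((X + 0.3) ** 2 + Z**2) / 0.05)   # S (vector) potential

# Displacement u = grad(phi) + curl(psi y_hat) in the x-z plane.
ux = np.gradient(phi, h, axis=0) + np.gradient(psi, h, axis=1)
uz = np.gradient(phi, h, axis=1) - np.gradient(psi, h, axis=0)

p_mode = np.gradient(ux, h, axis=0) + np.gradient(uz, h, axis=1)  # div u
s_mode = np.gradient(uz, h, axis=0) - np.gradient(ux, h, axis=1)  # curl u

# Each separated mode should peak at its own potential's centre.
ip = np.unravel_index(np.argmax(np.abs(p_mode)), p_mode.shape)
js = np.unravel_index(np.argmax(np.abs(s_mode)), s_mode.shape)
p_x, s_x = float(X[ip]), float(X[js])
```

Because div annihilates the curl part and curl annihilates the gradient part, the two images cleanly separate even though `ux`/`uz` mix both modes; the modified operator in the paper plays the same role for spherical wavefields.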

  4. Local government funding and financing of roads : Virginia case studies and examples from other states.

    DOT National Transportation Integrated Search

    2014-10-01

    Several Virginia localities have used local funding and financing sources to build new roads or complete major street : improvement projects when state and/or federal funding was not available. Many others have combined local funding sources : with s...

  5. Towards a street-level pollen concentration and exposure forecast

    NASA Astrophysics Data System (ADS)

    van der Molen, Michiel; Krol, Maarten; van Vliet, Arnold; Heuvelink, Gerard

    2015-04-01

    Atmospheric pollen is an increasing source of nuisance for people in industrialised countries and is associated with significant costs of medication and sick leave. Citizen pollen warnings are often based on emission mapping using local temperature-sum approaches or on long-range atmospheric model approaches. In practice, locally observed pollen may originate both from local sources (plants in streets and gardens) and from long-range transport. We argue that making this distinction is relevant because the diurnal and spatial variation in pollen concentrations is much larger for pollen from local sources than for pollen from long-range transport, due to boundary layer processes. This may have an important impact on citizens' exposure to pollen and on mitigation strategies. However, little is known about the partitioning of pollen into local and long-range origin categories. Our objective is to study how the concentrations of pollen from different sources vary temporally and spatially, and how the source region influences exposure and mitigation strategies. We built a Hay Fever Forecast system (HFF) based on WRF-chem, Allergieradar.nl, and geo-statistical downscaling techniques. HFF distinguishes between local sources (individual trees) and regional sources (based on tree distribution maps). We show first results on how the diurnal variation of pollen concentrations depends on source proximity. Ultimately, we will compare the model with local pollen counts, patient nuisance scores and medicine use.

  6. Detailed Aggregate Resources Study, Dry Lake Valley, Nevada.

    DTIC Science & Technology

    1981-05-29

    [OCR fragments, partially duplicated: ledge-rock sources supplied coarse aggregates; local sand sources (generally collected within a few miles of corresponding ledge-rock sources) supplied fine ...; cylinders; drying shrinkage; compressive and tensile st...]

  7. Techniques for detection and localization of weak hippocampal and medial frontal sources using beamformers in MEG.

    PubMed

    Mills, Travis; Lalancette, Marc; Moses, Sandra N; Taylor, Margot J; Quraan, Maher A

    2012-07-01

    Magnetoencephalography provides precise information about the temporal dynamics of brain activation and is an ideal tool for investigating rapid cognitive processing. However, in many cognitive paradigms visual stimuli are used, which evoke strong brain responses (typically 40-100 nAm in V1) that may impede the detection of weaker activations of interest. This is particularly a concern when beamformer algorithms are used for source analysis, due to artefacts such as "leakage" of activation from the primary visual sources into other regions. We have previously shown (Quraan et al. 2011) that we can effectively reduce leakage patterns and detect weak hippocampal sources by subtracting the functional images derived from the experimental task and a control task with similar stimulus parameters. In this study we assess the performance of three different subtraction techniques. In the first technique we follow the same post-localization subtraction procedures as in our previous work. In the second and third techniques, we subtract the sensor data obtained from the experimental and control paradigms prior to source localization. Using simulated signals embedded in real data, we show that when beamformers are used, subtraction prior to source localization allows for the detection of weaker sources and higher localization accuracy. The improvement in localization accuracy exceeded 10 mm at low signal-to-noise ratios, and sources down to below 5 nAm were detected. We applied our techniques to empirical data acquired with two different paradigms designed to evoke hippocampal and frontal activations, and demonstrated our ability to detect robust activations in both regions with substantial improvements over image subtraction. 
We conclude that removal of the common-mode dominant sources through data subtraction prior to localization further improves the beamformer's ability to project the n-channel sensor-space data to reveal weak sources of interest and allows more accurate localization.
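
    The sensor-level subtraction strategy can be illustrated with a toy numerical sketch. All quantities below (lead fields, source amplitudes, noise levels) are invented stand-ins, not the paper's MEG data: the point is only that subtracting the control condition removes the common-mode dominant source from the covariance, so an LCMV-style beamformer scan then favors the weak source.

```python
import numpy as np

rng = np.random.default_rng(0)
n_sensors, n_times = 32, 500

# Hypothetical lead-field vectors for a strong "visual" source and a weak
# source of interest -- random stand-ins for a real MEG forward model.
lf_strong = rng.standard_normal(n_sensors)
lf_weak = rng.standard_normal(n_sensors)

t = np.linspace(0.0, 1.0, n_times)
strong = 50e-9 * np.sin(2 * np.pi * 10 * t)   # ~50 nAm dominant source
weak = 4e-9 * np.sin(2 * np.pi * 7 * t)       # ~4 nAm source of interest

def noise():
    return 1e-9 * rng.standard_normal((n_sensors, n_times))

# Task data contain both sources; control data contain only the dominant one.
task = np.outer(lf_strong, strong) + np.outer(lf_weak, weak) + noise()
control = np.outer(lf_strong, strong) + noise()

# Subtracting sensor data removes the common-mode dominant source *before*
# the covariance (and hence the beamformer weights) are computed.
diff = task - control
C = diff @ diff.T / n_times + 1e-22 * np.eye(n_sensors)
Ci = np.linalg.inv(C)

def lcmv_power(leadfield):
    """Unit-gain LCMV beamformer output power for one candidate location."""
    w = Ci @ leadfield / (leadfield @ Ci @ leadfield)
    return w @ C @ w

# After subtraction, the scanned power is larger at the weak source's
# location than at the dominant source's location.
print(lcmv_power(lf_weak) > lcmv_power(lf_strong))
```

    In real MEG analyses the subtraction would be applied to matched task and control epochs (or their averages) rather than to simulated continuous data, but the covariance-level effect is the same.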

  8. Localization of incipient tip vortex cavitation using ray based matched field inversion method

    NASA Astrophysics Data System (ADS)

    Kim, Dongho; Seong, Woojae; Choo, Youngmin; Lee, Jeunghoon

    2015-10-01

    Cavitation of marine propellers is one of the main contributors to broadband radiated ship noise. In this research, an algorithm for localizing incipient vortex cavitation is suggested. Incipient cavitation is modeled as a monopole-type source, and a matched-field inversion method is applied to find the source position by comparing the spatial correlation between measured and replicated pressure fields at the receiver array. The accuracy of source localization is improved by a broadband matched-field inversion technique that enhances correlation by incoherently averaging the correlations of individual frequencies. The suggested localization algorithm is verified with a known virtual source and through a model test conducted in the Samsung ship model basin cavitation tunnel. It is found that the suggested algorithm enables efficient localization of incipient tip vortex cavitation using a few pressure measurements on the outer hull above the propeller, and that it is practically applicable to the model-scale experiments typically performed in a cavitation tunnel at the early design stage.
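
    A minimal sketch of broadband matched-field localization in the spirit described above (monopole replicas, incoherent frequency averaging). The array geometry, frequencies, source position, and noise level are illustrative assumptions, not the paper's experimental setup:

```python
import numpy as np

rng = np.random.default_rng(1)
c = 1500.0                                  # sound speed in water, m/s
hydrophones = np.array([[0.0, 0.0], [0.5, 0.0], [1.0, 0.0], [1.5, 0.0]])
true_src = np.array([0.7, 2.0])             # hypothetical source position, m
freqs = [2000.0, 3000.0, 4000.0, 5000.0]    # broadband frequency bins, Hz

def replica(src, f):
    """Normalized monopole pressure field at the array for one frequency."""
    r = np.linalg.norm(hydrophones - src, axis=1)
    p = np.exp(-2j * np.pi * f * r / c) / r
    return p / np.linalg.norm(p)

# Simulated "measured" field: true replicas plus a little complex noise.
measured = {f: replica(true_src, f)
            + 0.02 * (rng.standard_normal(4) + 1j * rng.standard_normal(4))
            for f in freqs}

def broadband_bartlett(src):
    """Incoherent average over frequency of the Bartlett correlations."""
    corrs = []
    for f in freqs:
        m = measured[f] / np.linalg.norm(measured[f])
        corrs.append(abs(np.vdot(replica(src, f), m)) ** 2)
    return np.mean(corrs)

# Grid search for the position maximizing the averaged correlation.
candidates = [(x, y) for x in np.arange(0.0, 1.6, 0.1)
              for y in np.arange(0.5, 3.05, 0.1)]
best = max(candidates, key=lambda s: broadband_bartlett(np.array(s)))
print(float(best[0]), float(best[1]))
```

    Averaging the correlations incoherently across frequencies suppresses the sidelobes that any single frequency would exhibit, which is what sharpens the peak at the true position.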

  9. Early Growth of Black Walnut Trees From Twenty Seed Sources

    Treesearch

    Calvin F. Bey; John R. Toliver; Paul L. Roth

    1971-01-01

    Early results of a black walnut seed source study conducted in southern Illinois suggest that seed should be collected from local or south-of-local areas. Trees from southern sources grew faster and longer than trees from northern sources. Trees from southern sources flushed slightly earlier and held their leaves longer than trees from northern sources. For the...

  10. Feasibility of Equivalent Dipole Models for Electroencephalogram-Based Brain Computer Interfaces.

    PubMed

    Schimpf, Paul H

    2017-09-15

    This article examines the localization errors of equivalent dipolar sources inverted from the surface electroencephalogram in order to determine the feasibility of using their location as classification parameters for non-invasive brain computer interfaces. Inverse localization errors are examined for two head models: a model represented by four concentric spheres and a realistic model based on medical imagery. It is shown that the spherical model results in localization ambiguity such that a number of dipolar sources, with different azimuths and varying orientations, provide a near match to the electroencephalogram of the best equivalent source. No such ambiguity exists for the elevation of inverted sources, indicating that for spherical head models, only the elevation of inverted sources (and not the azimuth) can be expected to provide meaningful classification parameters for brain-computer interfaces. In a realistic head model, all three parameters of the inverted source location are found to be reliable, providing a more robust set of parameters. In both cases, the residual error hypersurfaces demonstrate local minima, indicating that a search for the best-matching sources should be global. Source localization error vs. signal-to-noise ratio is also demonstrated for both head models.
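
    The article's point that the residual hypersurface has local minima, so a search for the best-matching source should be global, can be sketched on a one-dimensional toy residual (purely illustrative, not the article's EEG misfit function):

```python
import numpy as np

# Toy one-dimensional stand-in for the residual hypersurface: two minima,
# with the global one near p = -1 and a local trap near p = +1.
def residual(p):
    return (p ** 2 - 1.0) ** 2 + 0.3 * p

def descend(p, lr=0.01, steps=2000):
    """Plain gradient descent using the analytic derivative."""
    for _ in range(steps):
        p -= lr * (4.0 * p * (p ** 2 - 1.0) + 0.3)
    return p

trapped = descend(0.9)                     # purely local search, poor start
grid = np.linspace(-2.0, 2.0, 401)
start = grid[np.argmin(residual(grid))]    # coarse global scan first
best = descend(start)

print(residual(best) < residual(trapped))
```

    A coarse global scan followed by local refinement escapes the trap that a single local descent falls into; in the dipole-fitting setting the scan would run over candidate source locations rather than a scalar parameter.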

  11. Hybridization of decomposition and local search for multiobjective optimization.

    PubMed

    Ke, Liangjun; Zhang, Qingfu; Battiti, Roberto

    2014-10-01

    Combining ideas from evolutionary algorithms, decomposition approaches, and Pareto local search, this paper suggests a simple yet efficient memetic algorithm for combinatorial multiobjective optimization problems: memetic algorithm based on decomposition (MOMAD). It decomposes a combinatorial multiobjective problem into a number of single objective optimization problems using an aggregation method. MOMAD evolves three populations: 1) population P(L) for recording the current solution to each subproblem; 2) population P(P) for storing starting solutions for Pareto local search; and 3) an external population P(E) for maintaining all the nondominated solutions found so far during the search. A problem-specific single objective heuristic can be applied to these subproblems to initialize the three populations. At each generation, a Pareto local search method is first applied to search a neighborhood of each solution in P(P) to update P(L) and P(E). Then a single objective local search is applied to each perturbed solution in P(L) for improving P(L) and P(E), and reinitializing P(P). The procedure is repeated until a stopping condition is met. MOMAD provides a generic hybrid multiobjective algorithmic framework in which problem specific knowledge, well developed single objective local search and heuristics and Pareto local search methods can be hybridized. It is a population based iterative method and thus an anytime algorithm. Extensive experiments have been conducted in this paper to study MOMAD and compare it with some other state-of-the-art algorithms on the multiobjective traveling salesman problem and the multiobjective knapsack problem. The experimental results show that our proposed algorithm outperforms or performs similarly to the best so far heuristics on these two problems.
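
    A compact sketch of the decomposition-plus-local-search core of MOMAD on the toy biobjective LOTZ benchmark (maximize leading ones and trailing zeros of a bit-string). For brevity this omits the Pareto local search population P(P) and the perturbation step, so it illustrates the P(L)/P(E) bookkeeping rather than being a faithful MOMAD implementation:

```python
import random

random.seed(2)
N = 8                                  # bit-string length
W = [i / 10 for i in range(11)]        # weight vectors, one subproblem each

def objs(x):
    """LOTZ benchmark: maximize (leading ones, trailing zeros)."""
    lo = next((i for i, b in enumerate(x) if b == 0), N)
    tz = next((i for i, b in enumerate(reversed(x)) if b == 1), N)
    return (lo, tz)

def scalar(x, w):
    f1, f2 = objs(x)
    return w * f1 + (1 - w) * f2       # weighted-sum aggregation

def dominates(a, b):
    return all(ai >= bi for ai, bi in zip(a, b)) and a != b

def update_archive(E, x):
    """Keep E (the external population P(E)) mutually nondominated."""
    fx = objs(x)
    if any(dominates(objs(e), fx) for e in E):
        return
    E[:] = [e for e in E if not dominates(fx, objs(e))] + [x]

def neighbors(x):
    for i in range(N):
        y = list(x)
        y[i] ^= 1
        yield tuple(y)

# P(L): one current solution per subproblem; P(E): nondominated archive.
PL = [tuple(random.randint(0, 1) for _ in range(N)) for _ in W]
PE = []
for _ in range(30):
    for k, w in enumerate(W):
        improved = True                # first-improvement local search
        while improved:
            improved = False
            for y in neighbors(PL[k]):
                if scalar(y, w) > scalar(PL[k], w):
                    PL[k], improved = y, True
                    break
        update_archive(PE, PL[k])

front = sorted(set(objs(e) for e in PE))
print(front)
```

    The extreme weights drive their subproblems to the two corner points of the LOTZ front, and the archive keeps whatever nondominated points the other subproblems reach.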

  12. Increase Economic Valuation of Marine Ecotourism Spots In Small Islands

    NASA Astrophysics Data System (ADS)

    Rahakbauw, Siska D.; Teniwut, Wellem A.; Renjaan, Meiskyana R.; Hungan, Marselus

    2017-10-01

    Ecotourism is one of the fastest-growing sectors, especially in developing countries, as a source of revenue. Sustainable development of ecotourism requires a broad and comprehensive effort from central and local government; perfect examples in that regard in Indonesia are Bali and Lombok. Other areas of Indonesia, such as the Kei Islands, which are split between two administrative governments, face a major problem in building sustainable nature-based tourism because their distance from the country's major cities makes travel costs high. This situation makes the role of the local community, as the backbone of the growth and development of nature-based tourism, critical. Using structural equation modeling (SEM), we constructed a model to enhance local community perception of the economic valuation of ecotourism spots in the area. Results showed that perceived quality, as a mediator driven by the intensity of appearances on national television and the internet, could increase community attachment and thereby the willingness to pay (WTP) of the local community for ecotourism in the Kei Islands. The results also indicated that the WTP value for the local community was 10.81 per trip, with an average of one to four trips per month.

  13. Biocultural approaches to well-being and sustainability indicators across scales.

    PubMed

    Sterling, Eleanor J; Filardi, Christopher; Toomey, Anne; Sigouin, Amanda; Betley, Erin; Gazit, Nadav; Newell, Jennifer; Albert, Simon; Alvira, Diana; Bergamini, Nadia; Blair, Mary; Boseto, David; Burrows, Kate; Bynum, Nora; Caillon, Sophie; Caselle, Jennifer E; Claudet, Joachim; Cullman, Georgina; Dacks, Rachel; Eyzaguirre, Pablo B; Gray, Steven; Herrera, James; Kenilorea, Peter; Kinney, Kealohanuiopuna; Kurashima, Natalie; Macey, Suzanne; Malone, Cynthia; Mauli, Senoveva; McCarter, Joe; McMillen, Heather; Pascua, Pua'ala; Pikacha, Patrick; Porzecanski, Ana L; de Robert, Pascale; Salpeteur, Matthieu; Sirikolo, Myknee; Stege, Mark H; Stege, Kristina; Ticktin, Tamara; Vave, Ron; Wali, Alaka; West, Paige; Winter, Kawika B; Jupiter, Stacy D

    2017-12-01

    Monitoring and evaluation are central to ensuring that innovative, multi-scale, and interdisciplinary approaches to sustainability are effective. The development of relevant indicators for local sustainable management outcomes, and the ability to link these to broader national and international policy targets, are key challenges for resource managers, policymakers, and scientists. Sets of indicators that capture both ecological and social-cultural factors, and the feedbacks between them, can underpin cross-scale linkages that help bridge local and global scale initiatives to increase resilience of both humans and ecosystems. Here we argue that biocultural approaches, in combination with methods for synthesizing across evidence from multiple sources, are critical to developing metrics that facilitate linkages across scales and dimensions. Biocultural approaches explicitly start with and build on local cultural perspectives - encompassing values, knowledges, and needs - and recognize feedbacks between ecosystems and human well-being. Adoption of these approaches can encourage exchange between local and global actors, and facilitate identification of crucial problems and solutions that are missing from many regional and international framings of sustainability. Resource managers, scientists, and policymakers need to be thoughtful about not only what kinds of indicators are measured, but also how indicators are designed, implemented, measured, and ultimately combined to evaluate resource use and well-being. We conclude by providing suggestions for translating between local and global indicator efforts.

  14. Sound Source Localization and Speech Understanding in Complex Listening Environments by Single-sided Deaf Listeners After Cochlear Implantation.

    PubMed

    Zeitler, Daniel M; Dorman, Michael F; Natale, Sarah J; Loiselle, Louise; Yost, William A; Gifford, Rene H

    2015-09-01

    To assess improvements in sound source localization and speech understanding in complex listening environments after unilateral cochlear implantation for single-sided deafness (SSD). Nonrandomized, open, prospective case series. Tertiary referral center. Nine subjects with a unilateral cochlear implant (CI) for SSD (SSD-CI) were tested. Reference groups for the task of sound source localization included young (n = 45) and older (n = 12) normal-hearing (NH) subjects and 27 bilateral CI (BCI) subjects. Unilateral cochlear implantation. Sound source localization was tested with 13 loudspeakers in a 180-degree arc in front of the subject. Speech understanding was tested with the subject seated in an 8-loudspeaker sound system arrayed in a 360-degree pattern. Directionally appropriate noise, originally recorded in a restaurant, was played from each loudspeaker. Speech understanding in noise was tested using the AzBio sentence test, and sound source localization was quantified using root mean square error. All CI subjects showed poorer-than-normal sound source localization. SSD-CI subjects showed a bimodal distribution of scores: six subjects had scores near the mean of those obtained by BCI subjects, whereas three had scores just outside the 95th percentile of NH listeners. Speech understanding improved significantly in the restaurant environment when the signal was presented to the side of the CI. Cochlear implantation for SSD can offer improved speech understanding in complex listening environments and improved sound source localization in both children and adults. On tasks of sound source localization, SSD-CI patients typically perform as well as BCI patients and, in some cases, achieve scores at the upper boundary of normal performance.

  15. Acoustic source localization in mixed field using spherical microphone arrays

    NASA Astrophysics Data System (ADS)

    Huang, Qinghua; Wang, Tong

    2014-12-01

    Spherical microphone arrays have recently been used for source localization in three-dimensional space. In this paper, a two-stage algorithm is developed to localize mixed far-field and near-field acoustic sources in a free-field environment. In the first stage, an array signal model is constructed in the spherical harmonics domain. The recurrent relation of spherical harmonics is independent of the far-field and near-field mode strengths; it is therefore used to develop a spherical estimating-signal-parameters-via-rotational-invariance-technique (ESPRIT)-like approach to estimate the directions of arrival (DOAs) of both far-field and near-field sources. In the second stage, based on the estimated DOAs, a simple one-dimensional MUSIC spectrum is exploited to distinguish far-field from near-field sources and to estimate the ranges of the near-field sources. The proposed algorithm avoids multidimensional search and parameter pairing. Simulation results demonstrate good performance in localizing far-field, near-field, and mixed-field sources.
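
    The MUSIC spectrum idea used in the second stage can be illustrated with a standard narrowband uniform-linear-array example. This is an illustrative stand-in only: the paper works in the spherical harmonics domain with a spherical array, which is not reproduced here, and the geometry, DOAs, and noise level below are invented.

```python
import numpy as np

rng = np.random.default_rng(3)
M, d = 8, 0.5                     # sensors and spacing in wavelengths
true_deg = [-20.0, 35.0]          # hypothetical far-field DOAs
snapshots = 200

def steering(theta):
    return np.exp(2j * np.pi * d * np.arange(M) * np.sin(theta))

A = np.column_stack([steering(np.deg2rad(a)) for a in true_deg])
S = rng.standard_normal((2, snapshots)) + 1j * rng.standard_normal((2, snapshots))
X = A @ S + 0.1 * (rng.standard_normal((M, snapshots))
                   + 1j * rng.standard_normal((M, snapshots)))

R = X @ X.conj().T / snapshots               # sample covariance
_, eigvecs = np.linalg.eigh(R)               # eigenvalues ascending
En = eigvecs[:, : M - 2]                     # noise subspace (2 sources assumed)

grid = np.deg2rad(np.arange(-90.0, 90.5, 0.5))
spectrum = np.array([1.0 / np.linalg.norm(En.conj().T @ steering(t)) ** 2
                     for t in grid])

# Pick the two largest local maxima of the pseudo-spectrum.
is_peak = (spectrum[1:-1] > spectrum[:-2]) & (spectrum[1:-1] > spectrum[2:])
cand = np.where(is_peak)[0] + 1
top2 = cand[np.argsort(spectrum[cand])[-2:]]
est = sorted(float(np.rad2deg(grid[i])) for i in top2)
print(est)
```

    The pseudo-spectrum peaks where a steering vector is nearly orthogonal to the noise subspace; in the paper's second stage the scan runs over range for a fixed DOA rather than over angle.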

  16. On the Vertical Distribution of Local and Remote Sources of Water for Precipitation

    NASA Technical Reports Server (NTRS)

    Bosilovich, Michael G.

    2001-01-01

    The vertical distribution of local and remote sources of water for precipitation and total column water over the United States are evaluated in a general circulation model simulation. The Goddard Earth Observing System (GEOS) general circulation model (GCM) includes passive constituent tracers to determine the geographical sources of the water in the column. Results show that the local percentage of precipitable water and local percentage of precipitation can be very different. The transport of water vapor from remote oceanic sources at mid and upper levels is important to the total water in the column over the central United States, while the access of locally evaporated water in convective precipitation processes is important to the local precipitation ratio. This result resembles the conceptual formulation of the convective parameterization. However, the formulations of simple models of precipitation recycling include the assumption that the ratio of the local water in the column is equal to the ratio of the local precipitation. The present results demonstrate the uncertainty in that assumption, as locally evaporated water is more concentrated near the surface.

  17. An evaluation of methods for estimating the number of local optima in combinatorial optimization problems.

    PubMed

    Hernando, Leticia; Mendiburu, Alexander; Lozano, Jose A

    2013-01-01

    The solution of many combinatorial optimization problems is carried out by metaheuristics, which generally make use of local search algorithms. These algorithms use some kind of neighborhood structure over the search space. The performance of the algorithms strongly depends on the properties that the neighborhood imposes on the search space. One of these properties is the number of local optima. Given an instance of a combinatorial optimization problem and a neighborhood, the estimation of the number of local optima can help not only to measure the complexity of the instance, but also to choose the most convenient neighborhood to solve it. In this paper we review and evaluate several methods to estimate the number of local optima in combinatorial optimization problems. The methods reviewed not only come from the combinatorial optimization literature, but also from the statistical literature. A thorough evaluation in synthetic as well as real problems is given. We conclude by providing recommendations of methods for several scenarios.
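
    One family of estimators in this literature samples local optima via repeated hill climbs and then applies a species-richness estimator to the observed counts. A toy sketch using a Chao1-style lower bound on a small random landscape; the landscape, restart budget, and estimator choice are illustrative assumptions, and the bias from unequal basin sizes is ignored:

```python
import random
from itertools import product
from collections import Counter

N = 10
random.seed(4)

def f(x):
    """Toy rugged landscape: an i.i.d. random value per bit-string."""
    return random.Random(hash(x)).random()

def flip(x, i):
    return x[:i] + (1 - x[i],) + x[i + 1:]

def hill_climb(x):
    """Steepest-ascent climb; returns the local optimum reached."""
    while True:
        best = max((flip(x, i) for i in range(N)), key=f)
        if f(best) <= f(x):
            return x
        x = best

found = Counter(hill_climb(tuple(random.randint(0, 1) for _ in range(N)))
                for _ in range(400))

# Chao1-style estimate from singleton/doubleton counts of observed optima.
k_obs = len(found)
f1 = sum(1 for c in found.values() if c == 1)
f2 = sum(1 for c in found.values() if c == 2)
estimate = k_obs + (f1 * f1) / (2 * f2) if f2 else k_obs
print(k_obs, round(estimate))

# The landscape is small enough to count the local optima exactly.
exact = sum(1 for x in map(tuple, product((0, 1), repeat=N))
            if all(f(x) >= f(flip(x, i)) for i in range(N)))
print(exact)
```

    Because hill climbs reach each optimum with probability proportional to its basin size, estimators of this kind are lower bounds in practice; the methods reviewed in the paper address exactly this kind of sampling bias.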

  18. Electromagnetic system for detection and localization of miners caught in mine accidents

    NASA Astrophysics Data System (ADS)

    Pronenko, Vira; Dudkin, Fedir

    2016-12-01

    The profession of a miner is one of the most dangerous in the world. Among the main causes of fatalities in underground coal mines are the delayed alert of an accident and the lack of information regarding the actual location of the miners after the accident. In an emergency situation (failure or destruction of underground infrastructure), the search for personnel behind and beneath blockages needs to be performed urgently. However, none of the standard technologies - radio-frequency identification (RFID), Digital Enhanced Cordless Telecommunications (DECT), Wi-Fi, emitting cables - which use stationary technical devices in mines, provide information about the miners' location with the necessary precision. The only technology that is able to provide guaranteed delivery of messages to mine personnel, regardless of their location and under any destruction in the mine, is low-frequency radio technology, which can operate through the thickness of rocks even when they are wet. The proposed new system for miner localization is based on solving the inverse problem of determining the magnetic field source coordinates from magnetic field measurements. This approach measures the magnetic field radiated by the miner's responder beacon with two fixed, spatially separated three-component magnetic field receivers and solves the inverse problem for the source position. As a result, a working model of the system for miner's beacon search and localization (MILES - MIner's Location Emergency System) was developed and successfully tested. This paper presents the most important aspects of this development and the results of experimental tests.
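
    The inverse problem described above, recovering beacon coordinates from two fixed three-component magnetic receivers, can be sketched with a point-dipole field model and a coarse grid search. The geometry, beacon moment, and search volume below are invented for illustration; a real system would also have to estimate the moment and would refine the coarse solution iteratively against noisy data.

```python
import numpy as np

K = 1e-7                                  # mu_0 / (4*pi), SI units

def dipole_field(src, m, obs):
    """Quasi-static field of a point magnetic dipole at position src."""
    r = obs - src
    d = np.linalg.norm(r)
    rhat = r / d
    return K * (3.0 * rhat * (rhat @ m) - m) / d ** 3

# Two fixed three-component receivers, 50 m apart on the surface.
receivers = [np.array([0.0, 0.0, 0.0]), np.array([50.0, 0.0, 0.0])]
m = np.array([0.0, 0.0, 30.0])            # assumed beacon moment, A*m^2
true_pos = np.array([18.0, 12.0, -20.0])  # beacon 20 m below the surface

measured = [dipole_field(true_pos, m, rx) for rx in receivers]

def misfit(src):
    """Sum of squared field residuals over both receivers."""
    if any(np.linalg.norm(src - rx) < 1.0 for rx in receivers):
        return np.inf                     # exclude receiver positions
    return sum(np.linalg.norm(dipole_field(src, m, rx) - b) ** 2
               for rx, b in zip(receivers, measured))

# Coarse grid search over the search volume (2 m spacing).
xs = np.arange(0.0, 31.0, 2.0)
zs = np.arange(-30.0, 1.0, 2.0)
best = min(((x, y, z) for x in xs for y in xs for z in zs),
           key=lambda s: misfit(np.array(s)))
best = tuple(float(v) for v in best)
print(best)
```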

  19. Complete Sets of Radiating and Nonradiating Parts of a Source and Their Fields with Applications in Inverse Scattering Limited-Angle Problems

    PubMed Central

    Louis, A. K.

    2006-01-01

    Many algorithms applied in inverse scattering problems use source-field systems instead of the direct computation of the unknown scatterer. It is well known that the resulting source problem does not have a unique solution, since certain parts of the source totally vanish outside of the reconstruction area. This paper provides for the two-dimensional case special sets of functions, which include all radiating and all nonradiating parts of the source. These sets are used to solve an acoustic inverse problem in two steps. The problem under discussion consists of determining an inhomogeneous obstacle supported in a part of a disc, from data, known for a subset of a two-dimensional circle. In a first step, the radiating parts are computed by solving a linear problem. The second step is nonlinear and consists of determining the nonradiating parts. PMID:23165060

  20. The Baltimore Youth Ammunition Initiative: A Model Application of Local Public Health Authority in Preventing Gun Violence

    PubMed Central

    Lewin, Nancy L.; Vernick, Jon S.; Beilenson, Peter L.; Mair, Julie S.; Lindamood, Melisa M.; Teret, Stephen P.; Webster, Daniel W.

    2005-01-01

    In 2002, the Baltimore City Health Department, in collaboration with the Baltimore Police Department and the Johns Hopkins Center for Gun Policy and Research, launched the Youth Ammunition Initiative. The initiative addressed Baltimore’s problem of youth gun violence by targeting illegal firearm ammunition sales to the city’s young people. The initiative included undercover “sting” investigations of local businesses and issuance of health department violation and abatement notices. Intermediate results included the passage of 2 Baltimore city council ordinances regulating ammunition sales and reducing the number of outlets eligible to sell ammunition. Although it is too early to assess effects on violent crime, the intervention could theoretically reduce youth violence by interrupting one source of ammunition to youths. More important, the initiative can serve as a policy model for health commissioners seeking to become more active in gun violence prevention efforts. PMID:15855448

  2. Navajo coal and air quality in Shiprock, New Mexico

    USGS Publications Warehouse

    Bunnell, Joseph E.; Garcia, Linda V.

    2006-01-01

    Among the Navajo people, high levels of respiratory disease, such as asthma, exist in a population with low rates of cigarette smoking. Air quality outdoors and indoors affects respiratory health. Many Navajo Nation residents burn locally mined coal in their homes for heat, as coal is the most economical energy source. The U.S. Geological Survey and Diné College, in cooperation with the Navajo Division of Health, are conducting a study in the Shiprock, New Mexico, area to determine if indoor use of this coal might be contributing to some of the respiratory health problems experienced by the residents. Researchers in this study will (1) examine respiratory health data, (2) identify stove type and use, (3) analyze samples of coal that are used locally, and (4) measure and characterize air quality inside selected homes. This Fact Sheet summarizes the interim results of the study in both English and Navajo.

  3. Study on Data Clustering and Intelligent Decision Algorithm of Indoor Localization

    NASA Astrophysics Data System (ADS)

    Liu, Zexi

    2018-01-01

    Indoor positioning technology gives human beings the ability of positional perception in architectural space, but it suffers from the shortcomings of single-network coverage and redundancy in location data. This article therefore puts forward research on data clustering and intelligent decision-making for indoor localization: it designs the basic ideas of multi-source indoor positioning technology and analyzes fingerprint localization algorithms based on distance measurement and on the integration of position and orientation from inertial devices. By optimizing the clustering of massive indoor location data, data normalization pretreatment, multi-dimensional controllable clustering centers, and multi-factor clustering are realized, and the redundancy of the locating data is reduced. In addition, path planning based on neural network inference and decision-making is proposed, with a sparse-data input layer, a dynamic feedback hidden layer, and an output layer; the low-dimensional results improve intelligent navigation path planning.

  4. Rapid tsunami models and earthquake source parameters: Far-field and local applications

    USGS Publications Warehouse

    Geist, E.L.

    2005-01-01

    Rapid tsunami models have recently been developed to forecast far-field tsunami amplitudes from initial earthquake information (magnitude and hypocenter). Earthquake source parameters that directly affect tsunami generation as used in rapid tsunami models are examined, with particular attention to local versus far-field application of those models. First, validity of the assumption that the focal mechanism and type of faulting for tsunamigenic earthquakes is similar in a given region can be evaluated by measuring the seismic consistency of past events. Second, the assumption that slip occurs uniformly over an area of rupture will most often underestimate the amplitude and leading-wave steepness of the local tsunami. Third, sometimes large magnitude earthquakes will exhibit a high degree of spatial heterogeneity such that tsunami sources will be composed of distinct sub-events that can cause constructive and destructive interference in the wavefield away from the source. Using a stochastic source model, it is demonstrated that local tsunami amplitudes vary by as much as a factor of two or more, depending on the local bathymetry. If other earthquake source parameters such as focal depth or shear modulus are varied in addition to the slip distribution patterns, even greater uncertainty in local tsunami amplitude is expected for earthquakes of similar magnitude. Because of the short amount of time available to issue local warnings and because of the high degree of uncertainty associated with local, model-based forecasts as suggested by this study, direct wave height observations and a strong public education and preparedness program are critical for those regions near suspected tsunami sources.

  5. Joint Inversion of Earthquake Source Parameters with local and teleseismic body waves

    NASA Astrophysics Data System (ADS)

    Chen, W.; Ni, S.; Wang, Z.

    2011-12-01

    In the classical source parameter inversion algorithm of CAP (Cut and Paste method, by Zhao and Helmberger), waveform data at near distances (typically less than 500 km) are partitioned into Pnl and surface waves to account for uncertainties in the crustal models and the different amplitude weights of body and surface waves. The classical CAP algorithm has proven effective for resolving source parameters (focal mechanism, depth and moment) for earthquakes well recorded on relatively dense seismic networks. However, for regions covered by sparse networks it is challenging to achieve precise source parameters. In this case, a moderate earthquake of ~M6 is usually recorded on only one or two local stations with epicentral distances less than 500 km. Fortunately, an earthquake of ~M6 can be well recorded on global seismic networks. Since the ray paths for teleseismic and local body waves sample different portions of the focal sphere, the combination of teleseismic and local body wave data helps constrain source parameters better. Here we present a new CAP method (CAPjoint), which exploits both teleseismic body waveforms (P and SH waves) and local waveforms (Pnl, Rayleigh and Love waves) to determine source parameters. For an earthquake in Nevada that is well recorded by a dense local network (USArray stations), we compare the results from CAPjoint with those from the traditional CAP method involving only local waveforms, and use bootstrapping statistics to show that the results derived by CAPjoint are stable and reliable. Even with only one local station included in the joint inversion, the accuracy of source parameters such as moment and strike is much improved.

  6. Fusion-based multi-target tracking and localization for intelligent surveillance systems

    NASA Astrophysics Data System (ADS)

    Rababaah, Haroun; Shirkhodaie, Amir

    2008-04-01

    In this paper, we present two approaches addressing visual target tracking and localization in complex urban environments: fusion-based multi-target visual tracking, and multi-target localization via camera calibration. For multi-target tracking, the data fusion concepts of hypothesis generation/evaluation/selection, target-to-target registration, and association are employed. An association matrix is implemented using RGB histograms for associated tracking of multiple targets of interest. Motion segmentation of targets of interest (TOI) from the background was achieved by a Gaussian mixture model; foreground segmentation, on the other hand, was achieved by the connected components analysis (CCA) technique. The tracking of individual targets was estimated by fusing two sources of information: the centroid with spatial gating, and the RGB histogram association matrix. The localization problem is addressed through an effective camera calibration technique using edge modeling for grid mapping (EMGM). A two-stage image-pixel-to-world-coordinates mapping technique is introduced that performs coarse and fine location estimation of moving TOIs. In coarse estimation, an approximate neighborhood of the target position is estimated based on a nearest-4-neighbor method; in fine estimation, Euclidean interpolation is used to localize the position within the estimated four neighbors. Both techniques were tested and showed reliable results for tracking and localization of targets of interest in complex urban environments.
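
    The RGB-histogram association idea can be sketched as follows. The patch colors and sizes are invented, the similarity score used here is a Bhattacharyya coefficient (one common choice, not necessarily the paper's), and a real tracker would fuse this score with the centroid/spatial-gating term:

```python
import numpy as np

rng = np.random.default_rng(6)

def rgb_hist(patch, bins=8):
    """Concatenated per-channel histogram, normalized to unit sum."""
    h = np.concatenate([np.histogram(patch[..., ch], bins=bins,
                                     range=(0, 256))[0] for ch in range(3)])
    h = h.astype(float)
    return h / h.sum()

def make_patch(mean_rgb):
    # Hypothetical 16x16 image patch scattered around a given mean color.
    return rng.normal(mean_rgb, 20.0, size=(16, 16, 3)).clip(0, 255)

tracks = [rgb_hist(make_patch([200, 60, 60])),   # track 0: reddish target
          rgb_hist(make_patch([60, 60, 200]))]   # track 1: bluish target
detection = rgb_hist(make_patch([200, 60, 60]))  # new reddish detection

# One row of the association matrix: Bhattacharyya coefficient between the
# detection's histogram and each track's histogram (higher = better match).
assoc = np.array([np.sum(np.sqrt(trk * detection)) for trk in tracks])
print(int(np.argmax(assoc)))
```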

  7. Simulating variable source problems via post processing of individual particle tallies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bleuel, D.L.; Donahue, R.J.; Ludewigt, B.A.

    2000-10-20

    Monte Carlo is an extremely powerful method of simulating complex, three dimensional environments without excessive problem simplification. However, it is often time consuming to simulate models in which the source can be highly varied. Similarly difficult are optimization studies involving sources in which many input parameters are variable, such as particle energy, angle, and spatial distribution. Such studies are often approached using brute force methods or intelligent guesswork. One field in which these problems are often encountered is accelerator-driven Boron Neutron Capture Therapy (BNCT) for the treatment of cancers. Solving the reverse problem of determining the best neutron source for optimal BNCT treatment can be accomplished by separating the time-consuming particle-tracking process of a full Monte Carlo simulation from the calculation of the source weighting factors which is typically performed at the beginning of a Monte Carlo simulation. By post-processing these weighting factors on a recorded file of individual particle tally information, the effect of changing source variables can be realized in a matter of seconds, instead of requiring hours or days for additional complete simulations. By intelligent source biasing, any number of different source distributions can be calculated quickly from a single Monte Carlo simulation. The source description can be treated as variable and the effect of changing multiple interdependent source variables on the problem's solution can be determined. Though the focus of this study is on BNCT applications, this procedure may be applicable to any problem that involves a variable source.
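
    The post-processing idea, recomputing source weighting factors on recorded per-particle tallies instead of re-running transport, amounts to importance reweighting. A minimal sketch with an invented one-dimensional "physics": the energy spectrum, score model, and candidate pdfs below are illustrative stand-ins, not BNCT data.

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical recorded tally file: one row per source particle, holding the
# sampled source energy and the tally score that particle produced.
n = 100_000
energies = rng.uniform(0.0, 10.0, n)          # sampled from a flat spectrum
scores = np.exp(-0.3 * energies) * rng.exponential(1.0, n)  # invented physics

def sampling_pdf(E):
    return np.full_like(E, 0.1)               # flat on [0, 10) MeV

def reweighted_tally(new_pdf):
    """Re-estimate the tally for a new source spectrum without re-running
    transport: weight each recorded score by new_pdf / sampling_pdf. The new
    spectrum must lie inside the support of the spectrum actually sampled."""
    w = new_pdf(energies) / sampling_pdf(energies)
    return float(np.mean(w * scores))

def soft(E):
    return np.where(E < 2.0, 0.5, 0.0)        # uniform on [0, 2) MeV

def hard(E):
    return np.where(E >= 8.0, 0.5, 0.0)       # uniform on [8, 10) MeV

# The soft spectrum yields a larger tally here because the invented score
# model decays with energy; each evaluation is nearly instantaneous.
print(reweighted_tally(soft) > reweighted_tally(hard))
```

    This is why, as the abstract notes, the effect of changing source variables can be realized in seconds: only the cheap weights change, while the expensive particle histories are reused.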

  8. Systematic study of target localization for bioluminescence tomography guided radiation therapy

    PubMed Central

    Yu, Jingjing; Zhang, Bin; Iordachita, Iulian I.; Reyes, Juvenal; Lu, Zhihao; Brock, Malcolm V.; Patterson, Michael S.; Wong, John W.

    2016-01-01

    Purpose: To overcome the limitation of CT/cone-beam CT (CBCT) in guiding radiation for soft tissue targets, the authors developed a spectrally resolved bioluminescence tomography (BLT) system for the small animal radiation research platform. The authors systematically assessed the performance of the BLT system in terms of target localization and the ability to resolve two neighboring sources in simulations, a tissue-mimicking phantom, and in vivo environments. Methods: Multispectral measurements acquired in a single projection were used for the BLT reconstruction. The incomplete variables truncated conjugate gradient algorithm with an iterative permissible-region shrinking strategy was employed as the optimization scheme to reconstruct source distributions. Simulation studies were conducted for single spherical sources with sizes from 0.5 to 3 mm radius at depths of 3–12 mm. The same configuration was also applied for the double-source simulations, with source separations varying from 3 to 9 mm. Experiments were performed in a standalone BLT/CBCT system. Two self-illuminated sources with 3 and 4.7 mm separations placed inside a tissue-mimicking phantom were chosen as the test cases. Live mice implanted with a single source at 6 and 9 mm depth, two sources at 3 and 5 mm separation at a depth of 5 mm, or three sources in the abdomen were also used to illustrate the localization capability of the BLT system for multiple targets in vivo. Results: For the simulation study, approximately 1 mm accuracy can be achieved in localizing the center of mass (CoM) for single-source and the grouped CoM for double-source cases. For the case of a 1.5 mm radius source, a common tumor size used in preclinical studies, their simulations show that for all the source separations considered, except for the 3 mm separation at 9 and 12 mm depth, the two neighboring sources can be resolved at depths from 3 to 12 mm. 
Phantom experiments illustrated that 2D bioluminescence imaging failed to distinguish the two sources, but BLT can provide 3D source localization with approximately 1 mm accuracy. The in vivo results are encouraging: 1 and 1.7 mm accuracy can be attained for the single-source case at 6 and 9 mm depth, respectively. For the two-source in vivo study, both sources can be distinguished at 3 and 5 mm separations, and approximately 1 mm localization accuracy can also be achieved. Conclusions: This study demonstrated that their multispectral BLT/CBCT system could potentially be applied to localize and resolve multiple sources over a wide range of source sizes, depths, and separations. The average accuracy of localizing the CoM for single sources and the grouped CoM for double sources is approximately 1 mm, except for deep-seated targets. The information provided in this study can be instructive in devising treatment margins for BLT-guided irradiation. These results also suggest that the 3D BLT system could guide radiation in situations with multiple targets, such as metastatic tumor models. PMID:27147371

  9. Tsunami Risk Management in Pacific Island Countries and Territories (PICTs): Some Issues, Challenges and Ways Forward

    NASA Astrophysics Data System (ADS)

    Dominey-Howes, Dale; Goff, James

    2013-09-01

    The Pacific is well known for producing tsunamis, and events such as the 2011 Tōhoku-oki, Japan disaster demonstrate the vulnerability of coastal communities. We review what is known about the current state of tsunami risk management for Pacific Island countries and territories (PICTs), identify the issues and challenges associated with effecting meaningful tsunami disaster risk reduction (DRR) efforts, and outline strategies and possible ways forward. Small island states are scattered across the vast Pacific region, and these states have to varying degrees been affected not only by large tsunamis originating in circum-Pacific subduction zones, but also by more regionally devastating events. Having outlined and described what is meant by the risk management process, the various problems associated with our current understanding of this process are examined. The poorly understood hazard related to local, regional and distant sources is investigated, and the dominant focus on seismic events at the expense of other tsunami source types is noted. We reflect on the challenges of undertaking numerical modelling from generation to inundation and specifically detail the problems as they relate to PICTs. This is followed by an exploration of the challenges associated with mapping exposure and estimating vulnerability in low-lying coastal areas. The latter part of the paper is devoted to exploring what mitigation of the tsunami risk can look like, and draws upon good-practice cases as exemplars of the actions that can be taken from the local to the regional level. Importantly, given the diversity of PICTs, no one approach will suit all places. The paper closes by making a series of recommendations to assist PICTs and the wider tsunami research community in thinking through improvements to their tsunami risk management processes and the research that can underpin these efforts.

  10. Calm Multi-Baryon Operators

    NASA Astrophysics Data System (ADS)

    Berkowitz, Evan; Nicholson, Amy; Chang, Chia Cheng; Rinaldi, Enrico; Clark, M. A.; Joó, Bálint; Kurth, Thorsten; Vranas, Pavlos; Walker-Loud, André

    2018-03-01

    There are many outstanding problems in nuclear physics which require input and guidance from lattice QCD calculations of few-baryon systems. However, these calculations suffer from an exponentially bad signal-to-noise problem which has prevented a controlled extrapolation to the physical point. The variational method has been applied very successfully to two-meson systems, allowing for the extraction of the two-meson states very early in Euclidean time through the use of improved single-hadron operators. The sheer numerical cost of using the same techniques in two-baryon systems has so far been prohibitive. We present an alternate strategy which offers some of the same advantages as the variational method while being significantly less numerically expensive. We first use the Matrix Prony method to form an optimal linear combination of single-baryon interpolating fields generated from the same source and different sink interpolating fields. Very early in Euclidean time this optimal linear combination is numerically free of excited-state contamination, so we coin it a calm baryon. This calm baryon operator is then used in the construction of the two-baryon correlation functions. To test this method, we perform calculations on the WM/JLab iso-clover gauge configurations at the SU(3) flavor symmetric point with mπ ≈ 800 MeV — the same configurations we have previously used for the calculation of two-nucleon correlation functions. We observe that the calm baryon significantly removes the excited-state contamination from the two-nucleon correlation function from as early a time as that at which the single-nucleon correlator is improved, provided non-local (displaced nucleon) sources are used. For the local two-nucleon correlation function (where both nucleons are created from the same space-time location) there is still improvement, but there is significant excited-state contamination in the region where the single calm baryon displays no excited-state contamination.

  11. FW-CADIS Method for Global and Semi-Global Variance Reduction of Monte Carlo Radiation Transport Calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wagner, John C; Peplow, Douglas E.; Mosher, Scott W

    2014-01-01

    This paper presents a new hybrid (Monte Carlo/deterministic) method for increasing the efficiency of Monte Carlo calculations of distributions, such as flux or dose rate distributions (e.g., mesh tallies), as well as responses at multiple localized detectors and spectra. This method, referred to as Forward-Weighted CADIS (FW-CADIS), is an extension of the Consistent Adjoint Driven Importance Sampling (CADIS) method, which has been used for more than a decade to very effectively improve the efficiency of Monte Carlo calculations of localized quantities, e.g., flux, dose, or reaction rate at a specific location. The basis of this method is the development of an importance function that represents the importance of particles to the objective of uniform Monte Carlo particle density in the desired tally regions. Implementation of this method utilizes the results from a forward deterministic calculation to develop a forward-weighted source for a deterministic adjoint calculation. The resulting adjoint function is then used to generate consistent space- and energy-dependent source biasing parameters and weight windows that are used in a forward Monte Carlo calculation to obtain more uniform statistical uncertainties in the desired tally regions. The FW-CADIS method has been implemented and demonstrated within the MAVRIC sequence of SCALE and the ADVANTG/MCNP framework. Application of the method to representative, real-world problems, including calculation of dose rate and energy dependent flux throughout the problem space, dose rates in specific areas, and energy spectra at multiple detectors, is presented and discussed. Results of the FW-CADIS method and other recently developed global variance reduction approaches are also compared, and the FW-CADIS method outperformed the other methods in all cases considered.
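    A minimal sketch of the weighting parameters involved, under strong simplifying assumptions (one energy group, a handful of cells, and a made-up adjoint solution): the FW-CADIS step places the inverse of the forward response in the adjoint source, and the resulting adjoint function sets CADIS-style weight-window centers. Real implementations are space- and energy-dependent and normalize differently.

    ```python
    def fw_adjoint_source(forward_flux):
        """FW-CADIS adjoint source: the inverse of the forward response per cell,
        so the adjoint weighting drives toward uniform relative uncertainty
        across the tally regions (one-group, per-cell caricature)."""
        return [1.0 / f for f in forward_flux]

    def weight_window_centers(adjoint_flux, source_cell=0):
        """CADIS-style weight-window centers from an adjoint function:
        w_i = phi_adj[source_cell] / phi_adj[i], so particles are split where
        importance is high and rouletted where it is low, normalized to unit
        weight at the source cell."""
        ref = adjoint_flux[source_cell]
        return [ref / phi for phi in adjoint_flux]

    phi_fwd = [8.0, 4.0, 2.0, 1.0]          # hypothetical forward flux per cell
    q_adj = fw_adjoint_source(phi_fwd)       # adjoint source, largest where flux is low
    phi_adj = [1.0, 0.5, 0.25, 0.125]        # hypothetical adjoint solution
    print(weight_window_centers(phi_adj))    # windows widen away from the source
    ```

    The structural point is the division of labor: a cheap deterministic adjoint solve produces the importance map; the Monte Carlo run merely consumes the resulting biasing parameters.
    
    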

  12. Staff - David L. LePain | Alaska Division of Geological & Geophysical

    Science.gov Websites

    LePain, D.L., et al., Fossil fuel and geothermal energy sources for local use in Alaska: Summary of available information, Alaska Division of Geological & Geophysical Surveys.

  13. Dynamic Spatial Hearing by Human and Robot Listeners

    NASA Astrophysics Data System (ADS)

    Zhong, Xuan

    This study consisted of several related projects on dynamic spatial hearing by both human and robot listeners. The first experiment investigated the maximum number of sound sources that human listeners could localize at the same time. Speech stimuli were presented simultaneously from different loudspeakers at multiple time intervals. The maximum number of perceived sound sources was close to four. The second experiment asked whether the amplitude modulation of multiple static sound sources could lead to the perception of auditory motion. On the horizontal and vertical planes, four independent noise sound sources with 60° spacing were amplitude modulated with consecutively larger phase delays. At lower modulation rates, motion could be perceived by human listeners in both cases. The third experiment asked whether several sources at static positions could serve as "acoustic landmarks" to improve the localization of other sources. Four continuous speech sound sources were placed on the horizontal plane with 90° spacing and served as the landmarks. The task was to localize a noise that was played for only three seconds while the listener was passively rotated in a chair in the middle of the loudspeaker array. The human listeners were better able to localize the sound sources with landmarks than without. The remaining experiments used an acoustic manikin in an attempt to fuse binaural recordings and motion data to localize sound sources. A dummy head with recording devices was mounted on top of a rotating chair and motion data were collected. The fourth experiment showed that an Extended Kalman Filter could be used to localize sound sources in a recursive manner. The fifth experiment demonstrated the use of a fitting method for separating multiple sound sources.
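    The recursive estimation in the fourth experiment can be illustrated with a deliberately simplified, linear special case of the filter: a static source, a scalar state (the source's world-frame azimuth), and known head rotation. All noise values, gains and data below are hypothetical; the actual study fused binaural recordings, which require a nonlinear measurement model and hence the full EKF.

    ```python
    def kalman_azimuth(bearings, head_angles, r=4.0, q=0.01):
        """Recursively estimate a static source's world-frame azimuth (degrees)
        from noisy head-relative bearings and known head rotation. This is a
        scalar, linear Kalman filter: predict (add process noise q), then
        correct with the world-frame measurement (variance r)."""
        x = bearings[0] + head_angles[0]      # initial state estimate
        p = 100.0                             # initial state variance (vague)
        for z, theta in zip(bearings[1:], head_angles[1:]):
            p += q                            # predict: source static, slight drift
            k = p / (p + r)                   # Kalman gain
            x += k * ((z + theta) - x)        # correct with world-frame bearing
            p *= (1 - k)
        return x

    # Source at 30 deg; head rotates; bearings include made-up noise.
    head = [0, 10, 20, 30, 40]
    bearings = [31, 19, 12, 1, -11]           # roughly 30 - head + noise
    print(round(kalman_azimuth(bearings, head), 1))  # settles near 30
    ```

    The design point is that each rotation of the head turns a single microphone pair into a synthetic array: the filter accumulates bearings taken from different orientations into one increasingly confident world-frame estimate.
    
    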

  14. Water resources management: Hydrologic characterization through hydrograph simulation may bias streamflow statistics

    NASA Astrophysics Data System (ADS)

    Farmer, W. H.; Kiang, J. E.

    2017-12-01

    The development, deployment and maintenance of water resources management infrastructure and practices rely on hydrologic characterization, which requires an understanding of local hydrology. With regard to streamflow, this understanding is typically quantified with statistics derived from long-term streamgage records. However, a fundamental problem is how to characterize local hydrology without the luxury of streamgage records, a problem that complicates water resources management at ungaged locations and for long-term future projections. This problem has typically been addressed through the development of point estimators, such as regression equations, to estimate particular statistics. Physically-based precipitation-runoff models, which are capable of producing simulated hydrographs, offer an alternative to point estimators. The advantage of simulated hydrographs is that they can be used to compute any number of streamflow statistics from a single source (the simulated hydrograph) rather than relying on a diverse set of point estimators. However, the use of simulated hydrographs introduces a degree of model uncertainty that is propagated through to estimated streamflow statistics and may have drastic effects on management decisions. We compare the accuracy and precision of streamflow statistics (e.g. the mean annual streamflow, the annual maximum streamflow exceeded in 10% of years, and the minimum seven-day average streamflow exceeded in 90% of years, among others) derived from point estimators (e.g. regressions, kriging, machine learning) to that of statistics derived from simulated hydrographs across the continental United States. Initial results suggest that the error introduced through hydrograph simulation may substantially bias the resulting hydrologic characterization.
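    The single-source advantage of a simulated hydrograph (every statistic is computed from the same series) can be sketched with the statistics named in the abstract. The data and function names are hypothetical; exceedance percentiles can be read from the returned sorted annual series using whatever plotting-position convention a study adopts.

    ```python
    import statistics

    def seven_day_min(daily):
        """Minimum 7-day average flow within one year of daily flows."""
        return min(statistics.mean(daily[i:i + 7]) for i in range(len(daily) - 6))

    def hydrograph_stats(years):
        """Streamflow statistics from one hydrograph (a list of years, each a
        list of daily flows): the single-source approach the abstract contrasts
        with per-statistic point estimators."""
        return {
            "mean_annual": statistics.mean(statistics.mean(y) for y in years),
            "annual_maxima": sorted((max(y) for y in years), reverse=True),
            "seven_day_minima": sorted(seven_day_min(y) for y in years),
        }

    # Hypothetical 3-year record of 14 daily flows each (m^3/s).
    years = [[5] * 7 + [9] * 7, [4] * 7 + [8] * 7, [3] * 7 + [7] * 7]
    s = hydrograph_stats(years)
    print(s["mean_annual"])        # overall mean annual streamflow
    print(s["annual_maxima"])      # series for the 10%-exceedance annual maximum
    print(s["seven_day_minima"])   # series for the 90%-exceedance 7-day minimum
    ```

    The caveat in the abstract applies directly: any bias in the simulated daily series propagates into every one of these derived statistics at once.
    
    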

  15. [Psychological and psychiatric problems in cancer patients: relationship to the localization of the disease].

    PubMed

    Moussas, G I; Papadopoulou, A G; Christodoulaki, A G; Karkanias, A P

    2012-01-01

    Cancer may be localized in a variety of areas in the human body. This localization is associated with significant issues concerning not only therapy and prognosis but also the psychological and psychiatric problems that the patient may be confronted with. The psychic impact on the patient is determined to a significant degree by the symbolism the affected organ carries. The symbolic significance of a sick body area triggers emotions and sets in motion self-defence mechanisms. In this way, patients deal with the new psychic reality that cancer creates. Therapeutic choices may include mutilating interventions, which cause disfigurement and major changes in body image that result in narcissistic injuries. The phenomenology of anxiety and depressive disorders is connected to the affected body area. The appearance of cancer, not only in sexual organs but also in other body areas, may disturb sexual function and therefore lead to sexual disorders. The head and neck, especially, are connected with vital functions. This area of the body has had a major impact on psychic reality since early life. Complicated psychic functions have developed in relation to organs of the head and neck. Therefore, localization of cancer in this area leads to particular psychological and psychiatric problems, since eating and breathing are harmed, verbal communication becomes difficult and body image alters. Also, the increased incidence of alcohol and nicotine abuse in these patients reflects special aspects of psychic structure and personality. Because of severe somatic symptoms and poor prognosis, lung cancer patients feel hopelessness and helplessness. Patients with gynaecological cancer are confronted with a disease that affects organs like the breast and the internal female sexual organs, which are associated with femininity, attractiveness and fertility. Dietary habits are often a source of guilt for patients who suffer from cancer of the gastrointestinal tract. 
Additionally, stomas, such as colostomy, affect body image and cause feelings of embarrassment with severe consequences for the patient's sense of wellbeing, his or her daily activities, interpersonal relationships and sexuality. Depressive symptoms often occur in prodromal stages of pancreatic cancer. Depression is a common diagnosis in patients with prostate cancer. Prostatectomy negatively affects a patient's self-esteem, because it might be experienced as a threat to his sexual life. Disfigurement is related to skin cancer because of both the cancer and the surgical procedures. Therefore, it is a challenge for modern psycho-oncology to identify those patients who are vulnerable to developing psychiatric symptoms, to diagnose anxiety and depression early, and to use psychotherapeutic interventions targeting individual psychological and psychiatric problems in relation to the localization of the disease in the human body.

  16. Long-term trends of ambient particulate matter emission source contributions and the accountability of control strategies in Hong Kong over 1998-2008

    NASA Astrophysics Data System (ADS)

    Yuan, Zibing; Yadav, Varun; Turner, Jay R.; Louie, Peter K. K.; Lau, Alexis Kai Hon

    2013-09-01

    Despite extensive emission control measures targeting motor vehicles and, to a lesser extent, other sources, annual-average PM10 mass concentrations in Hong Kong have remained relatively constant for the past several years, and for some air quality metrics, such as the frequency of poor visibility days, conditions have degraded. The underlying drivers for these long-term trends were examined by performing source apportionment on eleven years (1998-2008) of data for seven monitoring sites in the Hong Kong PM10 chemical speciation network. Nine factors were resolved using Positive Matrix Factorization. These factors were assigned to emission source categories that were classified as local (operationally defined as within the Hong Kong Special Administrative Region) or non-local based on temporal and spatial patterns in the source contribution estimates. This data-driven analysis provides strong evidence that local controls on motor vehicle emissions have been effective in reducing motor vehicle-related ambient PM10 burdens, with annual-average contributions at neighborhood- and larger-scale monitoring stations decreasing by ~6 μg m-3 over the eleven-year period. However, this improvement has been offset by an increase in annual-average contributions from non-local sources, especially secondary sulfate and nitrate, of ~8 μg m-3 over the same time period. As a result, non-local source contributions to urban-scale PM10 have increased from 58% in 1998 to 70% in 2008. Most of the motor vehicle-related decrease and non-local source driven increase occurred over the period 1998-2004, with more modest changes thereafter. Non-local contributions increased most dramatically for the secondary sulfate and secondary nitrate factors, and thus combustion-related control strategies, including but not limited to power plants, are needed for sources located in the Pearl River Delta and more distant regions to improve air quality conditions in Hong Kong. 
PMF-resolved source contribution estimates were also used to examine differential contributions of emission source categories during high PM episodes compared to study-average behavior. While contributions from all source categories increased to some extent on high PM days, the increases were disproportionately high for the non-local sources. Thus, controls on emission sources located outside the Hong Kong Special Administrative Region will be needed to effectively decrease the frequency and severity of high PM episodes.
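    Positive Matrix Factorization resolves a samples-by-species concentration matrix into nonnegative source contributions and source profiles. The sketch below uses plain multiplicative-update nonnegative matrix factorization as a stand-in: real PMF additionally weights each residual by its measurement uncertainty and uses a different solver, so this shows only the structural idea. All data are invented.

    ```python
    import random

    def matmul(A, B):
        return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
                for row in A]

    def nmf(X, k, iters=200, eps=1e-9):
        """Minimal multiplicative-update factorization X ~= G.F, in the spirit
        of PMF: G holds per-sample source contributions, F holds source
        profiles; all entries stay nonnegative by construction."""
        random.seed(1)
        n, m = len(X), len(X[0])
        G = [[random.random() + 0.1 for _ in range(k)] for _ in range(n)]
        F = [[random.random() + 0.1 for _ in range(m)] for _ in range(k)]
        for _ in range(iters):
            GT = list(map(list, zip(*G)))
            num, den = matmul(GT, X), matmul(matmul(GT, G), F)
            F = [[F[i][j] * num[i][j] / (den[i][j] + eps) for j in range(m)]
                 for i in range(k)]
            FT = list(map(list, zip(*F)))
            num, den = matmul(X, FT), matmul(G, matmul(F, FT))
            G = [[G[i][j] * num[i][j] / (den[i][j] + eps) for j in range(k)]
                 for i in range(n)]
        return G, F

    # Two made-up "sources" mixed into four samples of three species.
    X = [[2, 1, 0], [4, 2, 0], [0, 1, 3], [0, 2, 6]]
    G, F = nmf(X, k=2)
    R = matmul(G, F)
    err = sum((X[i][j] - R[i][j]) ** 2 for i in range(4) for j in range(3))
    print(err)  # squared reconstruction error after the updates
    ```

    Assigning the resolved factors to physical source categories (the local vs. non-local classification above) is a separate interpretive step done by inspecting the profiles in F and the time series in G.
    
    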

  17. Biomonitoring of pollen grains of a river bank suburban city, Konnagar, Calcutta, India, and its link and impact on local people.

    PubMed

    Ghosal, Kavita; Pandey, Naren; Bhattacharya, Swati Gupta

    2015-01-01

    Pollen grains released by plants are dispersed into the air and can become trapped in human nasal mucosa, causing immediate release of allergens triggering severe Type 1 hypersensitivity reactions in susceptible allergic patients. Recent epidemiologic data show that 11-12% of people in India suffer from such disorders. Hence, it is important to examine whether pollen grains have a role in causing respiratory problems, including allergy and asthma, in a subtropical suburban city. Meteorological data were collected for a period of two years, together with aerobiological sampling with a Burkard sampler. A pollen calendar was prepared for the city. A health survey and the hospitalization rate of local people for the above problems were documented, and statistical analysis was carried out between pollen counts and the data from these two sources. The Skin Prick Test and indirect ELISA were performed for the identification of allergenic pollen grains. Bio-monitoring results showed that a total of 36 species of pollen grains were present in the air of the study area, their presence being controlled by many important meteorological parameters, as shown by SPSS statistical analysis, and by their blooming periods. Statistical analysis showed that there is a high positive correlation of monthly pollen counts with the data from the survey and the hospital. Biochemical tests revealed the allergenic nature of pollen grains of many local species found in the sampler. Bio-monitoring, together with statistical and biochemical results, leaves no doubt about the role of pollen as a bio-pollutant. General knowledge about pollen allergy and the specific allergenic pollen grains of a particular locality could be a good step towards better health for the cosmopolitan suburban city.

  18. Umbilical cord care in Ethiopia and implications for behavioral change: a qualitative study.

    PubMed

    Amare, Yared

    2014-04-18

    Infections account for up to a half of neonatal deaths in low income countries. The umbilicus is a common source of infection in such settings. This qualitative study investigates practices and perspectives related to umbilical cord care in Ethiopia. In-depth interviews (IDIs) were conducted in a district in each of the four most populous regions in the country: Oromia, Amhara, Tigray and the Southern Nations, Nationalities and Peoples Region (SNNPR). In each district, one community was purposively selected; and in each study community, IDIs were conducted with 6 mothers, 4 grandmothers, 2 Traditional Birth Attendants (TBAs) and 2 Health Extension Workers (HEWs). The two main questions in the interview guide related to cord care were: How was the umbilical cord cut and tied? Was anything applied to the cord stump immediately after cutting/in the first 7 days? Why was it applied/not applied? The study elucidates local cord care practices and the rationale for these practices. The concepts underlying cord-tying practices were how to stem blood flow and facilitate delivery of the placenta. Substances were applied to the cord to moisturize it, facilitate its separation and promote healing. Locally recognized cord problems were delayed healing, bleeding or swelling. Few respondents reported familiarity with redness of the cord - a sign of infection. Grandmothers, TBAs and HEWs were influential regarding cord care. This study highlights the local rationale for cord practices, concerns about cord-related problems and recognition of signs of infection. Behavioral change messages aimed at improving cord care, including cleansing with CHX, should address these local perspectives. It is suggested that HEWs and health facility staff target mothers, grandmothers, TBAs and other community women with messages and counseling.

  19. Key aspects of coronal heating

    PubMed Central

    Klimchuk, James A.

    2015-01-01

    We highlight 10 key aspects of coronal heating that must be understood before we can consider the problem to be solved. (1) All coronal heating is impulsive. (2) The details of coronal heating matter. (3) The corona is filled with elemental magnetic strands. (4) The corona is densely populated with current sheets. (5) The strands must reconnect to prevent an infinite build-up of stress. (6) Nanoflares repeat with different frequencies. (7) What is the characteristic magnitude of energy release? (8) What causes the collective behaviour responsible for loops? (9) What are the onset conditions for energy release? (10) Chromospheric nanoflares are not a primary source of coronal plasma. Significant progress in solving the coronal heating problem will require coordination of approaches: observational studies, field-aligned hydrodynamic simulations, large-scale and localized three-dimensional magnetohydrodynamic simulations, and possibly also kinetic simulations. There is a unique value to each of these approaches, and the community must strive to coordinate better. PMID:25897094

  20. Evaluation of engineered foods for Closed Ecological Life Support System (CELSS)

    NASA Technical Reports Server (NTRS)

    Karel, M.

    1981-01-01

    A system of conversion of locally regenerated raw materials and of resupplied freeze-dried foods and ingredients into acceptable, safe and nutritious engineered foods is proposed. The first phase of the proposed research has the following objectives: (1) evaluation of feasibility of developing acceptable and reliable engineered foods from a limited selection of plants, supplemented by microbially produced nutrients and a minimum of dehydrated nutrient sources (especially those of animal origin); (2) evaluation of research tasks and specifications of research projects to adapt present technology and food science to expected space conditions (in particular, problems arising from unusual gravity conditions, problems of limited size and the isolation of the food production system, and the opportunities of space conditions are considered); (3) development of scenarios of agricultural production of plant and microbial systems, including the specifications of processing wastes to be recycled.

  1. Prediction of ground effects on aircraft noise

    NASA Technical Reports Server (NTRS)

    Pao, S. P.; Wenzel, A. R.; Oncley, P. B.

    1978-01-01

    A unified method is recommended for predicting ground effects on noise. This method may be used in flyover noise predictions and in correcting static test-stand data to free-field conditions. The recommendation is based on a review of recent progress in the theory of ground effects and of the experimental evidence which supports this theory. It is shown that a surface wave must sometimes be included in the prediction method. Prediction equations are collected conveniently in a single section of the paper. Methods of measuring ground impedance and the resulting ground-impedance data are also reviewed, because the recommended method is based on a locally reactive impedance boundary model. Current practices for estimating ground effects are reviewed, and consideration is given to practical problems in applying the recommended method. These problems include finite frequency-band filters, finite source dimensions, wind and temperature gradients, and signal incoherence.

  2. A steady and oscillatory kernel function method for interfering surfaces in subsonic, transonic and supersonic flow. [prediction analysis techniques for airfoils

    NASA Technical Reports Server (NTRS)

    Cunningham, A. M., Jr.

    1976-01-01

    The theory, results and user instructions for an aerodynamic computer program are presented. The theory is based on linear lifting surface theory, and the method is the kernel function. The program is applicable to multiple interfering surfaces which may be coplanar or noncoplanar. Local linearization was used to treat nonuniform flow problems without shocks. For cases with imbedded shocks, the appropriate boundary conditions were added to account for the flow discontinuities. The data describing nonuniform flow fields must be input from some other source such as an experiment or a finite difference solution. The results are in the form of small linear perturbations about nonlinear flow fields. The method was applied to a wide variety of problems for which it is demonstrated to be significantly superior to the uniform flow method. Program user instructions are given for easy access.

  3. Modelling of deformation process for the layer of elastoviscoplastic media under surface action of periodic force of arbitrary type

    NASA Astrophysics Data System (ADS)

    Mikheyev, V. V.; Saveliev, S. V.

    2018-01-01

    Description of the deflected mode for different types of materials under the action of an external force plays a special role in a wide variety of applications, from construction mechanics to circuit engineering. This article considers the problem of plastic deformation of a layer of elastoviscoplastic soil under a surface periodic force. The problem was solved using a modified lumped-parameters approach which takes into account a close-to-real distribution of normal stress in the depth of the layer, along with changes in the local mechanical properties of the material taking place during plastic deformation. A special numerical algorithm was worked out for computer modeling of the process. As an example of application, the suggested algorithm was realized for the deformation of a layer of elastoviscoplastic material by a source of external lateral force with the parameters of a real technological process of soil compaction.

  4. Nonaqueous Phase Liquid Dissolution in Porous Media: Multi-Scale Effects of Multi-Component Dissolution Kinetics on Cleanup Time

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McNab, W; Ezzedine, S; Detwiler, R

    2007-02-26

    Industrial organic solvents such as trichloroethylene (TCE) and tetrachloroethylene (PCE) constitute a principal class of groundwater contaminants. Cleanup of groundwater plume source areas associated with these compounds is problematic, in part, because the compounds often exist in the subsurface as dense nonaqueous phase liquids (DNAPLs). Ganglia (or 'blobs') of DNAPL serve as persistent sources of contaminants that are difficult to locate and remediate (e.g. Fenwick and Blunt, 1998). Current understanding of the physical and chemical processes associated with dissolution of DNAPLs in the subsurface is incomplete and yet is critical for evaluating long-term behavior of contaminant migration, groundwater cleanup, and the efficacy of source area cleanup technologies. As such, a goal of this project has been to contribute to this critical understanding by investigating the multi-phase, multi-component physics of DNAPL dissolution using state-of-the-art experimental and computational techniques. Through this research, we have explored efficient and accurate conceptual and numerical models for source area contaminant transport that can be used to better inform the modeling of source area contaminants, including those at the LLNL Superfund sites, to re-evaluate existing remediation technologies, and to inspire or develop new remediation strategies. The problem of DNAPL dissolution in natural porous media must be viewed in the context of several scales (Khachikian and Harmon, 2000), including the microscopic level at which capillary forces, viscous forces, and gravity/buoyancy forces are manifested at the scale of individual pores (Wilson and Conrad, 1984; Chatzis et al., 1988), the mesoscale where dissolution rates are strongly influenced by the local hydrodynamics, and the field-scale. 
Historically, the physico-chemical processes associated with DNAPL dissolution have been addressed through the use of lumped mass transfer coefficients which attempt to quantify the dissolution rate in response to local dissolved-phase concentrations distributed across the source area using a volume-averaging approach (Figure 1). The fundamental problem with the lumped mass transfer parameter is that its value is typically derived empirically through column-scale experiments that combine the effects of pore-scale flow, diffusion, and pore-scale geometry in a manner that does not provide a robust theoretical basis for upscaling. In our view, upscaling processes from the pore-scale to the field-scale requires new computational approaches (Held and Celia, 2001) that are directly linked to experimental studies of dissolution at the pore scale. As such, our investigation has been multi-pronged, combining theory, experiments, numerical modeling, new data analysis approaches, and a synthesis of previous studies (e.g. Glass et al, 2001; Keller et al., 2002) aimed at quantifying how the mechanisms controlling dissolution at the pore-scale control the long-term dissolution of source areas at larger scales.
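    The lumped mass-transfer description criticized above can be stated compactly: dissolved concentration relaxes toward effective solubility as dC/dt = k_La (C_s - C). The sketch below integrates that single equation with explicit Euler; all parameter values are hypothetical, and the abstract's point is precisely that a single fitted k_La hides the pore-scale physics.

    ```python
    def dissolve(steps, c0=0.0, cs=1100.0, kla=0.05, dt=0.1):
        """Explicit-Euler integration of the lumped mass-transfer model
        dC/dt = kla * (cs - c): dissolved concentration c relaxes toward the
        effective solubility cs at a rate set by the fitted coefficient kla.
        Returns the full concentration history."""
        c = c0
        history = [c]
        for _ in range(steps):
            c += kla * (cs - c) * dt
            history.append(c)
        return history

    h = dissolve(1000)       # hypothetical units: mg/L, time step dt in days
    print(h[-1])             # approaches but never reaches the solubility limit
    ```

    Everything contested in the abstract is buried inside `kla`: pore geometry, local hydrodynamics and diffusion are collapsed into one column-fitted constant, which is why the value does not transfer across scales.
    
    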

  5. Transformational and derivational strategies in analogical problem solving.

    PubMed

    Schelhorn, Sven-Eric; Griego, Jacqueline; Schmid, Ute

    2007-03-01

    Analogical problem solving is mostly described as transfer of a source solution to a target problem based on the structural correspondences (mapping) between source and target. Derivational analogy (Carbonell, in Machine Learning: An Artificial Intelligence Approach, Los Altos: Morgan Kaufmann, 1986) proposes an alternative view: a target problem is solved by replaying a remembered problem-solving episode. Thus, the experience with the source problem is used to guide the search for the target solution by applying the same solution technique rather than by transferring the complete solution. We report an empirical study using the path-finding problems presented in Novick and Hmelo (J Exp Psychol Learn Mem Cogn 20:1296-1321, 1994) as material. We show that both transformational and derivational analogy are problem-solving strategies realized by human problem solvers. Which strategy is evoked in a given problem-solving context depends on the constraints guiding object-to-object mapping between source and target problem. Specifically, if constraints facilitating mapping are available, subjects are more likely to employ a transformational strategy; otherwise they are more likely to use a derivational strategy.

  6. CFD and Neutron codes coupling on a computational platform

    NASA Astrophysics Data System (ADS)

    Cerroni, D.; Da Vià, R.; Manservisi, S.; Menghini, F.; Scardovelli, R.

    2017-01-01

    In this work we investigate the thermal-hydraulic behavior of a PWR nuclear reactor core, evaluating the power generation distribution while taking into account the local temperature field. The temperature field, evaluated using a self-developed CFD module, is exchanged with a neutron code, DONJON-DRAGON, which updates the macroscopic cross sections and evaluates the new neutron flux. From the updated neutron flux the new peak factor is evaluated and the new temperature field is computed. The exchange of data between the two codes is achieved through their inclusion in the computational platform SALOME, an open-source tool developed by the collaborative project NURESAFE. The numerical MEDmem libraries, included in the SALOME platform, are used in this work for the projection of computational fields from one problem to another. The two problems are driven by a common supervisor that can access the computational fields of both systems. At every time step, the temperature field is extracted from the CFD problem and set in the neutron problem; after this iteration the new power peak factor is projected back into the CFD problem, and the next time step can be computed. Several computational examples, in which both neutron and thermal-hydraulic quantities are parametrized, are finally reported in this work.
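    The supervisor-driven exchange described above can be sketched as a fixed-point iteration. This is a minimal toy sketch, not the SALOME/MEDmem coupling itself: `thermal_solve` and `neutron_solve` are hypothetical stand-ins for the CFD module and DONJON-DRAGON, with an assumed linear temperature rise and a Doppler-like negative power feedback.

    ```python
    import numpy as np

    # Toy coupling sketch (hypothetical stand-ins for the real solvers).

    def thermal_solve(power, t_coolant=560.0, k=0.05):
        """Stand-in CFD step: local fuel temperature (K) rises with power (W)."""
        return t_coolant + k * power

    def neutron_solve(temperature, p_nominal=1000.0, alpha=-0.1, t_ref=800.0):
        """Stand-in neutronics step: power drops as temperature exceeds t_ref."""
        return p_nominal * (1.0 + alpha * (temperature - t_ref) / t_ref)

    def couple(n_cells=10, n_iter=50, tol=1e-8):
        """Supervisor loop: exchange fields until the power distribution settles."""
        power = np.full(n_cells, 1000.0)
        temperature = thermal_solve(power)
        for _ in range(n_iter):
            temperature = thermal_solve(power)      # CFD -> temperature field
            new_power = neutron_solve(temperature)  # neutronics -> power field
            converged = np.max(np.abs(new_power - power)) < tol
            power = new_power
            if converged:
                break
        return temperature, power

    temperature, power = couple()
    ```

    With a weak feedback coefficient the exchange is a contraction, so the picard-style loop converges in a handful of iterations; the real coupled codes iterate the same way, only with full 3D fields projected between meshes.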

  7. A comparison between EEG source localization and fMRI during the processing of emotional visual stimuli

    NASA Astrophysics Data System (ADS)

    Hu, Jin; Tian, Jie; Pan, Xiaohong; Liu, Jiangang

    2007-03-01

    The purpose of this paper is to compare EEG source localization with fMRI during emotional processing. 108 pictures for EEG (categorized as positive, negative and neutral) and 72 pictures for fMRI were presented to 24 healthy, right-handed subjects. The fMRI data were analyzed using statistical parametric mapping with SPM2. LORETA was applied to grand-averaged ERP data to localize intracranial sources. Statistical analysis was implemented to compare the spatiotemporal activation of fMRI and EEG. The fMRI results are in accordance with the EEG source localization to some extent, although some mismatch in localization between the two methods was also observed. In the future we intend to apply simultaneous recording of EEG and fMRI to our study.

  8. Community-based adolescent health services in Israel: from theory to practice.

    PubMed

    Wilf-Miron, Rachel; Sikron, Fabienne; Glasser, Saralee; Barell, Vita

    2002-01-01

    Despite their engagement in health-risk behaviors and their health-related concerns, adolescents have the lowest rate of health service utilization of any age group. Time constraints during routine medical encounters generally leave little opportunity for professional screening for health-risk behaviors or for discussing psychosocial problems. In addition, providers express low levels of perceived competency in areas such as sexuality, eating disorders or drug abuse. To address these needs, a walk-in Adolescent Health Service was established by the Sheba Medical Center to provide diagnosis and short-term treatment for individual adolescents, as well as counseling and support for local care providers. A three-way model of cooperation and partnership was developed and implemented. A professional and financial partnership with local authorities was established to help define the particular needs of the community's youth and to improve the ability to reach youth with special health needs. This partnership, along with the main medical provider (Kupat Holim Clalit), helped define local health needs, served as a referral source of patients with unmet health needs, and improved the continuity of care. The regional medical center (Sheba Medical Center) provided supervision and consultation for the medical staff of the service, as well as a referral center for patients. It was emphasized that the service staff was intended as a professional resource for the primary physician and should not be considered a rival. The core staff included a specialist in adolescent medicine, a gynecologist, a mental health specialist and a social worker. A structured intake procedure was developed for assessing health concerns and problems of adolescents in the context of a community clinic.
Findings from the first years of service, based on the first 547 female adolescents seen, showed that a majority of adolescents presented with primary complaints of a somatic nature, while one-third were diagnosed with psychosocial problems and one-fifth with a sexuality-related problem. A considerable percentage of those diagnosed with psychosocial or sexuality-related problems had not stated these issues as their "reason for encounter". This additional increment probably represents the contribution of the Health Concern Checklist (HCC), in which the adolescent was asked to mark each item for which she had concerns or would like to receive further information. The HCC can help primary care physicians as well as adolescent medicine specialists approach the teenage patient and initiate productive communication. A practical approach to confidential health care for adolescents: the issue of confidentiality has not been sufficiently clarified by Israeli law or by the medical community. The need for confidentiality was strongly felt in the adolescent health service. A policy which provides all adolescents with the opportunity to meet with a physician and receive health guidance or advice at least once, even without parental knowledge or consent, was formulated and implemented. If parental consent was not feasible, the minor was allowed to give informed consent for medical and psychosocial care for himself/herself, with certain limitations.

  9. Deploying Monitoring Trails for Fault Localization in All- Optical Networks and Radio-over-Fiber Passive Optical Networks

    NASA Astrophysics Data System (ADS)

    Maamoun, Khaled Mohamed

    Fault localization is the process of identifying the true source of a failure from a set of collected failure notifications. Isolating failure recovery within the network optical domain is necessary to resolve alarm storm problems. The introduction of the monitoring trail (m-trail) has been proven to deliver better performance by employing monitoring resources in the form of optical trails - a monitoring framework that generalizes all the previously reported counterparts. In this dissertation, the m-trail design is explored and a focus is given to the analysis of using m-trails with established lightpaths to achieve fault localization. This process saves network resources by reducing the number of m-trails required for fault localization and therefore the number of wavelengths used in the network. A novel approach based on the Geographic Midpoint Technique, an adapted version of the Chinese Postman's Problem (CPP) solution and an adapted version of the Traveling Salesman's Problem (TSP) solution algorithms is introduced. The desirable features of network architectures and the enabling of innovative technologies for delivering future millimeter-waveband (mm-WB) Radio-over-Fiber (RoF) systems for wireless services integrated in a Dense Wavelength Division Multiplexing (DWDM) network are also proposed in this dissertation. For conceptual illustration, a DWDM RoF system with channel spacing of 12.5 GHz is considered. The mm-WB Radio Frequency (RF) signal is obtained at each Optical Network Unit (ONU) by optical heterodyne photodetection between two optical carriers. The generated RF modulated signal has a frequency of 12.5 GHz. This RoF system is simple, cost-effective, resistant to laser phase noise and, in principle, also reduces maintenance needs. A review of related RoF network proposals and experiments is also included.
A number of models for Passive Optical Networks (PON)/RoF-PON that combine both innovative and existing ideas, along with a number of solutions for the m-trail design problem of these models, are proposed. The comparison between these models uses the expected survivability function, which showed that these models are suitable for implementation in new and existing PON/RoF-PON systems. The dissertation closes with recommendations of possible directions for future research in this area.

  10. Cosmic ray injection spectrum at the galactic sources

    NASA Astrophysics Data System (ADS)

    Lagutin, Anatoly; Tyumentsev, Alexander; Volkov, Nikolay

    The spectra of cosmic rays measured at Earth are different from their source spectra. A key to understanding this difference, which is crucial for solving the problem of cosmic-ray origin, is the determination of how cosmic-ray (CR) particles propagate through the turbulent interstellar medium (ISM). If the medium is quasi-homogeneous, the propagation process can be described by a normal diffusion model. However, during the last few decades much evidence, both from theory and observations, of the existence of multiscale structures in the Galaxy has been found. Filaments, shells and clouds are entities widely spread in the ISM. In such a highly non-homogeneous (fractal-like) ISM the normal diffusion model is certainly no longer valid. Generalization of this model leads to what is known as "anomalous diffusion". The main goal of the report is to retrieve the cosmic ray injection spectrum at the galactic sources in the framework of the anomalous diffusion (AD) model. The anomaly in this model results from large free paths ("Lévy flights") of particles between galactic inhomogeneities. In order to evaluate the CR spectrum at the sources, we carried out a new calculation of the CR spectra at Earth. An AD equation in terms of fractional derivatives has been used to describe CR propagation from nearby (r≤1 kpc) young (t≤ 1 Myr) and multiple old distant (r > 1 kpc) sources. The assessment of the key model parameters has been based on results on particle diffusion in cosmic and laboratory plasmas. We show that in the framework of the anomalous diffusion model the locally observed basic features of the cosmic rays (difference between spectral exponents of protons, He and other nuclei, the "knee" problem, the positron-to-electron ratio) can be explained if the injection spectrum at the main galactic sources of cosmic rays has spectral exponent p ≈ 2.85. The authors acknowledge support from The Russian Foundation for Basic Research grant No. 14-02-31524.

  11. Alternative transportation funding sources available to Virginia localities.

    DOT National Transportation Integrated Search

    2006-01-01

    In 2003, the Virginia Department of Transportation developed a list of alternative transportation funding sources available to localities in Virginia. Alternative funding sources are defined as those that are not included in the annual interstate, pr...

  12. Localization of sound sources in a room with one microphone

    NASA Astrophysics Data System (ADS)

    Peić Tukuljac, Helena; Lissek, Hervé; Vandergheynst, Pierre

    2017-08-01

    Estimation of the location of sound sources is usually done using microphone arrays. Such settings provide an environment where we know the difference between the received signals at different microphones in terms of phase or attenuation, which enables localization of the sound sources. In our solution we exploit the properties of the room transfer function in order to localize a sound source inside a room with only one microphone. The shape of the room and the position of the microphone are assumed to be known. The design guidelines and limitations of the sensing matrix are given. The implementation is based on sparsity in terms of the voxels in the room that are occupied by a source. What is especially interesting about our solution is that we provide localization of sound sources not only in the horizontal plane, but in terms of full 3D coordinates inside the room.
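    With a single microphone and one active source, the voxel-sparsity formulation reduces to picking the dictionary column that best matches the measurement. A minimal sketch, with a random unit-norm matrix standing in for the room-transfer-function dictionary the paper actually computes from the known room geometry:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical sensing matrix: each column stands in for the (frequency-domain)
    # room transfer function from one candidate voxel to the single microphone.
    n_freqs, n_voxels = 200, 50
    A = rng.standard_normal((n_freqs, n_voxels))
    A /= np.linalg.norm(A, axis=0)            # unit-norm columns

    true_voxel = 17
    y = A[:, true_voxel]                      # noiseless measurement, one source

    # One-source sparse recovery reduces to a matched filter over voxels:
    estimated_voxel = int(np.argmax(np.abs(A.T @ y)))
    ```

    For multiple simultaneous sources one would iterate this step greedily (matching pursuit) or solve an l1-regularized problem over the same dictionary.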

  13. Numerical analysis of a main crack interactions with micro-defects/inhomogeneities using two-scale generalized/extended finite element method

    NASA Astrophysics Data System (ADS)

    Malekan, Mohammad; Barros, Felício B.

    2017-12-01

    The generalized or extended finite element method (G/XFEM) models the crack by enriching partition-of-unity functions with discontinuous functions that represent well the physical behavior of the problem. However, such enrichment functions are not available for all problem types. Thus, one can use numerically built (global-local) enrichment functions to obtain a better approximation procedure. This paper investigates the effects of micro-defects/inhomogeneities on the behavior of a main crack by modeling the micro-defects/inhomogeneities in the local problem using a two-scale G/XFEM. The global-local enrichment functions are influenced by the micro-defects/inhomogeneities from the local problem and thus change the approximate solution of the global problem with the main crack. This approach is presented in detail by solving three different linear elastic fracture mechanics problems for different cases: two plane-stress problems and a Reissner-Mindlin plate problem. The numerical results obtained with the two-scale G/XFEM are compared with reference solutions: analytical solutions, numerical solutions using the standard G/XFEM method and ABAQUS, and results from the literature.

  14. realfast: Real-time, Commensal Fast Transient Surveys with the Very Large Array

    NASA Astrophysics Data System (ADS)

    Law, C. J.; Bower, G. C.; Burke-Spolaor, S.; Butler, B. J.; Demorest, P.; Halle, A.; Khudikyan, S.; Lazio, T. J. W.; Pokorny, M.; Robnett, J.; Rupen, M. P.

    2018-05-01

    Radio interferometers have the ability to precisely localize and better characterize the properties of sources. This ability is having a powerful impact on the study of fast radio transients, where a few milliseconds of data is enough to pinpoint a source at cosmological distances. However, recording interferometric data at millisecond cadence produces a terabyte-per-hour data stream that strains networks, computing systems, and archives. This challenge mirrors that of other domains of science, where the science scope is limited by the computational architecture as much as the physical processes at play. Here, we present a solution to this problem in the context of radio transients: realfast, a commensal, fast transient search system at the Jansky Very Large Array. realfast uses a novel architecture to distribute fast-sampled interferometric data to a 32-node, 64-GPU cluster for real-time imaging and transient detection. By detecting transients in situ, we can trigger the recording of data for those rare, brief instants when the event occurs and reduce the recorded data volume by a factor of 1000. This makes it possible to commensally search a data stream that would otherwise be impossible to record. This system will search for millisecond transients in more than 1000 hr of data per year, potentially localizing several Fast Radio Bursts, pulsars, and other sources of impulsive radio emission. We describe the science scope for realfast, the system design, expected outcomes, and ways in which real-time analysis can help in other fields of astrophysics.

  15. Optoelectronic microdevices for combined phototherapy

    NASA Astrophysics Data System (ADS)

    Zharov, Vladimir P.; Menyaev, Yulian A.; Hamaev, V. A.; Antropov, G. M.; Waner, Milton

    2000-03-01

    In photomedicine, radiation delivery to local zones through optical fibers can in some cases be replaced by the direct placement of tiny optical sources, such as semiconductor microlasers or light-emitting diodes, in the required zones of the ears, nostrils, larynx, nasopharynx, cochlea or alimentary tract. Our study focuses on the creation of optoelectronic microdevices for local phototherapy and functional imaging using reflected light. The phototherapeutic micromodule consists of a light source, a microprocessor and miniature optics, with different kinds of power supply: from autonomous operation with built-in batteries to remote supply using a pulsed magnetic field and super-small coils. The developed prototype photomodule has a size of Ø8×16 mm and operates with a built-in battery and light diode for up to several hours at an average power from several tenths of a mW to a few mW. Preliminary clinical tests of the developed physiotherapeutic micromodules in stomatology, for treating inflammation, and in otolaryngology, for treating tonsillitis and otitis, are presented. The developed implantable electro-optical sources, with a typical size of Ø4×0.8 mm and remote supply, were used for optical stimulation of photosensitive retina structures and electrostimulation of the visual nerve. In this scheme a superminiature coil with 30 integrated electrical levels was used. Such devices were implanted in the eyes of 175 patients with different vision problems during clinical trials at the Institute of Eye Surgery in Moscow. For functional imaging of the layered structure of skin, LED arrays coupled with photodiode arrays were developed. The possibilities of this device for studying drug diffusion and visualizing small veins are discussed.

  16. An Analysis of Local Education Foundations as Alternative Revenue Streams for Public School Districts

    ERIC Educational Resources Information Center

    Busch, Douglas M.

    2012-01-01

    As school district revenues are reduced by state allocating agencies, local school district administrators and school boards frequently evaluate alternative sources of possible revenue. One emerging source of revenue that many school districts explore is a local education foundation. Local education foundations are 501(c)(3) nonprofit…

  17. Insights and participatory actions driven by a socio-hydrogeological approach for groundwater management: the Grombalia Basin case study (Tunisia)

    NASA Astrophysics Data System (ADS)

    Tringali, C.; Re, V.; Siciliano, G.; Chkir, N.; Tuci, C.; Zouari, K.

    2017-08-01

    Sustainable groundwater management strategies in water-scarce countries need to guide future decision-making processes pragmatically, by simultaneously considering local needs, environmental problems and economic development. The socio-hydrogeological approach named 'Bir Al-Nas' has been tested in the Grombalia region (Cap Bon Peninsula, Tunisia) to evaluate the effectiveness of complementing hydrogeochemical and hydrogeological investigations with the social dimension of the issue at stake (which, in this case, is the identification of groundwater pollution sources). Within this approach, the social appraisal, performed through social network analysis and public engagement of water end-users, allowed hydrogeologists to get acquainted with the institutional dimension of local groundwater management, identifying issues, potential gaps (such as weak knowledge transfer among concerned stakeholders), and the key actors likely to support the implementation of the new science-based management practices resulting from the ongoing hydrogeological investigation. The results hence go beyond their specific relevance for the Grombalia basin, showing the effectiveness of the proposed approach and the importance of including social assessment in any hydrogeological research aimed at supporting local development through groundwater protection measures.

  18. DOA estimation of noncircular signals for coprime linear array via locally reduced-dimensional Capon

    NASA Astrophysics Data System (ADS)

    Zhai, Hui; Zhang, Xiaofei; Zheng, Wang

    2018-05-01

    We investigate the problem of direction of arrival (DOA) estimation of noncircular signals for a coprime linear array (CLA). The noncircular property enhances the degrees of freedom and improves angle estimation performance, but it leads to a more complex angle ambiguity problem. To eliminate the ambiguity, we theoretically prove that the actual DOAs of noncircular signals can be uniquely estimated by finding the coincident results from the two decomposed subarrays, based on coprimeness. We propose a locally reduced-dimensional (RD) Capon algorithm for DOA estimation of noncircular signals for a CLA. The RD processing is used in the proposed algorithm to avoid a two-dimensional (2D) spectral peak search, and coprimeness is employed to avoid a global spectral peak search. The proposed algorithm requires only a one-dimensional local spectral peak search, so it has very low computational complexity. Furthermore, the proposed algorithm needs no prior knowledge of the number of sources. We also derive the Cramér-Rao bound of DOA estimation of noncircular signals in a CLA. Numerical simulation results demonstrate the effectiveness and superiority of the algorithm.
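    The ambiguity-resolution step, finding the coincident DOA estimates of the two coprime subarrays, can be illustrated directly on the aliased candidate sets. A minimal sketch with an assumed coprime pair (3, 5) and a noiseless source at 20 degrees; the `candidates` helper is an illustrative construction, not code from the paper:

    ```python
    import numpy as np

    def candidates(sin_theta, m):
        """Aliased sin(theta) candidates for a subarray with spacing m*lambda/2."""
        k = np.arange(-m, m + 1)
        c = sin_theta + 2.0 * k / m
        return c[(c >= -1.0) & (c <= 1.0)]

    # Assumed coprime pair (3, 5); each subarray alone is ambiguous.
    true_sin = np.sin(np.deg2rad(20.0))
    c3 = candidates(true_sin, 3)
    c5 = candidates(true_sin, 5)

    # The actual DOA is the unique candidate on which both subarrays coincide.
    diffs = np.abs(c3[:, None] - c5[None, :])
    i, j = np.unravel_index(np.argmin(diffs), diffs.shape)
    estimated_deg = float(np.degrees(np.arcsin((c3[i] + c5[j]) / 2.0)))
    ```

    Because 3 and 5 are coprime, the two candidate sets share exactly one value, which is why the global spectral search can be replaced by two cheap local ones plus this intersection.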

  19. The relative importance of sources of greenhouse-gas emissions: comparison of global through subnational perspectives.

    PubMed

    Cushman, Robert M; Jones, Sonja B

    2002-03-01

    Increasing atmospheric concentrations of greenhouse gases are widely expected to cause global warming and other climatic changes. It is important to establish priorities for reducing greenhouse-gas emissions, so that resources can be allocated efficiently and effectively. This is a global problem, and it is possible, on a global scale, to identify those activities whose emissions have the greatest potential for enhancing the greenhouse effect. However, perspectives from smaller scales must be appreciated, because it is on scales down to the local level that response measures will be implemented. This paper analyzes the relative importance of emissions from the many individual sources, on scales ranging from global to national to subnational. Individual country perspectives and proposed policy measures and those of subnational political entities exhibit some commonalities but differ among themselves and from a global-scale perspective in detail.

  20. An FBG acoustic emission source locating system based on PHAT and GA

    NASA Astrophysics Data System (ADS)

    Shen, Jing-shi; Zeng, Xiao-dong; Li, Wei; Jiang, Ming-shun

    2017-09-01

    Using acoustic emission locating technology to monitor structural health is important for ensuring the continuous and healthy operation of complex engineering structures and large mechanical equipment. In this paper, four fiber Bragg grating (FBG) sensors are used to establish a sensor array to locate the acoustic emission source. Firstly, the nonlinear locating equations are established based on the principle of acoustic emission, and the solution of these equations is transformed into an optimization problem. Secondly, a time-difference extraction algorithm based on phase transform (PHAT) weighted generalized cross-correlation provides the necessary conditions for accurate localization. Finally, the genetic algorithm (GA) is used to solve the optimization model. In this paper, twenty points are tested on the marble plate surface, and the results show that the absolute locating error is within the range of 10 mm, which proves the accuracy of this locating method.
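    The locating step, turning PHAT-derived time differences into a source position via optimization, can be sketched as follows. The sensor layout, plate size, and wave speed are illustrative assumptions, the time differences are synthesized rather than extracted from real signals, and a dense grid search stands in for the paper's genetic algorithm:

    ```python
    import numpy as np

    # Four sensors at the corners of an assumed 0.2 m plate; illustrative values.
    sensors = np.array([[0.0, 0.0], [0.2, 0.0], [0.2, 0.2], [0.0, 0.2]])  # m
    v = 3000.0                                 # assumed wave speed, m/s
    source = np.array([0.07, 0.12])            # true source (used for synthesis)

    # In practice these delays would come from PHAT-weighted cross-correlation.
    dists = np.linalg.norm(sensors - source, axis=1)
    tdoa = (dists[1:] - dists[0]) / v          # delays relative to sensor 0

    def residual(p):
        """Sum of squared TDOA residuals for a candidate source position p."""
        d = np.linalg.norm(sensors - p, axis=1)
        return np.sum(((d[1:] - d[0]) / v - tdoa) ** 2)

    # Exhaustive search over a 2 mm grid (the GA would explore this space instead).
    xs = np.linspace(0.0, 0.2, 101)
    grid = np.array([[x, y] for x in xs for y in xs])
    best = grid[np.argmin([residual(p) for p in grid])]
    ```

    The three TDOA equations define hyperbolae that generically intersect only at the source, so minimizing the residual recovers the position; a GA is attractive here because the residual surface is nonconvex for noisy delays.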

  1. The TORSED method for construction of TORT boundary sources from external DORT flux files

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rhoades, W.A.

    1993-08-01

    The TORSED method provides a means of coupling cylindrical two-dimensional DORT fluxes or fluences to a three-dimensional TORT calculation in Cartesian geometry through construction of external boundary sources for TORT. This can be important for several reasons. The two-dimensional environment may be too large for TORT simulation. The two-dimensional environment may be truly cylindrical in nature, and thus better treated in that geometry. It may be desired to use a single environment calculation to study numerous local perturbations. In Section I the TORSED code is described in detail and the diverse demonstration problems that accompany the code distribution are discussed. In Section II, an updated discussion of the VISA code is given. VISA is required to preprocess the DORT files for use in TORSED. In Section III, the references are listed.

  2. Numerical simulation of ultrasound-thermotherapy combining nonlinear wave propagation with broadband soft-tissue absorption.

    PubMed

    Ginter, S

    2000-07-01

    Ultrasound (US) thermotherapy is used to treat tumours, located deep in human tissue, by heat. It is characterized by the application of high-intensity focused ultrasound (HIFU), high local temperatures of about 90 degrees C and short treatment times of a few seconds. Dosage of the therapy remains a problem. To get it under control, one has to know the heat source, i.e. the amount of absorbed US power, which shows nonlinear influences. Therefore, accurate simulations are essential. In this paper, an improved simulation model is introduced which enables accurate investigations of US thermotherapy. It combines nonlinear US propagation effects, which lead to the generation of higher harmonics, with the broadband frequency-power-law absorption typical of soft tissue. Only the combination of both provides a reliable calculation of the generated heat. Simulations show the influence of nonlinearities and broadband damping for different source signals on the absorbed US power density distribution.

  3. Submicron x-ray diffraction and its applications to problems in materials and environmental science

    NASA Astrophysics Data System (ADS)

    Tamura, N.; Celestre, R. S.; MacDowell, A. A.; Padmore, H. A.; Spolenak, R.; Valek, B. C.; Meier Chang, N.; Manceau, A.; Patel, J. R.

    2002-03-01

    The availability of high-brilliance third-generation synchrotron sources together with progress in achromatic focusing optics allows us to add submicron spatial resolution to the conventional century-old x-ray diffraction technique. The new capabilities include the possibility of mapping, in situ, grain orientations, crystalline phase distribution, and full strain/stress tensors at a very local level, by combining white and monochromatic x-ray microbeam diffraction. This is particularly relevant for the high-technology industry, where the understanding of material properties at a microstructural level becomes increasingly important. After describing the latest advances in submicron x-ray diffraction techniques at the Advanced Light Source, we give some examples of their application in materials science for the measurement of strain/stress in metallic thin films and interconnects. Their use in the field of environmental science is also discussed.

  4. Sustainable engineered processes to mitigate the global arsenic crisis in drinking water: challenges and progress.

    PubMed

    Sarkar, Sudipta; Greenleaf, John E; Gupta, Anirban; Uy, Davin; Sengupta, Arup K

    2012-01-01

    Millions of people around the world are currently living under the threat of developing serious health problems owing to ingestion of dangerous concentrations of arsenic through their drinking water. In many places, treatment of arsenic-contaminated water is an urgent necessity owing to a lack of safe alternative sources. Sustainable production of arsenic-safe water from an arsenic-contaminated raw water source is currently a challenge. Despite the successful development in the laboratory of technologies for arsenic remediation, few have been successful in the field. A sustainable arsenic-remediation technology should be robust, composed of local resources, and user-friendly as well as must attach special consideration to the social, economic, cultural, traditional, and environmental aspects of the target community. One such technology is in operation on the Indian subcontinent. Wide-scale replication of this technology with adequate improvisation can solve the arsenic crisis prevalent in the developing world.

  5. Sound source localization and segregation with internally coupled ears: the treefrog model

    PubMed Central

    Christensen-Dalsgaard, Jakob

    2016-01-01

    Acoustic signaling plays key roles in mediating many of the reproductive and social behaviors of anurans (frogs and toads). Moreover, acoustic signaling often occurs at night, in structurally complex habitats, such as densely vegetated ponds, and in dense breeding choruses characterized by high levels of background noise and acoustic clutter. Fundamental to anuran behavior is the ability of the auditory system to determine accurately the location from where sounds originate in space (sound source localization) and to assign specific sounds in the complex acoustic milieu of a chorus to their correct sources (sound source segregation). Here, we review anatomical, biophysical, neurophysiological, and behavioral studies aimed at identifying how the internally coupled ears of frogs contribute to sound source localization and segregation. Our review focuses on treefrogs in the genus Hyla, as they are the most thoroughly studied frogs in terms of sound source localization and segregation. They also represent promising model systems for future work aimed at understanding better how internally coupled ears contribute to sound source localization and segregation. We conclude our review by enumerating directions for future research on these animals that will require the collaborative efforts of biologists, physicists, and roboticists. PMID:27730384

  6. Bayesian focalization: quantifying source localization with environmental uncertainty.

    PubMed

    Dosso, Stan E; Wilmut, Michael J

    2007-05-01

    This paper applies a Bayesian formulation to study ocean acoustic source localization as a function of uncertainty in environmental properties (water column and seabed) and of data information content [signal-to-noise ratio (SNR) and number of frequencies]. The approach follows that of the optimum uncertain field processor [A. M. Richardson and L. W. Nolte, J. Acoust. Soc. Am. 89, 2280-2284 (1991)], in that localization uncertainty is quantified by joint marginal probability distributions for source range and depth integrated over uncertain environmental properties. The integration is carried out here using Metropolis Gibbs' sampling for environmental parameters and heat-bath Gibbs' sampling for source location to provide efficient sampling over complicated parameter spaces. The approach is applied to acoustic data from a shallow-water site in the Mediterranean Sea where previous geoacoustic studies have been carried out. It is found that reliable localization requires a sufficient combination of prior (environmental) information and data information. For example, sources can be localized reliably for single-frequency data at low SNR (-3 dB) only with small environmental uncertainties, whereas successful localization with large environmental uncertainties requires higher SNR and/or multifrequency data.
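    The marginalization underlying this approach can be illustrated with a one-dimensional toy: average the data likelihood over samples drawn from the environmental prior to obtain a marginal posterior over source range. All quantities here (the single-frequency `replica` field, the sound-speed prior, the noise level) are illustrative assumptions, not the paper's ocean-acoustic model or its Gibbs samplers:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Toy replica field: depends on source range r and an uncertain sound speed c.
    def replica(r, c):
        return np.sin(2.0 * np.pi * 50.0 * r / c)   # single-frequency toy field

    true_r, true_c, sigma = 3.0, 1500.0, 0.1
    data = replica(true_r, true_c) + sigma * rng.standard_normal()

    ranges = np.linspace(1.0, 5.0, 81)              # candidate source ranges
    c_samples = rng.uniform(1480.0, 1520.0, 500)    # environmental prior samples

    # Marginal posterior over range: average the likelihood over the c prior,
    # then normalize (Monte Carlo stand-in for the paper's Gibbs integration).
    post = np.array([
        np.mean(np.exp(-(data - replica(r, c_samples)) ** 2 / (2.0 * sigma ** 2)))
        for r in ranges
    ])
    post /= post.sum()
    ```

    The spread of `post` directly visualizes the paper's point: widening the environmental prior smears probability across more ranges, so reliable localization needs either tighter priors or more informative (higher SNR, multifrequency) data.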

  7. Measurement and modeling of the acoustic field near an underwater vehicle and implications for acoustic source localization.

    PubMed

    Lepper, Paul A; D'Spain, Gerald L

    2007-08-01

    The performance of traditional techniques of passive localization in ocean acoustics such as time-of-arrival (phase differences) and amplitude ratios measured by multiple receivers may be degraded when the receivers are placed on an underwater vehicle due to effects of scattering. However, knowledge of the interference pattern caused by scattering provides a potential enhancement to traditional source localization techniques. Results based on a study using data from a multi-element receiving array mounted on the inner shroud of an autonomous underwater vehicle show that scattering causes the localization ambiguities (side lobes) to decrease in overall level and to move closer to the true source location, thereby improving localization performance, for signals in the frequency band 2-8 kHz. These measurements are compared with numerical modeling results from a two-dimensional time domain finite difference scheme for scattering from two fluid-loaded cylindrical shells. Measured and numerically modeled results are presented for multiple source aspect angles and frequencies. Matched field processing techniques quantify the source localization capabilities for both measurements and numerical modeling output.
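The matched-field processing used above to quantify localization can be illustrated with a minimal Bartlett processor. The geometry, frequency, and free-space replica model below are assumptions for the sketch; the study's replicas would come from its finite-difference scattering model, which is precisely how the scattered interference pattern enters the localization:

```python
import numpy as np

# Bartlett matched-field processor on a toy vertical array (all values
# invented for illustration).
k = 2 * np.pi * 4000.0 / 1500.0        # wavenumber at 4 kHz, c = 1500 m/s
array_z = np.linspace(0.0, 3.0, 8)     # receiver depths (m)

def replica(r, z):
    d = np.hypot(r, array_z - z)       # receiver-to-source distances
    w = np.exp(1j * k * d) / d         # free-space Green's function
    return w / np.linalg.norm(w)

r_true, z_true = 20.0, 1.5
data = replica(r_true, z_true)         # noise-free "measurement"

# Ambiguity surface B(r, z) = |w(r, z)^H d|^2 over a search grid; the peak
# is the location estimate, and secondary maxima are the side lobes whose
# behavior the paper studies.
r_grid = np.linspace(5.0, 40.0, 71)
z_grid = np.linspace(0.0, 3.0, 31)
amb = np.array([[abs(np.vdot(replica(r, z), data)) ** 2 for z in z_grid]
                for r in r_grid])
i, j = np.unravel_index(amb.argmax(), amb.shape)
```

With noise-free data and matched replicas the surface peaks exactly at the true grid point; replacing `replica` with a scattering-aware model is what moves the side lobes, as the record describes.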

  8. A novel method for transient detection in high-cadence optical surveys. Its application for a systematic search for novae in M 31

    NASA Astrophysics Data System (ADS)

    Soraisam, Monika D.; Gilfanov, Marat; Kupfer, Thomas; Masci, Frank; Shafter, Allen W.; Prince, Thomas A.; Kulkarni, Shrinivas R.; Ofek, Eran O.; Bellm, Eric

    2017-03-01

    Context. In the present era of large-scale surveys in the time domain, the processing of data, from procurement up to the detection of sources, is generally automated. One of the main challenges in the astrophysical analysis of their output is contamination by artifacts, especially in the regions of high surface brightness of unresolved emission. Aims: We present a novel method for identifying candidates for variable and transient sources from the outputs of optical time-domain survey data pipelines. We use the method to conduct a systematic search for novae in the intermediate Palomar Transient Factory (iPTF) observations of the bulge part of M 31 during the second half of 2013. Methods: We demonstrate that a significant fraction of artifacts produced by the iPTF pipeline form a locally uniform background of false detections approximately obeying Poissonian statistics, whereas genuine variable and transient sources, as well as artifacts associated with bright stars, result in clusters of detections whose spread is determined by the source localization accuracy. This makes the problem analogous to source detection on images produced by grazing incidence X-ray telescopes, enabling one to utilize the arsenal of powerful tools developed in X-ray astronomy. In particular, we use a wavelet-based source detection algorithm from the Chandra data analysis package CIAO. Results: Starting from 2.5 × 10⁵ raw detections made by the iPTF data pipeline, we obtain approximately 4000 unique source candidates. Cross-matching these candidates with the source catalog of a deep reference image of the same field, we find counterparts for 90% of the candidates. These sources are either artifacts due to imperfect PSF matching or genuine variable sources. The remaining approximately 400 detections are transient sources. We identify novae among these candidates by applying selection cuts to their lightcurves based on the expected properties of novae. 
Thus, we recovered all 12 known novae (not counting one that erupted toward the end of the survey) registered during the time span of the survey and discovered three nova candidates. Our method is generic and can be applied to mining any target out of the artifacts in optical time-domain data. As it is fully automated, its incompleteness can be accurately computed and corrected for.
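The detection idea in this record, clusters of genuine detections standing out against a locally uniform Poissonian background of artifacts, can be sketched with simple grid binning and a Poisson tail test. The paper uses a wavelet-based detector from CIAO instead; the field size, cluster positions, counts, and threshold below are all invented:

```python
import math
import numpy as np

rng = np.random.default_rng(1)

# Locally uniform "artifact" background over a 100 x 100 field, plus two
# genuine sources whose spread mimics the localization accuracy.
bg = rng.uniform(0.0, 100.0, size=(2000, 2))
src1 = rng.normal([31.0, 71.0], 0.5, size=(25, 2))
src2 = rng.normal([81.0, 21.0], 0.5, size=(25, 2))
points = np.vstack([bg, src1, src2])

# Bin detections on a coarse grid and flag cells whose counts are
# incompatible with the Poisson background expectation.
cell = 2.0
edges_1d = np.arange(0.0, 100.0 + cell, cell)
counts, xe, ye = np.histogram2d(points[:, 0], points[:, 1],
                                bins=[edges_1d, edges_1d])
mu = counts.mean()                     # crude global background estimate

def poisson_sf(n, mu):
    """P(N >= n) for N ~ Poisson(mu)."""
    return 1.0 - sum(math.exp(-mu) * mu ** k / math.factorial(k)
                     for k in range(int(n)))

alpha = 1e-4 / counts.size             # rough multiple-testing correction
hits = [(xe[i] + cell / 2, ye[j] + cell / 2)
        for i in range(counts.shape[0])
        for j in range(counts.shape[1])
        if poisson_sf(counts[i, j], mu) < alpha]
```

A wavelet detector plays the same role with better handling of varying background and source scales, which is why the X-ray-astronomy machinery transfers so directly.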

  9. Neurophysiological mechanisms of emotion regulation for subtypes of externalizing children.

    PubMed

    Stieben, Jim; Lewis, Marc D; Granic, Isabela; Zelazo, Philip David; Segalowitz, Sidney; Pepler, Debra

    2007-01-01

    Children referred for externalizing behavior problems may not represent a homogeneous population. Our objective was to assess neural mechanisms of emotion regulation that might distinguish subtypes of externalizing children from each other and from their normal age mates. Children with pure externalizing (EXT) problems were compared with children comorbid for externalizing and internalizing (MIXED) problems and with age-matched controls. Only boys were included in the analysis because so few girls were referred for treatment. We used a go/no-go task with a negative emotion induction, and we examined dense-array EEG data together with behavioral measures of performance. We investigated two event-related potential (ERP) components tapping inhibitory control or self-monitoring - the inhibitory N2 and error-related negativity (ERN) - and we constructed source models estimating their cortical generators. The MIXED children's N2s increased in response to the emotion induction, resulting in greater amplitudes than EXT children in the following trial block. ERN amplitudes were greatest for control children and smallest for EXT children with MIXED children in between, but only prior to the emotion induction. These results were paralleled by behavioral differences in response time and performance monitoring. ERP activity was localized to cortical sources suggestive of the dorsal anterior cingulate for control children, posterior cingulate areas for the EXT children, and both posterior cingulate and ventral cingulate/prefrontal regions for the MIXED children. These findings highlight different mechanisms of self-regulation underlying externalizing subtypes and point toward distinct developmental pathways and treatment strategies.

  10. Rural drinking water issues in India’s drought-prone area: a case of Maharashtra state

    NASA Astrophysics Data System (ADS)

    Udmale, Parmeshwar; Ichikawa, Yutaka; Nakamura, Takashi; Shaowei, Ning; Ishidaira, Hiroshi; Kazama, Futaba

    2016-07-01

    Obtaining sufficient drinking water with acceptable quality under circumstances of lack, such as droughts, is a challenge in drought-prone areas of India. This study examined rural drinking water availability issues during a recent drought (2012) through 22 focus group discussions (FGDs) in a drought-prone catchment of India. A small chemical water quality study was also undertaken to evaluate the suitability of water for drinking purposes based on the Bureau of Indian Standards (BIS). The drought, which began in 2011 and intensified in 2012, caused a rapid decline in reservoir storage and groundwater levels that led, in turn, to the failure of the public water supply systems in the Upper Bhima Catchment. Dried-up and low-yield dug wells and borewells, tanker water deliveries from remote sources, untimely water deliveries, and degraded water quality were the major problems identified in the FGDs. In addition to severe drinking water scarcity during drought, the quality of the drinking water was found to be a major problem, and it apparently was neglected by local governments and users. Severe contamination of the drinking water with nitrate-nitrogen, ammonium-nitrogen, and chlorides was found in the analyzed samples. Hence, in addition to the water scarcity, the results of this study point to an immediate need to investigate the problem of contaminated drinking water sources while designing relief measures for drought-prone areas of India.

  11. Environmental health impacts of tobacco farming: a review of the literature.

    PubMed

    Lecours, Natacha; Almeida, Guilherme E G; Abdallah, Jumanne M; Novotny, Thomas E

    2012-03-01

    To review the literature on environmental health impacts of tobacco farming and to summarise the findings and research gaps in this field. A standard literature search was performed using multiple electronic databases for identification of peer-reviewed articles. The internet and organisational databases were also used to find other types of documents (eg, books and reports). The reference lists of identified relevant documents were reviewed to find additional sources. The selected studies documented many negative environmental impacts of tobacco production at the local level, often linking them with associated social and health problems. The common agricultural practices related to tobacco farming, especially in low-income and middle-income countries, lead to deforestation and soil degradation. Agrochemical pollution and deforestation in turn lead to ecological disruptions that cause a loss of ecosystem services, including land resources, biodiversity and food sources, which negatively impact human health. Multinational tobacco companies' policies and practices contribute to environmental problems related to tobacco leaf production. Development and implementation of interventions against the negative environmental impacts of tobacco production worldwide are necessary to protect the health of farmers, particularly in low-income and middle-income countries. Transitioning these farmers out of tobacco production is ultimately the resolution to this environmental health problem. In order to inform policy, however, further research is needed to better quantify the health impacts of tobacco farming and evaluate the potential alternative livelihoods that may be possible for tobacco farmers globally.

  12. Supernova Relic Neutrinos and the Supernova Rate Problem: Analysis of Uncertainties and Detectability of ONeMg and Failed Supernovae

    NASA Astrophysics Data System (ADS)

    Mathews, Grant J.; Hidaka, Jun; Kajino, Toshitaka; Suzuki, Jyutaro

    2014-08-01

    Direct measurements of the core collapse supernova rate (R_SN) in the redshift range 0 ≤ z ≤ 1 appear to be about a factor of two smaller than the rate inferred from the measured cosmic massive star formation rate (SFR). This discrepancy would imply that about one-half of the massive stars that have been born in the local observed comoving volume did not explode as luminous supernovae. In this work, we explore the possibility that one could clarify the source of this "supernova rate problem" by detecting the energy spectrum of supernova relic neutrinos with a next-generation 10⁶ ton water Čerenkov detector like Hyper-Kamiokande. First, we re-examine the supernova rate problem. We make a conservative alternative compilation of the measured SFR data over the redshift range 0 ≤ z ≤ 7. We show that by only including published SFR data for which the dust obscuration has been directly determined, the ratio of the observed massive SFR to the observed supernova rate R_SN has large uncertainties ∼1.8^{+1.6}_{−0.6} and is statistically consistent with no supernova rate problem. If we further consider that a significant fraction of massive stars will end their lives as faint ONeMg SNe or as failed SNe leading to a black hole remnant, then the ratio reduces to ∼1.1^{+1.0}_{−0.4} and the rate problem is essentially solved. We next examine the prospects for detecting this solution to the supernova rate problem. We first study the sources of uncertainty involved in the theoretical estimates of the neutrino detection rate and analyze whether the spectrum of relic neutrinos can be used to independently identify the existence of a supernova rate problem and its source. We consider an ensemble of published and unpublished core collapse supernova simulation models to estimate the uncertainties in the anticipated neutrino luminosities and temperatures. 
We illustrate how the spectrum of detector events might be used to establish the average neutrino temperature and constrain SN models. We also consider supernova ν-process nucleosynthesis to deduce constraints on the temperature of the various neutrino flavors. We study the effects of neutrino oscillations on the detected neutrino energy spectrum and also show that one might distinguish the equation of state (EoS) as well as the cause of the possible missing luminous supernovae from the detection of supernova relic neutrinos. We also analyze a possible enhanced contribution from failed supernovae leading to a black hole remnant as a solution to the supernova rate problem. We conclude that indeed it might be possible (though difficult) to measure the neutrino temperature, neutrino oscillations, and the EoS and confirm this source of missing luminous supernovae by the detection of the spectrum of relic neutrinos.
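The relic-neutrino spectrum central to this analysis is a redshift integral over the supernova rate and the per-supernova emission spectrum, dN/dE ∝ ∫ R_SN(z) φ(E(1+z)) / ((1+z) H(z)) dz. The sketch below computes only the spectral shape, in arbitrary units, under stand-in assumptions (a Madau-style rate shape, a Fermi-Dirac emission spectrum at an assumed T = 6 MeV, flat ΛCDM); it is not the paper's calculation:

```python
import numpy as np

# Shape-only sketch of the diffuse supernova relic neutrino spectrum.
H0, om, ol = 70.0, 0.3, 0.7                  # km/s/Mpc, flat LCDM

def H(z):
    return H0 * np.sqrt(om * (1 + z) ** 3 + ol)

def rate(z):                                 # unnormalized SN rate vs. z
    return (1 + z) ** 3.4 / (1 + ((1 + z) / 3.3) ** 8)

T = 6.0                                      # MeV, assumed emission temperature

def phi(E):                                  # Fermi-Dirac number spectrum
    return E ** 2 / (1.0 + np.exp(E / T))

z = np.linspace(0.0, 5.0, 501)
dz = z[1] - z[0]
E = np.linspace(1.0, 50.0, 99)               # detected neutrino energy, MeV
spec = np.array([np.sum(rate(z) * phi(e * (1 + z)) / ((1 + z) * H(z))) * dz
                 for e in E])
peak = E[spec.argmax()]                      # redshifted below the ~2.2T source peak
```

Because redshift drags the emission peak down in detected energy, the peak position of `spec` is sensitive to the assumed temperature, which is exactly the handle on the average neutrino temperature that the record discusses.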

  13. Supernova relic neutrinos and the supernova rate problem: Analysis of uncertainties and detectability of ONeMg and failed supernovae

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mathews, Grant J.; Hidaka, Jun; Kajino, Toshitaka

    2014-08-01

    Direct measurements of the core collapse supernova rate (R_SN) in the redshift range 0 ≤ z ≤ 1 appear to be about a factor of two smaller than the rate inferred from the measured cosmic massive star formation rate (SFR). This discrepancy would imply that about one-half of the massive stars that have been born in the local observed comoving volume did not explode as luminous supernovae. In this work, we explore the possibility that one could clarify the source of this 'supernova rate problem' by detecting the energy spectrum of supernova relic neutrinos with a next-generation 10⁶ ton water Čerenkov detector like Hyper-Kamiokande. First, we re-examine the supernova rate problem. We make a conservative alternative compilation of the measured SFR data over the redshift range 0 ≤ z ≤ 7. We show that by only including published SFR data for which the dust obscuration has been directly determined, the ratio of the observed massive SFR to the observed supernova rate R_SN has large uncertainties ∼1.8^{+1.6}_{−0.6} and is statistically consistent with no supernova rate problem. If we further consider that a significant fraction of massive stars will end their lives as faint ONeMg SNe or as failed SNe leading to a black hole remnant, then the ratio reduces to ∼1.1^{+1.0}_{−0.4} and the rate problem is essentially solved. We next examine the prospects for detecting this solution to the supernova rate problem. We first study the sources of uncertainty involved in the theoretical estimates of the neutrino detection rate and analyze whether the spectrum of relic neutrinos can be used to independently identify the existence of a supernova rate problem and its source. We consider an ensemble of published and unpublished core collapse supernova simulation models to estimate the uncertainties in the anticipated neutrino luminosities and temperatures. 
    We illustrate how the spectrum of detector events might be used to establish the average neutrino temperature and constrain SN models. We also consider supernova ν-process nucleosynthesis to deduce constraints on the temperature of the various neutrino flavors. We study the effects of neutrino oscillations on the detected neutrino energy spectrum and also show that one might distinguish the equation of state (EoS) as well as the cause of the possible missing luminous supernovae from the detection of supernova relic neutrinos. We also analyze a possible enhanced contribution from failed supernovae leading to a black hole remnant as a solution to the supernova rate problem. We conclude that indeed it might be possible (though difficult) to measure the neutrino temperature, neutrino oscillations, and the EoS and confirm this source of missing luminous supernovae by the detection of the spectrum of relic neutrinos.

  14. Exploring super-Gaussianity toward robust information-theoretical time delay estimation.

    PubMed

    Petsatodis, Theodoros; Talantzis, Fotios; Boukis, Christos; Tan, Zheng-Hua; Prasad, Ramjee

    2013-03-01

    Time delay estimation (TDE) is a fundamental component of speaker localization and tracking algorithms. Most of the existing systems are based on the generalized cross-correlation method assuming gaussianity of the source. It has been shown that the distribution of speech, captured with far-field microphones, is highly varying, depending on the noise and reverberation conditions. Thus the performance of TDE is expected to fluctuate depending on the underlying assumption for the speech distribution, being also subject to multi-path reflections and competitive background noise. This paper investigates the effect upon TDE when modeling the source signal with different speech-based distributions. An information theoretical TDE method indirectly encapsulating higher order statistics (HOS) formed the basis of this work. The underlying assumption of Gaussian distributed source has been replaced by that of generalized Gaussian distribution that allows evaluating the problem under a larger set of speech-shaped distributions, ranging from Gaussian to Laplacian and Gamma. Closed forms of the univariate and multivariate entropy expressions of the generalized Gaussian distribution are derived to evaluate the TDE. The results indicate that TDE based on the specific criterion is independent of the underlying assumption for the distribution of the source, for the same covariance matrix.
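The generalized cross-correlation baseline against which the paper's information-theoretic criterion is compared can be sketched as GCC-PHAT between two simulated microphone signals. The signal model and delay below are made up; a real speech source would have the heavy-tailed (Laplacian/Gamma-like) distribution that motivates the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

n = 4096
true_delay = 23                          # x2 lags x1 by 23 samples

s = rng.normal(size=n)                   # broadband surrogate for the source
x1 = s + 0.01 * rng.normal(size=n)
x2 = np.roll(s, true_delay) + 0.01 * rng.normal(size=n)

# GCC-PHAT: cross-spectrum with phase-transform weighting, then the lag of
# the correlation peak is the delay estimate.
X1, X2 = np.fft.rfft(x1), np.fft.rfft(x2)
cross = X2 * np.conj(X1)
phat = cross / (np.abs(cross) + 1e-12)   # PHAT weighting keeps phase only
cc = np.fft.irfft(phat, n=n)
lag = int(np.argmax(cc))
if lag > n // 2:                         # map circular lags to negative values
    lag -= n
```

In reverberant, noisy conditions the correlation peak degrades, which is where the entropy-based criterion with a generalized Gaussian source model is argued to be more robust.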

  15. Fast and accurate detection of spread source in large complex networks.

    PubMed

    Paluch, Robert; Lu, Xiaoyan; Suchecki, Krzysztof; Szymański, Bolesław K; Hołyst, Janusz A

    2018-02-06

    Spread over complex networks is a ubiquitous process with increasingly wide applications. Locating spread sources is often important, e.g. finding patient zero in an epidemic, or the source of a rumor spreading in a social network. Pinto, Thiran and Vetterli introduced an algorithm (PTVA) to solve the important case of this problem in which a limited set of nodes act as observers and report times at which the spread reached them. PTVA uses all observers to find a solution. Here we propose a new approach in which observers with low-quality information (i.e. with large spread encounter times) are ignored and potential sources are selected based on the likelihood gradient from high-quality observers. The original complexity of PTVA is O(N^α), where α ∈ (3,4) depends on the network topology and number of observers (N denotes the number of nodes in the network). Our Gradient Maximum Likelihood Algorithm (GMLA) reduces this complexity to O(N² log N). Extensive numerical tests performed on synthetic networks and a real Gnutella network, under the limitation that the identities of spreaders are unknown to observers, demonstrate that for scale-free networks GMLA yields higher-quality localization results than PTVA does.
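The observer-selection idea behind GMLA, trusting the observers reached earliest, can be sketched on a toy graph with deterministic unit-time spread. This is a simplified stand-in, not the published algorithm: PTVA/GMLA use a Gaussian model of propagation times and (for GMLA) likelihood gradients, whereas this sketch just scores each candidate by the spread of its time-minus-distance offsets over the K earliest observers:

```python
from collections import deque

# Small invented graph; spread takes one time unit per edge.
edges = [(0, 1), (1, 2), (2, 3), (1, 4), (4, 5), (5, 6), (2, 6), (6, 7)]
n = 8
adj = [[] for _ in range(n)]
for a, b in edges:
    adj[a].append(b)
    adj[b].append(a)

def bfs_dist(src):
    dist = [None] * n
    dist[src] = 0
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if dist[v] is None:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

true_source = 4
t_arrival = bfs_dist(true_source)
observers = {0: t_arrival[0], 3: t_arrival[3], 6: t_arrival[6], 7: t_arrival[7]}

# Keep only the K observers reached earliest (highest-quality information).
K = 3
best = dict(sorted(observers.items(), key=lambda kv: kv[1])[:K])

def score(cand):
    # For the true source, t_obs - dist(cand, obs) equals the same constant
    # (the unknown start time) at every observer; score the spread.
    d = bfs_dist(cand)
    offs = [t - d[o] for o, t in best.items()]
    mean = sum(offs) / len(offs)
    return sum((x - mean) ** 2 for x in offs)

estimate = min(range(n), key=score)
```

Discarding the late, low-quality observers is what lets GMLA restrict the likelihood evaluation and cut the complexity from O(N^α) toward O(N² log N).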

  16. MEG source imaging method using fast L1 minimum-norm and its applications to signals with brain noise and human resting-state source amplitude images.

    PubMed

    Huang, Ming-Xiong; Huang, Charles W; Robb, Ashley; Angeles, AnneMarie; Nichols, Sharon L; Baker, Dewleen G; Song, Tao; Harrington, Deborah L; Theilmann, Rebecca J; Srinivasan, Ramesh; Heister, David; Diwakar, Mithun; Canive, Jose M; Edgar, J Christopher; Chen, Yu-Han; Ji, Zhengwei; Shen, Max; El-Gabalawy, Fady; Levy, Michael; McLay, Robert; Webb-Murphy, Jennifer; Liu, Thomas T; Drake, Angela; Lee, Roland R

    2014-01-01

    The present study developed a fast MEG source imaging technique based on Fast Vector-based Spatio-Temporal Analysis using a L1-minimum-norm (Fast-VESTAL) and then used the method to obtain the source amplitude images of resting-state magnetoencephalography (MEG) signals for different frequency bands. The Fast-VESTAL technique consists of two steps. First, L1-minimum-norm MEG source images were obtained for the dominant spatial modes of sensor-waveform covariance matrix. Next, accurate source time-courses with millisecond temporal resolution were obtained using an inverse operator constructed from the spatial source images of Step 1. Using simulations, Fast-VESTAL's performance was assessed for its 1) ability to localize multiple correlated sources; 2) ability to faithfully recover source time-courses; 3) robustness to different SNR conditions including SNR with negative dB levels; 4) capability to handle correlated brain noise; and 5) statistical maps of MEG source images. An objective pre-whitening method was also developed and integrated with Fast-VESTAL to remove correlated brain noise. Fast-VESTAL's performance was then examined in the analysis of human median-nerve MEG responses. The results demonstrated that this method easily distinguished sources in the entire somatosensory network. Next, Fast-VESTAL was applied to obtain the first whole-head MEG source-amplitude images from resting-state signals in 41 healthy control subjects, for all standard frequency bands. Comparisons between resting-state MEG source images and known neurophysiology were provided. Additionally, in simulations and cases with MEG human responses, the results obtained from using the conventional beamformer technique were compared with those from Fast-VESTAL, which highlighted the beamformer's problems of signal leaking and distorted source time-courses. © 2013.
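The L1 minimum-norm core of this family of methods can be sketched as a lasso-type inverse solved with ISTA on a made-up lead field. This is not the Fast-VESTAL pipeline itself: its two-step structure (dominant spatial modes of the sensor covariance, then an inverse operator for time courses) and its pre-whitening are omitted, and the dimensions below are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)

# Sketch of an L1 minimum-norm inverse:
#   min_s  0.5 * ||b - A s||^2 + lam * ||s||_1
# A is an invented lead field (sensors x sources).
n_sensors, n_sources = 32, 200
A = rng.normal(size=(n_sensors, n_sources)) / np.sqrt(n_sensors)

s_true = np.zeros(n_sources)
s_true[[20, 120]] = [1.0, -0.8]          # two focal sources
b = A @ s_true + 0.01 * rng.normal(size=n_sensors)

lam = 0.05
L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
s = np.zeros(n_sources)
for _ in range(500):                     # ISTA iterations
    g = A.T @ (A @ s - b)                # gradient of the data-fit term
    u = s - g / L
    s = np.sign(u) * np.maximum(np.abs(u) - lam / L, 0.0)  # soft threshold

support = np.nonzero(np.abs(s) > 0.1)[0]
```

The L1 penalty is what yields focal source images with far fewer active locations than sensors-squared unknowns, in contrast to the spatially smeared L2 minimum-norm solution.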

  17. MEG Source Imaging Method using Fast L1 Minimum-norm and its Applications to Signals with Brain Noise and Human Resting-state Source Amplitude Images

    PubMed Central

    Huang, Ming-Xiong; Huang, Charles W.; Robb, Ashley; Angeles, AnneMarie; Nichols, Sharon L.; Baker, Dewleen G.; Song, Tao; Harrington, Deborah L.; Theilmann, Rebecca J.; Srinivasan, Ramesh; Heister, David; Diwakar, Mithun; Canive, Jose M.; Edgar, J. Christopher; Chen, Yu-Han; Ji, Zhengwei; Shen, Max; El-Gabalawy, Fady; Levy, Michael; McLay, Robert; Webb-Murphy, Jennifer; Liu, Thomas T.; Drake, Angela; Lee, Roland R.

    2014-01-01

    The present study developed a fast MEG source imaging technique based on Fast Vector-based Spatio-Temporal Analysis using a L1-minimum-norm (Fast-VESTAL) and then used the method to obtain the source amplitude images of resting-state magnetoencephalography (MEG) signals for different frequency bands. The Fast-VESTAL technique consists of two steps. First, L1-minimum-norm MEG source images were obtained for the dominant spatial modes of sensor-waveform covariance matrix. Next, accurate source time-courses with millisecond temporal resolution were obtained using an inverse operator constructed from the spatial source images of Step 1. Using simulations, Fast-VESTAL’s performance was assessed for its 1) ability to localize multiple correlated sources; 2) ability to faithfully recover source time-courses; 3) robustness to different SNR conditions including SNR with negative dB levels; 4) capability to handle correlated brain noise; and 5) statistical maps of MEG source images. An objective pre-whitening method was also developed and integrated with Fast-VESTAL to remove correlated brain noise. Fast-VESTAL’s performance was then examined in the analysis of human median-nerve MEG responses. The results demonstrated that this method easily distinguished sources in the entire somatosensory network. Next, Fast-VESTAL was applied to obtain the first whole-head MEG source-amplitude images from resting-state signals in 41 healthy control subjects, for all standard frequency bands. Comparisons between resting-state MEG source images and known neurophysiology were provided. Additionally, in simulations and cases with MEG human responses, the results obtained from using the conventional beamformer technique were compared with those from Fast-VESTAL, which highlighted the beamformer’s problems of signal leaking and distorted source time-courses. PMID:24055704

  18. Spatio-temporal reconstruction of brain dynamics from EEG with a Markov prior.

    PubMed

    Hansen, Sofie Therese; Hansen, Lars Kai

    2017-03-01

    Electroencephalography (EEG) can capture brain dynamics in high temporal resolution. By projecting the scalp EEG signal back to its origin in the brain also high spatial resolution can be achieved. Source localized EEG therefore has potential to be a very powerful tool for understanding the functional dynamics of the brain. Solving the inverse problem of EEG is however highly ill-posed as there are many more potential locations of the EEG generators than EEG measurement points. Several well-known properties of brain dynamics can be exploited to alleviate this problem. More short ranging connections exist in the brain than long ranging, arguing for spatially focal sources. Additionally, recent work (Delorme et al., 2012) argues that EEG can be decomposed into components having sparse source distributions. On the temporal side both short and long term stationarity of brain activation are seen. We summarize these insights in an inverse solver, the so-called "Variational Garrote" (Kappen and Gómez, 2013). Using a Markov prior we can incorporate flexible degrees of temporal stationarity. Through spatial basis functions spatially smooth distributions are obtained. Sparsity of these are inherent to the Variational Garrote solver. We name our method the MarkoVG and demonstrate its ability to adapt to the temporal smoothness and spatial sparsity in simulated EEG data. Finally a benchmark EEG dataset is used to demonstrate MarkoVG's ability to recover non-stationary brain dynamics. Copyright © 2016 Elsevier Inc. All rights reserved.

  19. MEG source localization of spatially extended generators of epileptic activity: comparing entropic and hierarchical bayesian approaches.

    PubMed

    Chowdhury, Rasheda Arman; Lina, Jean Marc; Kobayashi, Eliane; Grova, Christophe

    2013-01-01

    Localizing the generators of epileptic activity in the brain using Electro-EncephaloGraphy (EEG) or Magneto-EncephaloGraphy (MEG) signals is of particular interest during the pre-surgical investigation of epilepsy. Epileptic discharges can be detectable from background brain activity, provided they are associated with spatially extended generators. Using realistic simulations of epileptic activity, this study evaluates the ability of distributed source localization methods to accurately estimate the location of the generators and their sensitivity to the spatial extent of such generators when using MEG data. Source localization methods based on two types of realistic models have been investigated: (i) brain activity may be modeled using cortical parcels and (ii) brain activity is assumed to be locally smooth within each parcel. A Data Driven Parcellization (DDP) method was used to segment the cortical surface into non-overlapping parcels and diffusion-based spatial priors were used to model local spatial smoothness within parcels. These models were implemented within the Maximum Entropy on the Mean (MEM) and the Hierarchical Bayesian (HB) source localization frameworks. We proposed new methods in this context and compared them with other standard ones using Monte Carlo simulations of realistic MEG data involving sources of several spatial extents and depths. Detection accuracy of each method was quantified using Receiver Operating Characteristic (ROC) analysis and localization error metrics. Our results showed that methods implemented within the MEM framework were sensitive to all spatial extents of the sources ranging from 3 cm² to 30 cm², whatever were the number and size of the parcels defining the model. To reach a similar level of accuracy within the HB framework, a model using parcels larger than the size of the sources should be considered.

  20. MEG Source Localization of Spatially Extended Generators of Epileptic Activity: Comparing Entropic and Hierarchical Bayesian Approaches

    PubMed Central

    Chowdhury, Rasheda Arman; Lina, Jean Marc; Kobayashi, Eliane; Grova, Christophe

    2013-01-01

    Localizing the generators of epileptic activity in the brain using Electro-EncephaloGraphy (EEG) or Magneto-EncephaloGraphy (MEG) signals is of particular interest during the pre-surgical investigation of epilepsy. Epileptic discharges can be detectable from background brain activity, provided they are associated with spatially extended generators. Using realistic simulations of epileptic activity, this study evaluates the ability of distributed source localization methods to accurately estimate the location of the generators and their sensitivity to the spatial extent of such generators when using MEG data. Source localization methods based on two types of realistic models have been investigated: (i) brain activity may be modeled using cortical parcels and (ii) brain activity is assumed to be locally smooth within each parcel. A Data Driven Parcellization (DDP) method was used to segment the cortical surface into non-overlapping parcels and diffusion-based spatial priors were used to model local spatial smoothness within parcels. These models were implemented within the Maximum Entropy on the Mean (MEM) and the Hierarchical Bayesian (HB) source localization frameworks. We proposed new methods in this context and compared them with other standard ones using Monte Carlo simulations of realistic MEG data involving sources of several spatial extents and depths. Detection accuracy of each method was quantified using Receiver Operating Characteristic (ROC) analysis and localization error metrics. Our results showed that methods implemented within the MEM framework were sensitive to all spatial extents of the sources ranging from 3 cm² to 30 cm², whatever were the number and size of the parcels defining the model. To reach a similar level of accuracy within the HB framework, a model using parcels larger than the size of the sources should be considered. PMID:23418485

  1. A Feedforward Control Approach to the Local Navigation Problem for Autonomous Vehicles

    DTIC Science & Technology

    1994-05-02

    AD-A282 787. A Feedforward Control Approach to the Local Navigation Problem for Autonomous Vehicles. Alonzo Kelly, CMU-RI-TR-94-17, The Robotics ... follow, or a direction to prefer, it cannot generate its own strategic goals. Therefore, it solves the local planning problem for autonomous vehicles. The ... autonomous vehicles. It is intelligent because it uses range images that are generated from either a laser rangefinder or a stereo triangulation

  2. [Simulation of CO2 exchange between forest canopy and atmosphere].

    PubMed

    Diao, Yiwei; Wang, Anzhi; Jin, Changjie; Guan, Dexin; Pei, Tiefan

    2006-12-01

    Estimating the scalar source/sink distribution of CO2 and its vertical fluxes within and above forest canopy continues to be a critical research problem in biosphere-atmosphere exchange processes and plant ecology. With broad-leaved Korean pine forest in Changbai Mountains as test object, and based on Raupach's localized near field theory, the source/sink and vertical flux distribution of CO2 within and above forest canopy were modeled through an inverse Lagrangian dispersion analysis. This model correctly predicted a strong positive CO2 source strength in the deeper layers of the canopy due to soil-plant respiration, and a strong CO2 sink in the upper layers of the canopy due to the assimilation by sunlit foliage. The foliage in the top layer of canopy changed from a CO2 source in the morning to a CO2 sink in the afternoon, while the soil constituted a strong CO2 source all day. The simulation results accorded well with the eddy covariance CO2 flux measurements within and above the canopy, and the average precision was 89%. The CO2 exchange predicted by the analysis was on average 15% higher than the eddy-correlation estimate, but exhibited an identical temporal trend. Atmospheric stability remarkably affected the CO2 exchange between forest canopy and atmosphere.
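Once a dispersion matrix is in hand, the inverse Lagrangian analysis described in this record reduces to a linear inversion of the measured concentration profile. The matrix and source values below are invented to show only that step; in the study the matrix would come from Raupach's localized near-field theory:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dispersion matrix: D[i, j] is the concentration produced at
# measurement level i by unit source strength in canopy layer j.
D = np.array([[2.0, 0.6, 0.2],
              [0.8, 1.8, 0.5],
              [0.3, 0.7, 1.6],
              [0.1, 0.3, 0.9]])          # 4 levels x 3 canopy layers

# Soil/lower canopy as a CO2 source (respiration), upper canopy as a sink
# (assimilation), echoing the pattern the study reports.
s_true = np.array([1.5, -0.4, -0.8])
c_obs = D @ s_true + 0.01 * rng.normal(size=4)   # measured profile

# Least-squares inversion for the layer source/sink strengths:
#   c_obs ≈ D @ s.
s_hat, *_ = np.linalg.lstsq(D, c_obs, rcond=None)

# Cumulative sum of layer sources gives the vertical CO2 flux at the top
# of each layer, which is what eddy-covariance measurements check.
flux = np.cumsum(s_hat)
```

Comparing `flux` at the canopy top against an independent eddy-covariance flux is the kind of validation the record reports (89% average precision).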

  3. European youth care sites serve different populations of adolescents with cannabis use disorder. Baseline and referral data from the INCANT trial.

    PubMed

    Phan, Olivier; Henderson, Craig E; Angelidis, Tatiana; Weil, Patricia; van Toorn, Manja; Rigter, Renske; Soria, Cecilia; Rigter, Henk

    2011-07-12

MDFT (Multidimensional Family Therapy) is a family-based outpatient treatment programme for adolescent problem behaviour. MDFT has been found effective in the USA in adolescent samples differing in severity and treatment delivery settings. At the request of five governments (Belgium, France, Germany, the Netherlands, and Switzerland), MDFT has now been tested in the joint INCANT trial (International Cannabis Need of Treatment) for applicability in Western Europe. In each of the five countries, study participants were recruited from the local population of youth seeking or guided to treatment for, among other things, cannabis use disorder. There is little information in the literature on whether these populations are comparable between sites/countries. We therefore examined whether the study samples enrolled in the five countries differed in baseline characteristics regarding demographics, clinical profile, and treatment delivery setting. INCANT was a multicentre phase III(b) randomized controlled trial with an open-label, parallel-group design. It compared MDFT with treatment as usual (TAU) at and across sites in Berlin, Brussels, Geneva, The Hague and Paris. Participants of INCANT were adolescents of either sex, from 13 through 18 years of age, with a cannabis use disorder (dependence or abuse), and at least one parent willing to take part in the treatment. In total, 450 cases/families were randomized (concealed) into INCANT. We collected data about adolescent and family demographics (age, gender, family composition, school, work, friends, and leisure time). In addition, we gathered data about problem behaviour (substance use, alcohol and cannabis use disorders, delinquency, psychiatric co-morbidity). There were no major differences on any of these measures between the treatment conditions (MDFT and TAU) at any of the sites. However, there were cross-site differences on many variables. Most of these could be explained by variations in treatment culture, as reflected by referral policy, i.e., participants' referral source. We distinguished 'self-determined' referral (common in Brussels and Paris) and referral with some authority-related 'external' coercion (common in Geneva and The Hague); the two referral types were more evenly divided in Berlin. Many cross-site baseline differences disappeared when we took referral source into account, but not all. A multisite trial has the advantage of being efficient, but it also carries risks, the most important being a lack of equivalence between local study populations. Our site populations differed in many respects. This is not a problem for analyses and interpretations if the differences can somehow be accounted for; to a major extent, this appeared possible in INCANT. The most important factor underlying the cross-site variations in baseline characteristics was referral source, and correcting for referral source made most differences disappear. Therefore, we will use referral source as a covariate accounting for site differences in future INCANT outcome analyses. ISRCTN: ISRCTN51014277. © 2011 Phan et al; licensee BioMed Central Ltd.

  4. Well-conditioning global-local analysis using stable generalized/extended finite element method for linear elastic fracture mechanics

    NASA Astrophysics Data System (ADS)

    Malekan, Mohammad; Barros, Felicio Bruzzi

    2016-11-01

Using a locally-enriched strategy to enrich a small/local part of the problem with the generalized/extended finite element method (G/XFEM) leads to a non-optimal convergence rate and an ill-conditioned system of equations due to the presence of blending elements. The local enrichment can be of polynomial, singular, branch or numerical type. The so-called stable version of the G/XFEM method provides a well-conditioned approach when only singular functions are used in the blending elements. This paper combines numerical enrichment functions obtained from the global-local G/XFEM method with polynomial enrichment, along with a well-conditioned approach, stable G/XFEM, in order to show the robustness and effectiveness of the approach. In global-local G/XFEM, the enrichment functions are constructed numerically from the solution of a local problem. Furthermore, several enrichment strategies are adopted along with the global-local enrichment. The results obtained with these enrichment strategies are discussed in detail, considering the convergence rate in strain energy, the growth rate of the condition number, and computational cost. Numerical experiments show that using geometrical enrichment along with stable G/XFEM for the global-local strategy improves the convergence rate and the conditioning of the problem. In addition, the results show that using polynomial enrichment for the global problem simultaneously with global-local enrichments leads to ill-conditioned system matrices and a poor convergence rate.

  5. Nitrous acid in a street canyon environment: sources and the contribution to local oxidation capacity

    NASA Astrophysics Data System (ADS)

    Yun, Hui; Wang, Zhe; Zha, Qiaozhi; Wang, Weihao; Xue, Likun; Zhang, Li; Li, Qinyi; Cui, Long; Lee, Shuncheng; Poon, Steven; Wang, Tao

    2017-04-01

Nitrous acid (HONO) is one of the dominant sources of the hydroxyl radical (OH) and plays an important role in photochemical oxidation processes in the atmosphere. Although HONO has been extensively studied in urban areas, its importance and effects in the street canyon microenvironment have not been thoroughly investigated. Street canyons, which suffer from serious air pollution, are widely distributed in downtown areas, where parallel high-rise buildings flank narrow roads. In this study, we measured HONO at the roadside of a street canyon in urban Hong Kong and applied an observation-based box model (OBM) based on the Master Chemical Mechanism (MCM 3.3) to investigate the contribution of HONO to local oxidation chemistry. Higher HONO mixing ratios were observed in the daytime than in the nighttime. An average emission ratio (ΔHONO/ΔNOx) of 1.0% (±0.5%) was derived at this roadside site, and direct HONO emission from vehicles contributed 38% of the measured HONO in the street canyon. Heterogeneous NO2 conversion on humid ground or building surfaces and the uptake of NO2 on fresh soot surfaces were the other two important HONO sources in this microenvironment. OBM simulations constrained with observed HONO showed that the peak concentrations of OH, HO2 and RO2 were 7.9, 5.0 and 7.5 times those of the case with only OH + NO as the HONO source. Photolysis of HONO contributed 86.5% of the total primary radical production rate and can lead to efficient NO2 and O3 production under conditions of weak regional O3 transport. Our study suggests that HONO could significantly increase the atmospheric oxidation capacity in a street canyon, which may affect the secondary formation of aerosols and OVOCs.
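The vehicle-emission bookkeeping in the abstract is a simple scaling by the emission ratio ΔHONO/ΔNOx. A minimal sketch, assuming hypothetical NOx and HONO readings (only the 1.0% ratio comes from the abstract; the function name and numbers are illustrative):

```python
def hono_from_vehicles(nox_ppbv, emission_ratio=0.010):
    """Estimate direct vehicle-emitted HONO from co-measured NOx,
    scaling by the roadside emission ratio dHONO/dNOx (1.0% here)."""
    return emission_ratio * nox_ppbv

# Hypothetical illustration: NOx = 120 ppbv gives 1.2 ppbv of directly
# emitted HONO; against a hypothetical measured HONO of 3.2 ppbv, the
# direct-emission fraction is 1.2 / 3.2.
direct = hono_from_vehicles(120.0)
measured_hono = 3.2  # ppbv, made-up value for the example
fraction = direct / measured_hono
```

The remaining (1 - fraction) of measured HONO would then be attributed to heterogeneous sources such as NO2 conversion on ground, building, and soot surfaces.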

  6. Problems, pitfalls and probes: Welcome to the jungle of electrochemical noise technology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Edgemon, G.L.

    1998-02-19

The rise of electrochemical noise (EN) as a corrosion monitoring technique has brought unique problems associated with field application of the method. Many issues relate to the design of the EN probe electrodes. The ability of an electrochemical noise monitoring system to identify and discriminate between localized corrosion mechanisms depends primarily on the capability of the probe to separate the corrosion cell anode from the corresponding cathode. The effectiveness of this separation is largely determined by the proper design of the probe for the environment of interest. No single probe design or geometry can be used effectively in every situation to monitor all types of corrosion. In this paper the authors focus on a case study and probe development history related to monitoring corrosion in an extremely hostile environment using EN. While the ultimate application of EN was and continues to be successful, the case study shows that patience and persistence were necessary to properly implement the monitoring program. Other possible sources of problems and frustration in implementing EN are also discussed.

  7. Simulated annealing two-point ray tracing

    NASA Astrophysics Data System (ADS)

    Velis, Danilo R.; Ulrych, Tadeusz J.

We present a new method for solving the two-point seismic ray tracing problem based on Fermat's principle. The algorithm overcomes some well-known difficulties that arise in standard ray shooting and bending methods. Problems related to (1) the selection of new take-off angles and (2) local minima in multipathing cases are overcome by using an efficient simulated annealing (SA) algorithm. At each iteration, the ray is propagated from the source by solving a standard initial value problem, and the last portion of the raypath is then forced to pass through the receiver. Using SA, the total traveltime is globally minimized by finding the initial conditions that produce the absolute minimum path. The procedure is suitable for tracing rays through 2D complex structures, although it can be extended to 3D velocity media. Not only direct waves, but also reflected waves and head waves can be incorporated in the scheme. One important advantage is its simplicity, inasmuch as any available or user-preferred initial value solver can be used. A number of clarifying examples of multipathing in 2D media are examined.
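The core idea above, simulated annealing over take-off angle to escape local traveltime minima, can be sketched as follows. This is not the authors' implementation: the toy `traveltime` function stands in for the real objective (integrating an initial-value ray through the velocity model and penalizing the miss at the receiver), and all parameters are illustrative:

```python
import math
import random

def traveltime(theta):
    # Toy multimodal traveltime curve with several local minima,
    # standing in for the true two-point ray-tracing objective.
    return 2.0 + 0.5 * math.cos(3.0 * theta) + 0.1 * (theta - 0.7) ** 2

def anneal(f, lo, hi, t0=1.0, cooling=0.995, steps=4000, seed=1):
    rng = random.Random(seed)
    x = rng.uniform(lo, hi)
    fx = f(x)
    best_x, best_f, t = x, fx, t0
    for _ in range(steps):
        cand = min(hi, max(lo, x + rng.gauss(0.0, 0.3)))
        fc = f(cand)
        # Metropolis rule: always accept improvements, occasionally
        # accept uphill moves so the walk can leave a local minimum.
        if fc < fx or rng.random() < math.exp((fx - fc) / t):
            x, fx = cand, fc
            if fx < best_f:
                best_x, best_f = x, fx
        t *= cooling               # geometric cooling schedule
    return best_x, best_f

theta, tt = anneal(traveltime, -2.0, 4.0)
```

The best-so-far pair `(best_x, best_f)` plays the role of the absolute-minimum-path initial conditions in the abstract.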

  8. Systematic study of target localization for bioluminescence tomography guided radiation therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yu, Jingjing; Zhang, Bin; Reyes, Juvenal

Purpose: To overcome the limitation of CT/cone-beam CT (CBCT) in guiding radiation for soft tissue targets, the authors developed a spectrally resolved bioluminescence tomography (BLT) system for the small animal radiation research platform. The authors systematically assessed the performance of the BLT system in terms of target localization and the ability to resolve two neighboring sources in simulations, a tissue-mimicking phantom, and in vivo environments. Methods: Multispectral measurements acquired in a single projection were used for the BLT reconstruction. The incomplete variables truncated conjugate gradient algorithm with an iterative permissible-region shrinking strategy was employed as the optimization scheme to reconstruct source distributions. Simulation studies were conducted for single spherical sources with sizes from 0.5 to 3 mm radius at depths of 3-12 mm. The same configuration was also applied for the double-source simulations, with source separations varying from 3 to 9 mm. Experiments were performed in a standalone BLT/CBCT system. Two self-illuminated sources with 3 and 4.7 mm separations placed inside a tissue-mimicking phantom were chosen as the test cases. Live mice implanted with a single source at 6 and 9 mm depth, two sources at 3 and 5 mm separation at a depth of 5 mm, or three sources in the abdomen were also used to illustrate the localization capability of the BLT system for multiple targets in vivo. Results: For the simulation study, approximately 1 mm accuracy can be achieved in localizing the center of mass (CoM) for a single source and the grouped CoM for double-source cases. For the case of a 1.5 mm radius source, a common tumor size in preclinical studies, the simulations show that for all source separations considered, except for the 3 mm separation at 9 and 12 mm depth, the two neighboring sources can be resolved at depths from 3 to 12 mm. Phantom experiments illustrated that 2D bioluminescence imaging failed to distinguish two sources, but BLT can provide 3D source localization with approximately 1 mm accuracy. The in vivo results are encouraging: 1 and 1.7 mm accuracy was attained for the single-source case at 6 and 9 mm depth, respectively. For the two-source in vivo study, both sources could be distinguished at 3 and 5 mm separations, and approximately 1 mm localization accuracy was also achieved. Conclusions: This study demonstrated that the multispectral BLT/CBCT system can localize and resolve multiple sources over a wide range of source sizes, depths, and separations. The average accuracy of localizing the CoM for a single source and the grouped CoM for double sources is approximately 1 mm, except for deep-seated targets. The information provided in this study can be instructive in devising treatment margins for BLT-guided irradiation. These results also suggest that the 3D BLT system could guide radiation in situations with multiple targets, such as metastatic tumor models.

  9. Reduction of atmospheric fine particle level by restricting the idling vehicles around a sensitive area.

    PubMed

    Lee, Yen-Yi; Lin, Sheng-Lun; Yuan, Chung-Shin; Lin, Ming-Yeng; Chen, Kang-Shin

    2018-07-01

Atmospheric particles are a major problem that can harm human health, especially in densely populated urban areas. Chiayi is a typical city with very high population and traffic density, located on the downwind side of several pollution sources. Multiple contributors of PM2.5 (particulate matter with an aerodynamic diameter ≤2.5 μm) and ultrafine particles cause complicated air quality problems. This study focused on inhibiting local emission sources by restricting idling vehicles around a school area and evaluating the changes in surrounding atmospheric PM conditions. Two stationary sites were monitored, a background site on the upwind side of the school and a campus site inside the school, to measure the exposure level before and after the idling prohibition. In the base condition, the PM2.5 mass concentration was found to increase 15% from the background, whereas the nitrate (NO3-) content increased significantly at the campus site. The anthropogenic metal contents in PM2.5 were higher at the campus site than at the background site. Mobile emissions were found to be the most likely contributor to the school hot-spot area by chemical mass balance modeling (CMB8.2). After idling vehicle control, the PM2.5 increase at the school campus fell to only 2%, as the mobile source contribution was reduced from 42.8% to 36.7%. Mobile monitoring also showed significant reductions in atmospheric PM2.5, PM0.1, polycyclic aromatic hydrocarbon (PAH), and black carbon (BC) levels, by 16.5%, 33.3%, 48.0%, and 11.5%, respectively. Consequently, the restriction of local idling emissions was shown to significantly reduce PM and harmful pollutants in the hot spots around the school environment. The emission of idling vehicles strongly affects the levels of particles and related pollutants in near-ground air around a school area. The PM2.5 mass concentration at the campus site increased 15% from the background site, whereas NO3- and anthropogenic metals also increased significantly. Meanwhile, the PM2.5 contribution from mobile sources on the campus increased 6.6% from the upwind site. An idling prohibition took place and showed impressive results: reductions in PM2.5, ionic components, and non-natural metal contents were found after the prohibition. The mobile monitoring also indicated a significant improvement in the spatial analysis of PM2.5, PM0.1, PAH, and black carbon concentrations. These findings are useful for effectively improving the local air quality of a densely populated city during rush hour.

  10. Will the river Irtysh survive the year 2030? Impact of long-term unsuitable land use and water management of the upper stretch of the river catchment (North Kazakhstan)

    NASA Astrophysics Data System (ADS)

    Hrkal, Zbyněk; Gadalia, Alain; Rigaudiere, Pierre

    2006-07-01

The Irtysh river basin, all the way from the river's source in China, across Kazakhstan, to the Russian part of Siberia, is among the most ecologically endangered and affected regions on our planet. The study summarizes the historical reasons for anthropogenic interventions in this area, which began with the construction of plants of the military-industrial complex in the 1940s, during World War II. These plants are largely responsible for locally extreme concentrations of heavy metals in surface waters as well as groundwater. The Semipalatinsk nuclear test site plays a specific role as a source of contamination of local waters. The release of formerly top-secret data provided knowledge of serious problems related to the high radioactivity of groundwater, which may spread uncontrollably through a system of secondary fissures activated by nuclear blasts. Another serious problem in this region is quantitative. Model simulations of the water balance indicate that large industrial development in the source area in China and continuously increasing water consumption in Kazakhstan may lead to desiccation of the lower stretch of this large river in Siberia during the summer months of 2030.

  11. Finite Volume Element (FVE) discretization and multilevel solution of the axisymmetric heat equation

    NASA Astrophysics Data System (ADS)

    Litaker, Eric T.

    1994-12-01

The axisymmetric heat equation, resulting from a point source of heat applied to a metal block, is solved numerically; both iterative and multilevel solutions are computed in order to compare the two processes. The continuum problem is discretized in two stages: finite differences are used to discretize the time derivatives, resulting in a fully implicit backward time-stepping scheme, and the Finite Volume Element (FVE) method is used to discretize the spatial derivatives. The application of the FVE method to a problem in cylindrical coordinates is new, and results in stencils which are analyzed extensively. Several iteration schemes are considered, including both Jacobi and Gauss-Seidel; a thorough analysis of these schemes is done, using both the spectral radii of the iteration matrices and local mode analysis. Using this discretization, a Gauss-Seidel relaxation scheme is used to solve the heat equation iteratively. A multilevel solution process is then constructed, including the development of intergrid transfer and coarse grid operators. Local mode analysis is performed on the components of the amplification matrix, resulting in the two-level convergence factors for various combinations of the operators. A multilevel solution process is implemented by using multigrid V-cycles; the iterative and multilevel results are compared and discussed in detail. The computational savings resulting from the multilevel process are then discussed.
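The FVE and multilevel machinery above is beyond an abstract-sized sketch, but the fully implicit time step combined with Gauss-Seidel relaxation can be illustrated on the simplest Cartesian analogue. A hedged sketch, assuming a hypothetical 1-D backward-Euler heat step with Dirichlet ends (names and grid are illustrative, not from the thesis):

```python
def heat_step_gauss_seidel(u_old, dt, dx, sweeps=200):
    """One backward-Euler step of u_t = u_xx on a 1-D grid,
    solved by Gauss-Seidel relaxation of the implicit system
    (1 + 2r) u_i - r (u_{i-1} + u_{i+1}) = u_old_i,  r = dt/dx^2."""
    n = len(u_old)
    u = list(u_old)                   # initial guess: previous time level
    r = dt / dx ** 2
    for _ in range(sweeps):
        for i in range(1, n - 1):     # Dirichlet endpoints stay fixed
            u[i] = (u_old[i] + r * (u[i - 1] + u[i + 1])) / (1 + 2 * r)
    return u

# A unit heat spike at the center of an 11-point grid diffuses
# symmetrically after one implicit step.
u0 = [0.0] * 11
u0[5] = 1.0
u1 = heat_step_gauss_seidel(u0, dt=0.01, dx=0.1)
```

In the thesis, this single-grid relaxation is the smoother inside the multigrid V-cycle; the V-cycle accelerates exactly this kind of sweep by correcting low-frequency error on coarser grids.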

  12. Radio sky mapping from satellites at very low frequencies

    NASA Technical Reports Server (NTRS)

    Storey, L. R. O.

    1991-01-01

    Wave Distribution Function (WDF) analysis is a procedure for making sky maps of the sources of natural electromagnetic waves in space plasmas, given local measurements of some or all of the three magnetic and three electric field components. The work that still needs to be done on this subject includes solving basic methodological problems, translating the solution into efficient algorithms, and embodying the algorithms in computer software. One important scientific use of WDF analysis is to identify the mode of origin of plasmaspheric hiss. Some of the data from the Japanese satellite Akebono (EXOS D) are likely to be suitable for this purpose.

  13. Radio sky mapping from satellites at very low frequencies

    NASA Technical Reports Server (NTRS)

    Storey, L. R. O.

    1991-01-01

    Wave Distribution Function (WDF) analysis is a procedure for making sky maps of the sources of natural electromagnetic waves in space plasmas, given local measurements of some or all of the three magnetic and three electric field components. The work that still needs to be done on this subject includes solving basic methodological problems, translating the solution into efficient algorithms, and embodying the algorithms in computer software. One important scientific use of WDF analysis is to identify the mode of origin of plasmaspheric hiss. Some of the data from the Japanese satellite Akebono (EXOS D) are likely to be suitable for this purpose.

  14. A new similarity measure for link prediction based on local structures in social networks

    NASA Astrophysics Data System (ADS)

    Aghabozorgi, Farshad; Khayyambashi, Mohammad Reza

    2018-07-01

Link prediction is a fundamental problem in social network analysis. A variety of link prediction techniques exist that apply similarity measures to estimate the proximity of vertices in the network. Complex networks such as social networks contain structural units called network motifs. In this study, a newly developed similarity measure is proposed in which these structural units serve as the source of similarity estimation. The measure is tested in a supervised learning framework and compared with other similarity measures. The classification model trained with this similarity measure outperforms those trained with others of its kind.
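The motif-based measure itself is not specified in the abstract, so as a hedged sketch of the general shape of local similarity scoring for link prediction, here are two classic local indices (common neighbours and Jaccard) on a made-up toy network:

```python
def common_neighbours(adj, u, v):
    """Number of neighbours shared by u and v (a local similarity score)."""
    return len(adj[u] & adj[v])

def jaccard(adj, u, v):
    """Shared neighbours normalized by the size of the joint neighbourhood."""
    union = adj[u] | adj[v]
    return len(adj[u] & adj[v]) / len(union) if union else 0.0

adj = {                       # tiny illustrative social network
    "a": {"b", "c", "d"},
    "b": {"a", "c"},
    "c": {"a", "b", "d"},
    "d": {"a", "c"},
}
# Score the non-edge (b, d): b and d share neighbours a and c, so a
# link between them would be predicted as likely.
```

In a supervised setup like the one described, such scores become features for present/absent edge pairs, and a classifier is trained to separate the two classes.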

  15. Ship localization in Santa Barbara Channel using machine learning classifiers.

    PubMed

    Niu, Haiqiang; Ozanich, Emma; Gerstoft, Peter

    2017-11-01

    Machine learning classifiers are shown to outperform conventional matched field processing for a deep water (600 m depth) ocean acoustic-based ship range estimation problem in the Santa Barbara Channel Experiment when limited environmental information is known. Recordings of three different ships of opportunity on a vertical array were used as training and test data for the feed-forward neural network and support vector machine classifiers, demonstrating the feasibility of machine learning methods to locate unseen sources. The classifiers perform well up to 10 km range whereas the conventional matched field processing fails at about 4 km range without accurate environmental information.
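The paper's feed-forward networks and support vector machines are not reproduced here; as a hedged sketch of the framing only (range estimation cast as classification into discrete range bins), a minimal nearest-centroid classifier over entirely made-up feature vectors:

```python
def train_centroids(samples):
    """samples: {range_bin: [feature_vector, ...]} -> one centroid per bin."""
    centroids = {}
    for label, vecs in samples.items():
        dim = len(vecs[0])
        centroids[label] = [sum(v[i] for v in vecs) / len(vecs)
                            for i in range(dim)]
    return centroids

def classify(centroids, vec):
    """Assign vec to the range bin with the nearest centroid."""
    def dist2(c):
        return sum((a - b) ** 2 for a, b in zip(c, vec))
    return min(centroids, key=lambda label: dist2(centroids[label]))

training = {                       # hypothetical 2-D acoustic features
    "0-5 km":  [[0.9, 0.1], [1.0, 0.2]],
    "5-10 km": [[0.2, 0.8], [0.1, 1.0]],
}
centroids = train_centroids(training)
```

In the actual study the inputs are normalized array recordings of ship noise, and the point of the comparison is that such learned classifiers degrade more gracefully than matched field processing when the environment is poorly known.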

  16. Physico-chemical quality of drinking water in villages of Primary Health Centre, Waghodia, Gujarat (India).

    PubMed

    Desai, Gaurav; Vasisth, Smriti; Patel, Maharshi; Mehta, Vaibhav; Bhavsar, Bharat

    2012-07-01

Sixteen water samples were collected to study the physical and chemical quality of the main drinking water sources in the villages of the Primary Health Centre, Waghodia, of Vadodara district, Gujarat. The values recommended by the Indian Standard for Drinking Water (IS 10500:1991) were used for comparison with the observed values. The study indicates that the contamination problem in these villages is not alarming at present; however, Waghodia being an industrial town, groundwater quality may deteriorate with the passage of time, which calls for periodic monitoring. The study provides local baseline data that may be useful for comparison in future studies.

  17. Finding the forest in the trees. The challenge of combining diverse environmental data

    NASA Technical Reports Server (NTRS)

    1995-01-01

Development of analytical and functional guidelines to help researchers and technicians engaged in interdisciplinary research better plan and implement their supporting data management activities is addressed, with an emphasis on projects that involve both geophysical and ecological issues. Six case studies were used to identify and understand problems associated with collecting, integrating, and analyzing environmental data from local to global spatial scales and over a range of temporal scales. These case studies were also used to elaborate on the common barriers to interfacing data of disparate sources and types. A number of lessons derived from the case studies are summarized and analyzed.

  18. Imaging with cross-hole seismoelectric tomography

    USGS Publications Warehouse

    Araji, A.H.; Revil, A.; Jardani, A.; Minsley, Burke J.; Karaoulis, M.

    2012-01-01

We propose a cross-hole imaging approach based on seismoelectric conversions (SC) associated with the transmission of seismic waves from seismic sources located in a borehole to receivers (electrodes) located in a second borehole. The seismoelectric (seismic-to-electric) problem is solved using Biot theory coupled with a generalized Ohm's law that includes an electrokinetic streaming current contribution. The components of the displacement of the solid phase, the fluid pressure, and the electrical potential are solved with a finite element approach, using Perfectly Matched Layer (PML) boundary conditions for the seismic waves and boundary conditions mimicking an infinite medium for the electrostatic problem. We develop an inversion algorithm that uses the electrical disturbances recorded in the second borehole to localize the heterogeneities responsible for the SC. Because of the ill-posed nature of the inverse problem (inherent to all potential-field problems), regularization is used to constrain the solution at each time in the SC time window between the time of the seismic shot and the first arrival of the seismic waves in the second borehole. All the inverted volumetric current source densities are aggregated to produce an image of the position of the heterogeneities between the two boreholes. Two simple synthetic case studies are presented to test this concept. The first corresponds to a vertical discontinuity between two homogeneous sub-domains; the second to a poroelastic inclusion (partially saturated with oil) embedded in a homogeneous poroelastic formation. In both cases, the position of the heterogeneity is recovered using only the electrical disturbances associated with the SC. That said, a joint inversion of the seismic and seismoelectric data could improve these results.
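The regularization step above is a standard damped least-squares (Tikhonov) inversion. A minimal sketch on a two-parameter toy problem, not the authors' finite-element inversion (the kernel, data, and damping value are illustrative):

```python
def tikhonov_2x2(G, d, lam):
    """Solve the damped normal equations (G^T G + lam I) m = G^T d
    for a two-component model vector m, by explicit 2x2 inversion."""
    a11 = sum(g[0] * g[0] for g in G) + lam
    a12 = sum(g[0] * g[1] for g in G)
    a22 = sum(g[1] * g[1] for g in G) + lam
    b1 = sum(g[0] * di for g, di in zip(G, d))
    b2 = sum(g[1] * di for g, di in zip(G, d))
    det = a11 * a22 - a12 * a12
    return ((a22 * b1 - a12 * b2) / det, (a11 * b2 - a12 * b1) / det)

G = [(1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]   # toy sensitivity kernel
d = [2.0, -1.0, 1.0]                       # noise-free data for m = (2, -1)
m = tikhonov_2x2(G, d, lam=0.0)            # undamped: exact recovery
m_damped = tikhonov_2x2(G, d, lam=0.5)     # damping shrinks the solution
```

In the paper the model vector is the volumetric current source density on a grid, so the same normal equations are far larger and the damping trades data fit against source-distribution smoothness.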

  19. Strong influence of regional species pools on continent-wide structuring of local communities.

    PubMed

    Lessard, Jean-Philippe; Borregaard, Michael K; Fordyce, James A; Rahbek, Carsten; Weiser, Michael D; Dunn, Robert R; Sanders, Nathan J

    2012-01-22

    There is a long tradition in ecology of evaluating the relative contribution of the regional species pool and local interactions on the structure of local communities. Similarly, a growing number of studies assess the phylogenetic structure of communities, relative to that in the regional species pool, to examine the interplay between broad-scale evolutionary and fine-scale ecological processes. Finally, a renewed interest in the influence of species source pools on communities has shown that the definition of the source pool influences interpretations of patterns of community structure. We use a continent-wide dataset of local ant communities and implement ecologically explicit source pool definitions to examine the relative importance of regional species pools and local interactions for shaping community structure. Then we assess which factors underlie systematic variation in the structure of communities along climatic gradients. We find that the average phylogenetic relatedness of species in ant communities decreases from tropical to temperate regions, but the strength of this relationship depends on the level of ecological realism in the definition of source pools. We conclude that the evolution of climatic niches influences the phylogenetic structure of regional source pools and that the influence of regional source pools on local community structure is strong.

  20. Ambient Sound-Based Collaborative Localization of Indeterministic Devices

    PubMed Central

    Kamminga, Jacob; Le, Duc; Havinga, Paul

    2016-01-01

    Localization is essential in wireless sensor networks. To our knowledge, no prior work has utilized low-cost devices for collaborative localization based on only ambient sound, without the support of local infrastructure. The reason may be the fact that most low-cost devices are indeterministic and suffer from uncertain input latencies. This uncertainty makes accurate localization challenging. Therefore, we present a collaborative localization algorithm (Cooperative Localization on Android with ambient Sound Sources (CLASS)) that simultaneously localizes the position of indeterministic devices and ambient sound sources without local infrastructure. The CLASS algorithm deals with the uncertainty by splitting the devices into subsets so that outliers can be removed from the time difference of arrival values and localization results. Since Android is indeterministic, we select Android devices to evaluate our approach. The algorithm is evaluated with an outdoor experiment and achieves a mean Root Mean Square Error (RMSE) of 2.18 m with a standard deviation of 0.22 m. Estimated directions towards the sound sources have a mean RMSE of 17.5° and a standard deviation of 2.3°. These results show that it is feasible to simultaneously achieve a relative positioning of both devices and sound sources with sufficient accuracy, even when using non-deterministic devices and platforms, such as Android. PMID:27649176
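The time-difference-of-arrival (TDOA) values the CLASS algorithm filters for outliers come from cross-correlating pairs of recordings. A minimal sketch of that single step, on made-up integer signals (not the CLASS pipeline itself):

```python
def tdoa_samples(x, y):
    """Return the lag (in samples) that maximizes the cross-correlation
    of x and y; a positive lag means y is delayed relative to x."""
    n = len(x)
    best_lag, best_corr = 0, float("-inf")
    for lag in range(-n + 1, n):
        c = sum(x[i] * y[i + lag]
                for i in range(max(0, -lag), min(n, n - lag)))
        if c > best_corr:
            best_corr, best_lag = c, lag
    return best_lag

# A pulse that reaches the second microphone 3 samples later:
x = [0, 0, 1, 2, 1, 0, 0, 0, 0, 0]
y = [0, 0, 0, 0, 0, 1, 2, 1, 0, 0]
```

Dividing the lag by the sampling rate and multiplying by the speed of sound gives the range difference to the source; the uncertain input latencies of indeterministic devices corrupt exactly these lags, which is why CLASS removes outliers over device subsets before solving for positions.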
