Sample records for source term method

  1. High-order scheme for the source-sink term in a one-dimensional water temperature model

    PubMed Central

    Jing, Zheng; Kang, Ling

    2017-01-01

    The source-sink term in water temperature models represents the net heat absorbed or released by a water system. This term is very important because it accounts for solar radiation, which can significantly affect water temperature, especially in lakes. However, existing numerical methods for discretizing the source-sink term are very simplistic, causing significant deviations between simulation results and measured data. To address this problem, we present a numerical method specific to the source-sink term. A vertical one-dimensional heat conduction equation was chosen to describe water temperature changes. A two-step operator-splitting method was adopted as the numerical solution. In the first step, using the undetermined coefficient method, a high-order scheme was adopted for discretizing the source-sink term. In the second step, the diffusion term was discretized using the Crank-Nicolson scheme. The effectiveness and capability of the numerical method were assessed by performing numerical tests. Then, the proposed numerical method was applied to a simulation of Guozheng Lake (located in central China). The modeling results were in excellent agreement with measured data. PMID:28264005
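
    A minimal sketch of the two-step operator splitting described above, under stated simplifications: the paper's high-order undetermined-coefficient scheme for the source-sink term is replaced here by a second-order midpoint rule, the source profile is hypothetical, and the boundary treatment is crude. Only the structure (source substep, then Crank-Nicolson diffusion substep) follows the record.

      import numpy as np

      def source(z, t):
          # Hypothetical solar source-sink term (deg C per s), decaying with
          # depth z (m) and following a diurnal cycle in t (s).
          return 1e-4 * np.exp(-0.5 * z) * max(np.sin(2.0 * np.pi * t / 86400.0), 0.0)

      def advance(T, z, t, dt, kappa):
          dz = z[1] - z[0]
          # Step 1: integrate dT/dt = S(z, t) with the midpoint rule
          # (a stand-in for the paper's high-order source scheme).
          T_star = T + dt * source(z, t + 0.5 * dt)
          # Step 2: Crank-Nicolson for dT/dt = kappa * d2T/dz2.
          n = T.size
          r = kappa * dt / dz ** 2
          A = np.zeros((n, n))
          for i in range(1, n - 1):
              A[i, i - 1] = A[i, i + 1] = 1.0
              A[i, i] = -2.0
          # End rows stay zero: boundary cells are updated by the source only
          # (a crude boundary treatment, sufficient for this sketch).
          lhs = np.eye(n) - 0.5 * r * A
          rhs = (np.eye(n) + 0.5 * r * A) @ T_star
          return np.linalg.solve(lhs, rhs)

      z = np.linspace(0.0, 20.0, 101)      # vertical grid, depth in m
      T = np.full(z.size, 15.0)            # initial temperature profile (deg C)
      for k in range(24):                  # one day in hourly steps
          T = advance(T, z, k * 3600.0, 3600.0, kappa=1.4e-7)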

  2. High-order scheme for the source-sink term in a one-dimensional water temperature model.

    PubMed

    Jing, Zheng; Kang, Ling

    2017-01-01

    The source-sink term in water temperature models represents the net heat absorbed or released by a water system. This term is very important because it accounts for solar radiation, which can significantly affect water temperature, especially in lakes. However, existing numerical methods for discretizing the source-sink term are very simplistic, causing significant deviations between simulation results and measured data. To address this problem, we present a numerical method specific to the source-sink term. A vertical one-dimensional heat conduction equation was chosen to describe water temperature changes. A two-step operator-splitting method was adopted as the numerical solution. In the first step, using the undetermined coefficient method, a high-order scheme was adopted for discretizing the source-sink term. In the second step, the diffusion term was discretized using the Crank-Nicolson scheme. The effectiveness and capability of the numerical method were assessed by performing numerical tests. Then, the proposed numerical method was applied to a simulation of Guozheng Lake (located in central China). The modeling results were in excellent agreement with measured data.

  3. Attenuation Tomography of Northern California and the Yellow Sea / Korean Peninsula from Coda-source Normalized and Direct Lg Amplitudes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ford, S R; Dreger, D S; Phillips, W S

    2008-07-16

    Inversions for regional attenuation (1/Q) of Lg are performed in two different regions. The path attenuation component of the Lg spectrum is isolated using the coda-source normalization method, which corrects the Lg spectral amplitude for the source using the stable, coda-derived source spectra. Tomographic images of Northern California agree well with one-dimensional (1-D) Lg Q estimated from five different methods. We note there is some tendency for tomographic smoothing to increase Q relative to targeted 1-D methods. For example, in the San Francisco Bay Area, which has high attenuation relative to the rest of the region, Q is over-estimated by approximately 30. Coda-source normalized attenuation tomography is also carried out for the Yellow Sea/Korean Peninsula (YSKP), where output parameters (site, source, and path terms) are compared with those from the amplitude tomography method of Phillips et al. (2005) as well as a new method that ties the source term to the MDAC formulation (Walter and Taylor, 2001). The source terms show similar scatter between the coda-source corrected and MDAC source perturbation methods, whereas the amplitude method has the greatest correlation with estimated true source magnitude. The coda-source better represents the source spectra compared to the estimated magnitude, which could be the cause of the scatter. The similarity in the source terms between the coda-source and MDAC-linked methods shows that the latter method may approximate the effect of the former, and therefore could be useful in regions without coda-derived sources. The site terms from the MDAC-linked method correlate slightly with global Vs30 measurements. While the coda-source and amplitude ratio methods do not correlate with Vs30 measurements, they do correlate with one another, which provides confidence that the two methods are consistent. The path 1/Q values are very similar between the coda-source and amplitude ratio methods, except for small differences in the Daxing'anling Mountains in the northern YSKP. However, there is one large difference between the MDAC-linked method and the others in the region near stations TJN and INCN, which points to a site effect as the cause of the difference.

  4. Bayesian estimation of a source term of radiation release with approximately known nuclide ratios

    NASA Astrophysics Data System (ADS)

    Tichý, Ondřej; Šmídl, Václav; Hofman, Radek

    2016-04-01

    We are concerned with estimation of a source term in the case of an accidental release from a known location, e.g. a power plant. Usually, the source term of an accidental release of radiation comprises a mixture of nuclides. The gamma dose rate measurements do not provide direct information on the source term composition. However, physical properties of the respective nuclides (deposition properties, decay half-life) can be used when uncertain information on nuclide ratios is available, e.g. from the known reactor inventory. The proposed method is based on a linear inverse model where the observation vector y arises as the product y = Mx of a source-receptor-sensitivity (SRS) matrix M and the source term x. The task is to estimate the unknown source term x. The problem is ill-conditioned, and further regularization is needed to obtain a reasonable solution. In this contribution, we assume that the nuclide ratios of the release are known with some degree of uncertainty. This knowledge is used to form the prior covariance matrix of the source term x. Due to uncertainty in the ratios, the diagonal elements of the covariance matrix are considered to be unknown. Positivity of the source term estimate is guaranteed by using a multivariate truncated Gaussian distribution. Following the Bayesian approach, we estimate all parameters of the model from the data, so that y, M, and the known ratios are the only inputs of the method. Since the inference of the model is intractable, we follow the Variational Bayes method, yielding an iterative algorithm for estimation of all model parameters. Performance of the method is studied on a simulated 6-hour power plant release where 3 nuclides are released and 2 nuclide ratios are approximately known. A comparison with a method with unknown nuclide ratios is given to demonstrate the usefulness of the proposed approach. This research is supported by the EEA/Norwegian Financial Mechanism under project MSMT-28477/2014 Source-Term Determination of Radionuclide Releases by Inverse Atmospheric Dispersion Modelling (STRADI).
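
    The linear inverse model y = Mx above can be illustrated with a simplified sketch. The paper estimates the ratio-based prior covariance and noise by Variational Bayes and enforces positivity with a truncated Gaussian; the sketch below instead fixes those hyperparameters by hand and clips the Gaussian MAP estimate at zero, so all matrices and numbers are synthetic placeholders.

      import numpy as np

      rng = np.random.default_rng(0)
      n_obs, n_src = 50, 12
      M = rng.random((n_obs, n_src))                 # SRS matrix (synthetic)
      ratio = np.linspace(2.0, 0.5, n_src)           # approximately known ratios
      x_true = 3.0 * ratio * rng.uniform(0.8, 1.2, n_src)  # truth roughly follows them
      y = M @ x_true + rng.normal(0.0, 0.1, n_obs)   # observed data

      x0 = 3.0 * ratio                # prior mean built from the nuclide ratios
                                      # (overall release magnitude assumed known)
      P = np.diag((0.5 * x0) ** 2)    # prior covariance; its diagonal is what the
                                      # paper treats as unknown and infers by VB
      R = 0.1 ** 2 * np.eye(n_obs)    # observation-error covariance

      # Gaussian MAP estimate, then a crude projection onto x >= 0 (the paper
      # uses a truncated Gaussian posterior instead).
      K = P @ M.T @ np.linalg.inv(M @ P @ M.T + R)
      x_map = np.maximum(x0 + K @ (y - M @ x0), 0.0)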

  5. Semi-implicit and fully implicit shock-capturing methods for hyperbolic conservation laws with stiff source terms

    NASA Technical Reports Server (NTRS)

    Yee, H. C.; Shinn, J. L.

    1986-01-01

    Some numerical aspects of finite-difference algorithms for nonlinear multidimensional hyperbolic conservation laws with stiff nonhomogeneous (source) terms are discussed. If the stiffness is entirely dominated by the source term, a semi-implicit shock-capturing method is proposed, provided that the Jacobian of the source terms possesses certain properties. The proposed semi-implicit method can be viewed as a variant of the Bussing and Murman point-implicit scheme with a more appropriate numerical dissipation for the computation of strong shock waves. However, if the stiffness is not solely dominated by the source terms, a fully implicit method would be a better choice. The situation is complicated for problems in more than one dimension, and the presence of stiff source terms further complicates the solution procedures for alternating direction implicit (ADI) methods. Several alternatives are discussed. The primary motivation for constructing these schemes was to address thermally and chemically nonequilibrium flows in the hypersonic regime. Due to the unique structure of the eigenvalues and eigenvectors for fluid flows of this type, the computation can be simplified, thus providing a more efficient solution procedure than one might have anticipated.
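
    As a concrete illustration of the point-implicit idea attributed above to Bussing and Murman, the sketch below applies it to a scalar model equation u_t + a u_x = -mu u with a stiff linear source; this model problem and all parameter values are our own choices, not taken from the paper.

      import numpy as np

      a, mu = 1.0, 1.0e4                   # advection speed, stiff decay rate

      def step(u, dx, dt):
          adv = -a * (u - np.roll(u, 1)) / dx       # explicit upwind advection
          # Point-implicit source: solve (1 + dt*mu) * du = dt * (adv - mu*u)
          return u + dt * (adv - mu * u) / (1.0 + dt * mu)

      x = np.linspace(0.0, 1.0, 200)
      dx = x[1] - x[0]
      u = np.exp(-100.0 * (x - 0.5) ** 2)           # smooth initial profile
      dt = 0.5 * dx / a                             # advective CFL only; a fully
      for _ in range(100):                          # explicit source would need
          u = step(u, dx, dt)                       # dt < 2/mu = 2e-4 here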

  6. A study of numerical methods for hyperbolic conservation laws with stiff source terms

    NASA Technical Reports Server (NTRS)

    Leveque, R. J.; Yee, H. C.

    1988-01-01

    The proper modeling of nonequilibrium gas dynamics is required in certain regimes of hypersonic flow. For inviscid flow this gives a system of conservation laws coupled with source terms representing the chemistry. Often a wide range of time scales is present in the problem, leading to numerical difficulties as in stiff systems of ordinary differential equations. Stability can be achieved by using implicit methods, but other numerical difficulties are observed. The behavior of typical numerical methods on a simple advection equation with a parameter-dependent source term was studied. Two approaches to incorporate the source term were utilized: MacCormack type predictor-corrector methods with flux limiters, and splitting methods in which the fluid dynamics and chemistry are handled in separate steps. Various comparisons over a wide range of parameter values were made. In the stiff case where the solution contains discontinuities, incorrect numerical propagation speeds are observed with all of the methods considered. This phenomenon is studied and explained.
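
    The incorrect propagation speed described above is easy to reproduce. The sketch below uses the scalar model problem studied by LeVeque and Yee, u_t + u_x = -mu u (u - 1)(u - 1/2), with a splitting method in which the stiff reaction substep collapses to a projection onto the stable equilibria u = 0 and u = 1; grid, CFL number, and mu are illustrative.

      import numpy as np

      mu = 1.0e4
      x = np.linspace(0.0, 1.0, 200)
      dx = x[1] - x[0]
      dt = 0.8 * dx                              # CFL 0.8 for unit advection speed
      u = np.where(x < 0.2, 1.0, 0.0)            # discontinuous initial data

      for _ in range(120):
          u = u - dt / dx * (u - np.roll(u, 1))  # upwind advection substep
          # Stiff reaction substep: for very large mu, the ODE
          # u' = -mu*u*(u-1)*(u-0.5) drives u to the nearest stable equilibrium.
          u = np.where(u > 0.5, 1.0, 0.0)

      # True front position: 0.2 + 120*dt ~ 0.68. Numerically, smearing pushes
      # the front cell past u = 0.5 every step, so the front advances one cell
      # per step (speed dx/dt = 1.25 instead of 1): a wrong propagation speed.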

  7. A Well-Balanced Path-Integral f-Wave Method for Hyperbolic Problems with Source Terms

    PubMed Central

    2014-01-01

    Systems of hyperbolic partial differential equations with source terms (balance laws) arise in many applications where it is important to compute accurate time-dependent solutions modeling small perturbations of equilibrium solutions in which the source terms balance the hyperbolic part. The f-wave version of the wave-propagation algorithm is one approach, but requires the use of a particular averaged value of the source terms at each cell interface in order to be “well balanced” and exactly maintain steady states. A general approach to choosing this average is developed using the theory of path conservative methods. A scalar advection equation with a decay or growth term is introduced as a model problem for numerical experiments. PMID:24563581
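
    The well-balancing idea above can be sketched for the scalar model problem mentioned in the abstract (advection with a decay term). The interface source average below is taken to be the logarithmic mean of the adjacent cell values, which balances exponential steady states exactly; it is one concrete example of such an average, not a transcription of the paper's path-conservative formula.

      import numpy as np

      lam = 1.0    # decay rate in q_t + q_x = -lam * q; steady state C*exp(-lam*x)

      def log_mean(a, b):
          out = a.copy()
          m = ~np.isclose(a, b)
          out[m] = (a[m] - b[m]) / np.log(a[m] / b[m])
          return out

      def step(q, dx, dt):
          ql, qr = q[:-1], q[1:]
          psi_bar = -lam * log_mean(ql, qr)     # averaged source at interfaces
          z = (qr - ql) - dx * psi_bar          # f-wave: flux difference minus
                                                # the cell-interface source
          qnew = q.copy()
          qnew[1:] -= dt / dx * z               # unit speed: all waves move right
          return qnew                           # q[0] is the exact inflow value

      x = np.linspace(0.0, 5.0, 101)
      dx = x[1] - x[0]
      q = np.exp(-lam * x)                      # start exactly at a steady state
      for _ in range(200):
          q = step(q, dx, 0.5 * dx)
      # q stays equal (to round-off) to exp(-lam*x): the steady state is
      # preserved exactly, i.e. the scheme is well balanced for this average.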

  8. On the inclusion of mass source terms in a single-relaxation-time lattice Boltzmann method

    NASA Astrophysics Data System (ADS)

    Aursjø, Olav; Jettestuen, Espen; Vinningland, Jan Ludvig; Hiorth, Aksel

    2018-05-01

    We present a lattice Boltzmann algorithm for incorporating a mass source in a fluid flow system. The proposed mass source/sink term, included in the lattice Boltzmann equation, maintains the Galilean invariance and the accuracy of the overall method, while introducing a mass source/sink term in the fluid dynamical equations. The method can, for instance, be used to inject or withdraw fluid from any preferred lattice node in a system. This suggests that injection and withdrawal of fluid do not have to be introduced through cumbersome, and sometimes less accurate, boundary conditions. It also suggests that, through a chosen equation of state relating mass density to pressure, the proposed mass source term makes it possible to set a preferred pressure at any lattice node in a system. We demonstrate how this model handles injection and withdrawal of a fluid, and we show how it can be used to incorporate pressure boundaries. The accuracy of the algorithm is established through a Chapman-Enskog expansion of the model and supported by numerical simulations.
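
    A rough sketch of the idea, under strong simplifications: in a standard D2Q9 single-relaxation-time scheme, a mass source q at a node can be injected by adding w_i * q to the distributions, the simplest choice that adds q to the zeroth moment. The paper's scheme includes velocity-dependent corrections (for Galilean invariance and accuracy) that are omitted here.

      import numpy as np

      w = np.array([4/9] + [1/9] * 4 + [1/36] * 4)
      c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
                    [1, 1], [-1, 1], [-1, -1], [1, -1]])
      nx = ny = 32
      tau = 0.8
      f = np.ones((9, nx, ny)) * w[:, None, None]    # rho = 1, fluid at rest

      q = np.zeros((nx, ny))
      q[nx // 2, ny // 2] = 1e-3                     # mass source at one node

      for _ in range(100):
          rho = f.sum(axis=0)
          u = np.einsum('iab,id->dab', f, c) / rho
          cu = np.einsum('id,dab->iab', c, u)
          usq = (u ** 2).sum(axis=0)
          feq = w[:, None, None] * rho * (1.0 + 3.0 * cu + 4.5 * cu ** 2 - 1.5 * usq)
          # BGK collision plus weight-distributed mass source
          f += -(f - feq) / tau + w[:, None, None] * q
          for i in range(9):                         # streaming, periodic box
              f[i] = np.roll(f[i], (c[i, 0], c[i, 1]), axis=(0, 1))

      # Total mass grows by q.sum() per step, so the chosen node acts as a
      # steady mass source driving a radial outflow.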

  9. A Semi-implicit Treatment of Porous Media in Steady-State CFD.

    PubMed

    Domaingo, Andreas; Langmayr, Daniel; Somogyi, Bence; Almbauer, Raimund

    There are many situations in computational fluid dynamics which require the definition of source terms in the Navier-Stokes equations. These source terms not only allow one to model the physics of interest but also have a strong impact on the reliability, stability, and convergence of the numerics involved. Therefore, sophisticated numerical approaches exist for the description of such source terms. In this paper, we focus on the source terms present in the Navier-Stokes or Euler equations due to porous media, in particular the Darcy-Forchheimer equation. We introduce a method for the numerical treatment of the source term which is independent of the spatial discretization and based on linearization. In this description, the source term is treated in a fully implicit way, whereas the other flow variables can be computed in an implicit or explicit manner. This leads to a more robust description in comparison with a fully explicit approach. The method is well suited to be combined with coarse-grid CFD on Cartesian grids, which makes it especially favorable for the accelerated solution of coupled 1D-3D problems. To demonstrate the applicability and robustness of the proposed method, a proof-of-concept example in 1D, as well as more complex examples in 2D and 3D, is presented.
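
    The linearized, fully implicit source treatment can be sketched on a single cell. Below, the rest of the momentum balance is reduced to a constant driving force g, and the Darcy-Forchheimer coefficients are invented for illustration; only the linearize-and-solve pattern reflects the record.

      import numpy as np

      alpha, beta = 50.0, 10.0      # Darcy and Forchheimer coefficients (made up)
      g = 1.0                       # constant driving force, standing in for the
                                    # explicit part of the momentum update

      def implicit_source_step(u, dt):
          # Darcy-Forchheimer source S(u) = -(alpha*u + beta*u*|u|) and dS/du
          S  = -(alpha * u + beta * u * np.abs(u))
          dS = -(alpha + 2.0 * beta * np.abs(u))
          # Linearized implicit update: (1 - dt*dS) * du = dt * (g + S(u)),
          # solved cell by cell (no coupling through the spatial discretization).
          return u + dt * (g + S) / (1.0 - dt * dS)

      u = np.zeros(10)
      for _ in range(200):
          u = implicit_source_step(u, dt=0.1)   # stable although an explicit
                                                # source would need dt < 0.04
      # u converges to the root of g = alpha*u + beta*u*|u| in every cell.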

  10. Forcing scheme analysis for the axisymmetric lattice Boltzmann method under incompressible limit.

    PubMed

    Zhang, Liangqi; Yang, Shiliang; Zeng, Zhong; Chen, Jie; Yin, Linmao; Chew, Jia Wei

    2017-04-01

    Because the standard lattice Boltzmann (LB) method is formulated for the Cartesian Navier-Stokes (NS) equations, additional source terms are necessary in the axisymmetric LB method to represent the axisymmetric effects. Therefore, the accuracy and applicability of the axisymmetric LB models depend on the forcing schemes adopted for discretization of the source terms. In this study, three forcing schemes, namely, the trapezium-rule-based scheme, the direct forcing scheme, and the semi-implicit centered scheme, are analyzed theoretically by investigating their derived macroscopic equations in the diffusive scale. In particular, the finite difference interpretation of the standard LB method is extended to LB equations with source terms, and the accuracy of the different forcing schemes is then evaluated for the axisymmetric LB method. Theoretical analysis indicates that the discrete lattice effects arising from the direct forcing scheme are part of the truncation error terms and thus do not affect the overall accuracy of the standard LB method with a general force term (i.e., when only the source terms in the momentum equation are considered), but lead to incorrect macroscopic equations for the axisymmetric LB models. On the other hand, the trapezium-rule-based scheme and the semi-implicit centered scheme both have the advantage of avoiding the discrete lattice effects and recovering the correct macroscopic equations. Numerical tests applied to validate the theoretical analysis show that both the numerical stability and the accuracy of axisymmetric LB simulations are affected by the direct forcing scheme, indicating that forcing schemes free of the discrete lattice effects are necessary for the axisymmetric LB method.

  11. Regularized Dual Averaging Image Reconstruction for Full-Wave Ultrasound Computed Tomography.

    PubMed

    Matthews, Thomas P; Wang, Kun; Li, Cuiping; Duric, Neb; Anastasio, Mark A

    2017-05-01

    Ultrasound computed tomography (USCT) holds great promise for breast cancer screening. Waveform inversion-based image reconstruction methods account for higher order diffraction effects and can produce high-resolution USCT images, but are computationally demanding. Recently, a source encoding technique has been combined with stochastic gradient descent (SGD) to greatly reduce image reconstruction times. However, this method bundles the stochastic data fidelity term with the deterministic regularization term. This limitation can be overcome by replacing SGD with a structured optimization method, such as the regularized dual averaging method, that exploits knowledge of the composition of the cost function. In this paper, the dual averaging method is combined with source encoding techniques to improve the effectiveness of regularization while maintaining the reduced reconstruction times afforded by source encoding. It is demonstrated that each iteration can be decomposed into a gradient descent step based on the data fidelity term and a proximal update step corresponding to the regularization term. Furthermore, the regularization term is never explicitly differentiated, allowing nonsmooth regularization penalties to be naturally incorporated. The wave equation is solved by the use of a time-domain method. The effectiveness of this approach is demonstrated through computer simulation and experimental studies. The results suggest that the dual averaging method can produce images with less noise and comparable resolution to those obtained by the use of SGD.
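
    The decomposition described above (a gradient step on the data fidelity term, then a proximal step on the never-differentiated regularizer) is shown below on a toy ℓ1-regularized linear least-squares problem. The operator A stands in for the encoded-source wave propagation operator, and the plain proximal-gradient update is a simplified relative of the regularized dual averaging iteration, not the paper's exact algorithm.

      import numpy as np

      rng = np.random.default_rng(1)
      A = rng.normal(size=(80, 200))                 # stand-in forward operator
      x_true = np.zeros(200)
      x_true[rng.choice(200, 10, replace=False)] = 1.0
      y = A @ x_true + 0.01 * rng.normal(size=80)    # noisy measurements

      lam = 0.1                                      # regularization weight
      eta = 1.0 / np.linalg.norm(A, 2) ** 2          # step size <= 1/L
      x = np.zeros(200)
      for _ in range(500):
          grad = A.T @ (A @ x - y)                   # gradient of data fidelity
          z = x - eta * grad                         # gradient-descent step
          # proximal step for lam*||x||_1: soft thresholding; the nonsmooth
          # regularizer is never differentiated
          x = np.sign(z) * np.maximum(np.abs(z) - eta * lam, 0.0)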

  12. A simple mass-conserved level set method for simulation of multiphase flows

    NASA Astrophysics Data System (ADS)

    Yuan, H.-Z.; Shu, C.; Wang, Y.; Shu, S.

    2018-04-01

    In this paper, a modified level set method is proposed for simulation of multiphase flows with large density ratio and high Reynolds number. The present method simply introduces a source or sink term into the level set equation to compensate for mass loss or offset mass increase. The source or sink term is derived analytically by applying the mass conservation principle to the level set equation and the continuity equation of the flow field. Since only a source term is introduced, the application of the present method is as simple as the original level set method, but it guarantees overall mass conservation. To validate the present method, the vortex flow problem is first considered. The simulation results are compared with those from the original level set method, which demonstrates that the modified level set method is capable of accurately capturing the interface and conserving mass. Then, the proposed method is further validated by simulating the Laplace law, the merging of two bubbles, a bubble rising with high density ratio, and the Rayleigh-Taylor instability at high Reynolds number. Numerical results show that mass is well conserved by the present method.
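
    A deliberately crude 1-D sketch of the mass-correction idea: after each advection step of the level set function, a spatially uniform source is added so the mass enclosed by the interface is restored. The paper derives its source term analytically and does not apply a uniform shift; everything below illustrates the principle only.

      import numpy as np

      x = np.linspace(0.0, 1.0, 401)
      dx = x[1] - x[0]
      phi = x - 0.3                      # signed distance; "liquid" where phi < 0

      def mass(phi):
          return dx * np.sum(phi < 0.0)  # sharp Heaviside; smoothed in practice

      M0 = mass(phi)
      vel = 1.0
      dt = 0.4 * dx
      for _ in range(300):
          phi = phi - dt * vel * (phi - np.roll(phi, 1)) / dx  # upwind advection
          phi[0] = phi[1] - dx                                 # inflow extrapolation
          M0 += vel * dt                 # exact mass change from inflow at x = 0
          # uniform mass-correction source: with |grad phi| = 1, shifting phi by
          # the mass excess moves the interface so the enclosed mass returns to M0
          phi += (mass(phi) - M0)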

  13. Tracing the source of difficult to settle fine particles which cause turbidity in the Hitotsuse Reservoir, Japan.

    PubMed

    Murakami, Toshiki; Suzuki, Yoshihiro; Oishi, Hiroyuki; Ito, Kenichi; Nakao, Toshio

    2013-05-15

    A unique method to trace the source of "difficult-to-settle fine particles," which are a causative factor of long-term turbidity in reservoirs, was developed. This method is characterized by cluster analysis of XRD (X-ray diffraction) data and homology comparison of major component compositions between "difficult-to-settle fine particles" contained in landslide soil samples taken from upstream of a dam and suspended "long-term turbid water particles" in the reservoir, which is subject to long-term turbidity. The experiment carried out to validate the proposed method demonstrated a high likelihood of an almost identical match between "difficult-to-settle fine particles" taken from landslide soils at specific locations and "long-term turbid water particles" taken from a reservoir. This method has the potential to determine the substances causing long-term turbidity and the locations of the soils from which those substances came. Appropriate countermeasures can then be taken at those specific locations. Copyright © 2013 Elsevier Ltd. All rights reserved.

  14. A method for the development of disease-specific reference standards vocabularies from textual biomedical literature resources

    PubMed Central

    Wang, Liqin; Bray, Bruce E.; Shi, Jianlin; Fiol, Guilherme Del; Haug, Peter J.

    2017-01-01

    Objective Disease-specific vocabularies are fundamental to many knowledge-based intelligent systems and applications like text annotation, cohort selection, disease diagnostic modeling, and therapy recommendation. Reference standards are critical in the development and validation of automated methods for disease-specific vocabularies. The goal of the present study is to design and test a generalizable method for the development of vocabulary reference standards from expert-curated, disease-specific biomedical literature resources. Methods We formed disease-specific corpora from literature resources like textbooks, evidence-based synthesized online sources, clinical practice guidelines, and journal articles. Medical experts annotated and adjudicated disease-specific terms in four classes (i.e., causes or risk factors, signs or symptoms, diagnostic tests or results, and treatment). Annotations were mapped to UMLS concepts. We assessed source variation, the contribution of each source to building disease-specific vocabularies, the saturation of the vocabularies with respect to the number of sources used, and the generalizability of the method across different diseases. Results The study resulted in 2588 string-unique annotations for heart failure in four classes, and 193 and 425, respectively, for pulmonary embolism and rheumatoid arthritis in the treatment class. Approximately 80% of the annotations were mapped to UMLS concepts. The agreement among heart failure sources ranged between 0.28 and 0.46. The contribution of these sources to the final vocabulary ranged between 18% and 49%. With the sources explored, the heart failure vocabulary reached near saturation in all four classes with the inclusion of a minimum of six sources (or between four and seven sources if counting only terms that occurred in two or more sources). It took fewer sources to reach near saturation for the other two diseases in the treatment class. Conclusions We developed a method for the development of disease-specific reference vocabularies. Expert-curated biomedical literature resources are a substantial source of disease-specific medical knowledge. It is feasible to reach near saturation in a disease-specific vocabulary using a relatively small number of literature sources. PMID:26971304

  15. Method and system of filtering and recommending documents

    DOEpatents

    Patton, Robert M.; Potok, Thomas E.

    2016-02-09

    Disclosed is a method and system for discovering documents using a computer and providing a small set of the most relevant documents to the attention of a human observer. Using the method, the computer obtains a seed document from the user and generates a seed document vector using term frequency-inverse corpus frequency weighting. A keyword index for a plurality of source documents can be compared with the weighted terms of the seed document vector. The comparison is then filtered to reduce the number of documents, which define an initial subset of the source documents. Initial subset vectors are generated and compared to the seed document vector to obtain a similarity value for each comparison. Based on the similarity value, the method then recommends one or more of the source documents.
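
    A small sketch of the pipeline the record describes: TF-ICF weighting of a seed document, the same vectorization for candidate source documents, and cosine-similarity ranking. The tiny corpus, tokenizer, and smoothing constant are placeholders; the patented system additionally filters candidates via a keyword index before ranking.

      import math
      from collections import Counter

      corpus = [
          "stiff source terms in hyperbolic conservation laws",
          "finite volume methods for conservation laws",
          "neural networks for image classification",
      ]
      seed = "numerical methods for stiff source terms"

      def tficf(text, docs):
          # term frequency weighted by inverse corpus frequency (smoothed)
          tf = Counter(text.lower().split())
          n = len(docs)
          return {t: c * math.log((1 + n) / (1 + sum(t in d.lower().split()
                  for d in docs))) for t, c in tf.items()}

      def cosine(a, b):
          dot = sum(a[t] * b.get(t, 0.0) for t in a)
          na = math.sqrt(sum(v * v for v in a.values()))
          nb = math.sqrt(sum(v * v for v in b.values()))
          return dot / (na * nb) if na and nb else 0.0

      seed_vec = tficf(seed, corpus)
      ranked = sorted(corpus, key=lambda d: -cosine(seed_vec, tficf(d, corpus)))
      print(ranked[0])   # the most similar source document is recommended first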

  16. Nonlinear Conservation Laws and Finite Volume Methods

    NASA Astrophysics Data System (ADS)

    Leveque, Randall J.

    Introduction: Software; Notation; Classification of Differential Equations; Derivation of Conservation Laws; The Euler Equations of Gas Dynamics; Dissipative Fluxes; Source Terms; Radiative Transfer and Isothermal Equations; Multi-dimensional Conservation Laws; The Shock Tube Problem. Mathematical Theory of Hyperbolic Systems: Scalar Equations; Linear Hyperbolic Systems; Nonlinear Systems; The Riemann Problem for the Euler Equations. Numerical Methods in One Dimension: Finite Difference Theory; Finite Volume Methods; Importance of Conservation Form - Incorrect Shock Speeds; Numerical Flux Functions; Godunov's Method; Approximate Riemann Solvers; High-Resolution Methods; Other Approaches; Boundary Conditions. Source Terms and Fractional Steps: Unsplit Methods; Fractional Step Methods; General Formulation of Fractional Step Methods; Stiff Source Terms; Quasi-stationary Flow and Gravity. Multi-dimensional Problems: Dimensional Splitting; Multi-dimensional Finite Volume Methods; Grids and Adaptive Refinement. Computational Difficulties: Low-Density Flows; Discrete Shocks and Viscous Profiles; Start-Up Errors; Wall Heating; Slow-Moving Shocks; Grid Orientation Effects; Grid-Aligned Shocks. Magnetohydrodynamics: The MHD Equations; One-Dimensional MHD; Solving the Riemann Problem; Nonstrict Hyperbolicity; Stiffness; The Divergence of B; Riemann Problems in Multi-dimensional MHD; Staggered Grids; The 8-Wave Riemann Solver. Relativistic Hydrodynamics: Conservation Laws in Spacetime; The Continuity Equation; The 4-Momentum of a Particle; The Stress-Energy Tensor; Finite Volume Methods; Multi-dimensional Relativistic Flow; Gravitation and General Relativity. References.

  17. High Order Finite Difference Methods with Subcell Resolution for 2D Detonation Waves

    NASA Technical Reports Server (NTRS)

    Wang, W.; Shu, C. W.; Yee, H. C.; Sjogreen, B.

    2012-01-01

    In simulating hyperbolic conservation laws in conjunction with an inhomogeneous stiff source term, if the solution is discontinuous, spurious numerical results may be produced due to the different time scales of the transport part and the source term. This numerical issue often arises in combustion and high-speed chemically reacting flows.

  18. Unsupervised Segmentation of Head Tissues from Multi-modal MR Images for EEG Source Localization.

    PubMed

    Mahmood, Qaiser; Chodorowski, Artur; Mehnert, Andrew; Gellermann, Johanna; Persson, Mikael

    2015-08-01

    In this paper, we present and evaluate an automatic unsupervised segmentation method, hierarchical segmentation approach (HSA)-Bayesian-based adaptive mean shift (BAMS), for use in the construction of a patient-specific head conductivity model for electroencephalography (EEG) source localization. It is based on a HSA and BAMS for segmenting the tissues from multi-modal magnetic resonance (MR) head images. The evaluation of the proposed method was done both directly in terms of segmentation accuracy and indirectly in terms of source localization accuracy. The direct evaluation was performed relative to a commonly used reference method, the brain extraction tool (BET) combined with FMRIB's automated segmentation tool (FAST), and four variants of the HSA, using both synthetic data and real data from ten subjects. The synthetic data include multiple realizations of four different noise levels and several realizations of typical noise with a 20% bias field level. The Dice index and Hausdorff distance were used to measure the segmentation accuracy. The indirect evaluation was performed relative to the reference method BET-FAST using synthetic two-dimensional (2D) multimodal MR data with 3% noise and synthetic EEG (generated for a prescribed source). The source localization accuracy was determined in terms of localization error and relative error of potential. The experimental results demonstrate the efficacy of HSA-BAMS, its robustness to noise and the bias field, and that it provides better segmentation accuracy than the reference method and variants of the HSA. They also show that it leads to more accurate source localization than the commonly used reference method and suggest that it has potential as a surrogate for expert manual segmentation for the EEG source localization problem.

  19. Who Meets the Contraceptive Needs of Young Women in Sub-Saharan Africa?

    PubMed

    Radovich, Emma; Dennis, Mardieh L; Wong, Kerry L M; Ali, Moazzam; Lynch, Caroline A; Cleland, John; Owolabi, Onikepe; Lyons-Amos, Mark; Benova, Lenka

    2018-03-01

    Despite efforts to expand contraceptive access for young people, few studies have considered where young women (age 15-24) in low- and middle-income countries obtain modern contraceptives and how the capacity and content of care of sources used compares with older users. We examined the first source of respondents' current modern contraceptive method using the most recent Demographic and Health Survey since 2000 for 33 sub-Saharan African countries. We classified providers according to sector (public/private) and capacity to provide a range of short- and long-term methods (limited/comprehensive). We also compared the content of care obtained from different providers. Although the public and private sectors were both important sources of family planning (FP), young women (15-24) used more short-term methods obtained from limited-capacity, private providers, compared with older women. The use of long-term methods among young women was low, but among those users, more than 85% reported a public sector source. Older women (25+) were significantly more likely to utilize a comprehensive provider in either sector compared with younger women. Although FP users of all ages reported poor content of care across all providers, young women had even lower content of care. The results suggest that method and provider choice are strongly linked, and recent efforts to increase access to long-term methods among young women may be restricted by where they seek care. Interventions to increase adolescents' access to a range of FP methods and quality counseling should target providers frequently used by young people, including limited-capacity providers in the private sector. Copyright © 2017 The Society for Adolescent Health and Medicine. Published by Elsevier Inc. All rights reserved.

  20. Spurious Behavior of Shock-Capturing Methods: Problems Containing Stiff Source Terms and Discontinuities

    NASA Technical Reports Server (NTRS)

    Yee, Helen M. C.; Kotov, D. V.; Wang, Wei; Shu, Chi-Wang

    2013-01-01

    The goal of this paper is to relate the numerical dissipation inherent in high order shock-capturing schemes to the onset of wrong propagation speeds of discontinuities. For pointwise evaluation of the source term, previous studies indicated that the phenomenon of wrong propagation speed of discontinuities is connected with the smearing of the discontinuity caused by the discretization of the advection term. The smearing introduces a nonequilibrium state into the calculation. Thus, as soon as a nonequilibrium value is introduced in this manner, the source term turns on and immediately restores equilibrium, while at the same time shifting the discontinuity to a cell boundary. The present study shows that the degree of wrong propagation speed of discontinuities is highly dependent on the accuracy of the numerical method. The manner in which the smearing of discontinuities is contained by the numerical method and the overall amount of numerical dissipation being employed play major roles. Moreover, employing finite time steps and grid spacings that are below the standard Courant-Friedrichs-Lewy (CFL) limit in shock-capturing methods for compressible Euler and Navier-Stokes equations containing stiff reacting source terms and discontinuities reveals surprising counter-intuitive results. Unlike non-reacting flows, for stiff reactions with discontinuities, employing a time step and grid spacing that are below the CFL limit (based on the homogeneous or non-reacting part of the governing equations) does not guarantee a correct solution of the chosen governing equations. Instead, depending on the numerical method, time step and grid spacing, the numerical simulation may lead to (a) the correct solution (within the truncation error of the scheme), (b) a divergent solution, (c) a solution with a wrong propagation speed of discontinuities or (d) other spurious solutions that are solutions of the discretized counterparts but are not solutions of the governing equations. The present investigation of three very different stiff system cases confirms some of the findings of Lafon & Yee (1996) and LeVeque & Yee (1990) for a model scalar PDE. The findings might shed some light on the reported difficulties in numerical combustion and problems with stiff nonlinear (homogeneous) source terms and discontinuities in general.

  1. Bayesian source term estimation of atmospheric releases in urban areas using LES approach.

    PubMed

    Xue, Fei; Kikumoto, Hideki; Li, Xiaofeng; Ooka, Ryozo

    2018-05-05

    The estimation of source information from limited measurements of a sensor network is a challenging inverse problem, which can be viewed as an assimilation process of the observed concentration data and the predicted concentration data. When dealing with releases in built-up areas, the predicted data are generally obtained from the Reynolds-averaged Navier-Stokes (RANS) equations, which yield building-resolving results; however, RANS-based models are outperformed by large-eddy simulation (LES) in the prediction of both airflow and dispersion. Therefore, it is important to explore the possibility of improving the estimation of the source parameters by using the LES approach. In this paper, a novel source term estimation method is proposed, based on the LES approach and Bayesian inference. The source-receptor relationship is obtained by solving the adjoint equations constructed using the time-averaged flow field simulated by the LES approach, based on the gradient diffusion hypothesis. A wind tunnel experiment with a constant point source downwind of a single building model is used to evaluate the performance of the proposed method, which is compared with that of an existing method using a RANS model. The results show that the proposed method reduces the errors of source location and release strength by 77% and 28%, respectively. Copyright © 2018 Elsevier B.V. All rights reserved.

  2. Sparse reconstruction for quantitative bioluminescence tomography based on the incomplete variables truncated conjugate gradient method.

    PubMed

    He, Xiaowei; Liang, Jimin; Wang, Xiaorui; Yu, Jingjing; Qu, Xiaochao; Wang, Xiaodong; Hou, Yanbin; Chen, Duofang; Liu, Fang; Tian, Jie

    2010-11-22

    In this paper, we present an incomplete variables truncated conjugate gradient (IVTCG) method for bioluminescence tomography (BLT). Considering the sparse characteristic of the light source and the insufficient surface measurements in BLT scenarios, we combine a sparseness-inducing (ℓ1 norm) regularization term with a quadratic error term in the IVTCG-based framework for solving the inverse problem. By limiting the number of variables updated at each iteration and combining a variable splitting strategy to find the search direction more efficiently, the method obtains fast and stable source reconstruction, even without a priori information on the permissible source region and multispectral measurements. Numerical experiments on a mouse atlas validate the effectiveness of the method. In vivo mouse experimental results further indicate its potential for a practical BLT system.

  3. Time-frequency approach to underdetermined blind source separation.

    PubMed

    Xie, Shengli; Yang, Liu; Yang, Jun-Mei; Zhou, Guoxu; Xiang, Yong

    2012-02-01

    This paper presents a new time-frequency (TF) underdetermined blind source separation approach based on Wigner-Ville distribution (WVD) and Khatri-Rao product to separate N non-stationary sources from M (M < N) mixtures. First, an improved method is proposed for estimating the mixing matrix, where the negative value of the auto WVD of the sources is fully considered. Then after extracting all the auto-term TF points, the auto WVD value of the sources at every auto-term TF point can be found out exactly with the proposed approach no matter how many active sources there are as long as N ≤ 2M-1. Further discussion about the extraction of auto-term TF points is made and finally the numerical simulation results are presented to show the superiority of the proposed algorithm by comparing it with the existing ones.

  4. Circular current loops, magnetic dipoles and spherical harmonic analysis.

    USGS Publications Warehouse

    Alldredge, L.R.

    1980-01-01

    Spherical harmonic analysis (SHA) is the most used method of describing the Earth's magnetic field, even though spherical harmonic coefficients (SHC) almost completely defy interpretation in terms of real sources. Some moderately successful efforts have been made to represent the field in terms of dipoles placed in the core in an effort to have the model come closer to representing real sources. Dipole sources are only a first approximation to the real sources which are thought to be a very complicated network of electrical currents in the core of the Earth. -Author

  5. Aerostat-lofted instrument and sampling method for determination of emissions from open area sources

    EPA Science Inventory

    An aerostat-borne instrument and sampling method was developed to characterize air samples from area sources, such as emissions from open burning. The 10 kg battery-powered instrument system, termed "the Flyer," is lofted with a helium-filled aerostat of 4 m nominal diameter and ...

  6. Generation of optimal artificial neural networks using a pattern search algorithm: application to approximation of chemical systems.

    PubMed

    Ihme, Matthias; Marsden, Alison L; Pitsch, Heinz

    2008-02-01

    A pattern search optimization method is applied to the generation of optimal artificial neural networks (ANNs). Optimization is performed using a mixed variable extension to the generalized pattern search method. This method offers the advantage that categorical variables, such as neural transfer functions and nodal connectivities, can be used as parameters in optimization. When used together with a surrogate, the resulting algorithm is highly efficient for expensive objective functions. Results demonstrate the effectiveness of this method in optimizing an ANN for the number of neurons, the type of transfer function, and the connectivity among neurons. The optimization method is applied to a chemistry approximation of practical relevance. In this application, temperature and a chemical source term are approximated as functions of two independent parameters using optimal ANNs. Comparison of the performance of optimal ANNs with conventional tabulation methods demonstrates equivalent accuracy with considerable savings in memory storage. The architecture of the optimal ANN for the approximation of the chemical source term consists of a fully connected feedforward network having four nonlinear hidden layers and 117 synaptic weights. An equivalent representation of the chemical source term using tabulation techniques would require a 500 x 500 grid point discretization of the parameter space.
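
    A toy version of the application above: a small feedforward network is trained to approximate a source term as a function of two parameters. The "chemical source term" below is a made-up Arrhenius-like function, the architecture is a single tanh hidden layer rather than the paper's optimized four-hidden-layer network, and training is plain full-batch gradient descent.

      import numpy as np

      rng = np.random.default_rng(0)
      X = rng.uniform(0.0, 1.0, size=(2000, 2))       # two input parameters
      # made-up stiff, Arrhenius-like "source term" to be approximated
      S = (X[:, 1] * (1 - X[:, 1]) * np.exp(-2.0 / (0.2 + X[:, 0])))[:, None]

      W1 = rng.normal(0, 1.0, (2, 32)); b1 = np.zeros(32)   # 2 -> 32 -> 1 MLP
      W2 = rng.normal(0, 0.1, (32, 1)); b2 = np.zeros(1)
      lr = 0.05
      for _ in range(3000):
          H = np.tanh(X @ W1 + b1)                    # hidden layer
          P = H @ W2 + b2                             # predicted source term
          dP = 2.0 * (P - S) / len(X)                 # d(MSE)/dP
          dW2 = H.T @ dP; db2 = dP.sum(0)
          dH = dP @ W2.T * (1 - H ** 2)               # backprop through tanh
          dW1 = X.T @ dH; db1 = dH.sum(0)
          W1 -= lr * dW1; b1 -= lr * db1; W2 -= lr * dW2; b2 -= lr * db2

      # The trained network stores 129 parameters instead of a 500 x 500 table.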

  7. Modeling Interactions Among Turbulence, Gas-Phase Chemistry, Soot and Radiation Using Transported PDF Methods

    NASA Astrophysics Data System (ADS)

    Haworth, Daniel

    2013-11-01

    The importance of explicitly accounting for the effects of unresolved turbulent fluctuations in Reynolds-averaged and large-eddy simulations of chemically reacting turbulent flows is increasingly recognized. Transported probability density function (PDF) methods have emerged as one of the most promising modeling approaches for this purpose. In particular, PDF methods provide an elegant and effective resolution to the closure problems that arise from averaging or filtering terms that correspond to nonlinear point processes, including chemical reaction source terms and radiative emission. PDF methods traditionally have been associated with studies of turbulence-chemistry interactions in laboratory-scale, atmospheric-pressure, nonluminous, statistically stationary nonpremixed turbulent flames; and Lagrangian particle-based Monte Carlo numerical algorithms have been the predominant method for solving modeled PDF transport equations. Recent advances and trends in PDF methods are reviewed and discussed. These include advances in particle-based algorithms, alternatives to particle-based algorithms (e.g., Eulerian field methods), treatment of combustion regimes beyond low-to-moderate-Damköhler-number nonpremixed systems (e.g., premixed flamelets), extensions to include radiation heat transfer and multiphase systems (e.g., soot and fuel sprays), and the use of PDF methods as the basis for subfilter-scale modeling in large-eddy simulation. Examples are provided that illustrate the utility and effectiveness of PDF methods for physics discovery and for applications to practical combustion systems. These include comparisons of results obtained using the PDF method with those from models that neglect unresolved turbulent fluctuations in composition and temperature in the averaged or filtered chemical source terms and/or the radiation heat transfer source terms. In this way, the effects of turbulence-chemistry-radiation interactions can be isolated and quantified.
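
    The closure problem that motivates PDF methods can be shown in a few lines: for a nonlinear (Arrhenius-type) rate, the mean of the source term over temperature fluctuations differs substantially from the source term evaluated at the mean temperature. PDF methods represent the fluctuations directly (here, by Monte Carlo samples), so the source term needs no closure. The numbers below are illustrative.

      import numpy as np

      rng = np.random.default_rng(0)
      Ta, Tmean, Trms = 15000.0, 1500.0, 150.0    # activation temp, mean, rms (K)
      T = rng.normal(Tmean, Trms, 100_000)        # "PDF particles"

      rate = lambda T: np.exp(-Ta / T)            # Arrhenius-type source term
      print(rate(Tmean))       # source term evaluated at the mean temperature
      print(rate(T).mean())    # mean source term over fluctuations: markedly
                               # larger, because the rate is strongly nonlinear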

  8. Algorithm Development and Application of High Order Numerical Methods for Shocked and Rapid Changing Solutions

    DTIC Science & Technology

    2007-12-06

    High order well-balanced schemes to a class of hyperbolic systems with source terms, Boletín de la Sociedad Española de Matemática Aplicada, v34 (2006), pp. 69-80.

  9. On the application of subcell resolution to conservation laws with stiff source terms

    NASA Technical Reports Server (NTRS)

    Chang, Shih-Hung

    1989-01-01

    LeVeque and Yee recently investigated a one-dimensional scalar conservation law with stiff source terms modeling the reacting flow problems and discovered that for the very stiff case most of the current finite difference methods developed for non-reacting flows would produce wrong solutions when there is a propagating discontinuity. A numerical scheme, essentially nonoscillatory/subcell resolution - characteristic direction (ENO/SRCD), is proposed for solving conservation laws with stiff source terms. This scheme is a modification of Harten's ENO scheme with subcell resolution, ENO/SR. The locations of the discontinuities and the characteristic directions are essential in the design. Strang's time-splitting method is used and time evolutions are done by advancing along the characteristics. Numerical experiment using this scheme shows excellent results on the model problem of LeVeque and Yee. Comparisons of the results of ENO, ENO/SR, and ENO/SRCD are also presented.

  10. Modeling of Radiotherapy Linac Source Terms Using ARCHER Monte Carlo Code: Performance Comparison for GPU and MIC Parallel Computing Devices

    NASA Astrophysics Data System (ADS)

    Lin, Hui; Liu, Tianyu; Su, Lin; Bednarz, Bryan; Caracappa, Peter; Xu, X. George

    2017-09-01

    Monte Carlo (MC) simulation is well recognized as the most accurate method for radiation dose calculations. For radiotherapy applications, accurate modelling of the source term, i.e. the clinical linear accelerator, is critical to the simulation. The purpose of this paper is to perform source modelling, examine the accuracy and performance of the models on Intel Many Integrated Core coprocessors (aka Xeon Phi) and Nvidia GPUs using ARCHER, and explore potential optimization methods. Phase-space-based source modelling has been implemented. Good agreement was found in a tomotherapy prostate patient case and a TrueBeam breast case. In terms of performance, the whole simulation for the prostate plan and the breast plan took about 173 s and 73 s, respectively, with 1% statistical error.

  11. A Cross-Lingual Similarity Measure for Detecting Biomedical Term Translations

    PubMed Central

    Bollegala, Danushka; Kontonatsios, Georgios; Ananiadou, Sophia

    2015-01-01

    Bilingual dictionaries for technical terms such as biomedical terms are an important resource for machine translation systems as well as for humans who would like to understand a concept described in a foreign language. Often a biomedical term is first proposed in English and later it is manually translated to other languages. Despite the fact that there are large monolingual lexicons of biomedical terms, only a fraction of those term lexicons are translated to other languages. Manually compiling large-scale bilingual dictionaries for technical domains is a challenging task because it is difficult to find a sufficiently large number of bilingual experts. We propose a cross-lingual similarity measure for detecting most similar translation candidates for a biomedical term specified in one language (source) from another language (target). Specifically, a biomedical term in a language is represented using two types of features: (a) intrinsic features that consist of character n-grams extracted from the term under consideration, and (b) extrinsic features that consist of unigrams and bigrams extracted from the contextual windows surrounding the term under consideration. We propose a cross-lingual similarity measure using each of those feature types. First, to reduce the dimensionality of the feature space in each language, we propose prototype vector projection (PVP)—a non-negative lower-dimensional vector projection method. Second, we propose a method to learn a mapping between the feature spaces in the source and target language using partial least squares regression (PLSR). The proposed method requires only a small number of training instances to learn a cross-lingual similarity measure. The proposed PVP method outperforms popular dimensionality reduction methods such as the singular value decomposition (SVD) and non-negative matrix factorization (NMF) in a nearest neighbor prediction task. Moreover, our experimental results covering several language pairs such as English–French, English–Spanish, English–Greek, and English–Japanese show that the proposed method outperforms several other feature projection methods in biomedical term translation prediction tasks. PMID:26030738

  12. TE/TM decomposition of electromagnetic sources

    NASA Technical Reports Server (NTRS)

    Lindell, Ismo V.

    1988-01-01

    Three methods are given by which bounded EM sources can be decomposed into two parts radiating transverse electric (TE) and transverse magnetic (TM) fields with respect to a given constant direction in space. The theory applies source equivalence and nonradiating source concepts, which lead to decomposition methods based on a recursive formula or two differential equations for the determination of the TE and TM components of the original source. Decompositions for a dipole in terms of point, line, and plane sources are studied in detail. The planar decomposition is seen to match an earlier result given by Clemmow (1963). As an application of the point decomposition method, it is demonstrated that the general exact image expression for the Sommerfeld half-space problem, previously derived through heuristic reasoning, can be obtained more straightforwardly through the present decomposition method.

  13. Estimation of errors in the inverse modeling of accidental release of atmospheric pollutant: Application to the reconstruction of the cesium-137 and iodine-131 source terms from the Fukushima Daiichi power plant

    NASA Astrophysics Data System (ADS)

    Winiarek, Victor; Bocquet, Marc; Saunier, Olivier; Mathieu, Anne

    2012-03-01

    A major difficulty when inverting the source term of an atmospheric tracer dispersion problem is the estimation of the prior errors: those of the atmospheric transport model, those ascribed to the representativity of the measurements, those that are instrumental, and those attached to the prior knowledge on the variables one seeks to retrieve. In the case of an accidental release of pollutant, the reconstructed source is sensitive to these assumptions. This sensitivity makes the quality of the retrieval dependent on the methods used to model and estimate the prior errors of the inverse modeling scheme. We propose to use an estimation method for the errors' amplitude based on the maximum likelihood principle. Under semi-Gaussian assumptions, it takes into account, without approximation, the positivity assumption on the source. We apply the method to the estimation of the Fukushima Daiichi source term using activity concentrations in the air. The results are compared to an L-curve estimation technique and to Desroziers's scheme. The total reconstructed activities significantly depend on the chosen method. Because of the poor observability of the Fukushima Daiichi emissions, these methods provide lower bounds for cesium-137 and iodine-131 reconstructed activities. These lower bound estimates, 1.2 × 10^16 Bq for cesium-137, with an estimated standard deviation range of 15%-20%, and 1.9-3.8 × 10^17 Bq for iodine-131, with an estimated standard deviation range of 5%-10%, are of the same order of magnitude as those provided by the Japanese Nuclear and Industrial Safety Agency and about 5 to 10 times less than the Chernobyl atmospheric releases.
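
    The maximum-likelihood idea can be sketched in its simplest Gaussian form (the paper's semi-Gaussian treatment of source positivity is omitted): with a prior x ~ N(0, m²I) and observation errors N(0, r²I), the marginal law of the measurements is y ~ N(0, m²MMᵀ + r²I), and the error amplitudes (m, r) are chosen to maximize its likelihood. All matrices below are synthetic.

      import numpy as np

      rng = np.random.default_rng(0)
      M = rng.normal(size=(60, 30))              # dispersion (Jacobian) matrix
      x = np.abs(rng.normal(0.0, 2.0, 30))       # "true" source term
      y = M @ x + rng.normal(0.0, 0.5, 60)       # noisy measurements

      def neg_log_like(m, r):
          # -log p(y | m, r) up to constants, for y ~ N(0, m^2*M*M^T + r^2*I)
          C = m ** 2 * (M @ M.T) + r ** 2 * np.eye(len(y))
          sign, logdet = np.linalg.slogdet(C)
          return 0.5 * (logdet + y @ np.linalg.solve(C, y))

      grid = np.linspace(0.1, 5.0, 50)
      m_hat, r_hat = min(((m, r) for m in grid for r in grid),
                         key=lambda p: neg_log_like(*p))
      # (m_hat, r_hat) then fix the weights of the regularized inversion for x.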

  14. 48 CFR 217.174 - Multiyear contracts for electricity from renewable energy sources.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... electricity from renewable energy sources. 217.174 Section 217.174 Federal Acquisition Regulations System... SPECIAL CONTRACTING METHODS Mulityear Contracting 217.174 Multiyear contracts for electricity from... not to exceed 10 years for the purchase of electricity from sources of renewable energy, as that term...

  15. 48 CFR 217.174 - Multiyear contracts for electricity from renewable energy sources.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... electricity from renewable energy sources. 217.174 Section 217.174 Federal Acquisition Regulations System... SPECIAL CONTRACTING METHODS Mulityear Contracting 217.174 Multiyear contracts for electricity from... not to exceed 10 years for the purchase of electricity from sources of renewable energy, as that term...

  16. 48 CFR 217.174 - Multiyear contracts for electricity from renewable energy sources.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... electricity from renewable energy sources. 217.174 Section 217.174 Federal Acquisition Regulations System... SPECIAL CONTRACTING METHODS Mulityear Contracting 217.174 Multiyear contracts for electricity from... not to exceed 10 years for the purchase of electricity from sources of renewable energy, as that term...

  17. 48 CFR 217.174 - Multiyear contracts for electricity from renewable energy sources.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... electricity from renewable energy sources. 217.174 Section 217.174 Federal Acquisition Regulations System... SPECIAL CONTRACTING METHODS Mulityear Contracting 217.174 Multiyear contracts for electricity from... not to exceed 10 years for the purchase of electricity from sources of renewable energy, as that term...

  18. 48 CFR 217.175 - Multiyear contracts for electricity from renewable energy sources.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... electricity from renewable energy sources. 217.175 Section 217.175 Federal Acquisition Regulations System... SPECIAL CONTRACTING METHODS Mulityear Contracting 217.175 Multiyear contracts for electricity from... not to exceed 10 years for the purchase of electricity from sources of renewable energy, as that term...

  19. An Analysis of Social, Literary and Technological Sources Used by Classroom Teachers in Social Studies Courses

    ERIC Educational Resources Information Center

    Fidan, Nuray Kurtdede; Ergün, Mustafa

    2016-01-01

    In this study, social, literary and technological sources used by classroom teachers in social studies courses are analyzed in terms of frequency. The study employs mixed methods research following the convergent parallel design. In the qualitative part of the study, the phenomenological method was used, and in the quantitative…

  20. Estimation of the Cesium-137 Source Term from the Fukushima Daiichi Power Plant Using Air Concentration and Deposition Data

    NASA Astrophysics Data System (ADS)

    Winiarek, Victor; Bocquet, Marc; Duhanyan, Nora; Roustan, Yelva; Saunier, Olivier; Mathieu, Anne

    2013-04-01

    A major difficulty when inverting the source term of an atmospheric tracer dispersion problem is the estimation of the prior errors: those of the atmospheric transport model, those ascribed to the representativeness of the measurements, the instrumental errors, and those attached to the prior knowledge on the variables one seeks to retrieve. In the case of an accidental release of pollutant, and especially in a situation of sparse observability, the reconstructed source is sensitive to these assumptions. This sensitivity makes the quality of the retrieval dependent on the methods used to model and estimate the prior errors of the inverse modeling scheme. In Winiarek et al. (2012), we proposed to use an estimation method for the errors' amplitude based on the maximum likelihood principle. Under semi-Gaussian assumptions, it takes into account, without approximation, the positivity assumption on the source. We applied the method to the estimation of the Fukushima Daiichi cesium-137 and iodine-131 source terms using activity concentrations in the air. The results were compared to an L-curve estimation technique and to Desroziers's scheme. In addition to the estimates of released activities, we provided the related uncertainties (12 PBq with a standard deviation of 15-20% for cesium-137 and 190-380 PBq with a standard deviation of 5-10% for iodine-131). We also showed that, because of the low number of available observations (a few hundred), and even though the orders of magnitude were consistent, the reconstructed activities significantly depended on the method used to estimate the prior errors. In order to use more data, we propose to extend the methods to the use of several data types, such as activity concentrations in the air and fallout measurements. The idea is to simultaneously estimate the prior errors related to each dataset, in order to fully exploit the information content of each one. Using the activity concentration measurements, but also daily fallout data from prefectures and cumulated deposition data over a region lying approximately 150 km around the nuclear power plant, we can use a few thousand data points in our inverse modeling algorithm to reconstruct the cesium-137 source term. To improve the parameterization of removal processes, rainfall fields have also been corrected using outputs from the mesoscale meteorological model WRF and ground station rainfall data. As expected, the different methods yield closer results as the number of data increases. Reference: Winiarek, V., M. Bocquet, O. Saunier, A. Mathieu (2012), Estimation of errors in the inverse modeling of accidental release of atmospheric pollutant: Application to the reconstruction of the cesium-137 and iodine-131 source terms from the Fukushima Daiichi power plant, J. Geophys. Res., 117, D05122, doi:10.1029/2011JD016932.

  1. Identification of Spurious Signals from Permeable Ffowcs Williams and Hawkings Surfaces

    NASA Technical Reports Server (NTRS)

    Lopes, Leonard V.; Boyd, David D., Jr.; Nark, Douglas M.; Wiedemann, Karl E.

    2017-01-01

    Integral forms of the permeable surface formulation of the Ffowcs Williams and Hawkings (FW-H) equation often require an input in the form of a near field Computational Fluid Dynamics (CFD) solution to predict noise in the near or far field from various types of geometries. The FW-H equation involves three source terms; two surface terms (monopole and dipole) and a volume term (quadrupole). Many solutions to the FW-H equation, such as several of Farassat's formulations, neglect the quadrupole term. Neglecting the quadrupole term in permeable surface formulations leads to inaccuracies called spurious signals. This paper explores the concept of spurious signals, explains how they are generated by specifying the acoustic and hydrodynamic surface properties individually, and provides methods to determine their presence, regardless of whether a correction algorithm is employed. A potential approach based on the equivalent sources method (ESM) and the sensitivity of Formulation 1A (Formulation S1A) is also discussed for the removal of spurious signals.

  2. PHENOstruct: Prediction of human phenotype ontology terms using heterogeneous data sources.

    PubMed

    Kahanda, Indika; Funk, Christopher; Verspoor, Karin; Ben-Hur, Asa

    2015-01-01

    The human phenotype ontology (HPO) was recently developed as a standardized vocabulary for describing the phenotype abnormalities associated with human diseases. At present, only a small fraction of human protein-coding genes have HPO annotations. However, researchers believe that a large portion of currently unannotated genes is related to disease phenotypes. Therefore, it is important to predict gene-HPO term associations using accurate computational methods. In this work, we demonstrate the performance advantage of the structured SVM approach, which was shown to be highly effective for Gene Ontology term prediction, over several baseline methods. Furthermore, we highlight a collection of informative data sources suitable for the problem of predicting gene-HPO associations, including large-scale literature mining data.

  3. A Parameter Identification Method for Helicopter Noise Source Identification and Physics-Based Semi-Empirical Modeling

    NASA Technical Reports Server (NTRS)

    Greenwood, Eric, II; Schmitz, Fredric H.

    2010-01-01

    A new physics-based parameter identification method for rotor harmonic noise sources is developed using an acoustic inverse simulation technique. This new method allows for the identification of individual rotor harmonic noise sources and allows them to be characterized in terms of their individual non-dimensional governing parameters. This new method is applied to both wind tunnel measurements and ground noise measurements of two-bladed rotors. The method is shown to match the parametric trends of main rotor Blade-Vortex Interaction (BVI) noise, allowing accurate estimates of BVI noise to be made for operating conditions based on a small number of measurements taken at different operating conditions.

  4. The Fukushima releases: an inverse modelling approach to assess the source term by using gamma dose rate observations

    NASA Astrophysics Data System (ADS)

    Saunier, Olivier; Mathieu, Anne; Didier, Damien; Tombette, Marilyne; Quélo, Denis; Winiarek, Victor; Bocquet, Marc

    2013-04-01

    The Chernobyl nuclear accident, and more recently the Fukushima accident, highlighted that the largest source of error in consequence assessment is the estimation of the source term, including the time evolution of the release rate and its distribution among radioisotopes. Inverse modelling methods have proved efficient for assessing the source term in accidental situations (Gudiksen, 1989; Krysta and Bocquet, 2007; Stohl et al., 2011; Winiarek et al., 2012). These methods combine environmental measurements and atmospheric dispersion models. They have recently been applied to the Fukushima accident. Most existing approaches are designed to use air sampling measurements (Winiarek et al., 2012), and some of them also use deposition measurements (Stohl et al., 2012; Winiarek et al., 2013). During the Fukushima accident, such measurements were far less numerous and less well distributed within Japan than the dose rate measurements. Gamma dose rate measurements, by contrast, were numerous, well distributed within Japan, and offered a high temporal frequency, efficiently documenting the evolution of the contamination. However, dose rate data are not as easy to use as air sampling measurements and, until now, they were not used in inverse modelling approaches. Indeed, dose rate data result from all the gamma emitters present in the ground and in the atmosphere in the vicinity of the receptor. They do not allow one to determine the isotopic composition or to distinguish the plume contribution from wet deposition. The presented approach proposes a way to use dose rate measurements in an inverse modelling framework without the need for a priori information on the emissions. The method proved to be efficient and reliable when applied to the Fukushima accident. The emissions for the 8 main isotopes Xe-133, Cs-134, Cs-136, Cs-137, Ba-137m, I-131, I-132 and Te-132 have been assessed. The Daiichi power plant events (such as ventings, explosions…) known to have caused atmospheric releases are well identified in the retrieved source term, except for the unit 3 explosion, for which no measurement was available. Simulations of atmospheric dispersion and deposition based on the retrieved source term show good agreement with environmental observations. Moreover, an important outcome of this study is that the method proved to be well suited to crisis management and should contribute to improving the response in the event of a nuclear accident.
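
    As a rough illustration of how dose-rate-like data can feed a linear inversion, the hypothetical Python sketch below stacks unit-release response kernels for several isotopes and release intervals (the structure a dispersion model would provide) and retrieves all releases jointly with a positivity-constrained, lightly regularised fit. The kernels, problem sizes, regularisation weight lam and noise levels are all invented; the paper's actual algorithm is not reproduced.

    ```python
    import numpy as np
    from scipy.optimize import nnls

    rng = np.random.default_rng(1)

    # Hypothetical kernels: dose-rate response of each observation (station, time)
    # to a unit release of each isotope in each release interval, combining
    # cloudshine and groundshine contributions from a dispersion model.
    n_obs, n_int, n_iso = 300, 24, 3
    G = np.hstack([rng.exponential(1.0, (n_obs, n_int)) *
                   (rng.random((n_obs, n_int)) < 0.3) for _ in range(n_iso)])

    q_true = np.abs(rng.normal(1.0, 2.0, n_int * n_iso))   # true releases
    dose = G @ q_true + rng.normal(0.0, 0.05, n_obs)       # noisy dose rates

    # Positivity-constrained retrieval with mild Tikhonov regularisation,
    # written as an augmented non-negative least-squares problem.
    lam = 0.1                                              # assumed tuning weight
    A = np.vstack([G, np.sqrt(lam) * np.eye(n_int * n_iso)])
    b = np.concatenate([dose, np.zeros(n_int * n_iso)])
    q_hat, _ = nnls(A, b)
    print("per-isotope release totals:", q_hat.reshape(n_iso, n_int).sum(axis=1))
    ```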

  5. Making the right long-term prescription for medical equipment financing.

    PubMed

    Conbeer, George P

    2007-06-01

    For hospital financial executives charged with assessing new technologies, obtaining access to sufficient information to support an in-depth analysis can be a daunting challenge. The information should come not only from direct sources, such as the equipment manufacturer, but also from indirect sources, such as leasing companies. A thorough knowledge of financing methods--including tax-exempt bonds, bank debt, standard leasing, tax-exempt leasing, and equipment rental terms--is critical.

  6. LES-Modeling of a Partially Premixed Flame using a Deconvolution Turbulence Closure

    NASA Astrophysics Data System (ADS)

    Wang, Qing; Wu, Hao; Ihme, Matthias

    2015-11-01

    The modeling of the turbulence/chemistry interaction in partially premixed and multi-stream combustion remains an outstanding issue. By extending a recently developed constrained minimum mean-square error deconvolution (CMMSED) method, the objective of this work is to develop a source-term closure for turbulent multi-stream combustion. In this method, the chemical source term is obtained from a three-stream flamelet model, and CMMSED is used as the closure model, thereby eliminating the need for presumed-PDF modeling. The model is applied to LES of a piloted turbulent jet flame with inhomogeneous inlets, and simulation results are compared with experiments. Comparisons with presumed-PDF methods are performed, and issues regarding resolution and conservation of the CMMSED method are examined. The author would like to acknowledge the support of funding from a Stanford Graduate Fellowship.

  7. An adaptive grid scheme using the boundary element method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Munipalli, R.; Anderson, D.A.

    1996-09-01

    A technique to solve the Poisson grid generation equations by Green's function related methods has been proposed, with the source terms being purely position dependent. The use of distributed singularities in the flow domain coupled with the boundary element method (BEM) formulation is presented in this paper as a natural extension of the Green's function method. This scheme greatly simplifies the adaption process. The BEM reduces the dimensionality of the given problem by one. Internal grid-point placement can be achieved for a given boundary distribution by adding continuous and discrete source terms in the BEM formulation. A distribution of vortex doublets is suggested as a means of controlling grid-point placement and grid-line orientation. Examples for sample adaption problems are presented and discussed. 15 refs., 20 figs.

  8. Estimation of the caesium-137 source term from the Fukushima Daiichi nuclear power plant using a consistent joint assimilation of air concentration and deposition observations

    NASA Astrophysics Data System (ADS)

    Winiarek, Victor; Bocquet, Marc; Duhanyan, Nora; Roustan, Yelva; Saunier, Olivier; Mathieu, Anne

    2014-01-01

    Inverse modelling techniques can be used to estimate the amount of radionuclides and the temporal profile of the source term released into the atmosphere during the accident of the Fukushima Daiichi nuclear power plant in March 2011. In Winiarek et al. (2012b), the lower bounds of the caesium-137 and iodine-131 source terms were estimated with such techniques, using activity concentration measurements. The importance of an objective assessment of the prior errors (the observation errors and the background errors) was emphasised for a reliable inversion. In such a critical context, where the meteorological conditions can make the source term partly unobservable and where only a few observations are available, such prior estimation techniques are mandatory, the retrieved source term being very sensitive to this estimation. We propose to extend the use of these techniques to the estimation of prior errors when assimilating observations from several data sets. The aim is to compute an estimate of the caesium-137 source term jointly using all available data about this radionuclide, such as activity concentrations in the air, but also daily fallout measurements and total cumulated fallout measurements. It is crucial to properly and simultaneously estimate the background errors and the prior errors relative to each data set. A proper estimation of prior errors is also a necessary condition to reliably estimate the a posteriori uncertainty of the estimated source term. Using such techniques, we retrieve a total released quantity of caesium-137 in the interval 11.6-19.3 PBq with an estimated standard deviation range of 15-20% depending on the method and the data sets. The "blind" time intervals of the source term have also been greatly reduced compared to the first estimations with only activity concentration data.
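
    A crude stand-in for the simultaneous estimation of per-dataset error amplitudes is sketched below in hypothetical Python (this is not the estimation scheme of the paper). It alternates a weighted least-squares retrieval with a residual-based update of each dataset's error variance, in the spirit of Desroziers-type diagnostics; note that this naive fixed point ignores the fitted degrees of freedom and can underestimate the variances.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    n_src = 30
    x_true = np.abs(rng.normal(1.0, 1.0, n_src))

    # Three hypothetical data sets: air concentrations, daily fallout, cumulated
    # deposition, each with its own (unknown) observation error level.
    datasets = []
    for n_obs, noise in [(200, 0.2), (80, 0.5), (20, 1.0)]:
        H = rng.random((n_obs, n_src))
        datasets.append((H, H @ x_true + rng.normal(0.0, noise, n_obs)))

    # Fixed point: retrieve the source with the current weights, then re-estimate
    # each data set's error variance from its own residuals.
    r2 = np.ones(len(datasets))
    for _ in range(20):
        A = np.vstack([H / np.sqrt(s) for (H, y), s in zip(datasets, r2)])
        b = np.concatenate([y / np.sqrt(s) for (H, y), s in zip(datasets, r2)])
        x_hat = np.linalg.lstsq(A, b, rcond=None)[0]
        r2 = np.array([np.mean((y - H @ x_hat) ** 2) for H, y in datasets])

    print("estimated per-dataset error std:", np.sqrt(r2))
    ```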

  9. Data-optimized source modeling with the Backwards Liouville Test–Kinetic method

    DOE PAGES

    Woodroffe, J. R.; Brito, T. V.; Jordanova, V. K.; ...

    2017-09-14

    In the standard practice of neutron multiplicity counting, the first three sampled factorial moments of the event-triggered neutron count distribution are used to quantify the three main neutron source terms: the spontaneous fissile material effective mass, the relative (α,n) production and the induced fission source responsible for multiplication. Our study compares three methods to quantify the statistical uncertainty of the estimated mass: the bootstrap method, propagation of variance through moments, and the statistical analysis of cycle data method. Each of the three methods was implemented on a set of four different NMC measurements, held at the JRC laboratory in Ispra, Italy, sampling four different Pu samples in a standard Plutonium Scrap Multiplicity Counter (PSMC) well counter.

  10. REVIEW OF METHODS FOR REMOTE SENSING OF ATMOSPHERIC EMISSIONS FROM STATIONARY SOURCES

    EPA Science Inventory

    The report reviews the commercially available and developing technologies for the application of remote sensing to the measurement of source emissions. The term 'remote sensing technology', as applied in the report, means the detection or concentration measurement of trace atmosp...

  11. Auditing the multiply-related concepts within the UMLS

    PubMed Central

    Mougin, Fleur; Grabar, Natalia

    2014-01-01

    Objective This work focuses on multiply-related Unified Medical Language System (UMLS) concepts, that is, concepts associated through multiple relations. The relations involved in such situations are audited to determine whether they are provided by source vocabularies or result from the integration of these vocabularies within the UMLS. Methods We study the compatibility of the multiple relations which associate the concepts under investigation and try to explain the reason why they co-occur. Towards this end, we analyze the relations both at the concept and term levels. In addition, we randomly select 288 concepts associated through contradictory relations and manually analyze them. Results At the UMLS scale, only 0.7% of combinations of relations are contradictory, while homogeneous combinations are observed in one-third of situations. At the scale of source vocabularies, one-third do not contain more than one relation between the concepts under investigation. Among the remaining source vocabularies, seven of them mainly present multiple non-homogeneous relations between terms. Analysis at the term level also shows that only in a quarter of cases are the source vocabularies responsible for the presence of multiply-related concepts in the UMLS. These results are available at: http://www.isped.u-bordeaux2.fr/ArticleJAMIA/results_multiply_related_concepts.aspx. Discussion Manual analysis was useful to explain the conceptualization difference in relations between terms across source vocabularies. The exploitation of source relations was helpful for understanding why some source vocabularies describe multiple relations between a given pair of terms. PMID:24464853

  12. Continuous wavelet transform analysis and modal location analysis acoustic emission source location for nuclear piping crack growth monitoring

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mohd, Shukri; Holford, Karen M.; Pullin, Rhys

    2014-02-12

    Source location is an important feature of acoustic emission (AE) damage monitoring in nuclear piping. The ability to accurately locate sources can assist in source characterisation and early warning of failure. This paper describes the development of a novel AE source location technique termed 'Wavelet Transform analysis and Modal Location (WTML)', based on Lamb wave theory and time-frequency analysis, that can be used for global monitoring of plate-like steel structures. Source location was performed on a steel pipe 1500 mm long with a 220 mm outer diameter and a nominal thickness of 5 mm, under a planar location test setup using H-N sources. The accuracy of the new technique was compared with other AE source location methods, such as the time of arrival (TOA) technique and deltaT location. The results of the study show that the WTML method produces more accurate location results compared with the TOA and triple-point filtering location methods. The accuracy of the WTML approach is comparable with that of the deltaT location method but requires no initial acoustic calibration of the structure.

  13. Modified ensemble Kalman filter for nuclear accident atmospheric dispersion: prediction improved and source estimated.

    PubMed

    Zhang, X L; Su, G F; Yuan, H Y; Chen, J G; Huang, Q Y

    2014-09-15

    Atmospheric dispersion models play an important role in nuclear power plant accident management. A reliable estimation of the radioactive material distribution at short range (about 50 km) is urgently needed for population sheltering and evacuation planning. However, the meteorological data and the source term, which greatly influence the accuracy of the atmospheric dispersion models, are usually poorly known at the early phase of the emergency. In this study, a modified ensemble Kalman filter data assimilation method in conjunction with a Lagrangian puff model is proposed to simultaneously improve the model prediction and reconstruct the source terms for short-range atmospheric dispersion using the off-site environmental monitoring data. Four main uncertainty parameters are considered: source release rate, plume rise height, wind speed and wind direction. Twin experiments show that the method effectively improves the predicted concentration distribution, and the temporal profiles of source release rate and plume rise height are also successfully reconstructed. Moreover, the time lag in the response of the ensemble Kalman filter is shortened. The method proposed here can be a useful tool not only in nuclear power plant accident emergency management but also in other similar situations where hazardous material is released into the atmosphere. Copyright © 2014 Elsevier B.V. All rights reserved.
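
    The core of such a scheme is an ensemble Kalman filter update applied to an augmented state holding the uncertain source and meteorological parameters. The sketch below is a minimal, hypothetical stochastic EnKF analysis step in Python; the observation operator `forward`, all parameter values and the noise levels are invented stand-ins for the paper's puff model, and the paper's specific modification of the filter is not reproduced.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    # Augmented state: [release rate, plume rise height, wind speed, wind direction].
    n_ens, n_obs = 100, 3
    truth = np.array([5.0e9, 300.0, 4.0, 225.0])
    ens = truth * (1.0 + 0.2 * rng.normal(size=(n_ens, truth.size)))  # prior ensemble

    def forward(state):
        """Hypothetical observation operator: concentrations at three sensors."""
        q, h, u, wd = state
        return np.array([q * np.exp(-h / 200.0) * np.exp(-((wd - 225.0) / 30.0) ** 2)
                         / (u * d) for d in (1.0e3, 5.0e3, 1.0e4)])

    obs = forward(truth) * (1.0 + 0.1 * rng.normal(size=n_obs))
    R = np.diag((0.1 * obs) ** 2)                 # observation error covariance

    # Stochastic EnKF analysis step with perturbed observations.
    Y = np.array([forward(m) for m in ens])
    Xa, Ya = ens - ens.mean(0), Y - Y.mean(0)
    K = (Xa.T @ Ya / (n_ens - 1)) @ np.linalg.inv(Ya.T @ Ya / (n_ens - 1) + R)
    ens += (obs + rng.multivariate_normal(np.zeros(n_obs), R, n_ens) - Y) @ K.T

    print("posterior mean release rate:", ens[:, 0].mean())
    ```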

  14. The need for harmonization of methods for finding locations and magnitudes of air pollution sources using observations of concentrations and wind fields

    NASA Astrophysics Data System (ADS)

    Hanna, Steven R.; Young, George S.

    2017-01-01

    What do the terms "top-down", "inverse", "backwards", "adjoint", "sensor data fusion", "receptor", "source term estimation (STE)", to name several appearing in the current literature, have in common? These varied terms are used by different disciplines to describe the same general methodology - the use of observations of air pollutant concentrations and knowledge of wind fields to identify air pollutant source locations and/or magnitudes. Academic journals are publishing increasing numbers of papers on this topic. Examples of scenarios related to this growing interest, ordered from small scale to large scale, are: use of real-time samplers to quickly estimate the location of a toxic gas release by a terrorist at a large public gathering (e.g., Haupt et al., 2009);

  15. A hybrid approach for nonlinear computational aeroacoustics predictions

    NASA Astrophysics Data System (ADS)

    Sassanis, Vasileios; Sescu, Adrian; Collins, Eric M.; Harris, Robert E.; Luke, Edward A.

    2017-01-01

    In many aeroacoustics applications involving nonlinear waves and obstructions in the far-field, approaches based on the classical acoustic analogy theory or the linearised Euler equations are unable to fully characterise the acoustic field. Therefore, computational aeroacoustics hybrid methods that incorporate nonlinear wave propagation have to be constructed. In this study, a hybrid approach coupling Navier-Stokes equations in the acoustic source region with nonlinear Euler equations in the acoustic propagation region is introduced and tested. The full Navier-Stokes equations are solved in the source region to identify the acoustic sources. The flow variables of interest are then transferred from the source region to the acoustic propagation region, where the full nonlinear Euler equations with source terms are solved. The transition between the two regions is made through a buffer zone where the flow variables are penalised via a source term added to the Euler equations. Tests were conducted on simple acoustic and vorticity disturbances, two-dimensional jets (Mach 0.9 and 2), and a three-dimensional jet (Mach 1.5), impinging on a wall. The method is proven to be effective and accurate in predicting sound pressure levels associated with the propagation of linear and nonlinear waves in the near- and far-field regions.
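
    The buffer-zone penalisation can be demonstrated on a 1-D model problem. The sketch below (hypothetical parameter choices throughout) advects a pulse with a first-order upwind scheme and applies the penalisation source term -sigma(x)(u - u_target) implicitly inside a sponge region, which absorbs the wave before it reaches the boundary.

    ```python
    import numpy as np

    # 1-D advection u_t + a u_x = -sigma(x) * (u - u_target) with u_target = 0:
    # the right-hand side is the buffer-zone penalisation term that blends the
    # source region into the propagation region and absorbs outgoing waves.
    nx, a, cfl = 400, 1.0, 0.5
    x = np.linspace(0.0, 10.0, nx)
    dx = x[1] - x[0]
    dt = cfl * dx / a

    # Penalisation coefficient ramps up smoothly inside the buffer zone x > 8.
    sigma = np.where(x > 8.0, 100.0 * ((x - 8.0) / 2.0) ** 3, 0.0)

    u = np.exp(-((x - 2.0) / 0.3) ** 2)          # initial acoustic pulse
    for _ in range(2000):
        u[1:] -= dt * a * (u[1:] - u[:-1]) / dx  # first-order upwind convection
        u /= 1.0 + dt * sigma                    # implicit penalisation step
        u[0] = 0.0                               # inflow boundary

    print("max amplitude left in buffer zone:", np.abs(u[x > 8.0]).max())
    ```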

  16. Isotropic source terms of San Jacinto fault zone earthquakes based on waveform inversions with a generalized CAP method

    NASA Astrophysics Data System (ADS)

    Ross, Z. E.; Ben-Zion, Y.; Zhu, L.

    2015-02-01

    We analyse source tensor properties of seven Mw > 4.2 earthquakes in the complex trifurcation area of the San Jacinto Fault Zone, CA, with a focus on isotropic radiation that may be produced by rock damage in the source volumes. The earthquake mechanisms are derived with generalized `Cut and Paste' (gCAP) inversions of three-component waveforms typically recorded by >70 stations at regional distances. The gCAP method includes parameters ζ and χ representing, respectively, the relative strength of the isotropic and CLVD source terms. The possible errors in the isotropic and CLVD components due to station variability are quantified with bootstrap resampling for each event. The results indicate statistically significant explosive isotropic components for at least six of the events, corresponding to ~0.4-8 per cent of the total potency/moment of the sources. In contrast, the CLVD components for most events are not found to be statistically significant. Trade-off and correlation between the isotropic and CLVD components are studied using synthetic tests with realistic station configurations. The associated uncertainties are found to be generally smaller than the observed isotropic components. Two different tests with velocity model perturbation are conducted to quantify the uncertainty due to inaccuracies in the Green's functions. Applications of the Mann-Whitney U test indicate statistically significant explosive isotropic terms for most events, consistent with brittle damage production at the source.
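
    Bootstrap resampling over stations, as used to test the significance of the isotropic components, reduces to a few lines. The sketch below is schematic and hypothetical: it resamples invented per-station isotropic-fraction estimates, whereas the actual gCAP procedure resamples stations and reruns the full waveform inversion for each resample.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    # Invented per-station estimates of the isotropic fraction for one event.
    iso = rng.normal(0.05, 0.03, 75)            # 75 regional stations

    # Bootstrap: resample stations with replacement and recompute the mean.
    boot = np.array([rng.choice(iso, iso.size).mean() for _ in range(10000)])
    lo, hi = np.percentile(boot, [2.5, 97.5])
    print(f"isotropic fraction {iso.mean():.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")
    # The explosive component is significant in this simple sense if the whole
    # interval lies above zero.
    ```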

  17. Developing Design Criteria and Scale Up Methods for Water-Stable Metal-Organic Frameworks for Adsorption Applications

    DTIC Science & Technology

    2014-09-08

    Figure 1.4: Number of publications containing the term "metal-organic frameworks" (Source: ISI Web of Science, retrieved April 14, 2014). … FTIR spectra were recorded with a PerkinElmer Spectrum One in the range 400-4000 cm^-1. To record the IR spectrum, an IR beam is passed through the sample …

  18. A Two-moment Radiation Hydrodynamics Module in ATHENA Using a Godunov Method

    NASA Astrophysics Data System (ADS)

    Skinner, M. A.; Ostriker, E. C.

    2013-04-01

    We describe a module for the Athena code that solves the grey equations of radiation hydrodynamics (RHD) using a local variable Eddington tensor (VET) based on the M1 closure of the two-moment hierarchy of the transfer equation. The variables are updated via a combination of explicit Godunov methods to advance the gas and radiation variables including the non-stiff source terms, and a local implicit method to integrate the stiff source terms. We employ the reduced speed of light approximation (RSLA) with subcycling of the radiation variables in order to reduce computational costs. The streaming and diffusion limits are well-described by the M1 closure model, and our implementation shows excellent behavior for problems containing both regimes simultaneously. Our operator-split method is ideally suited for problems with a slowly-varying radiation field and dynamical gas flows, in which the effect of the RSLA is minimal.

  19. Fundamental Rotorcraft Acoustic Modeling From Experiments (FRAME)

    NASA Technical Reports Server (NTRS)

    Greenwood, Eric

    2011-01-01

    A new methodology is developed for the construction of helicopter source noise models for use in mission planning tools from experimental measurements of helicopter external noise radiation. The models are constructed by employing a parameter identification method to an assumed analytical model of the rotor harmonic noise sources. This new method allows for the identification of individual rotor harmonic noise sources and allows them to be characterized in terms of their individual non-dimensional governing parameters. The method is applied to both wind tunnel measurements and ground noise measurements of two-bladed rotors. The method is shown to match the parametric trends of main rotor harmonic noise, allowing accurate estimates of the dominant rotorcraft noise sources to be made for operating conditions based on a small number of measurements taken at different operating conditions. The ability of this method to estimate changes in noise radiation due to changes in ambient conditions is also demonstrated.

  20. Issues and Methods Concerning the Evaluation of Hypersingular and Near-Hypersingular Integrals in BEM Formulations

    NASA Technical Reports Server (NTRS)

    Fink, P. W.; Khayat, M. A.; Wilton, D. R.

    2005-01-01

    It is known that higher order modeling of the sources and the geometry in Boundary Element Modeling (BEM) formulations is essential to highly efficient computational electromagnetics. However, in order to achieve the benefits of higher order basis and geometry modeling, the singular and near-singular terms arising in BEM formulations must be integrated accurately. In particular, the accurate integration of near-singular terms, which occur when observation points are near but not on source regions of the scattering object, has been considered one of the remaining limitations on the computational efficiency of integral equation methods. The method of singularity subtraction has been used extensively for the evaluation of singular and near-singular terms. Piecewise integration of the source terms in this manner, while manageable for bases of constant and linear orders, becomes unwieldy and prone to error for bases of higher order. Furthermore, we find that the singularity subtraction method is not conducive to object-oriented programming practices, particularly in the context of multiple operators. To extend the capabilities, accuracy, and maintainability of general-purpose codes, the subtraction method is being replaced in favor of purely numerical quadrature schemes. These schemes employ singularity cancellation methods in which a change of variables is chosen such that the Jacobian of the transformation cancels the singularity. An example of the singularity cancellation approach is the Duffy method, which has two major drawbacks: 1) in the resulting integrand, it produces an angular variation about the singular point that becomes nearly singular for observation points close to an edge of the parent element, and 2) it appears not to work well when applied to nearly singular integrals. Recently, the authors have introduced the transformation $u(x') = \sinh^{-1}\left(x'/\sqrt{y'^2 + z^2}\right)$ for integrating functions of the form $I = \int_D \Lambda(\mathbf{r}')\,\frac{e^{-jkR}}{4\pi R}\,dD$, where $\Lambda(\mathbf{r}')$ is a vector or scalar basis function and $R = \sqrt{x'^2 + y'^2 + z^2}$ is the distance between source and observation points. This scheme has all of the advantages of the Duffy method while avoiding the disadvantages listed above. In this presentation we will survey similar approaches for handling singular and near-singular terms for kernels with $1/R^2$ type behavior, addressing potential pitfalls and offering techniques to efficiently handle special cases.
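
    The effect of the $\sinh^{-1}$ substitution is easy to reproduce numerically. In the hypothetical 1-D sketch below, substituting x' = d*sinh(u) gives dx' = d*cosh(u) du and R = d*cosh(u), so the Jacobian cancels the 1/R peak exactly and a modest Gauss-Legendre rule converges, while the same rule applied directly undersamples the near-singular kernel (d, k and the element size are invented).

    ```python
    import numpy as np

    # Near-singular kernel e^{-jkR}/(4 pi R), R = sqrt(x'^2 + d^2), integrated
    # over x' in [-1, 1] for an observation point at distance d above the element.
    k, d = 2.0 * np.pi, 1.0e-3

    def kernel(xp):
        R = np.sqrt(xp ** 2 + d ** 2)
        return np.exp(-1j * k * R) / (4.0 * np.pi * R)

    def gauss_legendre(f, a, b, n):
        t, w = np.polynomial.legendre.leggauss(n)
        xm, xr = 0.5 * (a + b), 0.5 * (b - a)
        return xr * np.sum(w * f(xm + xr * t))

    # Direct quadrature struggles: the integrand peaks sharply near x' = 0.
    direct = gauss_legendre(kernel, -1.0, 1.0, 64)

    # After x' = d*sinh(u), the Jacobian d*cosh(u) cancels 1/R exactly and the
    # remaining integrand e^{-jk d cosh(u)}/(4 pi) is smooth in u.
    u_max = np.arcsinh(1.0 / d)
    smooth = lambda u: np.exp(-1j * k * d * np.cosh(u)) / (4.0 * np.pi)
    transformed = gauss_legendre(smooth, -u_max, u_max, 64)

    print("direct:", direct, " transformed:", transformed)
    ```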

  1. Inverse modelling of radionuclide release rates using gamma dose rate observations

    NASA Astrophysics Data System (ADS)

    Hamburger, Thomas; Stohl, Andreas; von Haustein, Christoph; Thummerer, Severin; Wallner, Christian

    2014-05-01

    Severe accidents in nuclear power plants, such as the historical accident in Chernobyl in 1986 or the more recent disaster in the Fukushima Dai-ichi nuclear power plant in 2011, have drastic impacts on the population and environment. The hazardous consequences extend to national and continental scales. Environmental measurements and methods to model the transport and dispersion of the released radionuclides serve as a platform to assess the regional impact of nuclear accidents, both for research purposes and, more importantly, to determine the immediate threat to the population. However, assessments of the regional radionuclide activity concentrations and of the individual exposure to radiation dose are subject to several uncertainties, for example in the accurate model representation of wet and dry deposition. One of the most significant uncertainties, however, results from the estimation of the source term, that is, the time-dependent quantification of the spectrum of radionuclides released during the course of the nuclear accident. The source terms of severe nuclear accidents may either remain uncertain (e.g. Chernobyl; Devell et al., 1995) or rely on rather rough estimates of released key radionuclides given by the operators. Precise measurements are mostly missing due to practical limitations during the accident. Inverse modelling can be used to realise a feasible estimation of the source term (Davoine and Bocquet, 2007). Existing point measurements of radionuclide activity concentrations are combined with atmospheric transport models, and the release rates of radionuclides at the accident site are obtained by improving the agreement between the modelled and observed concentrations (Stohl et al., 2012). The accuracy of the method, and hence of the resulting source term, depends among other factors on the availability and reliability of the observations and on their resolution in time and space. Radionuclide activity concentrations are observed on a relatively sparse grid, and the temporal resolution of the available data may be low, on the order of hours or a day. Gamma dose rates, on the other hand, are observed routinely on a much denser grid and with a higher temporal resolution. Gamma dose rate measurements contain no explicit information on the observed spectrum of radionuclides and have to be interpreted carefully. Nevertheless, they provide valuable information for the inverse evaluation of the source term due to their availability (Saunier et al., 2013). We present a new inversion approach combining an atmospheric dispersion model and observations of radionuclide activity concentrations and gamma dose rates to obtain the source term of radionuclides. We use the Lagrangian particle dispersion model FLEXPART (Stohl et al., 1998; Stohl et al., 2005) to model the atmospheric transport of the released radionuclides. The gamma dose rates are calculated from the modelled activity concentrations. The inversion method uses a Bayesian formulation considering uncertainties for the a priori source term and the observations (Eckhardt et al., 2008). The a priori information on the source term is a first guess; the gamma dose rate observations are used with inverse modelling to improve this first guess and to retrieve a reliable source term. The details of this method will be presented at the conference. This work is funded by the Bundesamt für Strahlenschutz BfS, Forschungsvorhaben 3612S60026. References: Davoine, X. and Bocquet, M., Atmos. Chem. Phys., 7, 1549-1564, 2007. Devell, L., et al., OCDE/GD(96)12, 1995. Eckhardt, S., et al., Atmos. Chem. Phys., 8, 3881-3897, 2008. Saunier, O., et al., Atmos. Chem. Phys., 13, 11403-11421, 2013. Stohl, A., et al., Atmos. Environ., 32, 4245-4264, 1998. Stohl, A., et al., Atmos. Chem. Phys., 5, 2461-2474, 2005. Stohl, A., et al., Atmos. Chem. Phys., 12, 2313-2343, 2012.
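
    The Bayesian improvement of a first-guess source term sketched above has a simple linear-Gaussian analogue. In the hypothetical Python sketch below (SRS matrix, prior and error covariances all invented), the retrieved source minimises the usual cost function J(x) = (x - x_a)^T B^-1 (x - x_a) + (y - Hx)^T R^-1 (y - Hx); FLEXPART's role would be to supply H.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    # Hypothetical source-receptor sensitivity matrix H, first-guess source x_a,
    # and observations y (activity concentrations and/or gamma dose rates).
    n_obs, n_src = 150, 48
    H = rng.random((n_obs, n_src)) * (rng.random((n_obs, n_src)) < 0.25)
    x_true = np.concatenate([np.zeros(10), 5.0 * np.ones(12), np.zeros(26)])
    y = H @ x_true + rng.normal(0.0, 0.3, n_obs)

    x_a = np.full(n_src, 1.0)                 # a priori source term (first guess)
    B_inv = np.eye(n_src) / 2.0 ** 2          # prior covariance (std 2, assumed)
    R_inv = np.eye(n_obs) / 0.3 ** 2          # observation covariance (std 0.3)

    # Normal equations of the Bayesian cost function J(x).
    lhs = H.T @ R_inv @ H + B_inv
    x_hat = np.linalg.solve(lhs, H.T @ R_inv @ y + B_inv @ x_a)
    print("retrieved release profile:", np.round(x_hat, 2))
    ```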

  2. High-Order Residual-Distribution Hyperbolic Advection-Diffusion Schemes: 3rd-, 4th-, and 6th-Order

    NASA Technical Reports Server (NTRS)

    Mazaheri, Alireza R.; Nishikawa, Hiroaki

    2014-01-01

    In this paper, spatially high-order Residual-Distribution (RD) schemes using the first-order hyperbolic system method are proposed for general time-dependent advection-diffusion problems. The corresponding second-order time-dependent hyperbolic advection-diffusion scheme was first introduced in [NASA/TM-2014-218175, 2014], where rapid convergence over each physical time step, with typically less than five Newton iterations, was shown. In that method, the time-dependent hyperbolic advection-diffusion system (linear and nonlinear) was discretized by the second-order upwind RD scheme in a unified manner, and the system of implicit residual equations was solved efficiently by Newton's method over every physical time step. In this paper, two techniques for the source term discretization are proposed: 1) reformulation of the source terms with their divergence forms, and 2) correction to the trapezoidal rule for the source term discretization. Third-, fourth-, and sixth-order RD schemes are then proposed with the above techniques that, relative to the second-order RD scheme, only cost the evaluation of either the first derivative or both the first and the second derivatives of the source terms. A special fourth-order RD scheme is also proposed that is even less computationally expensive than the third-order RD schemes. The second-order Jacobian formulation was used for all the proposed high-order schemes. The numerical results are then presented for both steady and time-dependent linear and nonlinear advection-diffusion problems. It is shown that these newly developed high-order RD schemes are remarkably efficient and capable of producing the solutions and the gradients to the design order of accuracy, with rapid convergence over each physical time step, typically less than ten Newton iterations.

  3. Computation of nonlinear ultrasound fields using a linearized contrast source method.

    PubMed

    Verweij, Martin D; Demi, Libertario; van Dongen, Koen W A

    2013-08-01

    Nonlinear ultrasound is important in medical diagnostics because imaging of the higher harmonics improves resolution and reduces scattering artifacts. Second harmonic imaging is currently standard, and higher harmonic imaging is under investigation. The efficient development of novel imaging modalities and equipment requires accurate simulations of nonlinear wave fields in large volumes of realistic (lossy, inhomogeneous) media. The Iterative Nonlinear Contrast Source (INCS) method has been developed to deal with spatiotemporal domains measuring hundreds of wavelengths and periods. This full wave method considers the nonlinear term of the Westervelt equation as a nonlinear contrast source, and solves the equivalent integral equation via the Neumann iterative solution. Recently, the method has been extended with a contrast source that accounts for spatially varying attenuation. The current paper addresses the problem that the Neumann iterative solution converges badly for strong contrast sources. The remedy is linearization of the nonlinear contrast source, combined with application of more advanced methods for solving the resulting integral equation. Numerical results show that linearization in combination with a Bi-Conjugate Gradient Stabilized method allows the INCS method to deal with fairly strong, inhomogeneous attenuation, while the error due to the linearization can be eliminated by restarting the iterative scheme.
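
    The linearization-plus-iterative-solver idea can be imitated on a schematic 1-D integral equation. The sketch below (invented Green's kernel, contrast profile and grid; not the INCS discretisation) wraps the discretised operator in a scipy LinearOperator and hands it to a Bi-Conjugate Gradient Stabilized solver, which is the solver class the paper advocates for strong contrast sources.

    ```python
    import numpy as np
    from scipy.sparse.linalg import LinearOperator, bicgstab

    # Schematic 1-D contrast-source integral equation (I - G diag(chi)) p = p_inc,
    # with an assumed Green's kernel G_ij = exp(-jk|x_i - x_j|)/(2jk) * dx.
    n, L, k = 400, 10.0, 2.0 * np.pi
    x = np.linspace(0.0, L, n)
    dx = x[1] - x[0]
    G = np.exp(-1j * k * np.abs(x[:, None] - x[None, :])) / (2j * k) * dx
    chi = np.where((x > 4.0) & (x < 6.0), 0.5, 0.0)   # contrast region
    p_inc = np.exp(-1j * k * x)                       # incident field

    A = LinearOperator((n, n), matvec=lambda p: p - G @ (chi * p), dtype=complex)
    p, info = bicgstab(A, p_inc)
    print("converged" if info == 0 else f"info={info}", np.abs(p).max())
    ```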

  4. Understanding cancer survivors’ information needs and information-seeking behaviors for complementary and alternative medicine from short- to long-term survival: a mixed-methods study

    PubMed Central

    Scarton, Lou Ann; Del Fiol, Guilherme; Oakley-Girvan, Ingrid; Gibson, Bryan; Logan, Robert; Workman, T. Elizabeth

    2018-01-01

    Objective The research examined complementary and alternative medicine (CAM) information-seeking behaviors and preferences from short- to long-term cancer survival, including goals, motivations, and information sources. Methods A mixed-methods approach was used with cancer survivors from the “Assessment of Patients’ Experience with Cancer Care” 2004 cohort. Data collection included a mail survey and phone interviews using the critical incident technique (CIT). Results Seventy survivors from the 2004 study responded to the survey, and eight participated in the CIT interviews. Quantitative results showed that CAM usage did not change significantly between 2004 and 2015. The following themes emerged from the CIT: families’ and friends’ provision of the initial introduction to a CAM, use of CAM to manage the emotional and psychological impact of cancer, utilization of trained CAM practitioners, and online resources as a prominent source for CAM information. The majority of participants expressed an interest in an online information-sharing portal for CAM. Conclusion Patients continue to use CAM well into long-term cancer survivorship. Finding trustworthy sources for information on CAM presents many challenges such as reliability of source, conflicting information on efficacy, and unknown interactions with conventional medications. Study participants expressed interest in an online portal to meet these needs through patient testimonials and linkage of claims to the scientific literature. Such a portal could also aid medical librarians and clinicians in locating and evaluating CAM information on behalf of patients. PMID:29339938

  5. Using Satellite Observations to Evaluate the AeroCOM Volcanic Emissions Inventory and the Dispersal of Volcanic SO2 Clouds in MERRA

    NASA Technical Reports Server (NTRS)

    Hughes, Eric J.; Krotkov, Nickolay; da Silva, Arlindo; Colarco, Peter

    2015-01-01

    Simulation of volcanic emissions in climate models requires information that describes the eruption of the emissions into the atmosphere. While the total amount of gases and aerosols released from a volcanic eruption can be readily estimated from satellite observations, information about the source parameters, like injection altitude, eruption time and duration, is often not directly known. The AeroCOM volcanic emissions inventory provides estimates of eruption source parameters and has been used to initialize volcanic emissions in reanalysis projects, like MERRA. The AeroCOM volcanic emission inventory provides an eruption's daily SO2 flux and plume-top altitude, yet an eruption can be very short-lived, lasting only a few hours, and can emit clouds at multiple altitudes. Case studies comparing the satellite-observed dispersal of volcanic SO2 clouds to simulations in MERRA have shown mixed results. Some cases, such as Okmok (2008), show good agreement with observations, while for other eruptions, such as Sierra Negra (2005), the observed initial SO2 mass is half of that in the simulations. In other cases, such as Soufriere Hills (2006), the initial SO2 amount agrees with the observations but the dispersal rates differ markedly. In the aviation hazards community, deriving accurate source terms is crucial for monitoring and short-term (24 h) forecasting of volcanic clouds. Back trajectory methods have been developed which use satellite observations and transport models to estimate the injection altitude, eruption time, and eruption duration of observed volcanic clouds. These methods can provide eruption timing estimates at a 2-hour temporal resolution and can estimate the altitude and depth of a volcanic cloud. To better understand the differences between MERRA simulations and volcanic SO2 observations, back trajectory methods are used to estimate the source term parameters for a few volcanic eruptions, and the results are compared to the corresponding entries in the AeroCOM volcanic emission inventory. The nature of the mixed results is discussed with respect to the source term estimates.

  6. Direct design of aspherical lenses for extended non-Lambertian sources in three-dimensional rotational geometry

    PubMed Central

    Wu, Rengmao; Hua, Hong

    2016-01-01

    Illumination design used to redistribute the spatial energy distribution of a light source is a key technique in lighting applications. However, there is still no effective illumination design method for extended sources, especially for extended non-Lambertian sources. What we present here is, to our knowledge, the first direct method for extended non-Lambertian sources in three-dimensional (3D) rotational geometry. In this method, both meridional rays and skew rays of the extended source are taken into account to tailor the lens profile in the meridional plane. A set of edge rays and interior rays emitted from the extended source which will take a given direction after the refraction of the aspherical lens are found by Snell's law, and the output intensity in this direction is then calculated as the integral of the luminance function of the outgoing rays in this direction. This direct method is effective for both extended non-Lambertian sources and extended Lambertian sources in 3D rotational symmetry, and can directly find a solution to the prescribed design problem without cumbersome iterative illuminance compensation. Two examples are presented to demonstrate the effectiveness of the proposed method in terms of performance and capacity for tackling complex designs. PMID:26832484

  7. Algorithms and analytical solutions for rapidly approximating long-term dispersion from line and area sources

    NASA Astrophysics Data System (ADS)

    Barrett, Steven R. H.; Britter, Rex E.

    Predicting long-term mean pollutant concentrations in the vicinity of airports, roads and other industrial sources is frequently of concern in regulatory and public health contexts. Many emissions are represented geometrically as ground-level line or area sources. Well-developed modelling tools such as AERMOD and ADMS are able to model dispersion from finite (i.e. non-point) sources with considerable accuracy, drawing upon an up-to-date understanding of boundary layer behaviour. Due to mathematical difficulties associated with line and area sources, computationally expensive numerical integration schemes have been developed. For example, some models decompose area sources into a large number of line sources orthogonal to the mean wind direction, for which an analytical (Gaussian) solution exists. Models also employ a time-series approach, which involves computing mean pollutant concentrations for every hour over one or more years of meteorological data. This can give rise to computer runtimes of several days for assessment of a site. While this may be acceptable for assessment of a single industrial complex, airport, etc., this level of computational cost precludes national or international policy assessments at the level of detail available with dispersion modelling. In this paper, we extend previous work [S.R.H. Barrett, R.E. Britter, 2008. Development of algorithms and approximations for rapid operational air quality modelling. Atmospheric Environment 42 (2008) 8105-8111] to line and area sources. We introduce approximations which allow for the development of new analytical solutions for long-term mean dispersion from line and area sources, based on hypergeometric functions. We describe how these solutions can be parameterized from a single point-source run of an existing advanced dispersion model, thereby accounting for all processes modelled in the more costly algorithms. The parameterization method combined with the analytical solutions for long-term mean dispersion is shown to produce results several orders of magnitude more efficiently, with a loss of accuracy small compared to the absolute accuracy of advanced dispersion models near sources. The method can be readily incorporated into existing dispersion models, and may allow for additional computation time to be expended on modelling dispersion processes more accurately in future, rather than on accounting for source geometry.
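
    The point-source decomposition that the analytical solutions replace can be written down directly. The hypothetical sketch below (invented sigma growth rates, wind rose and geometry) computes a long-term mean concentration from a crosswind line source by summing a ground-level Gaussian-plume kernel over point sources and frequency-weighting over wind-direction classes; it is the slow baseline that the paper's hypergeometric solutions accelerate.

    ```python
    import numpy as np

    def point_plume(q, dwx, dwy, u, sy=0.08, sz=0.06):
        """Ground-level Gaussian plume from a ground-level point source q."""
        if dwx <= 0.0:                       # receptor upwind of the source
            return 0.0
        sigy, sigz = sy * dwx, sz * dwx      # crude linear sigma growth (assumed)
        return q / (np.pi * u * sigy * sigz) * np.exp(-0.5 * (dwy / sigy) ** 2)

    # Line source along y in [-200, 200] m, decomposed into point sources.
    ys = np.linspace(-200.0, 200.0, 201)
    q_per_point = 1.0 / ys.size              # unit total emission, split evenly

    # Wind rose: (frequency, wind direction unit vector, mean speed), invented.
    rose = [(0.5, np.array([1.0, 0.0]), 5.0),
            (0.3, np.array([0.0, 1.0]), 3.0),
            (0.2, np.array([-1.0, 0.0]), 4.0)]

    receptor = np.array([500.0, 50.0])
    c = 0.0
    for freq, d, u in rose:
        for y in ys:
            r = receptor - np.array([0.0, y])
            dwx, dwy = r @ d, r @ np.array([-d[1], d[0]])  # along-/cross-wind
            c += freq * point_plume(q_per_point, dwx, dwy, u)
    print("long-term mean concentration:", c)
    ```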

  8. Proceedings of the Annual DARPA/AFGL Seismic Research Symposium (7th) Held in Colorado Springs, Colorado on 6-8 May 1985

    DTIC Science & Technology

    1990-11-08

    seismograms were calculated for the three fundamental sources needed to construct an arbitrarily oriented dislocation or deviatoric moment tensor…or the first motion approximation method (FMA). Vertical and radial displacements for the three fundamental source terms are shown since each source…significantly interfere with the SV body wave to produce varying levels of distortion of the waveform among the three fundamental sources. Note, for example

  9. 40 CFR 406.11 - Specialized definitions.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... STANDARDS GRAIN MILLS POINT SOURCE CATEGORY Corn Wet Milling Subcategory § 406.11 Specialized definitions... and methods of analysis set forth in 40 CFR part 401 shall apply to this subpart. (b) The term corn shall mean the shelled corn delivered to a plant before processing. (c) The term standard bushel shall...

  10. EMUstack: An open source route to insightful electromagnetic computation via the Bloch mode scattering matrix method

    NASA Astrophysics Data System (ADS)

    Sturmberg, Björn C. P.; Dossou, Kokou B.; Lawrence, Felix J.; Poulton, Christopher G.; McPhedran, Ross C.; Martijn de Sterke, C.; Botten, Lindsay C.

    2016-05-01

    We describe EMUstack, an open-source implementation of the Scattering Matrix Method (SMM) for solving field problems in layered media. The fields inside nanostructured layers are described in terms of Bloch modes that are found using the Finite Element Method (FEM). Direct access to these modes allows the physical intuition of thin film optics to be extended to complex structures. The combination of the SMM and the FEM makes EMUstack ideally suited for studying lossy, high-index contrast structures, which challenge conventional SMMs.

  11. Bound-preserving modified exponential Runge-Kutta discontinuous Galerkin methods for scalar hyperbolic equations with stiff source terms

    NASA Astrophysics Data System (ADS)

    Huang, Juntao; Shu, Chi-Wang

    2018-05-01

    In this paper, we develop bound-preserving modified exponential Runge-Kutta (RK) discontinuous Galerkin (DG) schemes to solve scalar hyperbolic equations with stiff source terms by extending the idea in Zhang and Shu [43]. Exponential strong stability preserving (SSP) high order time discretizations are constructed and then modified to overcome the stiffness and preserve the bound of the numerical solutions. It is also straightforward to extend the method to two dimensions on rectangular and triangular meshes. Even though we only discuss the bound-preserving limiter for DG schemes, it can also be applied to high order finite volume schemes, such as weighted essentially non-oscillatory (WENO) finite volume schemes as well.
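
    The benefit of exponential treatment of a stiff source can be seen on a scalar relaxation ODE (this is only an analogue of the idea, not the RKDG scheme itself). For u' = -(u - g)/eps, the exponential Euler update below is exact for frozen g, is stable even when dt >> eps, and keeps u between its old value and g, which is the bound-preserving property an explicit treatment would destroy.

    ```python
    import numpy as np

    eps, dt, g = 1.0e-6, 1.0e-2, 0.3   # stiff relaxation time, large step, target
    u = 1.0
    for n in range(5):
        # Exponential Euler: exact for u' = -(u - g)/eps with frozen g, and
        # u stays between its previous value and g for any dt.
        u = g + (u - g) * np.exp(-dt / eps)
        print(n, u)
    # An explicit Euler step u -= dt*(u - g)/eps would need dt <= 2*eps = 2e-6
    # for stability; here dt is four orders of magnitude larger.
    ```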

  12. Indirect (source-free) integration method. I. Wave-forms from geodesic generic orbits of EMRIs

    NASA Astrophysics Data System (ADS)

    Ritter, Patxi; Aoudia, Sofiane; Spallicci, Alessandro D. A. M.; Cordier, Stéphane

    2016-12-01

    The Regge-Wheeler-Zerilli (RWZ) wave equation describes Schwarzschild-Droste black hole perturbations. The source term contains a Dirac distribution and its derivative. We have previously designed a method of integration in the time domain. It consists of a finite difference scheme where analytic expressions, dealing with the wave-function discontinuity through the jump conditions, replace the direct integration of the source and the potential. Herein, we successfully apply the same method to the geodesic generic orbits of EMRI (Extreme Mass Ratio Inspiral) sources, at second order. An EMRI is a Compact Star (CS) captured by a Super-Massive Black Hole (SMBH). These are considered the best probes for testing gravitation in the strong-field regime. The gravitational waveforms and the radiated energy and angular momentum at infinity are computed and extensively compared with other methods, for different orbits (circular, elliptic, parabolic, including zoom-whirl).

  13. Design of compact and ultra efficient aspherical lenses for extended Lambertian sources in two-dimensional geometry

    PubMed Central

    Wu, Rengmao; Hua, Hong; Benítez, Pablo; Miñano, Juan C.; Liang, Rongguang

    2016-01-01

    The energy efficiency and compactness of an illumination system are two main concerns in illumination design for extended sources. In this paper, we present two methods to design compact, ultra efficient aspherical lenses for extended Lambertian sources in two-dimensional geometry. The light rays are directed by using two aspherical surfaces in the first method and one aspherical surface along with an optimized parabola in the second method. The principles and procedures of each design method are introduced in detail. Three examples are presented to demonstrate the effectiveness of these two methods in terms of performance and capacity in designing compact, ultra efficient aspherical lenses. The comparisons made between the two proposed methods indicate that the second method is much simpler and easier to be implemented, and has an excellent extensibility to three-dimensional designs. PMID:29092336

  14. Improvement and performance evaluation of the perturbation source method for an exact Monte Carlo perturbation calculation in fixed source problems

    NASA Astrophysics Data System (ADS)

    Sakamoto, Hiroki; Yamamoto, Toshihiro

    2017-09-01

    This paper presents an improvement and performance evaluation of the "perturbation source method", which is one of the Monte Carlo perturbation techniques. The formerly proposed perturbation source method was first-order accurate, although it is known that the method can easily be extended to an exact perturbation method. A transport equation for calculating the exact flux difference caused by a perturbation is solved. A perturbation particle representing a flux difference is explicitly transported in the perturbed system, instead of in the unperturbed system. The source term of the transport equation is defined by the unperturbed flux and the cross section (or optical parameter) changes. The unperturbed flux is provided by an "on-the-fly" technique during the course of the ordinary fixed source calculation for the unperturbed system. A set of perturbation particles is started at the collision point in the perturbed region and tracked until death. For a perturbation in a smaller portion of the whole domain, the efficiency of the perturbation source method can be improved by using a virtual scattering coefficient or cross section in the perturbed region, forcing collisions. Performance is evaluated by comparing the proposed method to other Monte Carlo perturbation methods. Numerical tests performed for particle transport in a two-dimensional geometry reveal that the perturbation source method is less effective than the correlated sampling method for a perturbation in a larger portion of the whole domain. However, for a perturbation in a smaller portion, the perturbation source method outperforms the correlated sampling method. The efficiency depends strongly on the adjustment of the new virtual scattering coefficient or cross section.

  15. Source-Free Exchange-Correlation Magnetic Fields in Density Functional Theory.

    PubMed

    Sharma, S; Gross, E K U; Sanna, A; Dewhurst, J K

    2018-03-13

    Spin-dependent exchange-correlation energy functionals in use today depend on the charge density and the magnetization density: $E_{xc}[\rho, \mathbf{m}]$. However, it is also correct to define the functional in terms of the curl of $\mathbf{m}$ for physical external fields: $E_{xc}[\rho, \nabla \times \mathbf{m}]$. The exchange-correlation magnetic field, $\mathbf{B}_{xc}$, then becomes source-free. We study this variation of the theory by uniquely removing the source term from local and generalized gradient approximations to the functional. By doing so, the total Kohn-Sham moments are improved for a wide range of materials for both functionals. Significantly, the moments for the pnictides are now in good agreement with experiment. This source-free method is simple to implement in all existing density functional theory codes.
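
    Removing the source (divergence) part of a vector field is a Helmholtz-type projection, which on a periodic grid takes a few lines with FFTs. The sketch below is a generic spectral projection in Python, not the implementation used in the paper: it solves lap(phi) = div(B) in Fourier space and subtracts grad(phi), leaving a divergence-free field.

    ```python
    import numpy as np

    def source_free_part(B, L=1.0):
        """Return the divergence-free (source-free) part of a periodic field.

        B has shape (3, n, n, n). Solves lap(phi) = div(B) spectrally and
        subtracts grad(phi), so the result has zero divergence by construction.
        """
        n = B.shape[1]
        k = 2.0 * np.pi * np.fft.fftfreq(n, d=L / n)
        K = np.stack(np.meshgrid(k, k, k, indexing="ij"))
        k2 = (K ** 2).sum(0)
        k2[0, 0, 0] = 1.0                       # avoid 0/0 for the mean mode

        Bh = np.fft.fftn(B, axes=(1, 2, 3))
        phi_h = (1j * K * Bh).sum(0) / (-k2)    # lap(phi) = div(B) in k-space
        return np.real(np.fft.ifftn(Bh - 1j * K * phi_h, axes=(1, 2, 3)))

    # Quick demonstration on a random periodic field.
    rng = np.random.default_rng(5)
    B = rng.normal(size=(3, 16, 16, 16))
    B_sf = source_free_part(B)
    print("shapes:", B.shape, "->", B_sf.shape)
    ```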

  16. Maximizing the spatial representativeness of NO2 monitoring data using a combination of local wind-based sectoral division and seasonal and diurnal correction factors.

    PubMed

    Donnelly, Aoife; Naughton, Owen; Misstear, Bruce; Broderick, Brian

    2016-10-14

    This article describes a new methodology for increasing the spatial representativeness of individual monitoring sites. Air pollution levels at a given point are influenced by emission sources in the immediate vicinity. Since emission sources are rarely uniformly distributed around a site, concentration levels will inevitably be most affected by the sources in the prevailing upwind direction. The methodology provides a means of capturing this effect and providing additional information regarding source/pollution relationships. It allows the air quality data from a given monitoring site to be divided into a number of sectors or wedges based on wind direction, with an annual mean value estimated for each sector, thus optimising the information that can be obtained from a single monitoring station. The method corrects short-term data for diurnal and seasonal variations in concentrations (which can produce uneven weighting of data within each sector) and for the uneven frequency of wind directions. Significant improvements in correlations between the air quality data and the spatial air quality indicators were obtained after application of the correction factors. This suggests that the application of these techniques would be of significant benefit in land-use regression modelling studies. Furthermore, the method was found to be very useful for estimating long-term mean values and wind-direction sector values using only short-term monitoring data. The methods presented in this article can result in cost savings by minimising the number of monitoring sites required for air quality studies while also capturing a greater degree of variability in spatial characteristics. In this way, more reliable, but also more expensive, monitoring techniques can be used in preference to a higher number of low-cost but less reliable techniques. The methods described in this article have applications in local air quality management, source-receptor analysis, land-use regression mapping and modelling, and population exposure studies.
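
    A simplified version of the sectoral-division bookkeeping is sketched below in hypothetical Python (synthetic hourly record, invented correction scheme): the record is corrected by hour-of-day and month bin means before being averaged within 45-degree wind sectors, so that unevenly sampled sectors are not biased by diurnal or seasonal cycles.

    ```python
    import numpy as np
    import pandas as pd

    rng = np.random.default_rng(6)

    # Synthetic hourly monitoring record: NO2, wind direction, timestamp.
    idx = pd.date_range("2015-01-01", periods=8760, freq="h")
    wd = rng.uniform(0.0, 360.0, idx.size)
    no2 = (20.0 + 15.0 * np.cos(np.radians(wd - 240.0))   # source sector to the WSW
           + 8.0 * np.sin(2.0 * np.pi * idx.hour / 24.0)  # diurnal cycle
           + rng.normal(0.0, 3.0, idx.size))
    df = pd.DataFrame({"no2": no2, "wd": wd}, index=idx)

    # Correction factors: ratio of the overall mean to each (hour, month) bin
    # mean, so sectors sampled mostly at "high" hours are not overweighted.
    grp = df.groupby([df.index.hour, df.index.month])["no2"]
    df["no2_corr"] = df["no2"] * df["no2"].mean() / grp.transform("mean")

    # Divide the record into eight 45-degree wind sectors and average each.
    df["sector"] = (df["wd"] // 45).astype(int)
    print(df.groupby("sector")["no2_corr"].agg(["mean", "count"]))
    ```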

  17. Risk assessment of water pollution sources based on an integrated k-means clustering and set pair analysis method in the region of Shiyan, China.

    PubMed

    Li, Chunhui; Sun, Lian; Jia, Junxiang; Cai, Yanpeng; Wang, Xuan

    2016-07-01

    Source water areas are facing many potential water pollution risks. Risk assessment is an effective method to evaluate such risks. In this paper, an integrated model based on k-means clustering analysis and set pair analysis was established to evaluate the risks associated with water pollution in source water areas, in which the weights of indicators were determined through the entropy weight method. The proposed model was then applied to assess water pollution risks in the region of Shiyan, which contains the Danjiangkou Reservoir, China's key source water area supplying the middle route of the South-to-North Water Diversion Project. The results identified eleven sources with relatively high risk values. At the regional scale, Shiyan City and Danjiangkou City had high risk values in terms of industrial discharge, while Danjiangkou City and Yunxian County had high risk values in terms of agricultural pollution. Overall, the risk values of the northern regions close to the main stream and the reservoir were higher than those in the south. The risk levels indicated that five sources were at a lower risk level (level II), two at a moderate risk level (level III), one at a higher risk level (level IV) and three at the highest risk level (level V). Risks from industrial discharge were also higher than those from the agricultural sector. It is thus essential to manage the pillar industry of the region of Shiyan and certain agricultural companies in the vicinity of the reservoir to reduce the water pollution risks of source water areas. Copyright © 2016 Elsevier B.V. All rights reserved.
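
    The entropy weight method used to weight the indicators follows a standard formula: normalise each indicator column to a distribution, compute its Shannon entropy, and weight by the complement. The sketch below is a generic implementation with an invented indicator matrix; the indicator meanings and the final weighted score are illustrative only, and the k-means and set pair analysis stages are not reproduced.

    ```python
    import numpy as np

    def entropy_weights(X):
        """Entropy weight method: X is (n_sources, n_indicators), all positive."""
        P = X / X.sum(axis=0, keepdims=True)      # column-wise distributions
        n = X.shape[0]
        E = -np.sum(P * np.log(P), axis=0) / np.log(n)  # normalised entropies
        d = 1.0 - E                                # divergence of each indicator
        return d / d.sum()

    # Invented risk indicator matrix for 6 pollution sources and 4 indicators
    # (e.g. discharge volume, COD load, proximity to reservoir, toxicity class).
    X = np.array([[3.0, 120.0, 2.0, 1.0],
                  [1.0,  40.0, 8.0, 2.0],
                  [5.0, 300.0, 1.0, 3.0],
                  [2.0,  80.0, 5.0, 1.0],
                  [4.0, 150.0, 3.0, 2.0],
                  [1.0,  20.0, 9.0, 1.0]])
    w = entropy_weights(X)
    risk = (X / X.max(axis=0)) @ w                 # simple weighted risk score
    print("weights:", w)
    print("risk scores:", risk)
    ```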

  18. A novel method for detecting light source for digital images forensic

    NASA Astrophysics Data System (ADS)

    Roy, A. K.; Mitra, S. K.; Agrawal, R.

    2011-06-01

    Image manipulation has been practiced for centuries. Manipulated images are intended to alter facts: facts of ethics, morality, politics, sex, celebrity or chaos. Image forensic science is used to detect these manipulations in a digital image. There are several standard ways to analyze an image for manipulation, each with its limitations. Moreover, few methods attempt to capitalize on the way the image was captured by the camera. We propose a new method based on light and shade, since these are the fundamental inputs that carry the information in an image. The proposed method estimates the direction of the light source and uses this light-based technique to identify intentional partial manipulation in a digital image. The method is tested on known manipulated images and correctly identifies the light sources. The light source of an image is measured as an angle. The experimental results show the robustness of the methodology.
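
    A drastically simplified version of light-direction estimation is sketched below: for a roughly Lambertian scene, intensity tends to increase toward the light, so the mean intensity gradient gives a crude estimate of the source direction. This is only a schematic stand-in for the paper's method; forensic use would compare angles estimated from different image regions and flag disagreements.

    ```python
    import numpy as np

    def estimate_light_angle(img):
        """Crude light-direction estimate for a grayscale image, in degrees.

        Assumes a roughly Lambertian scene, where the mean intensity gradient
        points toward the light source. Angle is measured from the +x axis.
        """
        gy, gx = np.gradient(img.astype(float))
        return np.degrees(np.arctan2(gy.mean(), gx.mean()))

    # Synthetic test: a smooth intensity ramp lit from roughly 30 degrees.
    h, w = 128, 128
    yy, xx = np.mgrid[0:h, 0:w]
    img = np.cos(np.radians(30.0)) * xx + np.sin(np.radians(30.0)) * yy
    print(estimate_light_angle(img))   # ~30; tampered regions would disagree
    ```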

  19. LS-APC v1.0: a tuning-free method for the linear inverse problem and its application to source-term determination

    NASA Astrophysics Data System (ADS)

    Tichý, Ondřej; Šmídl, Václav; Hofman, Radek; Stohl, Andreas

    2016-11-01

    Estimation of pollutant releases into the atmosphere is an important problem in the environmental sciences. It is typically formalized as an inverse problem using a linear model that can explain observable quantities (e.g., concentrations or deposition values) as a product of the source-receptor sensitivity (SRS) matrix obtained from an atmospheric transport model multiplied by the unknown source-term vector. Since this problem is typically ill-posed, current state-of-the-art methods are based on regularization of the problem and solution of a formulated optimization problem. This procedure depends on manual settings of uncertainties that are often very poorly quantified, effectively making them tuning parameters. We formulate a probabilistic model that has the same maximum likelihood solution as the conventional method using pre-specified uncertainties. Replacement of the maximum likelihood solution by full Bayesian estimation also allows estimation of all tuning parameters from the measurements. The estimation procedure is based on the variational Bayes approximation, which is evaluated by an iterative algorithm. The resulting method is thus very similar to the conventional approach, but with the possibility to also estimate all tuning parameters from the observations. The proposed algorithm is tested and compared with the standard methods on data from the European Tracer Experiment (ETEX), where the advantages of the new method are demonstrated. A MATLAB implementation of the proposed algorithm is available for download.

  20. Adjoint Sensitivity Method to Determine Optimal Set of Stations for Tsunami Source Inversion

    NASA Astrophysics Data System (ADS)

    Gusman, A. R.; Hossen, M. J.; Cummins, P. R.; Satake, K.

    2017-12-01

    We applied the adjoint sensitivity technique in tsunami science for the first time to determine an optimal set of stations for a tsunami source inversion. The adjoint sensitivity (AS) method has been used in numerical weather prediction to find optimal locations for adaptive observations. We applied this technique to Green's-function-based Time Reverse Imaging (GFTRI), which has recently been used in tsunami source inversion to reconstruct the initial sea surface displacement, known as the tsunami source model. This method has the same source representation as the traditional least-squares (LSQ) source inversion method, in which a tsunami source is represented by dividing the source region into a regular grid of "point" sources. For each of these, a Green's function (GF) is computed using a basis function for initial sea surface displacement whose amplitude is concentrated near the grid point. We applied the AS method to the Samoa earthquake tsunami of 29 September 2009 in the southwest Pacific, near the Tonga trench. Many studies show that this earthquake was a doublet involving both normal faulting in the outer-rise region and thrust faulting on the subduction interface. To estimate the tsunami source model for this complex event, we initially considered 11 observations consisting of 5 tide gauges and 6 DART buoys. After applying the AS method, we found an optimal set of 8 stations. Inversion with this optimal set provides better results in terms of waveform fitting and a source model that shows both sub-events, associated with normal and thrust faulting.

  1. Matrix effect and recovery terminology issues in regulated drug bioanalysis.

    PubMed

    Huang, Yong; Shi, Robert; Gee, Winnie; Bonderud, Richard

    2012-02-01

    Understanding the meaning of the terms used in the bioanalytical method validation guidance is essential for practitioners to implement best practice. However, terms that have several meanings or that have different interpretations exist within bioanalysis, and this may give rise to differing practices. In this perspective we discuss an important but often confusing term - 'matrix effect (ME)' - in regulated drug bioanalysis. The ME can be interpreted as either the ionization change or the measurement bias of the method caused by the nonanalyte matrix. The ME definition dilemma makes its evaluation challenging. The matrix factor is currently used as a standard method for evaluation of ionization changes caused by the matrix in MS-based methods. Standard additions to pre-extraction samples have been suggested to evaluate the overall effects of a matrix from different sources on the analytical system, because it covers ionization variation and extraction recovery variation. We also provide our personal views on the term 'recovery'.

  2. Blind source separation based on time-frequency morphological characteristics for rigid acoustic scattering by underwater objects

    NASA Astrophysics Data System (ADS)

    Yang, Yang; Li, Xiukun

    2016-06-01

    Separation of the components of rigid acoustic scattering by underwater objects is essential for obtaining the structural characteristics of such objects. To overcome the problem of rigid structures appearing to have the same spectral structure in the time domain, time-frequency Blind Source Separation (BSS) can be used in combination with image morphology to separate the rigid scattering components of different objects. Based on a highlight model, the separation of the rigid scattering structure of objects in a time-frequency distribution is deduced. Using a morphological filter, the different characteristics observed for single auto-terms and cross-terms in a Wigner-Ville Distribution (WVD) can be exploited to remove cross-term interference. By selecting the time and frequency points of the auto-term signal, the accuracy of BSS can be improved. An experimental simulation was used, varying the pulse width of the transmitted signal, the relative amplitude and the time delay parameter, in order to analyze the feasibility of the new method. Simulation results show that the new method is not only able to separate rigid scattering components, but can also separate the components when elastic and rigid scattering exist at the same time. Experimental results confirm that the new method can be used to separate the rigid scattering structure of underwater objects.

  3. The Osher scheme for non-equilibrium reacting flows

    NASA Technical Reports Server (NTRS)

    Suresh, Ambady; Liou, Meng-Sing

    1992-01-01

    An extension of the Osher upwind scheme to nonequilibrium reacting flows is presented. Owing to the presence of source terms, the Riemann problem is no longer self-similar and therefore its approximate solution becomes tedious. With simplicity in mind, a linearized approach which avoids an iterative solution is used to define the intermediate states and sonic points. The source terms are treated explicitly. Numerical computations are presented to demonstrate the feasibility, efficiency and accuracy of the proposed method. The test problems include a ZND (Zeldovich-Neumann-Doring) detonation problem for which spurious numerical solutions which propagate at mesh speed have been observed on coarse grids. With the present method, a change of limiter causes the solution to change from the physically correct CJ detonation solution to the spurious weak detonation solution.

  4. 40 CFR 408.11 - Specialized definitions.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... STANDARDS CANNED AND PRESERVED SEAFOOD PROCESSING POINT SOURCE CATEGORY Farm-Raised Catfish Processing... apply to this subpart. (b) The term oil and grease shall mean those components of a waste water amenable to measurement by the method described in Methods for Chemical Analysis of Water and Wastes, 1971...

  5. Multimodal Medical Image Fusion by Adaptive Manifold Filter.

    PubMed

    Geng, Peng; Liu, Shuaiqi; Zhuang, Shanna

    2015-01-01

    Medical image fusion plays an important role in the diagnosis and treatment of diseases, for example in image-guided radiotherapy and surgery. Modified local contrast information is proposed to fuse multimodal medical images. Firstly, the adaptive manifold filter is introduced to filter the source images, providing the low-frequency part of the modified local contrast. Secondly, the modified spatial frequency of the source images is adopted as the high-frequency part of the modified local contrast. Finally, the pixel with the larger modified local contrast is selected for the fused image. The presented scheme outperforms the guided filter method in the spatial domain, the dual-tree complex wavelet transform-based method, the nonsubsampled contourlet transform-based method, and four classic fusion methods in terms of visual quality. Furthermore, the mutual information values obtained by the presented method are on average 55%, 41%, and 62% higher than those of the three methods, and its edge-based similarity measures are on average 13%, 33%, and 14% higher than those of the three methods, for the six pairs of source images.
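
    A minimal numpy sketch of the "larger local contrast wins" selection rule follows; a plain box filter stands in for both the adaptive manifold filter and the modified spatial frequency, so the contrast definition here is a simplification of the paper's.

    ```python
    import numpy as np
    from scipy.ndimage import uniform_filter

    def fuse_by_local_contrast(a, b, size=7):
        """Fuse two registered images by keeping, per pixel, the input with the
        larger (high-frequency over low-frequency) local contrast."""
        def contrast(img):
            low = uniform_filter(img, size)        # low-frequency part (box filter)
            high = np.abs(img - low)               # high-frequency part
            return high / (np.abs(low) + 1e-6)     # local contrast
        return np.where(contrast(a) >= contrast(b), a, b)

    a = np.random.rand(64, 64)                     # stand-ins for registered CT / MR slices
    b = np.random.rand(64, 64)
    print(fuse_by_local_contrast(a, b).shape)
    ```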

  6. Long-Term Variations of the EOP and ICRF2

    NASA Technical Reports Server (NTRS)

    Zharov, Vladimir; Sazhin, Mikhail; Sementsov, Valerian; Sazhina, Olga

    2010-01-01

    We analyzed the time series of the coordinates of the ICRF radio sources. We show that some of the radio sources, including the defining sources, exhibit significant apparent motion. The stability of the celestial reference frame is provided by a no-net-rotation condition applied to the defining sources. In our case this condition leads to a rotation of the frame axes with time. We calculated the effect of this rotation on the Earth orientation parameters (EOP). In order to improve the stability of the celestial reference frame we suggest a new method for the selection of the defining sources. The method consists of two criteria: the first we call cosmological and the second kinematical. It is shown that a subset of the ICRF sources selected according to the cosmological criterion provides the most stable reference frame for the next decade.

  7. A novel method for fast imaging of brain function, non-invasively, with light

    NASA Astrophysics Data System (ADS)

    Chance, Britton; Anday, Endla; Nioka, Shoko; Zhou, Shuoming; Hong, Long; Worden, Katherine; Li, C.; Murray, T.; Ovetsky, Y.; Pidikiti, D.; Thomas, R.

    1998-05-01

    Imaging of the human body by any non-invasive technique has been an appropriate goal of physics and medicine, and great success has been obtained with both Magnetic Resonance Imaging (MRI) and Positron Emission Tomography (PET) in brain imaging. Non-imaging responses to functional activation using near-infrared spectroscopy of the brain (fNIR), obtained in 1993 (Chance, et al. [1]) and in 1994 (Tamura, et al. [2]), are now complemented with images of pre-frontal and parietal stimulation in adults and pre-term neonates in this communication (see also [3]). Prior studies used continuous [4], pulsed [3] or modulated [5] light. The amplitude and phase cancellation of optical patterns demonstrated for single source-detector pairs affords remarkable sensitivity for small-object detection in model systems [6]. The methods have now been elaborated with multiple source-detector combinations (nine sources, four detectors). Using simple back-projection algorithms it is now possible to image sensorimotor and cognitive activation of adult and pre- and full-term neonate human brain function in times < 30 s and with resolutions of < 1 cm in two-dimensional displays. The method can be used to evaluate adult and neonatal cerebral dysfunction in a simple, portable and affordable system that does not require immobilization, in contrast to MRI and PET.

  8. Reconstructing source terms from atmospheric concentration measurements: Optimality analysis of an inversion technique

    NASA Astrophysics Data System (ADS)

    Turbelin, Grégory; Singh, Sarvesh Kumar; Issartel, Jean-Pierre

    2014-12-01

    In the event of an accidental or intentional contaminant release in the atmosphere, it is imperative, for managing emergency response, to diagnose the release parameters of the source from measured data. Reconstruction of the source information exploiting measured data is called an inverse problem. To solve such a problem, several techniques are currently being developed. The first part of this paper provides a detailed description of one of them, known as the renormalization method. This technique, proposed by Issartel (2005), has been derived using an approach different from that of standard inversion methods and gives a linear solution to the continuous Source Term Estimation (STE) problem. In the second part of this paper, the discrete counterpart of this method is presented. By using matrix notation, common in data assimilation and suitable for numerical computing, it is shown that the discrete renormalized solution belongs to a family of well-known inverse solutions (minimum weighted norm solutions), which can be computed by using the concept of generalized inverse operator. It is shown that, when the weight matrix satisfies the renormalization condition, this operator satisfies the criteria used in geophysics to define good inverses. Notably, by means of the Model Resolution Matrix (MRM) formalism, we demonstrate that the renormalized solution fulfils optimal properties for the localization of single point sources. Throughout the article, the main concepts are illustrated with data from a wind tunnel experiment conducted at the Environmental Flow Research Centre at the University of Surrey, UK.
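
    To make the minimum weighted norm family concrete, the sketch below computes, for a synthetic underdetermined system, the solution s = W⁻¹Aᵀ(AW⁻¹Aᵀ)⁻¹μ that fits the data exactly while minimizing sᵀWs; the renormalization condition that fixes the choice of W is not implemented here, so W is just a placeholder diagonal weight.

    ```python
    import numpy as np

    def min_weighted_norm(A, mu, w):
        """Among all s with A s = mu, return the minimizer of s^T diag(w) s."""
        WiAt = A.T / w[:, None]                    # W^{-1} A^T for diagonal W
        return WiAt @ np.linalg.solve(A @ WiAt, mu)

    rng = np.random.default_rng(1)
    A = rng.random((5, 40))                        # 5 detectors, 40 source cells
    s_true = np.zeros(40); s_true[12] = 3.0        # single point source
    mu = A @ s_true                                # noise-free measurements
    s = min_weighted_norm(A, mu, np.ones(40))      # unweighted case = pseudo-inverse
    print(np.allclose(A @ s, mu))                  # exact data fit: True
    ```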

  9. An efficient unstructured WENO method for supersonic reactive flows

    NASA Astrophysics Data System (ADS)

    Zhao, Wen-Geng; Zheng, Hong-Wei; Liu, Feng-Jun; Shi, Xiao-Tian; Gao, Jun; Hu, Ning; Lv, Meng; Chen, Si-Cong; Zhao, Hong-Da

    2018-03-01

    An efficient high-order numerical method for supersonic reactive flows is proposed in this article. The reactive source term and convection term are solved separately by a splitting scheme. In the reaction step, an adaptive time-step method is presented, which greatly improves efficiency. In the convection step, a third-order accurate weighted essentially non-oscillatory (WENO) method is adopted to reconstruct the solution on unstructured grids. Numerical results show that the new method captures the correct propagation speed of the detonation wave exactly, even on coarse grids, while high-order accuracy is achieved in smooth regions. In addition, the proposed adaptive splitting method greatly reduces the computational cost compared with the traditional splitting method.
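
    The following toy sketch illustrates the splitting idea on a 1-D scalar problem: the convection term is advanced once per step (by first-order upwind here, standing in for the paper's unstructured WENO reconstruction), while the stiff reaction source term is sub-cycled with an adaptive step tied to the local reaction time scale. The reaction law and all constants are invented for illustration.

    ```python
    import numpy as np

    def step(u, dx, dt, a=1.0, k=50.0):
        u = u - a * dt / dx * (u - np.roll(u, 1))   # upwind convection (CFL = a*dt/dx <= 1)
        t = 0.0
        while t < dt:                               # adaptive reaction sub-cycling
            jac = k * max(np.max(np.abs(1.0 - 2.0 * u)), 1e-12)  # local stiffness estimate
            sub = min(dt - t, 0.1 / jac)            # sub-step limited by the reaction scale
            u = u + sub * (-k * u * (1.0 - u))      # explicit update of the source term
            t += sub
        return u

    u = np.where(np.linspace(0, 1, 100) < 0.3, 1.0, 0.0)
    for _ in range(50):
        u = step(u, dx=0.01, dt=0.005)
    print(u.min(), u.max())                         # solution stays bounded in [0, 1]
    ```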

  10. HIGH-PRECISION ASTROMETRIC MILLIMETER VERY LONG BASELINE INTERFEROMETRY USING A NEW METHOD FOR ATMOSPHERIC CALIBRATION

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rioja, M.; Dodson, R., E-mail: maria.rioja@icrar.org

    2011-04-15

    We describe a new method which achieves high-precision very long baseline interferometry (VLBI) astrometry in observations at millimeter (mm) wavelengths. It combines fast frequency-switching observations, to correct for the dominant non-dispersive tropospheric fluctuations, with slow source-switching observations, for the remaining ionospheric dispersive terms. We call this method source-frequency phase referencing. Provided that the switching cycles match the properties of the propagation media, one can recover the source astrometry. We present an analytic description of the two-step calibration strategy, along with an error analysis to characterize its performance. Also, we provide observational demonstrations of a successful application with observations using the Very Long Baseline Array at 86 GHz of the pairs of sources 3C274 and 3C273 and 1308+326 and 1308+328 under various conditions. We conclude that this method is widely applicable to mm-VLBI observations of many target sources, and unique in providing bona fide astrometrically registered images and high-precision relative astrometric measurements in mm-VLBI using existing and newly built instruments, including space VLBI.

  11. Implementation issues of the nearfield equivalent source imaging microphone array

    NASA Astrophysics Data System (ADS)

    Bai, Mingsian R.; Lin, Jia-Hong; Tseng, Chih-Wen

    2011-01-01

    This paper revisits a nearfield microphone array technique termed nearfield equivalent source imaging (NESI), proposed previously. In particular, various issues concerning the implementation of the NESI algorithm are examined. NESI can be implemented in both the time domain and the frequency domain. Acoustical variables including sound pressure, particle velocity, active intensity and sound power are calculated using multichannel inverse filters. Issues concerning sensor deployment are also investigated for the nearfield array. The uniform array outperformed a random array previously optimized for far-field imaging, which contradicts the conventional wisdom from far-field arrays. For applications in which only a patch array with scarce sensors is available, a virtual microphone approach is employed to ameliorate edge effects using extrapolation and to improve imaging resolution using interpolation. To enhance the processing efficiency of the time-domain NESI, an eigensystem realization algorithm (ERA) is developed. Several filtering methods are compared in terms of computational complexity. Significant savings in computation can be achieved using ERA and the frequency-domain NESI, as compared to the traditional method. The NESI technique was also experimentally validated using practical sources, including a 125 cc scooter and a wooden box model with a loudspeaker fitted inside, and proved effective in identifying the broadband and non-stationary signals these sources produced.

  12. Final safety analysis report for the Galileo Mission: Volume 2: Book 1, Accident model document

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    The Accident Model Document (AMD) is the second volume of the three volume Final Safety Analysis Report (FSAR) for the Galileo outer planetary space science mission. This mission employs Radioisotope Thermoelectric Generators (RTGs) as the prime electrical power sources for the spacecraft. Galileo will be launched into Earth orbit using the Space Shuttle and will use the Inertial Upper Stage (IUS) booster to place the spacecraft into an Earth escape trajectory. The RTGs employ silicon-germanium thermoelectric couples to produce electricity from the heat energy that results from the decay of the radioisotope fuel, Plutonium-238, used in the RTG heat source. The heat source configuration used in the RTGs is termed General Purpose Heat Source (GPHS), and the RTGs are designated GPHS-RTGs. The use of radioactive material in these missions necessitates evaluations of the radiological risks that may be encountered by launch complex personnel as well as by the Earth's general population resulting from postulated malfunctions or failures occurring in the mission operations. The FSAR presents the results of a rigorous safety assessment, including substantial analyses and testing, of the launch and deployment of the RTGs for the Galileo mission. This AMD is a summary of the potential accident and failure sequences which might result in fuel release, the analysis and testing methods employed, and the predicted source terms. Each source term consists of a quantity of fuel released, the location of release and the physical characteristics of the fuel released. Each source term has an associated probability of occurrence. 27 figs., 11 tabs.

  13. COMPARISON OF NONLINEAR DYNAMICS OPTIMIZATION METHODS FOR APS-U

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sun, Y.; Borland, Michael

    Many different objectives and genetic algorithms have been proposed for storage ring nonlinear dynamics performance optimization. These optimization objectives include nonlinear chromaticities and driving/detuning terms, on-momentum and off-momentum dynamic acceptance, chromatic detuning, local momentum acceptance, variation of transverse invariant, Touschek lifetime, etc. In this paper, the effectiveness of several different optimization methods and objectives are compared for the nonlinear beam dynamics optimization of the Advanced Photon Source upgrade (APS-U) lattice. The optimized solutions from these different methods are preliminarily compared in terms of the dynamic acceptance, local momentum acceptance, chromatic detuning, and other performance measures.

  14. Assessment of groundwater exploitation in an aquifer using the random walk on grid method: a case study at Ordos, China

    NASA Astrophysics Data System (ADS)

    Nan, Tongchao; Li, Kaixuan; Wu, Jichun; Yin, Lihe

    2018-04-01

    Sustainability has been one of the key criteria of effective water exploitation. Groundwater exploitation and water-table decline at the Haolebaoji water source site in the Ordos basin in NW China have drawn public attention due to concerns about potential threats to ecosystems and grazing land in the area. To better investigate the impact of the production wells at Haolebaoji on the water table, an adapted algorithm called the random walk on grid method (WOG) is applied to simulate the hydraulic head in the unconfined and confined aquifers. This is the first attempt to apply WOG to a real groundwater problem. The method can evaluate not only the head values but also the contributions made by each source/sink term, allowing one to analyze the impact of source/sink terms just as if an analytical solution were available. The head values evaluated by WOG match the values derived from the software Groundwater Modeling System (GMS). This suggests that WOG is effective and applicable in a heterogeneous aquifer with respect to practical problems, and that the resulting information is useful for groundwater management.
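
    As a toy illustration of the random-walk idea behind WOG, the sketch below estimates the head at one node of a 2-D Laplace problem as the expected boundary head seen by a symmetric random walk started there; it is exactly this bookkeeping of where walks terminate that lets each boundary (or source/sink) term's contribution be tallied separately. Heterogeneity, recharge and pumping from the real application are omitted.

    ```python
    import numpy as np

    def head_by_walks(n, boundary, start, walks=5000, seed=2):
        """Monte Carlo head estimate at one node of an n x n grid with Dirichlet sides."""
        rng = np.random.default_rng(seed)
        moves = np.array([(1, 0), (-1, 0), (0, 1), (0, -1)])
        total = 0.0
        for _ in range(walks):
            i, j = start
            while 0 < i < n - 1 and 0 < j < n - 1:   # walk until a boundary is hit
                di, dj = moves[rng.integers(4)]
                i, j = i + di, j + dj
            total += boundary(i, j)                  # tally the boundary head reached
        return total / walks

    n = 21
    boundary = lambda i, j: 10.0 if i == 0 else 0.0  # fixed head on one side only
    print(head_by_walks(n, boundary, start=(10, 10)))  # ~2.5 at the center, by symmetry
    ```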

  15. Augmented classical least squares multivariate spectral analysis

    DOEpatents

    Haaland, David M.; Melgaard, David K.

    2004-02-03

    A method of multivariate spectral analysis, termed augmented classical least squares (ACLS), provides an improved CLS calibration model when unmodeled sources of spectral variation are contained in a calibration sample set. The ACLS methods use information derived from component or spectral residuals during the CLS calibration to provide an improved calibration-augmented CLS model. The ACLS methods are based on CLS so that they retain the qualitative benefits of CLS, yet they have the flexibility of PLS and other hybrid techniques in that they can define a prediction model even with unmodeled sources of spectral variation that are not explicitly included in the calibration model. The unmodeled sources of spectral variation may be unknown constituents, constituents with unknown concentrations, nonlinear responses, non-uniform and correlated errors, or other sources of spectral variation that are present in the calibration sample spectra. Also, since the various ACLS methods are based on CLS, they can incorporate the new prediction-augmented CLS (PACLS) method of updating the prediction model for new sources of spectral variation contained in the prediction sample set without having to return to the calibration process. The ACLS methods can also be applied to alternating least squares models. The ACLS methods can be applied to all types of multivariate data.

  16. Augmented Classical Least Squares Multivariate Spectral Analysis

    DOEpatents

    Haaland, David M.; Melgaard, David K.

    2005-07-26

    A method of multivariate spectral analysis, termed augmented classical least squares (ACLS), provides an improved CLS calibration model when unmodeled sources of spectral variation are contained in a calibration sample set. The ACLS methods use information derived from component or spectral residuals during the CLS calibration to provide an improved calibration-augmented CLS model. The ACLS methods are based on CLS so that they retain the qualitative benefits of CLS, yet they have the flexibility of PLS and other hybrid techniques in that they can define a prediction model even with unmodeled sources of spectral variation that are not explicitly included in the calibration model. The unmodeled sources of spectral variation may be unknown constituents, constituents with unknown concentrations, nonlinear responses, non-uniform and correlated errors, or other sources of spectral variation that are present in the calibration sample spectra. Also, since the various ACLS methods are based on CLS, they can incorporate the new prediction-augmented CLS (PACLS) method of updating the prediction model for new sources of spectral variation contained in the prediction sample set without having to return to the calibration process. The ACLS methods can also be applied to alternating least squares models. The ACLS methods can be applied to all types of multivariate data.

  17. Augmented Classical Least Squares Multivariate Spectral Analysis

    DOEpatents

    Haaland, David M.; Melgaard, David K.

    2005-01-11

    A method of multivariate spectral analysis, termed augmented classical least squares (ACLS), provides an improved CLS calibration model when unmodeled sources of spectral variation are contained in a calibration sample set. The ACLS methods use information derived from component or spectral residuals during the CLS calibration to provide an improved calibration-augmented CLS model. The ACLS methods are based on CLS so that they retain the qualitative benefits of CLS, yet they have the flexibility of PLS and other hybrid techniques in that they can define a prediction model even with unmodeled sources of spectral variation that are not explicitly included in the calibration model. The unmodeled sources of spectral variation may be unknown constituents, constituents with unknown concentrations, nonlinear responses, non-uniform and correlated errors, or other sources of spectral variation that are present in the calibration sample spectra. Also, since the various ACLS methods are based on CLS, they can incorporate the new prediction-augmented CLS (PACLS) method of updating the prediction model for new sources of spectral variation contained in the prediction sample set without having to return to the calibration process. The ACLS methods can also be applied to alternating least squares models. The ACLS methods can be applied to all types of multivariate data.

  18. GRAPHIC SOURCES FOR THE TEACHING OF RESTORATION ACTING STYLE, AN APPROACH TO THE ACTING OF RESTORATION COMEDY. FINAL REPORT.

    ERIC Educational Resources Information Center

    HENSHAW, NANCY WANDALIE

    This source book translates the elegant and somewhat alien world of Restoration comedy into terms that can enable American directors and actors--by employing the acting "method" of contemporary psychological realism--to simulate the experience, perception, and expression of the 17th-century English aristocrat. To encourage directors to immerse…

  19. Bias in Terms of Culture and a Method for Reducing It: An Eight-Country "Explanations of Unemployment Scale" Study

    ERIC Educational Resources Information Center

    Mylonas, Kostas; Furnham, Adrian; Divale, William; Leblebici, Cigdem; Gondim, Sonia; Moniz, Angela; Grad, Hector; Alvaro, Jose Luis; Cretu, Romeo Zeno; Filus, Ania; Boski, Pawel

    2014-01-01

    Several sources of bias can plague research data and individual assessment. When cultural groups are considered, across or even within countries, it is essential that the constructs assessed and evaluated are as free as possible from any source of bias and specifically from bias caused due to culturally specific characteristics. Employing the…

  20. Size distribution, directional source contributions and pollution status of PM from Chengdu, China during a long-term sampling campaign.

    PubMed

    Shi, Guo-Liang; Tian, Ying-Ze; Ma, Tong; Song, Dan-Lin; Zhou, Lai-Dong; Han, Bo; Feng, Yin-Chang; Russell, Armistead G

    2017-06-01

    Long-term and synchronous monitoring of PM10 and PM2.5 was conducted in Chengdu in China from 2007 to 2013. The levels, variations, compositions and size distributions were investigated. The sources were quantified by two-way and three-way receptor models (PMF2, ME2-2way and ME2-3way). Consistent results were found: the primary source categories contributed 63.4% (PMF2), 64.8% (ME2-2way) and 66.8% (ME2-3way) to PM10, and contributed 60.9% (PMF2), 65.5% (ME2-2way) and 61.0% (ME2-3way) to PM2.5. Secondary sources contributed 31.8% (PMF2), 32.9% (ME2-2way) and 31.7% (ME2-3way) to PM10, and 35.0% (PMF2), 33.8% (ME2-2way) and 36.0% (ME2-3way) to PM2.5. The size distribution of source categories was estimated better by the ME2-3way method. The three-way model can simultaneously consider chemical species, temporal variability and PM sizes, while a two-way model independently computes datasets of different sizes. A method called source directional apportionment (SDA) was employed to quantify the contributions from various directions for each source category. Crustal dust from east-north-east (ENE) contributed the most to both PM10 (12.7%) and PM2.5 (9.7%) in Chengdu, followed by crustal dust from south-east (SE) for PM10 (9.8%) and secondary nitrate & secondary organic carbon from ENE for PM2.5 (9.6%). Source contributions from different directions are associated with meteorological conditions, source locations and emission patterns during the sampling period. These findings and methods provide useful tools to better understand PM pollution status and to develop effective pollution control strategies. Copyright © 2016. Published by Elsevier B.V.

  1. Microseismic source locations with deconvolution migration

    NASA Astrophysics Data System (ADS)

    Wu, Shaojiang; Wang, Yibo; Zheng, Yikang; Chang, Xu

    2018-03-01

    Identifying and locating microseismic events are critical problems in hydraulic fracturing monitoring for unconventional resource exploration. In contrast to active seismic data, microseismic data are usually recorded with unknown source excitation time and source location. In this study, we introduce deconvolution migration, which combines deconvolution interferometry with interferometric cross-correlation migration (CCM). This method avoids the need for the source excitation time and enhances both spatial resolution and robustness by eliminating the square term of the source wavelets from CCM. The proposed algorithm consists of three steps: (1) generate virtual gathers by deconvolving the master trace with all other traces in the microseismic gather to remove the unknown excitation time; (2) migrate the virtual gather to obtain a single image of the source location; and (3) stack all of these images to obtain the final estimated image of the source location. We test the proposed method on complex synthetic and field data sets from surface hydraulic fracturing monitoring, and compare the results with those obtained by interferometric CCM. The results demonstrate that the proposed method obtains a 50 per cent higher spatial resolution image of the source location and a more robust estimate with smaller localization errors, especially in the presence of velocity model errors. The method is also beneficial for source mechanism inversion and global seismology applications.
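
    A minimal sketch of step (1), the deconvolution interferometry that replaces cross-correlation, is given below: each trace is deconvolved by a master trace through regularized spectral division, which cancels the unknown excitation time and the source wavelet (the squared wavelet term that plain cross-correlation retains). The water-level parameter eps is an illustrative stabilization choice.

    ```python
    import numpy as np

    def deconvolve_gather(gather, master, eps=1e-3):
        """Regularized spectral division of every trace by the master trace."""
        M = np.fft.rfft(master)
        G = np.fft.rfft(gather, axis=1)
        D = G * np.conj(M) / (np.abs(M) ** 2 + eps * np.max(np.abs(M)) ** 2)
        return np.fft.irfft(D, n=gather.shape[1], axis=1)

    rng = np.random.default_rng(4)
    wavelet = rng.standard_normal(32)           # unknown source wavelet
    gather = np.zeros((3, 256))
    for tr, d in enumerate([5, 9, 14]):         # one event with moveout across 3 traces
        gather[tr, d:d + 32] = wavelet
    virtual = deconvolve_gather(gather, gather[0])
    print(np.argmax(virtual, axis=1))           # relative delays [0, 4, 9] survive
    ```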

  2. A dynamical regularization algorithm for solving inverse source problems of elliptic partial differential equations

    NASA Astrophysics Data System (ADS)

    Zhang, Ye; Gong, Rongfang; Cheng, Xiaoliang; Gulliksson, Mårten

    2018-06-01

    This study considers the inverse source problem for elliptic partial differential equations with both Dirichlet and Neumann boundary data. The unknown source term is to be determined by additional boundary conditions. Unlike the existing methods found in the literature, which usually employ the first-order in time gradient-like system (such as the steepest descent methods) for numerically solving the regularized optimization problem with a fixed regularization parameter, we propose a novel method with a second-order in time dissipative gradient-like system and a dynamical selected regularization parameter. A damped symplectic scheme is proposed for the numerical solution. Theoretical analysis is given for both the continuous model and the numerical algorithm. Several numerical examples are provided to show the robustness of the proposed algorithm.
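
    A minimal sketch of the second-order-in-time dissipative idea, applied to a generic regularized least-squares functional, is shown below; the damping, step size and fixed regularization weight are illustrative simplifications of the paper's damped symplectic scheme with a dynamically selected regularization parameter.

    ```python
    import numpy as np

    def dynamical_solve(A, b, lam=1e-6, eta=2.0, h=0.05, steps=2000):
        """Integrate x'' + eta*x' = -grad J(x), J(x) = ||Ax-b||^2/2 + lam*||x||^2/2."""
        x = np.zeros(A.shape[1]); v = np.zeros_like(x)
        for _ in range(steps):
            grad = A.T @ (A @ x - b) + lam * x
            v = (v - h * grad) / (1.0 + h * eta)   # implicit treatment of the damping term
            x = x + h * v                          # position update (semi-implicit Euler)
        return x

    rng = np.random.default_rng(3)
    A = rng.random((30, 10)); x_true = rng.random(10)
    b = A @ x_true                                 # noise-free data
    err = np.linalg.norm(dynamical_solve(A, b) - x_true)
    print(err)                                     # small; decreases with more steps
    ```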

  3. Automated source term and wind parameter estimation for atmospheric transport and dispersion applications

    NASA Astrophysics Data System (ADS)

    Bieringer, Paul E.; Rodriguez, Luna M.; Vandenberghe, Francois; Hurst, Jonathan G.; Bieberbach, George; Sykes, Ian; Hannan, John R.; Zaragoza, Jake; Fry, Richard N.

    2015-12-01

    Accurate simulations of the atmospheric transport and dispersion (AT&D) of hazardous airborne materials rely heavily on the source term parameters necessary to characterize the initial release and the meteorological conditions that drive the downwind dispersion. In many cases the source parameters are not known and are consequently based on rudimentary assumptions. This is particularly true of accidental releases and the intentional releases associated with terrorist incidents. When available, meteorological observations are often not representative of the conditions at the location of the release, and the use of these non-representative meteorological conditions can result in significant errors in the hazard assessments downwind of the sensors, even when the other source parameters are accurately characterized. Here, we describe a computationally efficient methodology to characterize both the release source parameters and the low-level winds (e.g., winds near the surface) required to produce a refined downwind hazard. This methodology, known as the Variational Iterative Refinement Source Term Estimation (STE) Algorithm (VIRSA), consists of a combination of modeling systems. These systems include a back-trajectory based source inversion method, a forward Gaussian puff dispersion model, and a variational refinement algorithm that uses both a simple forward AT&D model that is a surrogate for the more complex Gaussian puff model and a formal adjoint of this surrogate model. The back-trajectory based method is used to calculate a "first guess" source estimate based on the available observations of the airborne contaminant plume and atmospheric conditions. The variational refinement algorithm is then used to iteratively refine the first-guess STE parameters and meteorological variables. The algorithm has been evaluated across a wide range of scenarios of varying complexity. It has been shown to improve the estimated source location by several hundred percent (normalized by the distance from the source to the closest sampler) and to improve mass estimates by several orders of magnitude. Furthermore, it can operate in scenarios with inconsistencies between the wind and airborne contaminant sensor observations, adjusting the wind to provide a better match between the hazard prediction and the observations.

  4. Does the Method of Weight Loss Effect Long-Term Changes in Weight, Body Composition or Chronic Disease Risk Factors in Overweight or Obese Adults? A Systematic Review

    PubMed Central

    Washburn, Richard A.; Szabo, Amanda N.; Lambourne, Kate; Willis, Erik A.; Ptomey, Lauren T.; Honas, Jeffery J.; Herrmann, Stephen D.; Donnelly, Joseph E.

    2014-01-01

    Background Differences in biological changes from weight loss by energy restriction and/or exercise may be associated with differences in long-term weight loss/regain. Objective To assess the effect of weight loss method on long-term changes in weight, body composition and chronic disease risk factors. Data Sources PubMed and Embase were searched (January 1990-October 2013) for studies with data on the effect of energy restriction, exercise (aerobic and resistance) on long-term weight loss. Twenty articles were included in this review. Study Eligibility Criteria Primary source, peer reviewed randomized trials published in English with an active weight loss period of >6 months, or active weight loss with a follow-up period of any duration, conducted in overweight or obese adults were included. Study Appraisal and Synthesis Methods Considerable heterogeneity across trials existed for important study parameters, therefore a meta-analysis was considered inappropriate. Results were synthesized and grouped by comparisons (e.g. diet vs. aerobic exercise, diet vs. diet + aerobic exercise etc.) and study design (long-term or weight loss/follow-up). Results Forty percent of trials reported significantly greater long-term weight loss with diet compared with aerobic exercise, while results for differences in weight regain were inconclusive. Diet+aerobic exercise resulted in significantly greater weight loss than diet alone in 50% of trials. However, weight regain (∼55% of loss) was similar in diet and diet+aerobic exercise groups. Fat-free mass tended to be preserved when interventions included exercise. PMID:25333384

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Woodroffe, J. R.; Brito, T. V.; Jordanova, V. K.

    In the standard practice of neutron multiplicity counting, the first three sampled factorial moments of the event-triggered neutron count distribution are used to quantify the three main neutron source terms: the effective mass of spontaneously fissile material, the relative (α,n) production, and the induced fission source responsible for multiplication. Our study compares three methods to quantify the statistical uncertainty of the estimated mass: the bootstrap method, propagation of variance through moments, and the statistical analysis of cycle data method. Each of the three methods was implemented on a set of four different NMC measurements, held at the JRC laboratory in Ispra, Italy, sampling four different Pu samples in a standard Plutonium Scrap Multiplicity Counter (PSMC) well counter.

  6. Laser induced heat source distribution in bio-tissues

    NASA Astrophysics Data System (ADS)

    Li, Xiaoxia; Fan, Shifu; Zhao, Youquan

    2006-09-01

    In numerical simulations of laser-tissue thermal interaction, the light fluence rate distribution must be formulated and incorporated into the source term of the heat transfer equation. Usually the solution of the radiative transport equation is given for extreme conditions such as full absorption (Lambert-Beer law), full scattering (Kubelka-Munk theory), or scattering-dominated media (diffusion approximation). Under other conditions, these solutions introduce different errors. The widely used Monte Carlo simulation (MCS) is more universal and exact, but it has difficulty dealing with dynamic parameters and fast simulation, and its area partition pattern has limits when the finite element method (FEM) is applied to solve the bio-heat transfer partial differential equation. Laser heat source plots of the above methods differ considerably from MCS. To solve this problem, by analyzing the different optical actions such as reflection, scattering and absorption on laser-induced heat generation in bio-tissue, a new approach was developed that combines a modified beam-broadening model with the diffusion approximation model. First, the scattering coefficient in the beam-broadening model was replaced by the reduced scattering coefficient, which is more reasonable when scattering is treated as anisotropic. Second, the attenuation coefficient was replaced by the effective attenuation coefficient in scattering-dominated turbid bio-tissue. The computational results of the modified method were compared with Monte Carlo simulation and showed that the model provides more reasonable predictions of the heat source term distribution than past methods. Such research is useful for explaining the physical characteristics of the heat source in the heat transfer equation, establishing an effective photo-thermal model, and providing a theoretical reference for related laser medicine experiments.

  7. Rhythmic entrainment source separation: Optimizing analyses of neural responses to rhythmic sensory stimulation.

    PubMed

    Cohen, Michael X; Gulbinaite, Rasa

    2017-02-15

    Steady-state evoked potentials (SSEPs) are rhythmic brain responses to rhythmic sensory stimulation, and are often used to study perceptual and attentional processes. We present a data analysis method for maximizing the signal-to-noise ratio of the narrow-band steady-state response in the frequency and time-frequency domains. The method, termed rhythmic entrainment source separation (RESS), is based on denoising source separation approaches that take advantage of the simultaneous but differential projection of neural activity to multiple electrodes or sensors. Our approach is a combination and extension of existing multivariate source separation methods. We demonstrate that RESS performs well on both simulated and empirical data, and outperforms conventional SSEP analysis methods based on selecting electrodes with the strongest SSEP response, as well as several other linear spatial filters. We also discuss the potential confound of overfitting, whereby the filter captures noise in absence of a signal. Matlab scripts are available to replicate and extend our simulations and methods. We conclude with some practical advice for optimizing SSEP data analyses and interpreting the results. Copyright © 2016 Elsevier Inc. All rights reserved.
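
    The spatial-filter core shared by RESS-style methods can be sketched as a generalized eigendecomposition that maximizes power at the stimulation frequency relative to broadband activity; the narrowband filtering below is a crude FFT mask, and the covariance shrinkage of the paper is omitted, so this is an assumption-laden simplification rather than the published pipeline.

    ```python
    import numpy as np
    from scipy.linalg import eigh

    def bandpass(X, fs, f0, bw):
        """Crude FFT-mask band-pass of channels-x-time data around f0."""
        F = np.fft.rfft(X, axis=1)
        freqs = np.fft.rfftfreq(X.shape[1], 1.0 / fs)
        F[:, np.abs(freqs - f0) > bw] = 0.0
        return np.fft.irfft(F, n=X.shape[1], axis=1)

    rng = np.random.default_rng(5)
    fs, f0 = 250.0, 12.0
    t = np.arange(0, 10, 1 / fs)
    X = np.outer(rng.standard_normal(8), np.sin(2 * np.pi * f0 * t))  # 12 Hz SSEP in 8 channels
    X += 0.5 * rng.standard_normal(X.shape)                           # broadband noise

    S = np.cov(bandpass(X, fs, f0, 1.0))       # covariance at the stimulation frequency
    R = np.cov(X)                              # broadband reference covariance
    evals, evecs = eigh(S, R)                  # generalized eigendecomposition
    w = evecs[:, -1]                           # spatial filter with maximal S/R power ratio
    print((w @ X).shape)                       # single component time series
    ```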

  8. A new DOD and DOA estimation method for MIMO radar

    NASA Astrophysics Data System (ADS)

    Gong, Jian; Lou, Shuntian; Guo, Yiduo

    2018-04-01

    The battlefield electromagnetic environment is becoming more and more complex, and MIMO radar will inevitably be affected by coherent and non-stationary noise. To solve this problem, an angle estimation method based on the oblique projection operator and Toeplitz matrix reconstruction is proposed. Through Toeplitz matrix reconstruction, non-stationary noise is transformed into Gaussian white noise, and the oblique projection operator is then used to separate independent and correlated sources. Finally, simulations are carried out to verify the performance of the proposed algorithm in terms of angle estimation and source overload.

  9. Understanding cancer survivors' information needs and information-seeking behaviors for complementary and alternative medicine from short- to long-term survival: a mixed-methods study.

    PubMed

    Scarton, Lou Ann; Del Fiol, Guilherme; Oakley-Girvan, Ingrid; Gibson, Bryan; Logan, Robert; Workman, T Elizabeth

    2018-01-01

    The research examined complementary and alternative medicine (CAM) information-seeking behaviors and preferences from short- to long-term cancer survival, including goals, motivations, and information sources. A mixed-methods approach was used with cancer survivors from the "Assessment of Patients' Experience with Cancer Care" 2004 cohort. Data collection included a mail survey and phone interviews using the critical incident technique (CIT). Seventy survivors from the 2004 study responded to the survey, and eight participated in the CIT interviews. Quantitative results showed that CAM usage did not change significantly between 2004 and 2015. The following themes emerged from the CIT: families' and friends' provision of the initial introduction to a CAM, use of CAM to manage the emotional and psychological impact of cancer, utilization of trained CAM practitioners, and online resources as a prominent source for CAM information. The majority of participants expressed an interest in an online information-sharing portal for CAM. Patients continue to use CAM well into long-term cancer survivorship. Finding trustworthy sources for information on CAM presents many challenges such as reliability of source, conflicting information on efficacy, and unknown interactions with conventional medications. Study participants expressed interest in an online portal to meet these needs through patient testimonials and linkage of claims to the scientific literature. Such a portal could also aid medical librarians and clinicians in locating and evaluating CAM information on behalf of patients.

  10. Stable source reconstruction from a finite number of measurements in the multi-frequency inverse source problem

    NASA Astrophysics Data System (ADS)

    Karamehmedović, Mirza; Kirkeby, Adrian; Knudsen, Kim

    2018-06-01

    We consider the multi-frequency inverse source problem for the scalar Helmholtz equation in the plane. The goal is to reconstruct the source term in the equation from measurements of the solution on a surface outside the support of the source. We study the problem in a certain finite dimensional setting: from measurements made at a finite set of frequencies we uniquely determine and reconstruct sources in a subspace spanned by finitely many Fourier–Bessel functions. Further, we obtain a constructive criterion for identifying a minimal set of measurement frequencies sufficient for reconstruction, and under an additional, mild assumption, the reconstruction method is shown to be stable. Our analysis is based on a singular value decomposition of the source-to-measurement forward operators and the distribution of positive zeros of the Bessel functions of the first kind. The reconstruction method is implemented numerically and our theoretical findings are supported by numerical experiments.

  11. Neutron crosstalk between liquid scintillators

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Verbeke, J. M.; Prasad, M. K.; Snyderman, N. J.

    2015-05-01

    We propose a method to quantify the fractions of neutrons scattering between liquid scintillators. Using a spontaneous fission source, this method can be utilized to quickly characterize an array of liquid scintillators in terms of crosstalk. The point model theory due to Feynman is corrected to account for these multiple scatterings. Using spectral information measured by the liquid scintillators, fractions of multiple scattering can be estimated, and mass reconstruction of fissile materials under investigation can be improved. Monte Carlo simulations of mono-energetic neutron sources were performed to estimate neutron crosstalk. A californium source in an array of liquid scintillators was modeled to illustrate the improvement of the mass reconstruction.

  12. Passive Localization of Multiple Sources Using Widely-Spaced Arrays With Application to Marine Mammals

    DTIC Science & Technology

    2008-09-30

    By developing methods to simultaneously track multiple vocalizing marine mammals, we hope to contribute to the fields of marine mammal bioacoustics, ecology, and anthropogenic impact mitigation. The long-term goal of our research is to develop algorithms that use widely-spaced arrays (Award N00014-05-1-0074, OA Graduate Traineeship for E-M Nosal).

  13. An efficient and stable hydrodynamic model with novel source term discretization schemes for overland flow and flood simulations

    NASA Astrophysics Data System (ADS)

    Xia, Xilin; Liang, Qiuhua; Ming, Xiaodong; Hou, Jingming

    2017-05-01

    Numerical models solving the full 2-D shallow water equations (SWEs) have been increasingly used to simulate overland flows and better understand the transient flow dynamics of flash floods in a catchment. However, there still exist key challenges that have not yet been resolved for the development of fully dynamic overland flow models, related to (1) the difficulty of maintaining numerical stability and accuracy in the limit of disappearing water depth and (2) inaccurate estimation of velocities and discharges on slopes as a result of strong nonlinearity of friction terms. This paper aims to tackle these key research challenges and present a new numerical scheme for accurately and efficiently modeling large-scale transient overland flows over complex terrains. The proposed scheme features a novel surface reconstruction method (SRM) to correctly compute slope source terms and maintain numerical stability at small water depth, and a new implicit discretization method to handle the highly nonlinear friction terms. The resulting shallow water overland flow model is first validated against analytical and experimental test cases and then applied to simulate a hypothetic rainfall event in the 42 km2 Haltwhistle Burn, UK.

  14. Modeling and observations of an elevated, moving infrasonic source: Eigenray methods.

    PubMed

    Blom, Philip; Waxler, Roger

    2017-04-01

    The acoustic ray tracing relations are extended by the inclusion of auxiliary parameters describing variations in the spatial ray coordinates and eikonal vector due to changes in the initial conditions. Computation of these parameters allows one to define the geometric spreading factor along individual ray paths and assists in identification of caustic surfaces so that phase shifts can be easily identified. A method is developed leveraging the auxiliary parameters to identify propagation paths connecting specific source-receiver geometries, termed eigenrays. The newly introduced method is found to be highly efficient in cases where propagation is non-planar due to horizontal variations in the propagation medium or the presence of cross winds. The eigenray method is utilized in analysis of infrasonic signals produced by a multi-stage sounding rocket launch with promising results for applications of tracking aeroacoustic sources in the atmosphere and specifically to analysis of motor performance during dynamic tests.

  15. Source Credibility in Tobacco Control Messaging

    PubMed Central

    Schmidt, Allison M.; Ranney, Leah M.; Pepper, Jessica K.; Goldstein, Adam O.

    2016-01-01

    Objectives Perceived credibility of a message’s source can affect persuasion. This paper reviews how beliefs about the source of tobacco control messages may encourage attitude and behavior change. Methods We conducted a series of searches of the peer-reviewed literature using terms from communication and public health fields. We reviewed research on source credibility, its underlying concepts, and its relation to the persuasiveness of tobacco control messages. Results We recommend an agenda for future research to bridge the gaps between communication literature on source credibility and tobacco control research. Our recommendations are to study the impact of source credibility on persuasion with long-term behavior change outcomes, in different populations and demographic groups, by developing new credibility measures that are topic- and organization-specific, by measuring how credibility operates across media platforms, and by identifying factors that enhance credibility and persuasion. Conclusions This manuscript reviews the state of research on source credibility and identifies gaps that are maximally relevant to tobacco control communication. Knowing first whether a source is perceived as credible, and second, how to enhance perceived credibility, can inform the development of future tobacco control campaigns and regulatory communications. PMID:27525298

  16. A high-order relaxation method with projective integration for solving nonlinear systems of hyperbolic conservation laws

    NASA Astrophysics Data System (ADS)

    Lafitte, Pauline; Melis, Ward; Samaey, Giovanni

    2017-07-01

    We present a general, high-order, fully explicit relaxation scheme which can be applied to any system of nonlinear hyperbolic conservation laws in multiple dimensions. The scheme consists of two steps. In a first (relaxation) step, the nonlinear hyperbolic conservation law is approximated by a kinetic equation with stiff BGK source term. Then, this kinetic equation is integrated in time using a projective integration method. After taking a few small (inner) steps with a simple, explicit method (such as direct forward Euler) to damp out the stiff components of the solution, the time derivative is estimated and used in an (outer) Runge-Kutta method of arbitrary order. We show that, with an appropriate choice of inner step size, the time step restriction on the outer time step is similar to the CFL condition for the hyperbolic conservation law. Moreover, the number of inner time steps is also independent of the stiffness of the BGK source term. We discuss stability and consistency, and illustrate with numerical results (linear advection, Burgers' equation and the shallow water and Euler equations) in one and two spatial dimensions.
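
    A first-order toy version of the inner/outer (projective) step is sketched below for a stiff linear system; the paper wraps this idea in a higher-order Runge-Kutta outer method and applies it to a kinetic BGK relaxation of the conservation law, so the constants here are purely illustrative.

    ```python
    import numpy as np

    def projective_euler(f, y, dt_inner, n_inner, dt_outer, n_outer):
        for _ in range(n_outer):
            for _ in range(n_inner):            # inner steps damp the stiff modes
                y_prev = y
                y = y + dt_inner * f(y)
            slope = (y - y_prev) / dt_inner     # estimate of the slow time derivative
            y = y + (dt_outer - n_inner * dt_inner) * slope  # projective (outer) jump
        return y

    # stiff linear system: fast rate -1000, slow rate -1; dt_inner = 1/1000 kills
    # the fast mode, after which the outer step can be 50x larger
    f = lambda y: np.array([-1000.0, -1.0]) * y
    y = projective_euler(f, np.array([1.0, 1.0]), 1e-3, 2, 0.05, 20)
    print(y, np.exp(-1.0))                      # slow mode tracks exp(-t) at t = 1
    ```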

  17. Investigation of Magnetotelluric Source Effect Based on Twenty Years of Telluric and Geomagnetic Observation

    NASA Astrophysics Data System (ADS)

    Kis, A.; Lemperger, I.; Wesztergom, V.; Menvielle, M.; Szalai, S.; Novák, A.; Hada, T.; Matsukiyo, S.; Lethy, A. M.

    2016-12-01

    Magnetotelluric method is widely applied for investigation of subsurface structures by imaging the spatial distribution of electric conductivity. The method is based on the experimental determination of surface electromagnetic impedance tensor (Z) by surface geomagnetic and telluric registrations in two perpendicular orientation. In practical explorations the accurate estimation of Z necessitates the application of robust statistical methods for two reasons:1) the geomagnetic and telluric time series' are contaminated by man-made noise components and2) the non-homogeneous behavior of ionospheric current systems in the period range of interest (ELF-ULF and longer periods) results in systematic deviation of the impedance of individual time windows.Robust statistics manage both load of Z for the purpose of subsurface investigations. However, accurate analysis of the long term temporal variation of the first and second statistical moments of Z may provide valuable information about the characteristics of the ionospheric source current systems. Temporal variation of extent, spatial variability and orientation of the ionospheric source currents has specific effects on the surface impedance tensor. Twenty year long geomagnetic and telluric recordings of the Nagycenk Geophysical Observatory provides unique opportunity to reconstruct the so called magnetotelluric source effect and obtain information about the spatial and temporal behavior of ionospheric source currents at mid-latitudes. Detailed investigation of time series of surface electromagnetic impedance tensor has been carried out in different frequency classes of the ULF range. The presentation aims to provide a brief review of our results related to long term periodic modulations, up to solar cycle scale and about eventual deviations of the electromagnetic impedance and so the reconstructed equivalent ionospheric source effects.

  18. Enhanced Elliptic Grid Generation

    NASA Technical Reports Server (NTRS)

    Kaul, Upender K.

    2007-01-01

    An enhanced method of elliptic grid generation has been invented. Whereas prior methods require user input of certain grid parameters, this method provides for these parameters to be determined automatically. "Elliptic grid generation" signifies generation of generalized curvilinear coordinate grids through solution of elliptic partial differential equations (PDEs). Usually, such grids are fitted to bounding bodies and used in numerical solution of other PDEs like those of fluid flow, heat flow, and electromagnetics. Such a grid is smooth and has continuous first and second derivatives (and possibly also continuous higher-order derivatives), grid lines are appropriately stretched or clustered, and grid lines are orthogonal or nearly so over most of the grid domain. The source terms in the grid-generating PDEs (hereafter called "defining" PDEs) make it possible for the grid to satisfy requirements for clustering and orthogonality properties in the vicinity of specific surfaces in three dimensions or in the vicinity of specific lines in two dimensions. The grid parameters in question are decay parameters that appear in the source terms of the inhomogeneous defining PDEs. The decay parameters are characteristic lengths in exponential- decay factors that express how the influences of the boundaries decrease with distance from the boundaries. These terms govern the rates at which distance between adjacent grid lines change with distance from nearby boundaries. Heretofore, users have arbitrarily specified decay parameters. However, the characteristic lengths are coupled with the strengths of the source terms, such that arbitrary specification could lead to conflicts among parameter values. Moreover, the manual insertion of decay parameters is cumbersome for static grids and infeasible for dynamically changing grids. In the present method, manual insertion and user specification of decay parameters are neither required nor allowed. Instead, the decay parameters are determined automatically as part of the solution of the defining PDEs. Depending on the shape of the boundary segments and the physical nature of the problem to be solved on the grid, the solution of the defining PDEs may provide for rates of decay to vary along and among the boundary segments and may lend itself to interpretation in terms of one or more physical quantities associated with the problem.
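
    For reference, the homogeneous core of elliptic grid generation (no source terms, hence no decay parameters) reduces to iteratively averaging interior grid-point coordinates, i.e. the Jacobi solution of x_ξξ + x_ηη = 0; the paper's contribution concerns the inhomogeneous source terms and automatically determined decay parameters omitted from this sketch.

    ```python
    import numpy as np

    def laplace_grid(x, y, iters=500):
        """Jacobi relaxation of the homogeneous elliptic grid equations."""
        for _ in range(iters):
            x[1:-1, 1:-1] = 0.25 * (x[2:, 1:-1] + x[:-2, 1:-1] + x[1:-1, 2:] + x[1:-1, :-2])
            y[1:-1, 1:-1] = 0.25 * (y[2:, 1:-1] + y[:-2, 1:-1] + y[1:-1, 2:] + y[1:-1, :-2])
        return x, y

    # boundary-fitted domain: bottom wall follows a bump, other sides straight
    n = 33
    xi, eta = np.meshgrid(np.linspace(0, 1, n), np.linspace(0, 1, n), indexing="ij")
    x, y = xi.copy(), eta.copy()
    y[:, 0] = 0.1 * np.exp(-50.0 * (x[:, 0] - 0.5) ** 2)   # bumped bottom boundary
    x, y = laplace_grid(x, y)
    print(x.shape, y[16, 0])       # smooth interior grid fitted to the bumped wall
    ```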

  19. Fourth order Douglas implicit scheme for solving three dimension reaction diffusion equation with non-linear source term

    NASA Astrophysics Data System (ADS)

    Hasnain, Shahid; Saqib, Muhammad; Mashat, Daoud Suleiman

    2017-07-01

    This paper presents a numerical approximation to the non-linear three-dimensional reaction diffusion equation with a non-linear source term from population genetics. Since various initial and boundary value problems exist in three-dimensional reaction diffusion phenomena, which are studied numerically by different numerical methods, we use finite difference schemes (Alternating Direction Implicit and Fourth Order Douglas Implicit) to approximate the solution. Accuracy is studied in terms of the L2, L∞ and relative error norms on randomly selected grids along time levels for comparison with analytical results. The test example demonstrates the accuracy, efficiency and versatility of the proposed schemes. Numerical results show that the Fourth Order Douglas Implicit scheme is very efficient and reliable for solving the 3-D non-linear reaction diffusion equation.

  20. The numerical dynamic for highly nonlinear partial differential equations

    NASA Technical Reports Server (NTRS)

    Lafon, A.; Yee, H. C.

    1992-01-01

    Problems associated with the numerical computation of highly nonlinear equations in computational fluid dynamics are set forth and analyzed in terms of the potential ranges of spurious behaviors. A reaction-convection equation with a nonlinear source term is employed to evaluate the effects related to spatial and temporal discretizations. The discretization of the source term is described according to several methods, and the various techniques are shown to have a significant effect on the stability of the spurious solutions. Traditional linearized stability analyses cannot provide the level of confidence required for accurate fluid dynamics computations, and the incorporation of nonlinear analysis is proposed. Nonlinear analysis based on nonlinear dynamical systems complements the conventional linear approach and is valuable in the analysis of hypersonic aerodynamics and combustion phenomena.
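
    The time-step sensitivity described here is easy to reproduce on a model problem. The sketch below applies explicit Euler to a logistic source term (the ODE and step sizes are our own choices, not the paper's test case): below the linearized stability limit the iteration converges to the true steady state, while larger steps settle onto spurious periodic orbits that are purely numerical artifacts.

```python
import numpy as np

# Explicit Euler applied to the logistic source term u' = u(1 - u).
# The update u_{n+1} = u_n + dt*u_n*(1 - u_n) is a logistic map with
# r = 1 + dt in disguise: for dt < 2 it converges to the true steady
# state u = 1, while larger dt yields spurious period-2 orbits or chaos.

def long_time_states(dt, u0=0.1, n_transient=2000, keep=8):
    u = u0
    for _ in range(n_transient):
        u = u + dt * u * (1.0 - u)
    tail = []
    for _ in range(keep):                 # sample the long-time behaviour
        u = u + dt * u * (1.0 - u)
        tail.append(u)
    return np.round(tail, 4)

for dt in (0.5, 1.8, 2.3, 2.6):
    print(dt, long_time_states(dt))
# dt = 0.5, 1.8 -> fixed point 1.0 (true steady state)
# dt = 2.3      -> spurious period-2 orbit
# dt = 2.6      -> chaotic band, never reaching a steady state
```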

  1. Analysis of an entrainment model of the jet in a crossflow

    NASA Technical Reports Server (NTRS)

    Chang, H. S.; Werner, J. E.

    1972-01-01

A theoretical model has been proposed for the problem of a round jet in an incompressible crossflow. The method of matched asymptotic expansions has been applied to this problem. For the solution to the flow problem in the inner region, the re-entrant wake flow model was used, with the re-entrant flow representing the fluid entrained by the jet. Higher order corrections are obtained in terms of this basic solution. The perturbation terms in the outer region were found to be a line distribution of doublets and sources. The line distribution of sources represents the combined effect of the entrainment and the displacement.

  2. The Linearized Bregman Method for Frugal Full-waveform Inversion with Compressive Sensing and Sparsity-promoting

    NASA Astrophysics Data System (ADS)

    Chai, Xintao; Tang, Genyang; Peng, Ronghua; Liu, Shaoyong

    2018-03-01

Full-waveform inversion (FWI) reconstructs the subsurface properties from acquired seismic data via minimization of the misfit between observed and simulated data. However, FWI suffers from considerable computational costs resulting from the numerical solution of the wave equation for each source at each iteration. To reduce the computational burden, constructing supershots by combining several sources (aka source encoding) reduces the number of simulations required at each iteration, but it gives rise to crosstalk artifacts because of interference between the individual sources of the supershot. A modified Gauss-Newton FWI (MGNFWI) approach showed that as long as the difference between the initial and true models permits a sparse representation, ℓ1-norm constrained model updates suppress subsampling-related artifacts. However, the spectral-projected gradient ℓ1 (SPGℓ1) algorithm employed by MGNFWI is rather complicated, which makes its implementation difficult. To facilitate realistic applications, we adapt a linearized Bregman (LB) method to sparsity-promoting FWI (SPFWI) because of the efficiency and simplicity of LB in the framework of ℓ1-norm constrained optimization problems and compressive sensing. Numerical experiments performed with the BP Salt model, the Marmousi model and the BG Compass model verify the following points. The FWI result obtained with LB solving the ℓ1-norm sparsity-promoting problem for the model update outperforms that generated by solving the ℓ2-norm problem in terms of crosstalk elimination and high-fidelity results. The simpler LB method performs comparably and even superiorly to the complicated SPGℓ1 method in terms of computational efficiency and model quality, making the LB method a viable alternative for realistic implementations of SPFWI.
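
    For readers unfamiliar with the algorithm, a generic linearized Bregman iteration for the ℓ1-constrained problem min ||x||1 s.t. Ax = b looks as follows. This is a minimal sketch on a random compressive-sensing problem, not the authors' FWI code; the step size and threshold are illustrative choices.

```python
import numpy as np

# Linearized Bregman iteration: a gradient step on the residual in the
# dual variable v, followed by soft-thresholding to enforce sparsity.

def shrink(v, lam):
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

def linearized_bregman(A, b, lam=2.0, n_iter=2000):
    delta = 1.0 / np.linalg.norm(A, 2) ** 2   # step size, safe for convergence
    v = np.zeros(A.shape[1])
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        v += A.T @ (b - A @ x)                # gradient step on the residual
        x = delta * shrink(v, lam)            # soft-thresholding (sparsity)
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((60, 200))            # underdetermined system
x_true = np.zeros(200)
x_true[rng.choice(200, 8, replace=False)] = rng.standard_normal(8)
b = A @ x_true
x_rec = linearized_bregman(A, b)
print(np.linalg.norm(x_rec - x_true) / np.linalg.norm(x_true))  # relative error
```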

  3. Weighted Regressions on Time, Discharge, and Season (WRTDS), with an application to Chesapeake Bay River inputs

    USGS Publications Warehouse

    Hirsch, Robert M.; Moyer, Douglas; Archfield, Stacey A.

    2010-01-01

    A new approach to the analysis of long-term surface water-quality data is proposed and implemented. The goal of this approach is to increase the amount of information that is extracted from the types of rich water-quality datasets that now exist. The method is formulated to allow for maximum flexibility in representations of the long-term trend, seasonal components, and discharge-related components of the behavior of the water-quality variable of interest. It is designed to provide internally consistent estimates of the actual history of concentrations and fluxes as well as histories that eliminate the influence of year-to-year variations in streamflow. The method employs the use of weighted regressions of concentrations on time, discharge, and season. Finally, the method is designed to be useful as a diagnostic tool regarding the kinds of changes that are taking place in the watershed related to point sources, groundwater sources, and surface-water nonpoint sources. The method is applied to datasets for the nine large tributaries of Chesapeake Bay from 1978 to 2008. The results show a wide range of patterns of change in total phosphorus and in dissolved nitrate plus nitrite. These results should prove useful in further examination of the causes of changes, or lack of changes, and may help inform decisions about future actions to reduce nutrient enrichment in the Chesapeake Bay and its watershed.
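
    The core computation is a locally weighted regression of ln(concentration) on time, discharge, and season. The sketch below shows one such estimate with tricube weights on synthetic data; the window half-widths and regressors are illustrative choices, not necessarily the published defaults.

```python
import numpy as np

# WRTDS-style estimate at one target (time, discharge) point: weighted
# regression of ln(c) on time, ln(Q) and seasonal terms, tricube weights.

def tricube(d, h):
    w = np.zeros_like(d)
    inside = np.abs(d) <= h
    w[inside] = (1.0 - (np.abs(d[inside]) / h) ** 3) ** 3
    return w

def wrtds_estimate(t, lnq, c, t0, lnq0, h_t=7.0, h_q=2.0, h_s=0.5):
    season = np.minimum((t - t0) % 1.0, (t0 - t) % 1.0)   # seasonal distance, yr
    w = tricube(t - t0, h_t) * tricube(lnq - lnq0, h_q) * tricube(season, h_s)
    X = np.column_stack([np.ones_like(t), t, lnq,
                         np.sin(2 * np.pi * t), np.cos(2 * np.pi * t)])
    sw = np.sqrt(w)
    beta, *_ = np.linalg.lstsq(X * sw[:, None], np.log(c) * sw, rcond=None)
    x0 = np.array([1.0, t0, lnq0, np.sin(2 * np.pi * t0), np.cos(2 * np.pi * t0)])
    return np.exp(x0 @ beta)   # note: ignores the log-retransformation bias

rng = np.random.default_rng(5)
t = rng.uniform(1978, 2008, 900)          # decimal years of samples
lnq = rng.normal(0.0, 1.0, 900)           # standardized ln discharge
c = np.exp(0.3 * np.sin(2 * np.pi * t) - 0.02 * (t - 1978)
           + 0.5 * lnq + rng.normal(0.0, 0.2, 900))
print(wrtds_estimate(t, lnq, c, t0=2000.5, lnq0=0.0))
```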

  4. Finite element solution to passive scalar transport behind line sources under neutral and unstable stratification

    NASA Astrophysics Data System (ADS)

    Liu, Chun-Ho; Leung, Dennis Y. C.

    2006-02-01

This study employed a direct numerical simulation (DNS) technique to contrast the plume behaviours and mixing of a passive scalar emitted from line sources (aligned with the spanwise direction) in neutrally and unstably stratified open-channel flows. The DNS model was developed using the Galerkin finite element method (FEM), employing trilinear brick elements with equal-order interpolating polynomials, that solved the momentum and continuity equations together with the conservation of energy and mass equations in incompressible flow. The second-order accurate fractional-step method was used to handle the implicit velocity-pressure coupling in incompressible flow. It also segregated the solution of the advection and diffusion terms, which were then integrated in time by the explicit third-order accurate Runge-Kutta method and the implicit second-order accurate Crank-Nicolson method, respectively. The buoyancy term under unstable stratification was integrated in time explicitly by the first-order accurate Euler method. The DNS FEM model calculated the scalar-plume development and the mean plume path. In particular, it calculated the plume meandering in the wall-normal direction under unstable stratification, which agreed well with laboratory and field measurements, as well as previous modelling results available in the literature.

  5. Infrared and visible image fusion with spectral graph wavelet transform.

    PubMed

    Yan, Xiang; Qin, Hanlin; Li, Jia; Zhou, Huixin; Zong, Jing-guo

    2015-09-01

Infrared and visible image fusion is a popular topic in image analysis because it can integrate complementary information and obtain reliable and accurate descriptions of scenes. Multiscale transform theory, as a signal representation method, is widely used in image fusion. In this paper, a novel infrared and visible image fusion method is proposed based on the spectral graph wavelet transform (SGWT) and a bilateral filter. The main novelty of this study is that SGWT is used for image fusion. On the one hand, source images are decomposed by SGWT in its transform domain. The proposed approach not only effectively preserves the details of different source images, but also excellently represents the irregular areas of the source images. On the other hand, a novel weighted average method based on a bilateral filter is proposed to fuse low- and high-frequency subbands by taking advantage of the spatial consistency of natural images. Experimental results demonstrate that the proposed method outperforms seven recently proposed image fusion methods in terms of both visual effect and objective evaluation metrics.

  6. Next Generation of Leaching Tests

    EPA Science Inventory

    A corresponding abstract has been cleared for this presentation. The four methods comprising the Leaching Environmental Assessment Framework are described along with the tools to support implementation of the more rigorous and accurate source terms that are developed using LEAF ...

  7. Inverse modelling of radionuclide release rates using gamma dose rate observations

    NASA Astrophysics Data System (ADS)

    Hamburger, Thomas; Evangeliou, Nikolaos; Stohl, Andreas; von Haustein, Christoph; Thummerer, Severin; Wallner, Christian

    2015-04-01

Severe accidents in nuclear power plants such as the historical accident in Chernobyl 1986 or the more recent disaster in the Fukushima Dai-ichi nuclear power plant in 2011 have drastic impacts on the population and environment. Observations and dispersion modelling of the released radionuclides help to assess the regional impact of such nuclear accidents. Modelling the increase of regional radionuclide activity concentrations, which results from nuclear accidents, involves a multiplicity of uncertainties. One of the most significant uncertainties is the estimation of the source term, that is, the time-dependent quantification of the released spectrum of radionuclides during the course of the nuclear accident. The quantification of the source term may either remain uncertain (e.g. Chernobyl, Devell et al., 1995) or rely on estimates given by the operators of the nuclear power plant. Precise measurements are mostly missing due to practical limitations during the accident. The release rates of radionuclides at the accident site can be estimated using inverse modelling (Davoine and Bocquet, 2007). The accuracy of the method depends, among other factors, on the availability, reliability and the resolution in time and space of the observations used. Radionuclide activity concentrations are observed on a relatively sparse grid, and the temporal resolution of available data may be low, on the order of hours or a day. Gamma dose rates, on the other hand, are observed routinely on a much denser grid and at higher temporal resolution, and therefore provide a wider basis for inverse modelling (Saunier et al., 2013). We present a new inversion approach, which combines an atmospheric dispersion model and observations of radionuclide activity concentrations and gamma dose rates to obtain the source term of radionuclides. We use the Lagrangian particle dispersion model FLEXPART (Stohl et al., 1998; Stohl et al., 2005) to model the atmospheric transport of the released radionuclides. The inversion method uses a Bayesian formulation considering uncertainties for the a priori source term and the observations (Eckhardt et al., 2008; Stohl et al., 2012). The a priori information on the source term is a first guess. The gamma dose rate observations are used to improve the first guess and to retrieve a reliable source term. The details of this method will be presented at the conference. This work is funded by the Bundesamt für Strahlenschutz BfS, Forschungsvorhaben 3612S60026.

    References: Davoine, X. and Bocquet, M., Atmos. Chem. Phys., 7, 1549-1564, 2007. Devell, L., et al., OCDE/GD(96)12, 1995. Eckhardt, S., et al., Atmos. Chem. Phys., 8, 3881-3897, 2008. Saunier, O., et al., Atmos. Chem. Phys., 13, 11403-11421, 2013. Stohl, A., et al., Atmos. Environ., 32, 4245-4264, 1998. Stohl, A., et al., Atmos. Chem. Phys., 5, 2461-2474, 2005. Stohl, A., et al., Atmos. Chem. Phys., 12, 2313-2343, 2012.
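
    In the Bayesian formulation referenced above, the release rates minimize a quadratic cost combining observation misfit and deviation from the a priori source term. The sketch below solves that problem as one stacked least-squares system with diagonal error covariances; the sensitivity matrix M would come from the dispersion model (e.g. FLEXPART), and every number here is a placeholder.

```python
import numpy as np

# Generic sketch of the Bayesian source-term inversion: minimize
#   (y - M x)' R^-1 (y - M x) + (x - xa)' B^-1 (x - xa)
# for release rates x, given observations y and first guess xa.

def invert_source_term(M, y, xa, sig_obs, sig_apriori):
    A = np.vstack([M / sig_obs[:, None], np.diag(1.0 / sig_apriori)])
    b = np.concatenate([y / sig_obs, xa / sig_apriori])
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return np.maximum(x, 0.0)   # crude projection: release rates >= 0

rng = np.random.default_rng(1)
M = rng.uniform(0.0, 1e-12, (300, 24))     # 300 observations, 24 hourly rates
x_true = np.concatenate([np.zeros(6), 1e10 * np.ones(6), np.zeros(12)])
y = M @ x_true * (1.0 + 0.1 * rng.standard_normal(300))   # noisy synthetic obs
x_est = invert_source_term(M, y,
                           xa=np.full(24, 1e9),                 # first guess
                           sig_obs=np.maximum(0.1 * np.abs(y), 1e-4),
                           sig_apriori=np.full(24, 5e9))
```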

  8. Moment Tensor Analysis of Shallow Sources

    NASA Astrophysics Data System (ADS)

    Chiang, A.; Dreger, D. S.; Ford, S. R.; Walter, W. R.; Yoo, S. H.

    2015-12-01

A potential issue for moment tensor inversion of shallow seismic sources is that some moment tensor components have vanishing amplitudes at the free surface, which can result in bias in the moment tensor solution. The effects of the free surface on the stability of the moment tensor method become important as we continue to investigate and improve the capabilities of regional full moment tensor inversion for source-type identification and discrimination. It is important to understand these free-surface effects on discriminating shallow explosive sources for nuclear monitoring purposes. It may also be important in natural systems that have shallow seismicity, such as volcanoes and geothermal systems. In this study, we apply the moment tensor based discrimination method to the HUMMING ALBATROSS quarry blasts. These shallow chemical explosions, at approximately 10 m depth and recorded at up to several kilometers distance, represent a rather severe source-station geometry in terms of vanishing traction issues. We show that the method is capable of recovering a predominantly explosive source mechanism, and that the combined waveform and first-motion method enables the unique discrimination of these events. Recovering the correct yield using seismic moment estimates from moment tensor inversion remains challenging, but we can begin to put error bounds on our moment estimates using the NSS technique.

  9. Combining molecular fingerprints with multidimensional scaling analyses to identify the source of spilled oil from highly similar suspected oils.

    PubMed

    Zhou, Peiyu; Chen, Changshu; Ye, Jianjun; Shen, Wenjie; Xiong, Xiaofei; Hu, Ping; Fang, Hongda; Huang, Chuguang; Sun, Yongge

    2015-04-15

Oil fingerprints have been a powerful tool widely used for determining the source of spilled oil. In most cases, this tool works well. However, it is usually difficult to identify the source if the oil spill accident occurs during offshore petroleum exploration, due to the highly similar physiochemical characteristics of suspected oils from the same drilling platform. In this report, a case study from the waters of the South China Sea is presented, and multidimensional scaling analysis (MDS) is introduced to demonstrate how oil fingerprints can be combined with mathematical methods to identify the source of spilled oil from highly similar suspected sources. The results suggest that the MDS calculation based on oil fingerprints, subsequently integrated with specific biomarkers in spilled oils, is the most effective method, with great potential for determining the source among highly similar suspected oils. Copyright © 2015 Elsevier Ltd. All rights reserved.
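
    A minimal sketch of the workflow, with pairwise distances between diagnostic ratio vectors fed to metric MDS, might look as follows (assuming scikit-learn is available); the ratio table is invented for illustration.

```python
import numpy as np
from sklearn.manifold import MDS

# Place spill and suspected-source samples on a 2D map from pairwise
# distances between diagnostic biomarker ratios; nearby points indicate
# candidate matches. All values below are invented.

samples = ["spill", "source A", "source B", "source C"]
ratios = np.array([            # rows: samples, cols: diagnostic ratios
    [0.55, 1.10, 0.48, 0.92],
    [0.54, 1.12, 0.47, 0.93],  # source A: nearly identical to the spill
    [0.61, 1.05, 0.52, 0.88],
    [0.49, 1.21, 0.41, 0.99],
])
D = np.linalg.norm(ratios[:, None, :] - ratios[None, :, :], axis=-1)

coords = MDS(n_components=2, dissimilarity="precomputed",
             random_state=0).fit_transform(D)
for name, (px, py) in zip(samples, coords):
    print(f"{name:9s} {px:7.3f} {py:7.3f}")
```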

  10. Effects of volcano topography on seismic broad-band waveforms

    NASA Astrophysics Data System (ADS)

    Neuberg, Jürgen; Pointer, Tim

    2000-10-01

    Volcano seismology often deals with rather shallow seismic sources and seismic stations deployed in their near field. The complex stratigraphy on volcanoes and near-field source effects have a strong impact on the seismic wavefield, complicating the interpretation techniques that are usually employed in earthquake seismology. In addition, as most volcanoes have a pronounced topography, the interference of the seismic wavefield with the stress-free surface results in severe waveform perturbations that affect seismic interpretation methods. In this study we deal predominantly with the surface effects, but take into account the impact of a typical volcano stratigraphy as well as near-field source effects. We derive a correction term for plane seismic waves and a plane-free surface such that for smooth topographies the effect of the free surface can be totally removed. Seismo-volcanic sources radiate energy in a broad frequency range with a correspondingly wide range of different Fresnel zones. A 2-D boundary element method is employed to study how the size of the Fresnel zone is dependent on source depth, dominant wavelength and topography in order to estimate the limits of the plane wave approximation. This approximation remains valid if the dominant wavelength does not exceed twice the source depth. Further aspects of this study concern particle motion analysis to locate point sources and the influence of the stratigraphy on particle motions. Furthermore, the deployment strategy of seismic instruments on volcanoes, as well as the direct interpretation of the broad-band waveforms in terms of pressure fluctuations in the volcanic plumbing system, are discussed.

  11. Estimating the mass variance in neutron multiplicity counting-A comparison of approaches

    NASA Astrophysics Data System (ADS)

    Dubi, C.; Croft, S.; Favalli, A.; Ocherashvili, A.; Pedersen, B.

    2017-12-01

In the standard practice of neutron multiplicity counting, the first three sampled factorial moments of the event-triggered neutron count distribution are used to quantify the three main neutron source terms: the spontaneous fissile material effective mass, the relative (α, n) production and the induced fission source responsible for multiplication. This study compares three methods to quantify the statistical uncertainty of the estimated mass: the bootstrap method, propagation of variance through moments, and statistical analysis of cycle data. Each of the three methods was implemented on a set of four different NMC measurements, held at the JRC laboratory in Ispra, Italy, sampling four different Pu samples in a standard Plutonium Scrap Multiplicity Counter (PSMC) well counter.
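
    Of the three approaches, the bootstrap is the simplest to sketch: resample the per-cycle count data, recompute the factorial moments, and push them through the mass estimator. In the sketch below, `mass_from_moments` is a hypothetical stand-in for the point-model equations, and the cycle data are synthetic.

```python
import numpy as np

# Bootstrap uncertainty of a multiplicity-counting mass estimate.
# `mass_from_moments` is a placeholder, NOT the real point-model solver.

def factorial_moments(counts):
    n = np.asarray(counts, dtype=float)
    m1 = n.mean()
    m2 = (n * (n - 1)).mean() / 2.0
    m3 = (n * (n - 1) * (n - 2)).mean() / 6.0
    return m1, m2, m3

def mass_from_moments(m1, m2, m3):
    # Placeholder only: the real estimator solves the point-model
    # (Hage-Cifarelli type) equations for the effective 240Pu mass.
    return m2 / m1

def bootstrap_mass(cycles, n_boot=5000, seed=2):
    rng = np.random.default_rng(seed)
    masses = np.empty(n_boot)
    for b in range(n_boot):
        resample = rng.choice(cycles, size=len(cycles), replace=True)
        masses[b] = mass_from_moments(*factorial_moments(resample))
    return masses.mean(), masses.std(ddof=1)

cycles = np.random.default_rng(3).poisson(40.0, size=1000)  # synthetic cycles
mean_mass, sigma_mass = bootstrap_mass(cycles)
print(f"estimate = {mean_mass:.3f} +/- {sigma_mass:.3f} (arbitrary units)")
```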

  12. Estimating the mass variance in neutron multiplicity counting $-$ A comparison of approaches

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dubi, C.; Croft, S.; Favalli, A.

In the standard practice of neutron multiplicity counting, the first three sampled factorial moments of the event triggered neutron count distribution are used to quantify the three main neutron source terms: the spontaneous fissile material effective mass, the relative (α,n) production and the induced fission source responsible for multiplication. This study compares three methods to quantify the statistical uncertainty of the estimated mass: the bootstrap method, propagation of variance through moments, and statistical analysis of cycle data method. Each of the three methods was implemented on a set of four different NMC measurements, held at the JRC-laboratory in Ispra, Italy, sampling four different Pu samples in a standard Plutonium Scrap Multiplicity Counter (PSMC) well counter.

  13. Estimating the mass variance in neutron multiplicity counting $-$ A comparison of approaches

    DOE PAGES

    Dubi, C.; Croft, S.; Favalli, A.; ...

    2017-09-14

In the standard practice of neutron multiplicity counting, the first three sampled factorial moments of the event triggered neutron count distribution are used to quantify the three main neutron source terms: the spontaneous fissile material effective mass, the relative (α,n) production and the induced fission source responsible for multiplication. This study compares three methods to quantify the statistical uncertainty of the estimated mass: the bootstrap method, propagation of variance through moments, and statistical analysis of cycle data method. Each of the three methods was implemented on a set of four different NMC measurements, held at the JRC-laboratory in Ispra, Italy, sampling four different Pu samples in a standard Plutonium Scrap Multiplicity Counter (PSMC) well counter.

  14. Analytical method for optimal source reduction with monitored natural attenuation in contaminated aquifers

    USGS Publications Warehouse

    Widdowson, M.A.; Chapelle, F.H.; Brauner, J.S.; ,

    2003-01-01

A method is developed for optimizing monitored natural attenuation (MNA) and the reduction in the aqueous source zone concentration (ΔC) required to meet a site-specific regulatory target concentration. The mathematical model consists of two one-dimensional equations of mass balance for the aqueous phase contaminant, to coincide with up to two distinct zones of transformation, and appropriate boundary and intermediate conditions. The solution is written in terms of zone-dependent Peclet and Damköhler numbers. The model is illustrated at a chlorinated solvent site where MNA was implemented following source treatment using in-situ chemical oxidation. The results demonstrate that by not taking into account a variable natural attenuation capacity (NAC), a lower target ΔC is predicted, resulting in unnecessary source concentration reduction and cost with little benefit to achieving site-specific remediation goals.
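
    A toy version of the design question, ignoring dispersion (the large-Peclet limit) and using invented numbers, is sketched below: given zone-dependent Damköhler numbers, how much must the source concentration be reduced to meet the target?

```python
import numpy as np

# Toy two-zone natural-attenuation calculation: advection with
# zone-dependent first-order decay, dispersion neglected. All numbers
# are invented for illustration; this is not the paper's full model.

v = 0.1                          # groundwater velocity, m/d
zones = [(50.0, 0.010),          # (length m, decay rate 1/d): treated zone
         (100.0, 0.002)]         # downgradient zone with lower NAC
c_source = 5.0                   # current source concentration, mg/L
c_target = 0.005                 # regulatory target at the receptor, mg/L

damkohler = [k * L / v for L, k in zones]     # zone Damkohler numbers
attenuation = np.exp(-sum(damkohler))         # total attenuation factor
c_max_source = c_target / attenuation         # largest compliant source conc.
delta_c = max(c_source - c_max_source, 0.0)
print(f"required source reduction dC = {delta_c:.3f} mg/L")
```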

  15. SISSY: An efficient and automatic algorithm for the analysis of EEG sources based on structured sparsity.

    PubMed

    Becker, H; Albera, L; Comon, P; Nunes, J-C; Gribonval, R; Fleureau, J; Guillotel, P; Merlet, I

    2017-08-15

Over the past decades, a multitude of different brain source imaging algorithms have been developed to identify the neural generators underlying the surface electroencephalography measurements. While most of these techniques focus on determining the source positions, only a small number of recently developed algorithms provides an indication of the spatial extent of the distributed sources. In a recent comparison of brain source imaging approaches, the VB-SCCD algorithm has been shown to be one of the most promising algorithms among these methods. However, this technique suffers from several problems: it leads to amplitude-biased source estimates, it has difficulties in separating close sources, and it has a high computational complexity due to its implementation using second-order cone programming. To overcome these problems, we propose to include an additional regularization term that imposes sparsity in the original source domain and to solve the resulting optimization problem using the alternating direction method of multipliers. Furthermore, we show that the algorithm yields more robust solutions by taking into account the temporal structure of the data. We also propose a new method to automatically threshold the estimated source distribution, which permits delineation of the active brain regions. The new algorithm, called Source Imaging based on Structured Sparsity (SISSY), is analyzed by means of realistic computer simulations and is validated on the clinical data of four patients. Copyright © 2017 Elsevier Inc. All rights reserved.
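
    The ADMM machinery referred to here is easiest to see on a plain ℓ1 (lasso-type) problem. The sketch below is that generic case with synthetic data, not the combined structured-sparsity regularizer of SISSY.

```python
import numpy as np

# Generic ADMM for min_x 0.5*||Ax - b||^2 + lam*||x||_1: a ridge-type
# x-update, a soft-thresholding z-update, and a dual update.

def soft(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def admm_l1(A, b, lam=0.5, rho=1.0, n_iter=300):
    n = A.shape[1]
    L = np.linalg.cholesky(A.T @ A + rho * np.eye(n))   # factor once, reuse
    Atb = A.T @ b
    x, z, u = np.zeros(n), np.zeros(n), np.zeros(n)
    for _ in range(n_iter):
        rhs = Atb + rho * (z - u)
        x = np.linalg.solve(L.T, np.linalg.solve(L, rhs))  # x-update
        z = soft(x + u, lam / rho)                         # z-update (sparsity)
        u = u + x - z                                      # dual update
    return z

rng = np.random.default_rng(4)
A = rng.standard_normal((100, 400))            # wide, leadfield-like matrix
x_true = np.zeros(400); x_true[[30, 31, 32, 200]] = [1.0, 1.0, 1.0, -2.0]
b = A @ x_true + 0.01 * rng.standard_normal(100)
print(np.flatnonzero(np.abs(admm_l1(A, b)) > 0.1))   # recovered support
```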

  16. Generalized reference fields and source interpolation for the difference formulation of radiation transport

    NASA Astrophysics Data System (ADS)

    Luu, Thomas; Brooks, Eugene D.; Szőke, Abraham

    2010-03-01

In the difference formulation for the transport of thermally emitted photons, the photon intensity is defined relative to a reference field, the black body at the local material temperature. This choice of reference field combines the separate emission and absorption terms that nearly cancel, thereby removing the dominant cause of noise in the Monte Carlo solution of thick systems, but introduces time- and space-derivative source terms that cannot be determined until the end of the time step. The space-derivative source term can also lead to noise-induced crashes under certain conditions where the real physical photon intensity differs strongly from a black body at the local material temperature. In this paper, we consider a difference formulation relative to the material temperature at the beginning of the time step, or in cases where an alternative temperature better describes the radiation field, that temperature. The result is a method where iterative solution of the material energy equation is efficient and noise-induced crashes are avoided. We couple our generalized reference field scheme with an ad hoc interpolation of the space-derivative source, resulting in an algorithm that produces the correct flux between zones as the physical system approaches the thick limit.

  17. Computational electrodynamics in material media with constraint-preservation, multidimensional Riemann solvers and sub-cell resolution - Part II, higher order FVTD schemes

    NASA Astrophysics Data System (ADS)

    Balsara, Dinshaw S.; Garain, Sudip; Taflove, Allen; Montecinos, Gino

    2018-02-01

The Finite Difference Time Domain (FDTD) scheme has served the computational electrodynamics community very well and part of its success stems from its ability to satisfy the constraints in Maxwell's equations. Even so, in the previous paper of this series we were able to present a second-order accurate Godunov scheme for computational electrodynamics (CED) which satisfied all the same constraints and simultaneously retained all the traditional advantages of Godunov schemes. In this paper we extend the Finite Volume Time Domain (FVTD) schemes for CED in material media to better than second order of accuracy. From the FDTD method, we retain a somewhat modified staggering strategy of primal variables which enables a very beneficial constraint-preservation for the electric displacement and magnetic induction vector fields. This is accomplished with constraint-preserving reconstruction methods which are extended in this paper to third and fourth orders of accuracy. The idea of one-dimensional upwinding from Godunov schemes has to be significantly modified to use the multidimensionally upwinded Riemann solvers developed by the first author. In this paper, we show how they can be used within the context of a higher order scheme for CED. We also report on advances in timestepping. We show how Runge-Kutta IMEX schemes can be adapted to CED even in the presence of stiff source terms brought on by large conductivities as well as strong spatial variations in permittivity and permeability. We also formulate very efficient ADER timestepping strategies to endow our method with sub-cell resolving capabilities. As a result, our method can be stiffly-stable and resolve significant sub-cell variation in the material properties within a zone. Moreover, we present ADER schemes that are applicable to all hyperbolic PDEs with stiff source terms and at all orders of accuracy. Our new ADER formulation offers a treatment of stiff source terms that is much more efficient than previous ADER schemes. The computer algebra system scripts for generating ADER time update schemes for any general PDE with stiff source terms are also given in the electronic supplements to this paper. Second-, third- and fourth-order accurate schemes for numerically solving Maxwell's equations in material media are presented in this paper. Several stringent tests are also presented to show that the method works and meets its design goals even when material permittivity and permeability vary by an order of magnitude over just a few zones. Furthermore, since the method is unconditionally stable and sub-cell-resolving in the presence of stiff source terms (i.e. for problems involving giant variations in conductivity over just a few zones), it can accurately handle such problems without any reduction in timestep. We also show that increasing the order of accuracy offers distinct advantages for resolving sub-cell variations in material properties. Most importantly, we show that when the accuracy requirements are stringent the higher order schemes offer the shortest time to solution. This makes a compelling case for the use of higher order, sub-cell resolving schemes in CED.

  18. 2D joint inversion of CSAMT and magnetic data based on cross-gradient theory

    NASA Astrophysics Data System (ADS)

    Wang, Kun-Peng; Tan, Han-Dong; Wang, Tao

    2017-06-01

A two-dimensional forward and inverse algorithm for the controlled-source audio-frequency magnetotelluric (CSAMT) method is developed to invert data in the entire region (near, transition, and far field) and deal with the effects of artificial sources. First, a regularization factor is introduced in the 2D magnetic inversion, and the magnetic susceptibility is updated in logarithmic form so that the inverted magnetic susceptibility is always positive. Second, the joint inversion of the CSAMT and magnetic methods is completed with the introduction of the cross gradient. By searching for the weight of the cross-gradient term in the objective function, the mutual influence between two different physical properties at different locations is avoided. Model tests show that the joint inversion based on cross-gradient theory offers better results than the single-method inversions. The 2D forward and inverse algorithm for CSAMT with a source can effectively deal with artificial sources and ensures the reliability of the final joint inversion algorithm.
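
    The cross-gradient function itself is simple to compute: for two 2-D models m1 and m2 it is the out-of-plane component of ∇m1 × ∇m2, which vanishes wherever the two models vary in structurally consistent directions. A small numpy sketch with invented models:

```python
import numpy as np

# Cross-gradient t = dm1/dx * dm2/dz - dm1/dz * dm2/dx on a 2D grid.
# Joint inversion drives t toward zero to enforce structural similarity.

def cross_gradient(m1, m2, dx=1.0, dz=1.0):
    dm1_z, dm1_x = np.gradient(m1, dz, dx)    # rows = z, cols = x
    dm2_z, dm2_x = np.gradient(m2, dz, dx)
    return dm1_x * dm2_z - dm1_z * dm2_x      # out-of-plane component

z, x = np.mgrid[0:50, 0:80]
resistivity = 100.0 + 50.0 * np.exp(-((x - 40)**2 + (z - 25)**2) / 60.0)
suscept_aligned = 1e-3 * np.exp(-((x - 40)**2 + (z - 25)**2) / 60.0)
suscept_shifted = 1e-3 * np.exp(-((x - 55)**2 + (z - 25)**2) / 60.0)

# Co-located anomalies give (numerically) zero cross-gradient; a laterally
# shifted anomaly does not.
print(np.abs(cross_gradient(np.log(resistivity), suscept_aligned)).max())
print(np.abs(cross_gradient(np.log(resistivity), suscept_shifted)).max())
```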

  19. Spurious Solutions Of Nonlinear Differential Equations

    NASA Technical Reports Server (NTRS)

    Yee, H. C.; Sweby, P. K.; Griffiths, D. F.

    1992-01-01

Report utilizes a nonlinear-dynamics approach to investigate possible sources of errors and slow convergence and non-convergence of steady-state numerical solutions when using the time-dependent approach for problems containing nonlinear source terms. Emphasizes implications for development of algorithms in CFD and computational sciences in general. The main fundamental conclusion of the study is that the qualitative features of nonlinear differential equations cannot be adequately represented by a finite-difference method and vice versa.

  20. A numerical method for shock driven multiphase flow with evaporating particles

    NASA Astrophysics Data System (ADS)

    Dahal, Jeevan; McFarland, Jacob A.

    2017-09-01

A numerical method for predicting the interaction of active, phase-changing particles in a shock-driven flow is presented in this paper. The Particle-in-Cell (PIC) technique was used to couple particles in a Lagrangian coordinate system with a fluid in an Eulerian coordinate system. The Piecewise Parabolic Method (PPM) hydrodynamics solver was used for solving the conservation equations and was modified with mass, momentum, and energy source terms from the particle phase. The method was implemented in the open source hydrodynamics software FLASH, developed at the University of Chicago. A simple validation of the methods is accomplished by comparing velocity and temperature histories from a single particle simulation with the analytical solution. Furthermore, simple single-particle-parcel simulations were run at two different sizes to study the effect of particle size on vorticity deposition in a shock-driven multiphase instability. Large particles were found to have lower enstrophy production at early times and higher enstrophy dissipation at late times due to the advection of the particle vorticity source term through the carrier gas. A 2D shock-driven instability of a circular perturbation is studied in simulations and compared to previous experimental data as further validation of the numerical methods. The effect of the particle size distribution and particle evaporation is examined further for this case. The results show that larger particles reduce the vorticity deposition, while particle evaporation increases it. It is also shown that for a distribution of particle sizes the vorticity deposition is decreased compared to the single-particle-size case.

  1. Solving transient acoustic boundary value problems with equivalent sources using a lumped parameter approach.

    PubMed

    Fahnline, John B

    2016-12-01

    An equivalent source method is developed for solving transient acoustic boundary value problems. The method assumes the boundary surface is discretized in terms of triangular or quadrilateral elements and that the solution is represented using the acoustic fields of discrete sources placed at the element centers. Also, the boundary condition is assumed to be specified for the normal component of the surface velocity as a function of time, and the source amplitudes are determined to match the known elemental volume velocity vector at a series of discrete time steps. Equations are given for marching-on-in-time schemes to solve for the source amplitudes at each time step for simple, dipole, and tripole source formulations. Several example problems are solved to illustrate the results and to validate the formulations, including problems with closed boundary surfaces where long-time numerical instabilities typically occur. A simple relationship between the simple and dipole source amplitudes in the tripole source formulation is derived so that the source radiates primarily in the direction of the outward surface normal. The tripole source formulation is shown to eliminate interior acoustic resonances and long-time numerical instabilities.

  2. Computations of steady-state and transient premixed turbulent flames using pdf methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hulek, T.; Lindstedt, R.P.

    1996-03-01

Premixed propagating turbulent flames are modeled using a one-point, single time, joint velocity-composition probability density function (pdf) closure. The pdf evolution equation is solved using a Monte Carlo method. The unclosed terms in the pdf equation are modeled using a modified version of the binomial Langevin model for scalar mixing of Valino and Dopazo, and the Haworth and Pope (HP) and Lagrangian Speziale-Sarkar-Gatski (LSSG) models for the viscous dissipation of velocity and the fluctuating pressure gradient. The source terms for the presumed one-step chemical reaction are extracted from the rate of fuel consumption in laminar premixed hydrocarbon flames, computed using a detailed chemical kinetic mechanism. Steady-state and transient solutions are obtained for planar turbulent methane-air and propane-air flames. The transient solution method features a coupling with a Finite Volume (FV) code to obtain the mean pressure field. The results are compared with the burning velocity measurements of Abdel-Gayed et al. and with velocity measurements obtained in freely propagating propane-air flames by Videto and Santavicca. The effects of different upstream turbulence fields, chemical source terms (different fuels and strained/unstrained laminar flames) and the influence of the velocity statistics models (HP and LSSG) are assessed.

  3. Methods to determine long-term durability of Wisconsin aggregates.

    DOT National Transportation Integrated Search

    2013-02-01

    Wisconsin uses approximately 10 to 11 million tons of aggregates annually in transportation infrastructure projects in the state. The quality of aggregates has a tremendous influence on the performance and durability of roadways and bridges. As sourc...

  4. A large eddy simulation scheme for turbulent reacting flows

    NASA Technical Reports Server (NTRS)

    Gao, Feng

    1993-01-01

The recent development of the dynamic subgrid-scale (SGS) model has provided a consistent method for generating localized turbulent mixing models and has opened up great possibilities for applying the large eddy simulation (LES) technique to real-world problems. Given that direct numerical simulation (DNS) cannot solve engineering flow problems in the foreseeable future (Reynolds 1989), LES is certainly an attractive alternative. It seems only natural to bring this new development in SGS modeling to bear on reacting flows. The major stumbling block for introducing LES to reacting flow problems has been the proper modeling of the reaction source terms. Various models have been proposed, but none of them has a wide range of applicability. For example, some of the models in combustion have been based on the flamelet assumption, which is only valid for relatively fast reactions. Some other models have neglected the effects of chemical reactions on the turbulent mixing time scale, which is certainly not valid for fast, non-isothermal reactions. The probability density function (PDF) method can be usefully employed to deal with the modeling of the reaction source terms. In order to fit into the framework of LES, a new PDF, the large eddy PDF (LEPDF), is introduced. This PDF provides an accurate representation of the filtered chemical source terms and can be readily calculated in the simulations. The details of this scheme are described.

  5. Extended lattice Boltzmann scheme for droplet combustion.

    PubMed

    Ashna, Mostafa; Rahimian, Mohammad Hassan; Fakhari, Abbas

    2017-05-01

    The available lattice Boltzmann (LB) models for combustion or phase change are focused on either single-phase flow combustion or two-phase flow with evaporation assuming a constant density for both liquid and gas phases. To pave the way towards simulation of spray combustion, we propose a two-phase LB method for modeling combustion of liquid fuel droplets. We develop an LB scheme to model phase change and combustion by taking into account the density variation in the gas phase and accounting for the chemical reaction based on the Cahn-Hilliard free-energy approach. Evaporation of liquid fuel is modeled by adding a source term, which is due to the divergence of the velocity field being nontrivial, in the continuity equation. The low-Mach-number approximation in the governing Navier-Stokes and energy equations is used to incorporate source terms due to heat release from chemical reactions, density variation, and nonluminous radiative heat loss. Additionally, the conservation equation for chemical species is formulated by including a source term due to chemical reaction. To validate the model, we consider the combustion of n-heptane and n-butanol droplets in stagnant air using overall single-step reactions. The diameter history and flame standoff ratio obtained from the proposed LB method are found to be in good agreement with available numerical and experimental data. The present LB scheme is believed to be a promising approach for modeling spray combustion.

  6. Study of travelling wave solutions for some special-type nonlinear evolution equations

    NASA Astrophysics Data System (ADS)

    Song, Junquan; Hu, Lan; Shen, Shoufeng; Ma, Wen-Xiu

    2018-07-01

The tanh-function expansion method has been improved and used to construct travelling wave solutions of the form $U=\sum_{j=0}^{n} a_j \tanh^j \xi$ for some special-type nonlinear evolution equations, which have a variety of physical applications. The positive integer n can be determined by balancing the highest order linear term with the nonlinear term in the evolution equations. We improve the tanh-function expansion method with n = 0 by introducing a new transform $U=-W'(\xi)/W^2$. A nonlinear wave equation with source terms, and mKdV-type equations, are considered in order to show the effectiveness of the improved scheme. We also propose the tanh-function expansion method of implicit function form, and apply it to a Harry Dym-type equation as an example.

  7. Ragweed (Ambrosia) pollen source inventory for Austria.

    PubMed

    Karrer, G; Skjøth, C A; Šikoparija, B; Smith, M; Berger, U; Essl, F

    2015-08-01

This study improves the spatial coverage of top-down Ambrosia pollen source inventories for Europe by expanding the methodology to Austria, a country that is challenging in terms of topography and the distribution of ragweed plants. The inventory combines annual ragweed pollen counts from 19 pollen-monitoring stations in Austria (2004-2013), 657 geographical observations of Ambrosia plants, a Digital Elevation Model (DEM), local knowledge of ragweed ecology and CORINE land cover information from the source area. The highest mean annual ragweed pollen concentrations were generally recorded in the East of Austria where the highest densities of possible growth habitats for Ambrosia were situated. Approximately 99% of all observations of Ambrosia populations were below 745m. The European infection level varies from 0.1% at Freistadt in Northern Austria to 12.8% at Rosalia in Eastern Austria. More top-down Ambrosia pollen source inventories are required for other parts of Europe. Crown Copyright © 2015. Published by Elsevier B.V. All rights reserved.

  8. Apparatus And Method For Osl-Based, Remote Radiation Monitoring And Spectrometry

    DOEpatents

    Miller, Steven D.; Smith, Leon Eric; Skorpik, James R.

    2006-03-07

    Compact, OSL-based devices for long-term, unattended radiation detection and spectroscopy are provided. In addition, a method for extracting spectroscopic information from these devices is taught. The devices can comprise OSL pixels and at least one radiation filter surrounding at least a portion of the OSL pixels. The filter can modulate an incident radiation flux. The devices can further comprise a light source and a detector, both proximally located to the OSL pixels, as well as a power source and a wireless communication device, each operably connected to the light source and the detector. Power consumption of the device ranges from ultra-low to zero. The OSL pixels can retain data regarding incident radiation events as trapped charges. The data can be extracted wirelessly or manually. The method for extracting spectroscopic data comprises optically stimulating the exposed OSL pixels, detecting a readout luminescence, and reconstructing an incident-energy spectrum from the luminescence.

  9. Apparatus and method for OSL-based, remote radiation monitoring and spectrometry

    DOEpatents

    Smith, Leon Eric [Richland, WA; Miller, Steven D [Richland, WA; Bowyer, Theodore W [Oakton, VA

    2008-05-20

    Compact, OSL-based devices for long-term, unattended radiation detection and spectroscopy are provided. In addition, a method for extracting spectroscopic information from these devices is taught. The devices can comprise OSL pixels and at least one radiation filter surrounding at least a portion of the OSL pixels. The filter can modulate an incident radiation flux. The devices can further comprise a light source and a detector, both proximally located to the OSL pixels, as well as a power source and a wireless communication device, each operably connected to the light source and the detector. Power consumption of the device ranges from ultra-low to zero. The OSL pixels can retain data regarding incident radiation events as trapped charges. The data can be extracted wirelessly or manually. The method for extracting spectroscopic data comprises optically stimulating the exposed OSL pixels, detecting a readout luminescence, and reconstructing an incident-energy spectrum from the luminescence.

  10. Lineal energy calibration of mini tissue-equivalent gas-proportional counters (TEPC)

    NASA Astrophysics Data System (ADS)

    Conte, V.; Moro, D.; Grosswendt, B.; Colautti, P.

    2013-07-01

Mini TEPCs are cylindrical gas proportional counters with a sensitive-volume diameter of 1 mm or less. The lineal energy calibration of these tiny counters can be performed with an external gamma-ray source. To do that, however, a method to obtain a simple and precise spectral mark must first be found, and then the keV/μm value of this mark determined. A precise method (less than 1% uncertainty) to identify this mark is described here, and the lineal energy value of the mark has been measured for different simulated site sizes by using a 137Cs gamma source and a cylindrical TEPC equipped with a precision internal 244Cm alpha-particle source and filled with a propane-based tissue-equivalent gas mixture. Mini TEPCs can thus be calibrated in terms of lineal energy, by exposing them to 137Cs sources, with an overall uncertainty of about 5%.

  11. Hyperbolic conservation laws and numerical methods

    NASA Technical Reports Server (NTRS)

    Leveque, Randall J.

    1990-01-01

The mathematical structure of hyperbolic systems and the scalar-equation case of conservation laws are discussed. Linear and nonlinear systems and the Riemann problem for the Euler equations are also studied. The numerical methods for conservation laws are presented in a nonstandard manner which leads to large-time-step generalizations and computations on irregular grids. The solution of conservation laws with stiff source terms is examined.
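
    As a minimal illustration of the last point, the sketch below advances u_t + a u_x = -(u - u_eq)/eps by Godunov splitting: an upwind advection step followed by an exact, unconditionally stable update of the stiff relaxation source. All parameters are illustrative.

```python
import numpy as np

# Conservation law with a stiff source term,
#   u_t + a u_x = -(1/eps) * (u - u_eq),
# handled by Godunov splitting: upwind advection, then an exact update of
# the relaxation term. An explicit source update would need dt ~ eps; the
# exact integration keeps the advective time step despite the stiffness.

a, eps, u_eq = 1.0, 1e-6, 0.2          # stiff when eps << dt
nx, cfl, t_end = 200, 0.9, 0.5
dx = 1.0 / nx
dt = cfl * dx / a
u = np.where(np.linspace(0.0, 1.0, nx) < 0.3, 1.0, 0.2)   # step initial data

t = 0.0
while t < t_end:
    u[1:] = u[1:] - a * dt / dx * (u[1:] - u[:-1])    # upwind advection
    u = u_eq + (u - u_eq) * np.exp(-dt / eps)         # exact source update
    t += dt
```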

  12. Point focusing using loudspeaker arrays from the perspective of optimal beamforming.

    PubMed

    Bai, Mingsian R; Hsieh, Yu-Hao

    2015-06-01

Sound focusing aims to create a concentrated acoustic field in a region surrounded by a loudspeaker array. This problem was tackled in previous research via the Helmholtz integral approach, brightness control, acoustic contrast control, etc. In this paper, the same problem is revisited from the perspective of beamforming. A source array model is reformulated in terms of the steering matrix between the source and the field points, which lends itself to the use of beamforming algorithms such as minimum variance distortionless response (MVDR) and linearly constrained minimum variance (LCMV), originally intended for sensor arrays. The beamforming methods are compared with the conventional methods in terms of beam pattern, directional index, and control effort. Objective tests are conducted to assess the audio quality by using perceptual evaluation of audio quality (PEAQ). Experiments on the produced sound field and listening tests are conducted in a listening room, with results processed using analysis of variance and regression analysis. In contrast to the conventional energy-based methods, the results show that the proposed methods are phase-sensitive in light of the distortionless constraint in formulating the array filters, which helps enhance audio quality and focusing performance.
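
    Viewed this way, the focusing filter is the classic MVDR solution: minimize the mean energy over a control region subject to a distortionless response at the focal point. Below is a free-field sketch with an invented geometry, not the paper's experimental setup.

```python
import numpy as np

# MVDR-style point focusing: unit response at the focal point, minimum
# mean energy over a ring of control points. Free-field monopole Green's
# functions; array geometry and frequency are illustrative.

k = 2 * np.pi * 1000.0 / 343.0                   # wavenumber at 1 kHz

def green(src, pts):
    r = np.linalg.norm(pts[:, None, :] - src[None, :, :], axis=-1)
    return np.exp(-1j * k * r) / (4 * np.pi * r)

phi = np.linspace(0, 2 * np.pi, 16, endpoint=False)
spk = np.stack([np.cos(phi), np.sin(phi)], axis=1)   # 16 speakers, radius 1 m

focus = np.array([[0.2, 0.0]])                       # focal point
ang = np.linspace(0, 2 * np.pi, 64, endpoint=False)
ctrl = 0.6 * np.stack([np.cos(ang), np.sin(ang)], axis=1)  # control ring

d = green(spk, focus)[0]                             # steering vector to focus
G = green(spk, ctrl)                                 # transfer to control points
R = G.conj().T @ G / len(ctrl) + 1e-6 * np.eye(16)   # correlation + loading
w = np.linalg.solve(R, d.conj())
w /= d @ w                                           # distortionless: d.T w = 1

print(abs(d @ w))                                    # field at focus = 1
print(np.mean(np.abs(G @ w) ** 2))                   # mean energy on the ring
```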

  13. Emergency Preparedness technology support to the Health and Safety Executive (HSE), Nuclear Installations Inspectorate (NII) of the United Kingdom. Appendix A

    DOE Office of Scientific and Technical Information (OSTI.GOV)

O'Kula, K.R.

    1994-03-01

The Nuclear Installations Inspectorate (NII) of the United Kingdom (UK) suggested the use of an accident progression logic model method developed by Westinghouse Savannah River Company (WSRC) and Science Applications International Corporation (SAIC) for K Reactor to predict the magnitude and timing of radioactivity releases (the source term) based on an advanced logic model methodology. Predicted releases are output from the personal computer-based model in a level-of-confidence format. Additional technical discussions eventually led to a request from the NII to develop a proposal for assembling a similar technology to predict source terms for the UK's advanced gas-cooled reactor (AGR) type. To respond to this request, WSRC is submitting a proposal to provide contractual assistance as specified in the Scope of Work. The work will produce, document, and transfer technology associated with a Decision-Oriented Source Term Estimator for Emergency Preparedness (DOSE-EP) for the NII to apply to AGRs in the United Kingdom. This document, Appendix A, is a part of this proposal.

  14. Numerical Modeling of Poroelastic-Fluid Systems Using High-Resolution Finite Volume Methods

    NASA Astrophysics Data System (ADS)

    Lemoine, Grady

    Poroelasticity theory models the mechanics of porous, fluid-saturated, deformable solids. It was originally developed by Maurice Biot to model geophysical problems, such as seismic waves in oil reservoirs, but has also been applied to modeling living bone and other porous media. Poroelastic media often interact with fluids, such as in ocean bottom acoustics or propagation of waves from soft tissue into bone. This thesis describes the development and testing of high-resolution finite volume numerical methods, and simulation codes implementing these methods, for modeling systems of poroelastic media and fluids in two and three dimensions. These methods operate on both rectilinear grids and logically rectangular mapped grids. To allow the use of these methods, Biot's equations of poroelasticity are formulated as a first-order hyperbolic system with a source term; this source term is incorporated using operator splitting. Some modifications are required to the classical high-resolution finite volume method. Obtaining correct solutions at interfaces between poroelastic media and fluids requires a novel transverse propagation scheme and the removal of the classical second-order correction term at the interface, and in three dimensions a new wave limiting algorithm is also needed to correctly limit shear waves. The accuracy and convergence rates of the methods of this thesis are examined for a variety of analytical solutions, including simple plane waves, reflection and transmission of waves at an interface between different media, and scattering of acoustic waves by a poroelastic cylinder. Solutions are also computed for a variety of test problems from the computational poroelasticity literature, as well as some original test problems designed to mimic possible applications for the simulation code.

  15. A new method of optimal capacitor switching based on minimum spanning tree theory in distribution systems

    NASA Astrophysics Data System (ADS)

    Li, H. W.; Pan, Z. Y.; Ren, Y. B.; Wang, J.; Gan, Y. L.; Zheng, Z. Z.; Wang, W.

    2018-03-01

According to the radial operation characteristics of distribution systems, this paper proposes a new method based on the minimum-spanning-tree method for optimal capacitor switching. First, taking the minimal active power loss as the objective function and not considering the capacity constraints of capacitors and the source, this paper uses Prim's algorithm to obtain the power supply ranges of the capacitors and the source. Then, with the capacity constraints of the capacitors considered, the capacitors are ranked by breadth-first search. In order of this ranking, from high to low, the compensation capacity of each capacitor is calculated based on its power supply range. Finally, the IEEE 69-bus system is adopted to test the accuracy and practicality of the proposed algorithm.
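
    For reference, a minimal Prim's algorithm on a toy feeder graph could look as follows; the edge weights stand in for branch impedances or losses and the network is invented.

```python
import heapq

# Lazy Prim's algorithm: grow the tree from a root (the source bus),
# always adding the cheapest edge that reaches a new node.

def prim_mst(n_nodes, edges, root=0):
    adj = [[] for _ in range(n_nodes)]
    for u, v, w in edges:
        adj[u].append((w, v)); adj[v].append((w, u))
    in_tree = [False] * n_nodes
    mst, heap = [], [(0.0, root, -1)]
    while heap:
        w, u, parent = heapq.heappop(heap)
        if in_tree[u]:
            continue
        in_tree[u] = True
        if parent >= 0:
            mst.append((parent, u, w))
        for w2, v in adj[u]:
            if not in_tree[v]:
                heapq.heappush(heap, (w2, v, u))
    return mst

edges = [(0, 1, 0.4), (1, 2, 0.3), (0, 3, 0.9),
         (1, 3, 0.2), (3, 4, 0.5), (2, 4, 0.8)]
print(prim_mst(5, edges))   # tree rooted at the source bus 0
```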

  16. Integrating Information in Biological Ontologies and Molecular Networks to Infer Novel Terms.

    PubMed

    Li, Le; Yip, Kevin Y

    2016-12-15

Currently, most terms and term-term relationships in Gene Ontology (GO) are defined manually, which creates cost, consistency and completeness issues. Recent studies have demonstrated the feasibility of inferring GO automatically from biological networks, which represents an important complementary approach to GO construction. These methods (NeXO and CliXO) are unsupervised, which means 1) they cannot use the information contained in existing GO, 2) the way they integrate biological networks may not optimize the accuracy, and 3) they are not customized to infer the three different sub-ontologies of GO. Here we present a semi-supervised method called Unicorn that extends these previous methods to tackle the three problems. Unicorn uses a sub-tree of an existing GO sub-ontology as a training set to learn parameters for integrating multiple networks. Cross-validation results show that Unicorn reliably inferred the left-out parts of each specific GO sub-ontology. In addition, by training Unicorn with an old version of GO together with biological networks, it successfully re-discovered some terms and term-term relationships present only in a new version of GO. Unicorn also successfully inferred some novel terms that were not contained in GO but have biological meanings well-supported by the literature. Source code of Unicorn is available at http://yiplab.cse.cuhk.edu.hk/unicorn/.

  17. The RATIO method for time-resolved Laue crystallography

    PubMed Central

    Coppens, Philip; Pitak, Mateusz; Gembicky, Milan; Messerschmidt, Marc; Scheins, Stephan; Benedict, Jason; Adachi, Shin-ichi; Sato, Tokushi; Nozawa, Shunsuke; Ichiyanagi, Kohei; Chollet, Matthieu; Koshihara, Shin-ya

    2009-01-01

A RATIO method for analysis of intensity changes in time-resolved pump–probe Laue diffraction experiments is described. The method eliminates the need for scaling the data with a wavelength curve representing the spectral distribution of the source and removes the effect of possible anisotropic absorption. It does not require relative scaling of series of frames and removes errors due to all but very short-term fluctuations in the synchrotron beam. PMID:19240334

  18. Investigation of remote sensing techniques as inputs to operational resource management models. [South Dakota

    NASA Technical Reports Server (NTRS)

    Schmer, F. A. (Principal Investigator); Isakson, R. E.; Eidenshink, J. C.

    1977-01-01

    The author has identified the following significant results. Successful operational applications of LANDSAT data were found for level 1 land use mapping, drainage network delineation, and aspen mapping. Visual LANDSAT interpretation using 1:125,000 color composite imagery was the least expensive method of obtaining timely level 1 land use data. With an average agricultural/rangeland interpretation accuracy in excess of 80%, such a data source was considered the most cost effective of those sources available to state agencies. Costs do not compare favorably with those incurred using the present method of extracting land use data from historical tabular summaries. The cost increase in advancing from the present procedure to a satellite-based data source was justified in terms of expanded data content.

  19. Image transmission system using adaptive joint source and channel decoding

    NASA Astrophysics Data System (ADS)

    Liu, Weiliang; Daut, David G.

    2005-03-01

In this paper, an adaptive joint source and channel decoding method is designed to accelerate the convergence of the iterative log-domain sum-product decoding procedure of LDPC codes as well as to improve the reconstructed image quality. Error resilience modes are used in the JPEG2000 source codec, which makes it possible to provide useful source-decoded information to the channel decoder. After each iteration, a tentative decoding is made and the channel-decoded bits are then sent to the JPEG2000 decoder. Due to the error resilience modes, some bits are known to be either correct or in error. The positions of these bits are then fed back to the channel decoder. The log-likelihood ratios (LLR) of these bits are then modified by a weighting factor for the next iteration. By observing the statistics of the decoding procedure, the weighting factor is designed as a function of the channel condition. That is, for lower channel SNR, a larger factor is assigned, and vice versa. Results show that the proposed joint decoding method can greatly reduce the number of iterations, and thereby reduce the decoding delay considerably. At the same time, this method always outperforms the non-source-controlled decoding method by up to 5 dB in terms of PSNR for various reconstructed images.

  20. A New Generation of Leaching Tests – The Leaching Environmental Assessment Framework

    EPA Science Inventory

    Provides an overview of newly released leaching tests that provide a more accurate source term when estimating environmental release of metals and other constituents of potential concern (COPCs). The Leaching Environmental Assessment Framework (LEAF) methods have been (1) develo...

  1. Increasing Confidence In Treatment Performance Assessment Using Geostatistical Methods

    EPA Science Inventory

    It is well established that the presence of dense non-aqueous phase liquids (DNAPLs) such as trichloroethylene (TCE) in aquifer systems represents a very long-term source of groundwater contamination. Significant effort in recent years has been focussed on developing effective me...

  2. Long-Term Frozen Storage of Urine Samples: A Trouble to Get PCR Results in Schistosoma spp. DNA Detection?

    PubMed Central

    Fernández-Soto, Pedro; Velasco Tirado, Virginia; Carranza Rodríguez, Cristina; Pérez-Arellano, José Luis; Muro, Antonio

    2013-01-01

Background Human schistosomiasis remains a serious worldwide public health problem. At present, a sensitive and specific assay for routine diagnosis of schistosome infection is not yet available. The potential for detecting schistosome-derived DNA by PCR-based methods in human clinical samples is currently being investigated as a diagnostic tool with potential application in routine schistosomiasis diagnosis. Collection of diagnostic samples such as stool or blood is usually difficult in some populations. However, urine is a biological sample that can be collected in a non-invasive method, easy to get from people of all ages and easy in management, but as a sample for PCR diagnosis is still not widely used. This could be due to the high variability in the reported efficiency of detection as a result of the high variation in urine samples’ storage or conditions for handling and DNA preservation and extraction methods. Methodology/Principal Findings We evaluate different commercial DNA extraction methods from a series of long-term frozen storage human urine samples from patients with parasitologically confirmed schistosomiasis in order to assess the PCR effectiveness for Schistosoma spp. detection. Patients' urine samples were frozen for 18 months up to 7 years until use. Results were compared with those obtained in PCR assays using fresh healthy human urine artificially contaminated with Schistosoma mansoni DNA and urine samples from mice experimentally infected with S. mansoni cercariae stored frozen for at least 12 months before use. PCR results in fresh human artificial urine samples using different DNA-based extraction methods were much more effective than those obtained when long-term frozen human urine samples were used as the source of DNA template. Conclusions/Significance Long-term frozen human urine samples are probably not a good source for DNA extraction for use as a template in PCR detection of Schistosoma spp., regardless of the DNA extraction method used. PMID:23613907

  3. Long-term frozen storage of urine samples: a trouble to get PCR results in Schistosoma spp. DNA detection?

    PubMed

    Fernández-Soto, Pedro; Velasco Tirado, Virginia; Carranza Rodríguez, Cristina; Pérez-Arellano, José Luis; Muro, Antonio

    2013-01-01

    Human schistosomiasis remains a serious worldwide public health problem. At present, a sensitive and specific assay for routine diagnosis of schistosome infection is not yet available. The potential for detecting schistosome-derived DNA by PCR-based methods in human clinical samples is currently being investigated as a diagnostic tool with potential application in routine schistosomiasis diagnosis. Collection of diagnostic samples such as stool or blood is usually difficult in some populations. However, urine is a biological sample that can be collected non-invasively, is easy to obtain from people of all ages and easy to handle, but it is still not widely used as a sample for PCR diagnosis. This could be due to the high variability in the reported efficiency of detection as a result of the high variation in urine storage and handling conditions and in DNA preservation and extraction methods. We evaluate different commercial DNA extraction methods on a series of long-term frozen human urine samples from patients with parasitologically confirmed schistosomiasis in order to assess PCR effectiveness for Schistosoma spp. detection. Patients' urine samples were frozen for 18 months up to 7 years until use. Results were compared with those obtained in PCR assays using fresh healthy human urine artificially contaminated with Schistosoma mansoni DNA and urine samples from mice experimentally infected with S. mansoni cercariae stored frozen for at least 12 months before use. PCR results in fresh human artificial urine samples using different DNA-based extraction methods were much more effective than those obtained when long-term frozen human urine samples were used as the source of DNA template. Long-term frozen human urine samples are probably not a good source for DNA extraction for use as a template in PCR detection of Schistosoma spp., regardless of the DNA extraction method used.

  4. Childhood lead poisoning - United States: report to the Congress by the Agency for Toxic Substances and Disease Registry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    The long-term consequences of unabated exposures to environmental lead sources can be serious, particularly for children. Recent scientific studies have shown a progressive decline in the lowest exposure levels of lead at which adverse effects can be reliably detected in children. In recognition of this, Congress directed the Agency for Toxic Substances and Disease Registry (ATSDR), in consultation with the Environmental Protection Agency (EPA), to examine the nature and extent of childhood lead poisoning in the United States. The study was to address such areas as the long-term health implications of environmental lead exposure in children, the extent of lead intoxication of children in terms of geographic areas and sources of lead in the United States, and methods and strategies for removing lead from the environment of US children. This article summarizes the key findings of the report.

  5. A Multigroup Method for the Calculation of Neutron Fluence with a Source Term

    NASA Technical Reports Server (NTRS)

    Heinbockel, J. H.; Clowdsley, M. S.

    1998-01-01

    Current research under this grant involves the development of a multigroup method for the calculation of low-energy evaporation neutron fluences associated with the Boltzmann equation. This research will enable one to predict radiation exposure under a variety of circumstances. Knowledge of radiation exposure in a free-space environment is a necessity for space travel, high-altitude space planes and satellite design. This is because certain radiation environments can cause damage to biological and electronic systems, involving both short-term and long-term effects. By having a priori knowledge of the environment, one can use prediction techniques to estimate radiation damage to such systems. Appropriate shielding can be designed to protect both humans and electronic systems that are exposed to a known radiation environment. This is the goal of the current research efforts involving the multigroup method and the Green's function approach.

  6. Data and methods for studying commercial motor vehicle driver fatigue, highway safety and long-term driver health.

    PubMed

    Stern, Hal S; Blower, Daniel; Cohen, Michael L; Czeisler, Charles A; Dinges, David F; Greenhouse, Joel B; Guo, Feng; Hanowski, Richard J; Hartenbaum, Natalie P; Krueger, Gerald P; Mallis, Melissa M; Pain, Richard F; Rizzo, Matthew; Sinha, Esha; Small, Dylan S; Stuart, Elizabeth A; Wegman, David H

    2018-03-09

    This article summarizes the recommendations of a National Academies of Sciences, Engineering, and Medicine study on data and methodology issues for studying commercial motor vehicle driver fatigue. A framework is provided that identifies the various factors affecting driver fatigue and relating driver fatigue to crash risk and long-term driver health. The relevant factors include characteristics of the driver, vehicle, carrier and environment. Limitations of existing data are considered and potential sources of additional data described. Statistical methods that can be used to improve understanding of the relevant relationships from observational data are also described. The recommendations for enhanced data collection and the use of modern statistical methods for causal inference have the potential to enhance our understanding of the relationship of fatigue to highway safety and to long-term driver health. Copyright © 2018 The Authors. Published by Elsevier Ltd. All rights reserved.

  7. Evaluation of the Pivot Profile©, a new method to characterize a large variety of a single product: Case study on honeys from around the world.

    PubMed

    Deneulin, Pascale; Reverdy, Caroline; Rébénaque, Pierrick; Danthe, Eve; Mulhauser, Blaise

    2018-04-01

    Honey is a natural product with very diverse sensory attributes that are influenced by the flower source, the bee species, the geographic origin, and the treatment and storage conditions. This study aimed at describing 50 honeys from diverse flower sources on different continents and islands, stored under various conditions. Many articles have been published on the sensory characterization of honeys and a common list of attributes has been established, but that list appeared poorly suited to describing a large number of honeys from around the world. This is why a novel and rapid sensory evaluation method, the Pivot Profile©, was tested with the participation of 15 panelists over five sessions. The first objective was to obtain a sensory description of the 50 honeys that were tested. From 1152 distinct terms, a list of 29 sensory attributes was established, divided into three categories: color/texture (8 terms), aroma (16 terms), and taste (5 terms). First, the honeys were ranked according to their level of crystallization, from fluid/liquid to viscous/hard; color was the second factor of variability. In terms of aroma, honeys from Africa were characterized by smoky, resin, caramel and dried-fruit notes, as opposed to the floral and fruity notes of honeys mainly from South America and Europe. Finally, the honeys were ranked according to their sweetness. The second objective of this study was to test the new sensory method, the Pivot Profile©, which is intended to describe a large number of products with interpretable results. Copyright © 2017 Elsevier Ltd. All rights reserved.

  8. An optimized inverse modelling method for determining the location and strength of a point source releasing airborne material in urban environment

    NASA Astrophysics Data System (ADS)

    Efthimiou, George C.; Kovalets, Ivan V.; Venetsanos, Alexandros; Andronopoulos, Spyros; Argyropoulos, Christos D.; Kakosimos, Konstantinos

    2017-12-01

    An improved inverse modelling method to estimate the location and the emission rate of an unknown point stationary source of passive atmospheric pollutant in a complex urban geometry is incorporated in the Computational Fluid Dynamics code ADREA-HF and presented in this paper. The key improvement in relation to the previous version of the method lies in a two-step segregated approach. At first only the source coordinates are analysed using a correlation function of measured and calculated concentrations. In the second step the source rate is identified by minimizing a quadratic cost function. The validation of the new algorithm is performed by simulating the MUST wind tunnel experiment. A grid-independent flow field solution is firstly attained by applying successive refinements of the computational mesh and the final wind flow is validated against the measurements quantitatively and qualitatively. The old and new versions of the source term estimation method are tested on a coarse and a fine mesh. The new method appeared to be more robust, giving satisfactory estimations of source location and emission rate on both grids. The performance of the old version of the method varied between failure and success and appeared to be sensitive to the selection of model error magnitude that needs to be inserted in its quadratic cost function. The performance of the method depends also on the number and the placement of sensors constituting the measurement network. Of significant interest for the practical application of the method in urban settings is the number of concentration sensors required to obtain a "satisfactory" determination of the source. The probability of obtaining a satisfactory solution, according to specified criteria, by the new method has been assessed as a function of the number of sensors that constitute the measurement network.
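
    The two-step segregation lends itself to a compact sketch. Assuming a precomputed set of unit-emission concentration fields, one per candidate source location (hypothetical inputs, not the ADREA-HF implementation), step one scores locations by correlation with the measurements and step two recovers the rate in closed form from the quadratic cost:

      import numpy as np

      def estimate_source(c_meas, unit_fields):
          """Two-step estimation: location by correlation, then rate in closed form.

          c_meas      : measured concentrations at the sensors, shape (n_sensors,)
          unit_fields : dict {location: predicted sensor concentrations for a
                        unit emission rate at that location}
          """
          # Step 1: location maximizing the correlation between measured and
          # unit-rate computed concentrations.
          best = max(unit_fields,
                     key=lambda loc: np.corrcoef(c_meas, unit_fields[loc])[0, 1])
          # Step 2: the quadratic cost J(q) = sum_i (c_i - q*u_i)^2 is minimized
          # by the closed-form least-squares rate.
          u = unit_fields[best]
          q = float(u @ c_meas) / float(u @ u)
          return best, q

      # Tiny synthetic check: two candidate locations, the first is the truth
      rng = np.random.default_rng(0)
      fields = {"A": rng.random(8), "B": rng.random(8)}
      meas = 2.5 * fields["A"] + 0.01 * rng.standard_normal(8)
      print(estimate_source(meas, fields))    # expect ("A", ~2.5)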

  9. Monitoring of diesel engine combustions based on the acoustic source characterisation of the exhaust system

    NASA Astrophysics Data System (ADS)

    Jiang, J.; Gu, F.; Gennish, R.; Moore, D. J.; Harris, G.; Ball, A. D.

    2008-08-01

    Acoustic methods are among the most useful techniques for monitoring the condition of machines. However, the influence of background noise is a major issue in implementing such methods. This paper introduces an effective approach to monitoring diesel engine combustion based on acoustic one-port source theory and exhaust acoustic measurements. It has been found that the strength, in terms of pressure, of the engine acoustic source is able to provide a more accurate representation of the engine combustion because it is obtained by minimising the reflection effects in the exhaust system. A multi-load acoustic method was then developed to determine the pressure signal when a four-cylinder diesel engine was tested with faults in the fuel injector and exhaust valve. From the experimental results, it is shown that a two-load acoustic method is sufficient to permit the detection and diagnosis of abnormalities in the pressure signal caused by the faults. This provides a novel yet reliable method to achieve condition monitoring of diesel engines even if they operate in high-noise environments such as standby power stations and vessel chambers.

  10. Accuracy-preserving source term quadrature for third-order edge-based discretization

    NASA Astrophysics Data System (ADS)

    Nishikawa, Hiroaki; Liu, Yi

    2017-09-01

    In this paper, we derive a family of source term quadrature formulas for preserving third-order accuracy of the node-centered edge-based discretization for conservation laws with source terms on arbitrary simplex grids. A three-parameter family of source term quadrature formulas is derived, and as a subset, a one-parameter family of economical formulas is identified that does not require second derivatives of the source term. Among the economical formulas, a unique formula is then derived that does not require gradients of the source term at neighbor nodes, thus leading to a significantly smaller discretization stencil for source terms. All the formulas derived in this paper do not require a boundary closure, and therefore can be directly applied at boundary nodes. Numerical results are presented to demonstrate third-order accuracy at interior and boundary nodes for one-dimensional grids and linear triangular/tetrahedral grids over straight and curved geometries.

  11. Recent Advances in Laplace Transform Analytic Element Method (LT-AEM) Theory and Application to Transient Groundwater Flow

    NASA Astrophysics Data System (ADS)

    Kuhlman, K. L.; Neuman, S. P.

    2006-12-01

    Furman and Neuman (2003) proposed a Laplace Transform Analytic Element Method (LT-AEM) for transient groundwater flow. LT-AEM applies the traditionally steady-state AEM to the Laplace-transformed groundwater flow equation, and back-transforms the resulting solution to the time domain using a Fourier series numerical inverse Laplace transform method (de Hoog et al., 1982). We have extended the method so it can compute hydraulic head and flow velocity distributions due to any two-dimensional combination and arrangement of point, line, circular and elliptical area sinks and sources, nested circular or elliptical regions having different hydraulic properties, and areas of specified head, flux or initial condition. The strengths of all sinks and sources, and the specified head and flux values, can all vary in both space and time in an independent and arbitrary fashion. Initial conditions may vary from one area element to another. A solution is obtained by matching heads and normal fluxes along the boundary of each element. The effect which each element has on the total flow is expressed in terms of generalized Fourier series which converge rapidly (<20 terms) in most cases. As there are more matching points than unknown Fourier terms, the matching is accomplished in Laplace space using least-squares. The method is illustrated by calculating the resulting transient head and flow velocities due to an arrangement of elements in both finite and infinite domains. The 2D LT-AEM elements already developed and implemented are currently being extended to solve the 3D groundwater flow equation.
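
    The Fourier-series (de Hoog) inversion underlying LT-AEM can be exercised with mpmath's invertlaplace routine. The sketch below inverts the known Laplace-domain drawdown of a pumped well and checks it against the analytic Theis solution; the hydraulic parameter values are invented for the example:

      import mpmath as mp

      # Laplace-domain drawdown for the Theis problem:
      #   s_bar(p) = Q/(2*pi*T*p) * K0(r*sqrt(p*S/T))
      # Inverting with the de Hoog algorithm should recover the Theis solution.
      Q, T, S, r = 1.0e-3, 1.0e-3, 1.0e-4, 10.0   # rate, transmissivity, storativity, radius

      def s_bar(p):
          return Q / (2 * mp.pi * T * p) * mp.besselk(0, r * mp.sqrt(p * S / T))

      for t in (1.0e2, 1.0e3, 1.0e4):             # times in seconds
          s_num = mp.invertlaplace(s_bar, t, method='dehoog')
          u = r**2 * S / (4 * T * t)
          s_exact = Q / (4 * mp.pi * T) * mp.e1(u)  # analytic Theis drawdown
          print(t, float(s_num), float(s_exact))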

  12. Multiple Kernel Learning with Random Effects for Predicting Longitudinal Outcomes and Data Integration

    PubMed Central

    Chen, Tianle; Zeng, Donglin

    2015-01-01

    Summary Predicting disease risk and progression is one of the main goals in many clinical research studies. Cohort studies on the natural history and etiology of chronic diseases span years and data are collected at multiple visits. Although kernel-based statistical learning methods are proven to be powerful for a wide range of disease prediction problems, these methods are only well studied for independent data but not for longitudinal data. It is thus important to develop time-sensitive prediction rules that make use of the longitudinal nature of the data. In this paper, we develop a novel statistical learning method for longitudinal data by introducing subject-specific short-term and long-term latent effects through a designed kernel to account for within-subject correlation of longitudinal measurements. Since the presence of multiple sources of data is increasingly common, we embed our method in a multiple kernel learning framework and propose a regularized multiple kernel statistical learning with random effects to construct effective nonparametric prediction rules. Our method allows easy integration of various heterogeneous data sources and takes advantage of correlation among longitudinal measures to increase prediction power. We use different kernels for each data source taking advantage of the distinctive feature of each data modality, and then optimally combine data across modalities. We apply the developed methods to two large epidemiological studies, one on Huntington's disease and the other on Alzheimer's Disease (Alzheimer's Disease Neuroimaging Initiative, ADNI) where we explore a unique opportunity to combine imaging and genetic data to study prediction of mild cognitive impairment, and show a substantial gain in performance while accounting for the longitudinal aspect of the data. PMID:26177419

  13. Solution of Grad-Shafranov equation by the method of fundamental solutions

    NASA Astrophysics Data System (ADS)

    Nath, D.; Kalra, M. S.

    2014-06-01

    In this paper we have used the Method of Fundamental Solutions (MFS) to solve the Grad-Shafranov (GS) equation for the axisymmetric equilibria of tokamak plasmas with monomial sources. These monomials are the individual terms appearing on the right-hand side of the GS equation if one expands the nonlinear terms into polynomials. Unlike the Boundary Element Method (BEM), the MFS does not involve any singular integrals and is a meshless boundary-alone method. Its basic idea is to create a fictitious boundary around the actual physical boundary of the computational domain. This automatically removes the involvement of singular integrals. The results obtained by the MFS match well with the earlier results obtained using the BEM. The method is also applied to Solov'ev profiles and it is found that the results are in good agreement with analytical results.
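
    The MFS structure, fundamental solutions placed on a fictitious boundary outside the physical one plus least-squares collocation, fits in a short sketch. The example below solves the 2-D Laplace equation on the unit disk as a simpler stand-in for the Grad-Shafranov operator; the same meshless, boundary-alone idea applies:

      import numpy as np

      # 2-D Laplace on the unit disk via MFS: sources on a fictitious circle of
      # radius R > 1, coefficients fit by collocation on the physical boundary.
      n_src, n_col, R = 40, 80, 1.5
      ts = 2 * np.pi * np.arange(n_src) / n_src
      tc = 2 * np.pi * np.arange(n_col) / n_col
      src = R * np.column_stack((np.cos(ts), np.sin(ts)))   # fictitious boundary
      col = np.column_stack((np.cos(tc), np.sin(tc)))       # physical boundary

      def G(x, y):
          # Fundamental solution of the 2-D Laplacian (no singular integrals needed)
          return -np.log(np.linalg.norm(x - y)) / (2 * np.pi)

      A = np.array([[G(c, s) for s in src] for c in col])
      g = col[:, 0] * col[:, 1]                 # boundary data for u = x*y (harmonic)
      coef, *_ = np.linalg.lstsq(A, g, rcond=None)

      xq = np.array([0.3, 0.4])                 # interior query point
      u = sum(c * G(xq, s) for c, s in zip(coef, src))
      print(u)                                  # should be close to 0.3*0.4 = 0.12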

  14. INEEL Subregional Conceptual Model Report Volume 3: Summary of Existing Knowledge of Natural and Anthropogenic Influences on the Release of Contaminants to the Subsurface Environment from Waste Source Terms at the INEEL

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Paul L. Wichlacz

    2003-09-01

    This source-term summary document is intended to describe the current understanding of contaminant source terms and the conceptual model for potential source-term release to the environment at the Idaho National Engineering and Environmental Laboratory (INEEL), as presented in published INEEL reports. The document presents a generalized conceptual model of the sources of contamination and describes the general categories of source terms, primary waste forms, and factors that affect the release of contaminants from the waste form into the vadose zone and Snake River Plain Aquifer. Where the information has previously been published and is readily available, summaries of the inventorymore » of contaminants are also included. Uncertainties that affect the estimation of the source term release are also discussed where they have been identified by the Source Term Technical Advisory Group. Areas in which additional information are needed (i.e., research needs) are also identified.« less

  15. Real-time calibration-free C-scan images of the eye fundus using Master Slave swept source optical coherence tomography

    NASA Astrophysics Data System (ADS)

    Bradu, Adrian; Kapinchev, Konstantin; Barnes, Fred; Garway-Heath, David F.; Rajendram, Ranjan; Keane, Pearce; Podoleanu, Adrian G.

    2015-03-01

    Recently, we introduced a novel Optical Coherence Tomography (OCT) method, termed as Master Slave OCT (MS-OCT), specialized for delivering en-face images. This method uses principles of spectral domain interfereometry in two stages. MS-OCT operates like a time domain OCT, selecting only signals from a chosen depth only while scanning the laser beam across the eye. Time domain OCT allows real time production of an en-face image, although relatively slowly. As a major advance, the Master Slave method allows collection of signals from any number of depths, as required by the user. The tremendous advantage in terms of parallel provision of data from numerous depths could not be fully employed by using multi core processors only. The data processing required to generate images at multiple depths simultaneously is not achievable with commodity multicore processors only. We compare here the major improvement in processing and display, brought about by using graphic cards. We demonstrate images obtained with a swept source at 100 kHz (which determines an acquisition time [Ta] for a frame of 200×200 pixels2 of Ta =1.6 s). By the end of the acquired frame being scanned, using our computing capacity, 4 simultaneous en-face images could be created in T = 0.8 s. We demonstrate that by using graphic cards, 32 en-face images can be displayed in Td 0.3 s. Other faster swept source engines can be used with no difference in terms of Td. With 32 images (or more), volumes can be created for 3D display, using en-face images, as opposed to the current technology where volumes are created using cross section OCT images.

  16. Unstructured Adaptive Meshes: Bad for Your Memory?

    NASA Technical Reports Server (NTRS)

    Biswas, Rupak; Feng, Hui-Yu; VanderWijngaart, Rob

    2003-01-01

    This viewgraph presentation explores the need for a NASA Advanced Supercomputing (NAS) parallel benchmark for problems with irregular dynamical memory access. This benchmark is important and necessary because: 1) Problems with localized error source benefit from adaptive nonuniform meshes; 2) Certain machines perform poorly on such problems; 3) Parallel implementation may provide further performance improvement but is difficult. Some examples of problems which use irregular dynamical memory access include: 1) Heat transfer problem; 2) Heat source term; 3) Spectral element method; 4) Base functions; 5) Elemental discrete equations; 6) Global discrete equations. Nonconforming Mesh and Mortar Element Method are covered in greater detail in this presentation.

  17. Method for gathering and summarizing internet information

    DOEpatents

    Potok, Thomas E.; Elmore, Mark Thomas; Reed, Joel Wesley; Treadwell, Jim N.; Samatova, Nagiza Faridovna

    2010-04-06

    A computer method of gathering and summarizing large amounts of information comprises collecting information from a plurality of information sources (14, 51) according to respective maps (52) of the information sources (14), converting the collected information from a storage format to XML-language documents (26, 53) and storing the XML-language documents in a storage medium, searching for documents (55) according to a search query (13) having at least one term and identifying the documents (26) found in the search, and displaying the documents as nodes (33) of a tree structure (32) having links (34) and nodes (33) so as to indicate similarity of the documents to each other.
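
    The similarity computation behind the tree display can be illustrated with a small sketch; TF-IDF vectors and cosine similarity are assumed stand-ins here, since the patent text does not fix a particular representation or metric:

      from sklearn.feature_extraction.text import TfidfVectorizer
      from sklearn.metrics.pairwise import cosine_similarity

      # Three toy "documents"; in the patent's pipeline these would be the
      # XML-language documents returned by the search step.
      docs = [
          "neutron fluence source term calculation",
          "source term estimation for fusion reactors",
          "gathering and summarizing internet information",
      ]
      X = TfidfVectorizer().fit_transform(docs)
      S = cosine_similarity(X)                 # pairwise document similarity
      for i, row in enumerate(S):
          row[i] = 0.0                         # ignore self-similarity
          print(docs[i][:35], "-> most similar doc index:", int(row.argmax()))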

  18. Method for gathering and summarizing internet information

    DOEpatents

    Potok, Thomas E [Oak Ridge, TN; Elmore, Mark Thomas [Oak Ridge, TN; Reed, Joel Wesley [Knoxville, TN; Treadwell, Jim N [Louisville, TN; Samatova, Nagiza Faridovna [Oak Ridge, TN

    2008-01-01

    A computer method of gathering and summarizing large amounts of information comprises collecting information from a plurality of information sources (14, 51) according to respective maps (52) of the information sources (14), converting the collected information from a storage format to XML-language documents (26, 53) and storing the XML-language documents in a storage medium, searching for documents (55) according to a search query (13) having at least one term and identifying the documents (26) found in the search, and displaying the documents as nodes (33) of a tree structure (32) having links (34) and nodes (33) so as to indicate similarity of the documents to each other.

  19. An imaging-based photometric and colorimetric measurement method for characterizing OLED panels for lighting applications

    NASA Astrophysics Data System (ADS)

    Zhu, Yiting; Narendran, Nadarajah; Tan, Jianchuan; Mou, Xi

    2014-09-01

    The organic light-emitting diode (OLED) has demonstrated its novelty in displays and certain lighting applications. Similar to white light-emitting diode (LED) technology, it also holds the promise of saving energy. Even though the luminous efficacy values of OLED products have been steadily growing, their longevity is still not well understood. Furthermore, there is currently no industry standard for photometric and colorimetric testing of OLEDs, short or long term. Each OLED manufacturer tests its OLED panels under different electrical and thermal conditions using different measurement methods. In this study, an imaging-based photometric and colorimetric measurement method for OLED panels was investigated. Unlike an LED, which can be considered a point source, the OLED is a large-area source. Therefore, for an area source to satisfy lighting application needs, it is important that it maintains uniform light level and color properties across the emitting surface of the panel over a long period. This study intended to develop a measurement procedure that can be used to test long-term photometric and colorimetric properties of OLED panels. The objective was to better understand how test parameters such as drive current or luminance and temperature affect the degradation rate. In addition, this study investigated whether data interpolation could allow for determination of degradation and lifetime, L70, at application conditions based on the degradation rates measured at different operating conditions.
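
    As an illustration of the interpolation idea, one common approach (assumed here; the abstract does not prescribe a model) is to fit an exponential lumen-maintenance curve to the measured luminance and solve for the time at which it crosses 70%; the data points below are invented:

      import numpy as np
      from scipy.optimize import curve_fit

      # Invented lumen-maintenance data: hours of operation vs relative luminance
      t_hours = np.array([0.0, 500.0, 1000.0, 2000.0, 3000.0, 4000.0])
      lum_rel = np.array([1.00, 0.97, 0.945, 0.90, 0.86, 0.82])

      def decay(t, b0, alpha):
          # Exponential maintenance model (an assumed form)
          return b0 * np.exp(-alpha * t)

      (b0, alpha), _ = curve_fit(decay, t_hours, lum_rel, p0=(1.0, 1e-4))
      L70 = np.log(b0 / 0.70) / alpha          # time at which luminance hits 70%
      print("L70 ~ %.0f hours" % L70)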

  20. Atomic processes and equation of state of high Z plasmas for EUV sources and their effects on the spatial and temporal evolution of the plasmas

    NASA Astrophysics Data System (ADS)

    Sasaki, Akira; Sunahara, Atushi; Furukawa, Hiroyuki; Nishihara, Katsunobu; Nishikawa, Takeshi; Koike, Fumihiro

    2016-03-01

    Laser-produced plasma (LPP) extreme ultraviolet (EUV) light sources have been intensively investigated due to their potential application to next-generation semiconductor technology. Current studies focus on the atomic processes and hydrodynamics of plasmas to develop shorter-wavelength sources at λ = 6.x nm as well as to improve the conversion efficiency (CE) of λ = 13.5 nm sources. This paper examines the atomic processes of mid-Z elements, which are potential candidates for a λ = 6.x nm source using n=3-3 transitions. Furthermore, a method to calculate the hydrodynamics of the plasmas in terms of the initial interaction with a relatively weak prepulse laser is presented.

  1. [Comments on "A practical dictionary of Chinese medicine" by Wiseman].

    PubMed

    Lan, Feng-li

    2006-02-01

    At least 24 Chinese-English dictionaries of Chinese Medicine have been published in China during the recent 24 years (1984-2003). This thesis comments on "A Practical Dictionary of Chinese Medicine" by Wiseman, agreeing with its principles for establishing the English system of Chinese medical terminology, its sources, and its formation methods, and pointing out its defects. The author holds that study of the origin and development of TCM terms and standardization of Chinese medical terms in different layers, i.e., Chinese medical terms in classics, commonly used modern TCM terms, and terms in integrative medical texts, are prerequisites to the standardization of English translation of Chinese medical terms.

  2. Evaluation of Intercontinental Transport of Ozone Using Full-tagged, Tagged-N and Sensitivity Methods

    NASA Astrophysics Data System (ADS)

    Guo, Y.; Liu, J.; Mauzerall, D. L.; Emmons, L. K.; Horowitz, L. W.; Fan, S.; Li, X.; Tao, S.

    2014-12-01

    Long-range transport of ozone is of great concern, yet the source-receptor relationships derived previously depend strongly on the source attribution techniques used. Here we describe a new tagged ozone mechanism (full-tagged), the design of which seeks to take into account the combined effects of emissions of ozone precursors, CO, NOx and VOCs, from a particular source, while keeping the current state of chemical equilibrium unchanged. We label emissions from the target source (A) and background (B). When two species from A and B sources react with each other, half of the resulting products are labeled A, and half B. Thus the impact of a given source on downwind regions is recorded through tagged chemistry. We then incorporate this mechanism into the Model for Ozone and Related chemical Tracers (MOZART-4) to examine the impact of anthropogenic emissions within North America, Europe, East Asia and South Asia on ground-level ozone downwind of source regions during 1999-2000. We compare our results with two previously used methods -- the sensitivity and tagged-N approaches. The ozone attributed to a given source by the full-tagged method is more widely distributed spatially, but has weaker seasonal variability than that estimated by the other methods. On a seasonal basis, for most source/receptor pairs, the full-tagged method estimates the largest amount of tagged ozone, followed by the sensitivity and tagged-N methods. In terms of trans-Pacific influence of ozone pollution, the full-tagged method estimates the strongest impact of East Asian (EA) emissions on the western U.S. (WUS) in MAM and JJA (~3 ppbv), which is substantially different in magnitude and seasonality from tagged-N and sensitivity studies. This difference results from the full-tagged method accounting for the maintenance of peroxy radicals (e.g., CH3O2, CH3CO3, and HO2), in addition to NOy, as effective reservoirs of EA source impact across the Pacific, allowing for a significant contribution to ozone formation over WUS (particularly in summer). Thus, the full-tagged method, with its clear discrimination of source and background contributions on a per-reaction basis, provides unique insights into the critical role of VOCs (and additional reactive nitrogen species) in determining the nonlinear inter-continental influence of ozone pollution.

  3. Prediction of discretization error using the error transport equation

    NASA Astrophysics Data System (ADS)

    Celik, Ismail B.; Parsons, Don Roscoe

    2017-06-01

    This study focuses on an approach to quantify the discretization error associated with numerical solutions of partial differential equations by solving an error transport equation (ETE). The goal is to develop a method that can be used to adequately predict the discretization error using the numerical solution on only one grid/mesh. The primary problem associated with solving the ETE is the formulation of the error source term which is required for accurately predicting the transport of the error. In this study, a novel approach is considered which involves fitting the numerical solution with a series of locally smooth curves and then blending them together with a weighted spline approach. The result is a continuously differentiable analytic expression that can be used to determine the error source term. Once the source term has been developed, the ETE can easily be solved using the same solver that is used to obtain the original numerical solution. The new methodology is applied to the two-dimensional Navier-Stokes equations in the laminar flow regime. A simple unsteady flow case is also considered. The discretization error predictions based on the methodology presented in this study are in good agreement with the 'true error'. While in most cases the error predictions are not quite as accurate as those from Richardson extrapolation, the results are reasonable and only require one numerical grid. The current results indicate that there is much promise going forward with the newly developed error source term evaluation technique and the ETE.
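
    The central ingredient, turning a discrete solution into a differentiable fit whose PDE residual serves as the error source term, can be sketched as follows (a single smoothing spline stands in for the paper's blended local curves, and the 1-D operator is an illustrative choice):

      import numpy as np
      from scipy.interpolate import UnivariateSpline

      # 1-D steady advection-diffusion operator L(u) = a*u' - nu*u'' (illustrative)
      a, nu = 1.0, 0.1
      x = np.linspace(0.0, 1.0, 41)
      u_num = (np.exp(x / nu) - 1.0) / (np.exp(1.0 / nu) - 1.0)  # stand-in "numerical" solution

      fit = UnivariateSpline(x, u_num, k=4, s=1e-8)   # locally smooth, differentiable fit
      residual = a * fit.derivative(1)(x) - nu * fit.derivative(2)(x)

      # The ETE is then solved on the same grid with -residual as its source term,
      # transporting an estimate of the discretization error.
      print(residual[:5])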

  4. Mass transfer apparatus and method for separation of gases

    DOEpatents

    Blount, Gerald C.

    2015-10-13

    A process and apparatus for separating components of a source gas is provided in which more soluble components of the source gas are dissolved in an aqueous solvent at high pressure. The system can utilize hydrostatic pressure to increase solubility of the components of the source gas. The apparatus includes gas recycle throughout multiple mass transfer stages to improve mass transfer of the targeted components from the liquid to gas phase. Separated components can be recovered for use in a value added application or can be processed for long-term storage, for instance in an underwater reservoir.

  5. Mass transfer apparatus and method for separation of gases

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Blount, Gerald C.; Gorensek, Maximilian Boris; Hamm, Luther L.

    A process and apparatus for separating components of a source gas is provided in which more soluble components of the source gas are dissolved in an aqueous solvent at high pressure. The system can utilize hydrostatic pressure to increase solubility of the components of the source gas. The apparatus includes gas recycle throughout multiple mass transfer stages to improve mass transfer of the targeted components from the liquid to gas phase. Separated components can be recovered for use in a value added application or can be processed for long-term storage, for instance in an underwater reservoir.

  6. The Iterative Reweighted Mixed-Norm Estimate for Spatio-Temporal MEG/EEG Source Reconstruction.

    PubMed

    Strohmeier, Daniel; Bekhti, Yousra; Haueisen, Jens; Gramfort, Alexandre

    2016-10-01

    Source imaging based on magnetoencephalography (MEG) and electroencephalography (EEG) allows for the non-invasive analysis of brain activity with high temporal and good spatial resolution. As the bioelectromagnetic inverse problem is ill-posed, constraints are required. For the analysis of evoked brain activity, spatial sparsity of the neuronal activation is a common assumption. It is often taken into account using convex constraints based on the l1-norm. The resulting source estimates are however biased in amplitude and often suboptimal in terms of source selection due to high correlations in the forward model. In this work, we demonstrate that an inverse solver based on a block-separable penalty with a Frobenius norm per block and a l0.5-quasinorm over blocks addresses both of these issues. For solving the resulting non-convex optimization problem, we propose the iterative reweighted Mixed Norm Estimate (irMxNE), an optimization scheme based on iterative reweighted convex surrogate optimization problems, which are solved efficiently using a block coordinate descent scheme and an active set strategy. We compare the proposed sparse imaging method to the dSPM and the RAP-MUSIC approach based on two MEG data sets. We provide empirical evidence based on simulations and analysis of MEG data that the proposed method improves on the standard Mixed Norm Estimate (MxNE) in terms of amplitude bias, support recovery, and stability.
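
    A vector-case sketch of the reweighting idea (the paper's irMxNE applies it block-wise over source time courses, and an implementation is available in the MNE-Python package): each pass solves a convex weighted-l1 surrogate of the l0.5 penalty, with weights derived from the previous estimate:

      import numpy as np
      from sklearn.linear_model import Lasso

      rng = np.random.default_rng(0)
      G = rng.standard_normal((50, 200))           # stand-in gain/forward matrix
      x_true = np.zeros(200)
      x_true[[10, 90]] = (3.0, -2.0)               # two active "sources"
      m = G @ x_true + 0.05 * rng.standard_normal(50)

      w = np.ones(200)
      for _ in range(10):
          Gw = G / w                               # reweighting via column rescaling
          z = Lasso(alpha=0.05, fit_intercept=False, max_iter=5000).fit(Gw, m).coef_
          x = z / w
          w = 1.0 / (2.0 * np.sqrt(np.abs(x)) + 1e-8)   # surrogate weights for l0.5
      print(np.nonzero(x)[0])                      # support should concentrate on 10, 90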

  7. Source term evaluation for combustion modeling

    NASA Technical Reports Server (NTRS)

    Sussman, Myles A.

    1993-01-01

    A modification is developed for application to the source terms used in combustion modeling. The modification accounts for the error of the finite difference scheme in regions where chain-branching chemical reactions produce exponential growth of species densities. The modification is first applied to a one-dimensional scalar model problem. It is then generalized to multiple chemical species, and used in quasi-one-dimensional computations of shock-induced combustion in a channel. Grid refinement studies demonstrate the improved accuracy of the method using this modification. The algorithm is applied in two spatial dimensions and used in simulations of steady and unsteady shock-induced combustion. Comparisons with ballistic range experiments give confidence in the numerical technique and the 9-species hydrogen-air chemistry model.

  8. The solution of three-variable duct-flow equations

    NASA Technical Reports Server (NTRS)

    Stuart, A. R.; Hetherington, R.

    1974-01-01

    This paper establishes a numerical method for the solution of three-variable problems and is applied here to rotational flows through ducts of various cross sections. An iterative scheme is developed, the main feature of which is the addition of a duplicate variable to the forward component of velocity. Two forward components of velocity result from integrating two sets of first order ordinary differential equations for the streamline curvatures, in intersecting directions across the duct. Two pseudo-continuity equations are introduced with source/sink terms, whose strengths are dependent on the difference between the forward components of velocity. When convergence is obtained, the two forward components of velocity are identical, the source/sink terms are zero, and the original equations are satisfied. A computer program solves the exact equations and boundary conditions numerically. The method is economical and compares successfully with experiments on bent ducts of circular and rectangular cross section where secondary flows are caused by gradients of total pressure upstream.

  9. On the application of ENO scheme with subcell resolution to conservation laws with stiff source terms

    NASA Technical Reports Server (NTRS)

    Chang, Shih-Hung

    1991-01-01

    Two approaches are used to extend the essentially non-oscillatory (ENO) schemes to treat conservation laws with stiff source terms. One approach is the application of the Strang time-splitting method. Here the basic ENO scheme and the Harten modification using subcell resolution (SR), the ENO/SR scheme, are extended in this way. The other approach is a direct method, a modification of the ENO/SR. Here the technique of ENO reconstruction with subcell resolution is used to locate the discontinuity within a cell, and the time evolution is then accomplished by solving the differential equation along characteristics locally and advancing in the characteristic direction. This scheme is denoted ENO/SRCD (subcell resolution - characteristic direction). All the schemes are tested on the equation of LeVeque and Yee (NASA-TM-100075, 1988) modeling reacting flow problems. Numerical results show that these schemes handle this intriguing model problem very well, especially ENO/SRCD, which produces perfect resolution at the discontinuity.
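
    The time-splitting branch can be illustrated on the LeVeque-Yee model problem u_t + u_x = -mu*u*(u-1)*(u-1/2). The sketch below uses first-order upwind advection and a sub-cycled explicit source solve inside Strang splitting; the ENO/SR machinery itself is beyond a short example, and all grid parameters are illustrative:

      import numpy as np

      # Strang-split solve of u_t + u_x = psi(u), psi(u) = -mu*u*(u-1)*(u-0.5),
      # the LeVeque-Yee model problem; first-order upwind stands in for ENO.
      mu, N, cfl = 100.0, 200, 0.9
      dx = 1.0 / N
      dt = cfl * dx
      x = (np.arange(N) + 0.5) * dx
      u = np.where(x < 0.3, 1.0, 0.0)             # step initial data

      def source_half_step(u, tau, nsub=20):
          # Sub-cycled explicit solve of the stiff reaction ODE u' = psi(u)
          for _ in range(nsub):
              u = u - (tau / nsub) * mu * u * (u - 1.0) * (u - 0.5)
          return u

      for _ in range(50):
          u = source_half_step(u, 0.5 * dt)
          u[1:] -= (dt / dx) * (u[1:] - u[:-1])   # upwind advection, inflow u[0] fixed
          u = source_half_step(u, 0.5 * dt)

      # With stiff sources, under-resolved captured fronts can move at wrong
      # speeds; that is the pathology the ENO/SR and ENO/SRCD schemes address.
      print(u.max(), u.min())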

  10. Assessment Methods of Groundwater Overdraft Area and Its Application

    NASA Astrophysics Data System (ADS)

    Dong, Yanan; Xing, Liting; Zhang, Xinhui; Cao, Qianqian; Lan, Xiaoxun

    2018-05-01

    Groundwater is an important source of water, and long-term heavy demand has made it over-exploited. Over-exploitation causes many environmental and geological problems. This paper explores the concept of the over-exploitation area, summarizes the natural and social attributes of such areas, and expounds their evaluation methods, including single-factor evaluation, multi-factor system analysis, and numerical methods. At the same time, the different methods are compared and analyzed. Taking Northern Weifang as an example, the paper then demonstrates the practicality of the assessment methods.

  11. Effort-reward imbalance and its association with health among permanent and fixed-term workers

    PubMed Central

    2010-01-01

    Background In the past decade, the changing labor market seems to have rejected traditional standard employment and has begun to support a variety of non-standard forms of work in its place. The purpose of our study was to compare the degree of job stress, sources of job stress, and association of high job stress with health among permanent and fixed-term workers. Methods Our study subjects were 709 male workers aged 30 to 49 years in a suburb of Tokyo, Japan. In 2008, we conducted a cross-sectional study to compare job stress using an effort-reward imbalance (ERI) model questionnaire. Lifestyles, subjective symptoms, and body mass index were also observed from the 2008 health check-up data. Results The rate of high-risk job stress measured by the ERI questionnaire did not differ between permanent and fixed-term workers. However, the content of the ERI components differed. Permanent workers were distressed more by effort, overwork, or job demand, while fixed-term workers were distressed more by their job insecurity. Moreover, higher ERI was associated with the existence of subjective symptoms (OR = 2.07, 95% CI: 1.42-3.03) and obesity (OR = 2.84, 95% CI: 1.78-4.53) in fixed-term workers, while this tendency was not found in permanent workers. Conclusions Our study showed that workers with different employment types, permanent and fixed-term, have dissimilar sources of job stress even though their degree of job stress seems to be the same. High ERI was associated with existing subjective symptoms and obesity in fixed-term workers. Therefore, understanding different sources of job stress and their association with health among permanent and fixed-term workers should be considered to prevent further health problems. PMID:21054838

  12. Multigrid Method for Modeling Multi-Dimensional Combustion with Detailed Chemistry

    NASA Technical Reports Server (NTRS)

    Zheng, Xiaoqing; Liu, Chaoqun; Liao, Changming; Liu, Zhining; McCormick, Steve

    1996-01-01

    A highly accurate and efficient numerical method is developed for modeling 3-D reacting flows with detailed chemistry. A contravariant velocity-based governing system is developed for general curvilinear coordinates to maintain simplicity of the continuity equation and compactness of the discretization stencil. A fully-implicit backward Euler technique and a third-order monotone upwind-biased scheme on a staggered grid are used for the respective temporal and spatial terms. An efficient semi-coarsening multigrid method based on line-distributive relaxation is used as the flow solver. The species equations are solved in a fully coupled way and the chemical reaction source terms are treated implicitly. Example results are shown for a 3-D gas turbine combustor with strong swirling inflows.

  13. Inverse modelling-based reconstruction of the Chernobyl source term available for long-range transport

    NASA Astrophysics Data System (ADS)

    Davoine, X.; Bocquet, M.

    2007-03-01

    The reconstruction of the Chernobyl accident source term has been previously carried out using core inventories, but also back and forth confrontations between model simulations and activity concentration or deposited activity measurements. The approach presented in this paper is based on inverse modelling techniques. It relies both on the activity concentration measurements and on the adjoint of a chemistry-transport model. The location of the release is assumed to be known, and one is looking for a source term available for long-range transport that depends both on time and altitude. The method relies on the maximum entropy on the mean principle and exploits source positivity. The inversion results are mainly sensitive to two tuning parameters, a mass scale and the scale of the prior errors in the inversion. To overcome this hardship, we resort to the statistical L-curve method to estimate balanced values for these two parameters. Once this is done, many of the retrieved features of the source are robust within a reasonable range of parameter values. Our results favour the acknowledged three-step scenario, with a strong initial release (26 to 27 April), followed by a weak emission period of four days (28 April-1 May) and again a release, longer but less intense than the initial one (2 May-6 May). The retrieved quantities of iodine-131, caesium-134 and caesium-137 that have been released are in good agreement with the latest reported estimations. Yet, a stronger apportionment of the total released activity is ascribed to the first period and less to the third one. Finer chronological details are obtained, such as a sequence of eruptive episodes in the first two days, likely related to the modulation of the boundary layer diurnal cycle. In addition, the first two-day release surges are found to have effectively reached an altitude up to the top of the domain (5000 m).
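
    A toy illustration of the L-curve selection used to balance the two tuning scales (a Tikhonov-regularized least-squares stand-in, not the maximum-entropy inversion of the paper; matrices and noise levels are invented):

      import numpy as np

      rng = np.random.default_rng(1)
      U, _, Vt = np.linalg.svd(rng.standard_normal((80, 40)), full_matrices=False)
      H = U @ np.diag(0.8 ** np.arange(40)) @ Vt    # ill-conditioned source-receptor matrix
      sigma_true = np.maximum(0.0, np.sin(np.linspace(0, 3 * np.pi, 40)))
      y = H @ sigma_true + 0.01 * rng.standard_normal(80)

      lams = np.logspace(-8, 0, 60)
      rho, eta = [], []
      for lam in lams:
          sig = np.linalg.solve(H.T @ H + lam * np.eye(40), H.T @ y)
          rho.append(np.log(np.linalg.norm(H @ sig - y)))   # log residual norm
          eta.append(np.log(np.linalg.norm(sig)))           # log solution norm

      # The "corner" of the (log rho, log eta) curve marks a balanced lambda
      d1r, d1e = np.gradient(np.array(rho)), np.gradient(np.array(eta))
      d2r, d2e = np.gradient(d1r), np.gradient(d1e)
      kappa = (d1r * d2e - d2r * d1e) / (d1r**2 + d1e**2) ** 1.5
      print("corner lambda ~", lams[np.argmax(np.abs(kappa))])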

  14. Survey on the Performance of Source Localization Algorithms.

    PubMed

    Fresno, José Manuel; Robles, Guillermo; Martínez-Tarifa, Juan Manuel; Stewart, Brian G

    2017-11-18

    The localization of emitters using an array of sensors or antennas is a prevalent issue approached in several applications. There exist different techniques for source localization, which can be classified into multilateration, received signal strength (RSS) and proximity methods. The performance of multilateration techniques relies on measured time variables: the time of flight (ToF) of the emission from the emitter to the sensor, the time differences of arrival (TDoA) of the emission between sensors and the pseudo-time of flight (pToF) of the emission to the sensors. The multilateration algorithms presented and compared in this paper can be classified as iterative and non-iterative methods. Both standard least squares (SLS) and hyperbolic least squares (HLS) are iterative and based on the Newton-Raphson technique to solve the non-linear equation system. The metaheuristic technique particle swarm optimization (PSO) used for source localisation is also studied. This optimization technique estimates the source position as the optimum of an objective function based on HLS and is also iterative in nature. Three non-iterative algorithms, namely the hyperbolic positioning algorithms (HPA), the maximum likelihood estimator (MLE) and Bancroft algorithm, are also presented. A non-iterative combined algorithm, MLE-HLS, based on MLE and HLS, is further proposed in this paper. The performance of all algorithms is analysed and compared in terms of accuracy in the localization of the position of the emitter and in terms of computational time. The analysis is also undertaken with three different sensor layouts since the positions of the sensors affect the localization; several source positions are also evaluated to make the comparison more robust. The analysis is carried out using theoretical time differences, as well as including errors due to the effect of digital sampling of the time variables. It is shown that the most balanced algorithm, yielding better results than the other algorithms in terms of accuracy and short computational time, is the combined MLE-HLS algorithm.

  15. Survey on the Performance of Source Localization Algorithms

    PubMed Central

    2017-01-01

    The localization of emitters using an array of sensors or antennas is a prevalent issue approached in several applications. There exist different techniques for source localization, which can be classified into multilateration, received signal strength (RSS) and proximity methods. The performance of multilateration techniques relies on measured time variables: the time of flight (ToF) of the emission from the emitter to the sensor, the time differences of arrival (TDoA) of the emission between sensors and the pseudo-time of flight (pToF) of the emission to the sensors. The multilateration algorithms presented and compared in this paper can be classified as iterative and non-iterative methods. Both standard least squares (SLS) and hyperbolic least squares (HLS) are iterative and based on the Newton–Raphson technique to solve the non-linear equation system. The metaheuristic technique particle swarm optimization (PSO) used for source localisation is also studied. This optimization technique estimates the source position as the optimum of an objective function based on HLS and is also iterative in nature. Three non-iterative algorithms, namely the hyperbolic positioning algorithms (HPA), the maximum likelihood estimator (MLE) and Bancroft algorithm, are also presented. A non-iterative combined algorithm, MLE-HLS, based on MLE and HLS, is further proposed in this paper. The performance of all algorithms is analysed and compared in terms of accuracy in the localization of the position of the emitter and in terms of computational time. The analysis is also undertaken with three different sensor layouts since the positions of the sensors affect the localization; several source positions are also evaluated to make the comparison more robust. The analysis is carried out using theoretical time differences, as well as including errors due to the effect of digital sampling of the time variables. It is shown that the most balanced algorithm, yielding better results than the other algorithms in terms of accuracy and short computational time, is the combined MLE-HLS algorithm. PMID:29156565
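
    The hyperbolic least-squares formulation surveyed above can be sketched compactly. The residuals below encode the TDoA hyperbolae; scipy's generic least-squares solver stands in for a hand-rolled Newton-Raphson iteration, and the geometry and propagation speed are invented for the example:

      import numpy as np
      from scipy.optimize import least_squares

      c = 343.0                                  # propagation speed (m/s), acoustic example
      sensors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
      src_true = np.array([3.0, 7.0])

      toa = np.linalg.norm(sensors - src_true, axis=1) / c
      tdoa = toa[1:] - toa[0]                    # TDoA w.r.t. reference sensor 0

      def residuals(p):
          # Each residual is one hyperbolic constraint |p-s_i| - |p-s_0| = c*TDoA_i
          d = np.linalg.norm(sensors - p, axis=1)
          return (d[1:] - d[0]) / c - tdoa

      est = least_squares(residuals, x0=np.array([5.0, 5.0])).x
      print(est)                                 # should recover (3, 7)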

  16. Inferring the nature of anthropogenic threats from long-term abundance records.

    PubMed

    Shoemaker, Kevin T; Akçakaya, H Resit

    2015-02-01

    Diagnosing the processes that threaten species persistence is critical for recovery planning and risk forecasting. Dominant threats are typically inferred by experts on the basis of a patchwork of informal methods. Transparent, quantitative diagnostic tools would contribute much-needed consistency, objectivity, and rigor to the process of diagnosing anthropogenic threats. Long-term census records, available for an increasingly large and diverse set of taxa, may exhibit characteristic signatures of specific threatening processes and thereby provide information for threat diagnosis. We developed a flexible Bayesian framework for diagnosing threats on the basis of long-term census records and diverse ancillary sources of information. We tested this framework with simulated data from artificial populations subjected to varying degrees of exploitation and habitat loss and several real-world abundance time series for which threatening processes are relatively well understood: bluefin tuna (Thunnus maccoyii) and Atlantic cod (Gadus morhua) (exploitation) and Red Grouse (Lagopus lagopus scotica) and Eurasian Skylark (Alauda arvensis) (habitat loss). Our method correctly identified the process driving population decline for over 90% of time series simulated under moderate to severe threat scenarios. Successful identification of threats approached 100% for severe exploitation and habitat loss scenarios. Our method identified threats less successfully when threatening processes were weak and when populations were simultaneously affected by multiple threats. Our method selected the presumed true threat model for all real-world case studies, although results were somewhat ambiguous in the case of the Eurasian Skylark. In the latter case, incorporation of an ancillary source of information (records of land-use change) increased the weight assigned to the presumed true model from 70% to 92%, illustrating the value of the proposed framework in bringing diverse sources of information into a common rigorous framework. Ultimately, our framework may greatly assist conservation organizations in documenting threatening processes and planning species recovery. © 2014 Society for Conservation Biology.

  17. A continuous time random walk (CTRW) integro-differential equation with chemical interaction

    NASA Astrophysics Data System (ADS)

    Ben-Zvi, Rami; Nissan, Alon; Scher, Harvey; Berkowitz, Brian

    2018-01-01

    A nonlocal-in-time integro-differential equation is introduced that accounts for close coupling between transport and chemical reaction terms. The structure of the equation contains these terms in a single convolution with a memory function M(t), which includes the source of non-Fickian (anomalous) behavior, within the framework of a continuous time random walk (CTRW). The interaction is non-linear and second-order, relevant for a bimolecular reaction A + B → C. The interaction term ΓP_A(s,t)P_B(s,t) is symmetric in the concentrations of A and B (i.e., P_A and P_B); thus the source terms in the equations for A, B and C are similar, but with a change in sign for that of C. Here, the chemical rate coefficient, Γ, is constant. The fully coupled equations are solved numerically using a finite element method (FEM) with a judicious representation of M(t) that eschews the need for the entire time history, instead using only values at the former time step. To begin to validate the equations, the FEM solution is compared, in lieu of experimental data, to a particle tracking method (CTRW-PT); the results from the two approaches, particularly for the C profiles, are in agreement. The FEM solution, for a range of initial and boundary conditions, can provide a good model for reactive transport in disordered media.
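
    A one-step-memory update of the convolution can be illustrated under the (assumed, purely illustrative) choice of an exponential kernel, for which the convolution integral obeys an exact recursion and the full time history never needs to be stored:

      import numpy as np

      # Exponential kernel M(t) = exp(-t/tau)/tau: the convolution
      # I(t) = int_0^t M(t-s) f(s) ds then satisfies a one-step recursion.
      tau, dt, nt = 0.5, 0.01, 400
      t = np.arange(nt) * dt
      f = np.sin(2.0 * np.pi * t)              # stand-in for the coupled reaction term

      I = np.zeros(nt)
      decay = np.exp(-dt / tau)
      for n in range(1, nt):
          # Trapezoidal update needing only the previous value of I
          I[n] = decay * I[n - 1] + 0.5 * dt * (decay * f[n - 1] + f[n]) / tau

      # Verify against the full convolution at the final time
      M = np.exp(-t / tau) / tau
      I_full = np.trapz(M[::-1] * f, dx=dt)
      print(I[-1], I_full)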

  18. Hybrid BEM/empirical approach for scattering of correlated sources in rocket noise prediction

    NASA Astrophysics Data System (ADS)

    Barbarino, Mattia; Adamo, Francesco P.; Bianco, Davide; Bartoccini, Daniele

    2017-09-01

    Empirical models such as the Eldred standard model are commonly used for rocket noise prediction. Such models directly provide a definition of the Sound Pressure Level through the quadratic pressure term of uncorrelated sources. In this paper, an improvement of the Eldred standard model is formulated. This new formulation contains an explicit expression for the acoustic pressure of each noise source, in terms of amplitude and phase, in order to investigate source correlation effects and to propagate them through a wave equation. In particular, the correlation effects between adjacent and non-adjacent sources have been modeled and analyzed. The noise prediction obtained with the revised Eldred-based model has then been used to formulate an empirical/BEM (Boundary Element Method) hybrid approach that allows an evaluation of scattering effects. In the framework of the European Space Agency funded program VECEP (VEga Consolidation and Evolution Programme), these models have been applied to the prediction of the aeroacoustic loads of the VEGA (Vettore Europeo di Generazione Avanzata - Advanced Generation European Carrier Rocket) launch vehicle at lift-off, and the results have been compared with experimental data.
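
    The difference between the classical uncorrelated sum and a correlated formulation can be shown in a few lines. With explicit amplitudes and phases, the mean-square pressure picks up cross terms that a quadratic, uncorrelated sum ignores (the values below are invented):

      import numpy as np

      amps = np.array([1.0, 0.8, 0.6])     # source pressure amplitudes (Pa), invented
      phases = np.array([0.0, 0.4, 1.1])   # relative phases at the receiver (rad)

      p2_uncorr = 0.5 * np.sum(amps**2)                              # classical quadratic sum
      p2_corr = 0.5 * np.abs(np.sum(amps * np.exp(1j * phases)))**2  # coherent sum, cross terms included

      p_ref = 20e-6                        # reference pressure, Pa
      for tag, p2 in (("uncorrelated", p2_uncorr), ("correlated", p2_corr)):
          print(tag, 10.0 * np.log10(p2 / p_ref**2), "dB SPL")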

  19. A novel integrated approach for the hazardous radioactive dust source terms estimation in future nuclear fusion power plants.

    PubMed

    Poggi, L A; Malizia, A; Ciparisse, J F; Gaudio, P

    2016-10-01

    An open issue still under investigation by several international entities working in the safety and security field for the foreseen nuclear fusion reactors is the estimation of source terms that are a hazard for operators and the public, and for the machine itself in terms of efficiency and integrity in case of severe accident scenarios. Source term estimation is a crucial safety issue to be addressed in future reactors' safety assessments, and the estimates available to date are not sufficiently satisfactory. The lack of neutronic data, along with the insufficiently accurate methodologies used until now, calls for an integrated methodology for source term estimation that can provide predictions with adequate accuracy. This work proposes a complete methodology to estimate dust source terms, starting from broad information gathering. The wide number of parameters that can influence dust source term production is reduced with statistical tools, using a combination of screening, sensitivity analysis, and uncertainty analysis. Finally, a preliminary and simplified methodology for predicting dust source term production in future devices is presented.

  20. Long-term variability in sugarcane bagasse feedstock compositional methods: Sources and magnitude of analytical variability

    DOE PAGES

    Templeton, David W.; Sluiter, Justin B.; Sluiter, Amie; ...

    2016-10-18

    In an effort to find economical, carbon-neutral transportation fuels, biomass feedstock compositional analysis methods are used to monitor, compare, and improve biofuel conversion processes. These methods are empirical, and the analytical variability seen in the feedstock compositional data propagates into variability in the conversion yields, component balances, mass balances, and ultimately the minimum ethanol selling price (MESP). We report the average composition and standard deviations of 119 individually extracted National Institute of Standards and Technology (NIST) bagasse [Reference Material (RM) 8491] run by seven analysts over 7 years. Two additional datasets, using bulk-extracted bagasse (containing 58 and 291 replicates each), were examined to separate out the effects of batch, analyst, sugar recovery standard calculation method, and extractions from the total analytical variability seen in the individually extracted dataset. We believe this is the world's largest NIST bagasse compositional analysis dataset and it provides unique insight into the long-term analytical variability. Understanding the long-term variability of the feedstock analysis will help determine the minimum difference that can be detected in yield, mass balance, and efficiency calculations. The long-term data show consistent bagasse component values through time and by different analysts. This suggests that the standard compositional analysis methods were performed consistently and that the bagasse RM itself remained unchanged during this time period. The long-term variability seen here is generally higher than short-term variabilities. It is worth noting that the effect of short-term or long-term feedstock compositional variability on MESP is small, about $0.03 per gallon. The long-term analysis variabilities reported here are plausible minimum values for these methods, though not necessarily average or expected variabilities. We must emphasize the importance of training and good analytical procedures needed to generate this data. As a result, when combined with a robust QA/QC oversight protocol, these empirical methods can be relied upon to generate high-quality data over a long period of time.

  2. Optical remote sensing to quantify fugitive particulate mass emissions from stationary short-term and mobile continuous sources: part II. Field applications.

    PubMed

    Du, Ke; Yuen, Wangki; Wang, Wei; Rood, Mark J; Varma, Ravi M; Hashmonay, Ram A; Kim, Byung J; Kemme, Michael R

    2011-01-15

    Quantification of emissions of fugitive particulate matter (PM) into the atmosphere from military training operations is of interest to the United States Department of Defense. A new range-resolved optical remote sensing (ORS) method was developed to quantify fugitive PM emissions from puff sources (i.e., artillery back blasts), ground-level mobile sources (i.e., movement of tracked vehicles), and elevated mobile sources (i.e., airborne helicopters) in desert areas that are prone to generating fugitive dust plumes. Real-time, in situ mass concentration profiles for PM mass with particle diameters <10 μm (PM(10)) and <2.5 μm (PM(2.5)) were obtained across the dust plumes that were generated by these activities with this new method. Back blasts caused during artillery firing were characterized as a stationary short-term puff source whose plumes typically dispersed to <10 m above the ground with durations of 10-30 s. Fugitive PM emissions caused by artillery back blasts were related to the zone charge and ranged from 51 to 463 g PM/firing for PM(10) and 9 to 176 g PM/firing for PM(2.5). Movement of tracked vehicles and flying helicopters was characterized as mobile continuous sources whose plumes typically dispersed 30-50 m above the ground with durations of 100-200 s. Fugitive PM emissions caused by moving tracked vehicles ranged from 8.3 to 72.5 kg PM/km for PM(10) and 1.1 to 17.2 kg PM/km for PM(2.5), and there was no obvious correlation between PM emission and vehicle speed. The emission factor for the helicopter flying at 3 m above the ground ranged from 14.5 to 114.1 kg PM/km for PM(10) and 5.0 to 39.5 kg PM/km for PM(2.5), depending on the velocity of the helicopter and the type of soil it flew over. Fugitive PM emissions by an airborne helicopter were correlated with helicopter speed for a particular soil type. The results from this range-resolved ORS method were also compared with the data obtained with another path-integrated ORS method and a Flux Tower method.
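
    As a rough illustration of how an emission factor follows from range-resolved concentration measurements, the sketch below integrates concentration times wind speed over a plume cross-section. All numbers (grid, wind speed, plume shape, event duration) are invented for illustration and do not come from the study.

```python
import numpy as np

# Hypothetical PM10 concentration slice across a dust plume (mg/m^3):
# rows = heights above ground, cols = horizontal positions.
z = np.linspace(0.0, 40.0, 9)                  # m
y = np.linspace(-25.0, 25.0, 11)               # m
conc = np.exp(-((z[:, None] - 15.0) / 10.0)**2 - (y[None, :] / 15.0)**2)

wind = 3.0                                     # m/s, assumed normal to the slice
dz, dy = z[1] - z[0], y[1] - y[0]

# Mass flux through the slice: integrate C * u over the cross-section.
flux = wind * conc.sum() * dy * dz             # (mg/m^3)(m/s)(m^2) = mg/s

# Puff source: multiply by the event duration to get grams per firing.
duration = 20.0                                # s, assumed back-blast duration
print("emission per event: %.1f g PM10" % (flux * duration / 1000.0))

# Mobile source: dividing the along-track emission rate by vehicle speed
# instead expresses the result as mass emitted per km traveled.
```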

  3. A comprehensive classification method for VOC emission sources to tackle air pollution based on VOC species reactivity and emission amounts.

    PubMed

    Li, Guohao; Wei, Wei; Shao, Xia; Nie, Lei; Wang, Hailin; Yan, Xiao; Zhang, Rui

    2018-05-01

    In China, volatile organic compound (VOC) control directives have been continuously released and implemented for important sources and regions to tackle air pollution. The corresponding control requirements were based on VOC emission amounts (EA), but never considered the significant differentiation of VOC species in terms of atmospheric chemical reactivity. This will adversely influence the effect of VOC reduction on air quality improvement. Therefore, this study attempted to develop a comprehensive classification method for typical VOC sources in the Beijing-Tianjin-Hebei region (BTH), by combining the VOC emission amounts with the chemical reactivities of VOC species. Firstly, we obtained the VOC chemical profiles by measuring 5 key sources in the BTH region and referencing another 10 key sources, and estimated the ozone formation potential (OFP) per ton VOC emission for these sources by using the maximum incremental reactivity (MIR) index as the characteristic of source reactivity (SR). Then, we applied the data normalization method to respectively convert EA and SR to normalized EA (NEA) and normalized SR (NSR) for various sources in the BTH region. Finally, the control index (CI) was calculated, and these sources were further classified into four grades based on the normalized CI (NCI). The study results showed that in the BTH region, furniture coating, automobile coating, and road vehicles are characterized by high NCI and need to be given more attention; however, the petro-chemical industry, which was designated as an important control source by air quality managers, has a lower NCI. Copyright © 2017. Published by Elsevier B.V.
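
    A minimal sketch of the normalization and grading logic is given below. The source list, emission amounts, and OFP-per-ton values are invented, and the combination of NEA and NSR into the control index is assumed here to be a simple average; the paper's exact formula may differ.

```python
import numpy as np

sources = ["furniture coating", "automobile coating", "road vehicles",
           "petrochemical industry", "printing"]
ea = np.array([12.0, 10.5, 30.0, 25.0, 6.0])        # hypothetical emissions, kt/yr
ofp_per_ton = np.array([3.8, 3.5, 4.2, 1.6, 2.9])   # hypothetical MIR-weighted OFP

def min_max(v):
    # Data normalization to [0, 1], as used to form NEA and NSR.
    return (v - v.min()) / (v.max() - v.min())

nea, nsr = min_max(ea), min_max(ofp_per_ton)
ci = 0.5 * (nea + nsr)            # assumed combination into a control index
nci = min_max(ci)

# Classify sources into four grades by NCI thresholds.
grades = np.digitize(nci, [0.25, 0.5, 0.75]) + 1    # 1 (low) .. 4 (high priority)
for s, n, g in sorted(zip(sources, nci, grades), key=lambda t: -t[1]):
    print(f"{s:24s} NCI={n:.2f} grade {g}")
```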

  4. REVIEW OF VOLATILE ORGANIC COMPOUND SOURCE APPORTIONMENT BY CHEMICAL MASS BALANCE. (R826237)

    EPA Science Inventory

    The chemical mass balance (CMB) receptor model has apportioned volatile organic compounds (VOCs) in more than 20 urban areas, mostly in the United States. These applications differ in terms of the total fraction apportioned, the calculation method, the chemical compounds used ...
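
    At its core, the CMB receptor model is a constrained least-squares fit of ambient concentrations to a linear combination of source profiles. A minimal sketch with invented numbers, using non-negative least squares in place of the effective-variance solution used in practice:

```python
import numpy as np
from scipy.optimize import nnls

# Columns: source profiles (mass fraction of each VOC species per source);
# rows: VOC species. All values are invented for illustration.
F = np.array([[0.30, 0.05, 0.10],
              [0.20, 0.40, 0.05],
              [0.05, 0.30, 0.25],
              [0.10, 0.05, 0.40]])
c_ambient = np.array([4.1, 5.0, 3.2, 2.9])   # measured species concentrations

# Solve c = F s for non-negative source contributions s.
s, residual = nnls(F, c_ambient)
print("source contributions:", np.round(s, 2), "residual:", round(residual, 3))
print("fraction of mass apportioned:", round(F.dot(s).sum() / c_ambient.sum(), 2))
```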

  5. Discrimination of particulate matter emission sources using stochastic methods

    NASA Astrophysics Data System (ADS)

    Szczurek, Andrzej; Maciejewska, Monika; Wyłomańska, Agnieszka; Sikora, Grzegorz; Balcerek, Michał; Teuerle, Marek

    2016-12-01

    Particulate matter (PM) is one of the criteria pollutants that has been determined to be harmful to public health and the environment. For this reason, the ability to recognize its emission sources is very important. There are a number of measurement methods that allow PM to be characterized in terms of concentration, particle size distribution, and chemical composition. All of this information is useful for establishing a link between the dust found in the air, its emission sources, and its influence on humans as well as the environment. However, these methods are typically quite sophisticated and not applicable outside laboratories. In this work, we considered a PM emission source discrimination method based on continuous measurements of PM concentration with a relatively cheap instrument and stochastic analysis of the obtained data. The stochastic analysis focuses on the temporal variation of PM concentration and involves two steps: (1) recognition of the category of distribution for the data, i.e. stable or in the domain of attraction of a stable distribution, and (2) finding the best-matching distribution out of the Gaussian, stable, and normal-inverse Gaussian (NIG) distributions. We examined six PM emission sources. They were associated with material processing in an industrial environment, namely machining and welding of aluminum, forged carbon steel, and plastic with various tools. As shown by the obtained results, PM emission sources may be distinguished based on the statistical distribution of PM concentration variations. The major factor responsible for the differences detectable with our method was the type of material processing and the tool applied. When different materials were processed with the same tool, distinguishing the emission sources was difficult. For successful discrimination it was crucial to consider size-segregated mass fraction concentrations. In our opinion the presented approach is very promising. It deserves further study and development.
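
    The second step, finding the best-matching distribution, can be sketched as a maximum-likelihood fit followed by an information-criterion comparison. The data below are synthetic stand-ins for PM concentration variations, and the stable-law candidate of the study is omitted for brevity:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Synthetic stand-in for temporal variations (increments) of PM concentration.
increments = stats.norminvgauss(a=2.0, b=0.5).rvs(size=2000, random_state=rng)

candidates = {"gaussian": stats.norm, "nig": stats.norminvgauss}
aic = {}
for name, dist in candidates.items():
    params = dist.fit(increments)                   # maximum-likelihood fit
    loglik = dist(*params).logpdf(increments).sum()
    aic[name] = 2 * len(params) - 2 * loglik        # Akaike criterion

best = min(aic, key=aic.get)
print({k: round(v, 1) for k, v in aic.items()}, "-> best match:", best)
```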

  6. Application of an improved spectral decomposition method to examine earthquake source scaling in Southern California

    NASA Astrophysics Data System (ADS)

    Trugman, Daniel T.; Shearer, Peter M.

    2017-04-01

    Earthquake source spectra contain fundamental information about the dynamics of earthquake rupture. However, the inherent tradeoffs in separating source and path effects, when combined with limitations in recorded signal bandwidth, make it challenging to obtain reliable source spectral estimates for large earthquake data sets. We present here a stable and statistically robust spectral decomposition method that iteratively partitions the observed waveform spectra into source, receiver, and path terms. Unlike previous methods of its kind, our new approach provides formal uncertainty estimates and does not assume self-similar scaling in earthquake source properties. Its computational efficiency allows us to examine large data sets (tens of thousands of earthquakes) that would be impractical to analyze using standard empirical Green's function-based approaches. We apply the spectral decomposition technique to P wave spectra from five areas of active contemporary seismicity in Southern California: the Yuha Desert, the San Jacinto Fault, and the Big Bear, Landers, and Hector Mine regions of the Mojave Desert. We show that the source spectra are generally consistent with an increase in median Brune-type stress drop with seismic moment but that this observed deviation from self-similar scaling is both model dependent and varies in strength from region to region. We also present evidence for significant variations in median stress drop and stress drop variability on regional and local length scales. These results both contribute to our current understanding of earthquake source physics and have practical implications for the next generation of ground motion prediction assessments.
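
    The core of such a decomposition is a separable model of the log spectral amplitudes, log A_ij = s_i + r_j + p(d_ij), estimated by iteratively re-partitioning residuals. A toy single-frequency version on synthetic data is sketched below; the published method additionally bins path terms by distance, propagates uncertainties, and allows non-self-similar source scaling:

```python
import numpy as np

rng = np.random.default_rng(0)
n_eq, n_sta = 50, 12
true_src = rng.normal(0, 1, n_eq)          # per-event source terms
true_rec = rng.normal(0, 0.3, n_sta)       # per-station site terms

# Observed log amplitudes for every event-station pair, plus noise.
logA = true_src[:, None] + true_rec[None, :] + rng.normal(0, 0.1, (n_eq, n_sta))

src = np.zeros(n_eq)
rec = np.zeros(n_sta)
for _ in range(20):                        # alternating (iterative) partition
    src = (logA - rec[None, :]).mean(axis=1)
    rec = (logA - src[:, None]).mean(axis=0)
    rec -= rec.mean()                      # resolve the additive trade-off
    # between source and site terms by forcing site terms to zero mean

print("site-term recovery error:",
      round(np.abs(rec - (true_rec - true_rec.mean())).max(), 3))
```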

  7. Numerical models analysis of energy conversion process in air-breathing laser propulsion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hong Yanji; Song Junling; Cui Cunyan

    In this paper, the energy source was considered a key element in describing the energy conversion process in air-breathing laser propulsion. Some secondary factors were ignored when three independent modules (a ray transmission module, an energy source term module, and a fluid dynamic module) were established from the simultaneous laser radiation transport equation and fluid mechanics equations. The incident laser beam was simulated based on a ray-tracing method. The calculated results were in good agreement with those of theoretical analysis and experiments.

  8. Parameter Measurement Methods for Interfacing Hydraulic Systems with Microelectronic Instruments and Controllers.

    DTIC Science & Technology

    1983-11-01

    ...successfully. ...in terms of initial signal power. An active sensor must be excited externally. Such a sensor receives its power from an external source and merely modulates... electrons in the material to gain enough energy to be emitted. The voltage source causes a positive potential to be felt on the collector, thus causing the...

  9. Integrating Information in Biological Ontologies and Molecular Networks to Infer Novel Terms

    PubMed Central

    Li, Le; Yip, Kevin Y.

    2016-01-01

    Currently most terms and term-term relationships in Gene Ontology (GO) are defined manually, which creates cost, consistency and completeness issues. Recent studies have demonstrated the feasibility of inferring GO automatically from biological networks, which represents an important complementary approach to GO construction. These methods (NeXO and CliXO) are unsupervised, which means 1) they cannot use the information contained in existing GO, 2) the way they integrate biological networks may not optimize the accuracy, and 3) they are not customized to infer the three different sub-ontologies of GO. Here we present a semi-supervised method called Unicorn that extends these previous methods to tackle the three problems. Unicorn uses a sub-tree of an existing GO sub-ontology as a training set to learn parameters for integrating multiple networks. Cross-validation results show that Unicorn reliably inferred the left-out parts of each specific GO sub-ontology. In addition, by training Unicorn with an old version of GO together with biological networks, it successfully re-discovered some terms and term-term relationships present only in a newer version of GO. Unicorn also successfully inferred some novel terms that were not contained in GO but have biological meanings well supported by the literature. Availability: Source code of Unicorn is available at http://yiplab.cse.cuhk.edu.hk/unicorn/. PMID:27976738

  10. Spatial sound field synthesis and upmixing based on the equivalent source method.

    PubMed

    Bai, Mingsian R; Hsu, Hoshen; Wen, Jheng-Ciang

    2014-01-01

    Given a scarce number of recorded signals, spatial sound field synthesis with an extended sweet spot is a challenging problem in acoustic array signal processing. To address the problem, a synthesis and upmixing approach inspired by the equivalent source method (ESM) is proposed. The synthesis procedure is based on the pressure signals recorded by a microphone array and requires no source model. The array geometry can also be arbitrary. Four upmixing strategies are adopted to enhance the resolution of the reproduced sound field when there are more loudspeaker channels than microphones. Multi-channel inverse filtering with regularization is exploited to deal with the ill-posedness in the reconstruction process. The distance between the microphone and loudspeaker arrays is optimized to achieve the best synthesis quality. To validate the proposed system, numerical simulations and subjective listening experiments are performed. The results demonstrated that all upmixing methods improved the quality of the reproduced target sound field over the original reproduction. In particular, the underdetermined ESM interpolation method yielded the best spatial sound field synthesis in terms of reproduction error, timbral quality, and spatial quality.
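
    The regularized multichannel inversion at the heart of such a system has a closed form at each frequency: q = G^H (G G^H + λI)^(-1) p in the underdetermined case. A sketch with an invented free-field transfer matrix; real use would employ measured or modeled transfer functions and a tuned regularization parameter:

```python
import numpy as np

rng = np.random.default_rng(3)
n_mics, n_sources = 8, 16                 # underdetermined, as in upmixing

# Hypothetical transfer matrix at one frequency (free-field monopoles).
k = 2 * np.pi * 1000 / 343.0              # wavenumber at 1 kHz
dist = rng.uniform(0.5, 2.0, (n_mics, n_sources))
G = np.exp(-1j * k * dist) / (4 * np.pi * dist)

# "Recorded" pressures produced by random complex source amplitudes.
p = G @ (rng.normal(size=n_sources) + 1j * rng.normal(size=n_sources))

# Tikhonov-regularized minimum-norm solution of p = G q.
lam = 1e-3 * np.linalg.norm(G, 2) ** 2    # ad hoc regularization choice
q = G.conj().T @ np.linalg.solve(G @ G.conj().T + lam * np.eye(n_mics), p)

print("reproduction error:",
      round(np.linalg.norm(G @ q - p) / np.linalg.norm(p), 4))
```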

  11. International Conference on Numerical Ship Hydrodynamics (5th) Held in Hiroshima, Japan on 24-28 September 1989

    DTIC Science & Technology

    1989-09-28

    Introduction source. The near field part N has an integrand which is in terms of the higher order derived exponential integral func- For a number of...Methods for potential produced improved results near the flow calculations including first and stern, but none of them could accura- higher order theories ...method Naghdi method applied to the nonlinear free- in laminar boundary layer theory . I think the surface flow problems. higher theory Green-Naghdi

  12. Analysis of drift correction in different simulated weighing schemes

    NASA Astrophysics Data System (ADS)

    Beatrici, A.; Rebelo, A.; Quintão, D.; Cacais, F. L.; Loayza, V. M.

    2015-10-01

    In the calibration of high-accuracy mass standards, weighing schemes are used to reduce or eliminate zero-drift effects in mass comparators. There are different sources of the drift and different methods for its treatment. Using numerical methods, drift functions were simulated and a random term was included in each function. A comparison between the results obtained from ABABAB and ABBA weighing series was carried out. The results show the better efficacy of the ABABAB method for drift with smooth variation and small randomness.
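
    The drift cancellation being compared is easy to reproduce numerically. In the sketch below, both schemes use reading combinations that cancel a purely linear drift exactly, so the comparison isolates how much of a smooth nonlinear (here quadratic) drift and of the random term survives in each scheme. All drift and noise parameters are invented:

```python
import numpy as np

def drift(t):
    # Smooth nonlinear zero drift of the comparator (arbitrary units).
    return 0.004 * t**2 + 0.02 * t

rng = np.random.default_rng(7)
a, b = 100.0002, 100.0000      # "true" readings of standards A and B
sigma = 1e-4                   # random noise of a single reading

def reading(which, t):
    return (a if which == "A" else b) + drift(t) + rng.normal(0, sigma)

def estimate(scheme):
    r = [reading(w, t) for t, w in enumerate(scheme)]
    if scheme == "ABBA":                  # (A1 - B1 - B2 + A2) / 2
        return (r[0] - r[1] - r[2] + r[3]) / 2
    if scheme == "ABABAB":                # mean of drift-free A-B-A triplets
        d1 = (r[0] + r[2]) / 2 - r[1]
        d2 = (r[2] + r[4]) / 2 - r[3]
        return (d1 + d2) / 2

trials = [(estimate("ABBA"), estimate("ABABAB")) for _ in range(2000)]
err = np.abs(np.array(trials) - (a - b))
print("mean |error|  ABBA: %.2e  ABABAB: %.2e" % tuple(err.mean(axis=0)))
```

    With the quadratic drift above, the ABBA combination retains twice the curvature-induced bias of the ABABAB triplet average, which is consistent with the conclusion quoted in the abstract.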

  13. A goal-based angular adaptivity method for thermal radiation modelling in non grey media

    NASA Astrophysics Data System (ADS)

    Soucasse, Laurent; Dargaville, Steven; Buchan, Andrew G.; Pain, Christopher C.

    2017-10-01

    This paper investigates for the first time a goal-based angular adaptivity method for thermal radiation transport, suitable for non grey media when the radiation field is coupled with an unsteady flow field through an energy balance. Anisotropic angular adaptivity is achieved by using a Haar wavelet finite element expansion that forms a hierarchical angular basis with compact support and does not require any angular interpolation in space. The novelty of this work lies in (1) the definition of a target functional to compute the goal-based error measure equal to the radiative source term of the energy balance, which is the quantity of interest in the context of coupled flow-radiation calculations; (2) the use of different optimal angular resolutions for each absorption coefficient class, built from a global model of the radiative properties of the medium. The accuracy and efficiency of the goal-based angular adaptivity method is assessed in a coupled flow-radiation problem relevant for air pollution modelling in street canyons. Compared to a uniform Haar wavelet expansion, the adapted resolution uses 5 times fewer angular basis functions and is 6.5 times quicker, given the same accuracy in the radiative source term.

  14. Finite-element solutions for geothermal systems

    NASA Technical Reports Server (NTRS)

    Chen, J. C.; Conel, J. E.

    1977-01-01

    Vector potential and scalar potential are used to formulate the governing equations for a single-component and single-phase geothermal system. By assuming an initial temperature field, the fluid velocity can be determined which, in turn, is used to calculate the convective heat transfer. The energy equation is then solved by considering convected heat as a distributed source. Using the resulting temperature to compute new source terms, the final results are obtained by iterations of the procedure. Finite-element methods are proposed for modeling of realistic geothermal systems; the advantages of such methods are discussed. The developed methodology is then applied to a sample problem. Favorable agreement is obtained by comparisons with a previous study.

  15. Medical Subject Headings (MeSH) for indexing and retrieving open-source healthcare data.

    PubMed

    Marc, David T; Khairat, Saif S

    2014-01-01

    The US federal government initiated the Open Government Directive where federal agencies are required to publish high value datasets so that they are available to the public. Data.gov and the community site Healthdata.gov were initiated to disperse such datasets. However, data searches and retrieval for these sites are keyword driven and severely limited in performance. The purpose of this paper is to address the issue of extracting relevant open-source data by proposing a method of adopting the MeSH framework for indexing and data retrieval. A pilot study was conducted to compare the performance of traditional keywords to MeSH terms for retrieving relevant open-source datasets related to "mortality". The MeSH framework resulted in greater sensitivity with comparable specificity to the keyword search. MeSH showed promise as a method for indexing and retrieving data, yet future research should conduct a larger scale evaluation of the performance of the MeSH framework for retrieving relevant open-source healthcare datasets.

  16. EEG source localization: Sensor density and head surface coverage.

    PubMed

    Song, Jasmine; Davey, Colin; Poulsen, Catherine; Luu, Phan; Turovets, Sergei; Anderson, Erik; Li, Kai; Tucker, Don

    2015-12-30

    The accuracy of EEG source localization depends on a sufficient sampling of the surface potential field, an accurate conducting volume estimation (head model), and a suitable and well-understood inverse technique. The goal of the present study is to examine the effect of sampling density and coverage on the ability to accurately localize sources, using common linear inverse weight techniques, at different depths. Several inverse methods are examined, using commonly adopted head conductivity values. Simulation studies were employed to examine the effect of spatial sampling of the potential field at the head surface, in terms of sensor density and coverage of the inferior and superior head regions. In addition, the effects of sensor density and coverage are investigated in the source localization of epileptiform EEG. Greater sensor density improves source localization accuracy. Moreover, across all sampling densities and inverse methods, adding samples on the inferior surface improves the accuracy of source estimates at all depths. More accurate source localization of EEG data can be achieved with high spatial sampling of the head surface electrodes. The most accurate source localization is obtained when the voltage surface is densely sampled over both the superior and inferior surfaces. Copyright © 2015 The Authors. Published by Elsevier B.V. All rights reserved.
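
    The linear inverse weight techniques referred to above have a closed form in the minimum-norm family: given a lead-field matrix L mapping sources to electrodes, the weights are W = L^T (L L^T + λI)^(-1). The sketch below uses a random stand-in lead field purely to show the mechanics; a real study derives L from a head model:

```python
import numpy as np

rng = np.random.default_rng(5)
n_sensors, n_sources = 128, 500               # dense montage, cortical grid
L = rng.normal(size=(n_sensors, n_sources))   # stand-in lead-field matrix

# Simulate one active source plus sensor noise.
s_true = np.zeros(n_sources)
s_true[123] = 1.0
v = L @ s_true + rng.normal(0, 0.05, n_sensors)

# Minimum-norm inverse with Tikhonov regularization.
lam = 0.05 * np.trace(L @ L.T) / n_sensors
W = L.T @ np.linalg.inv(L @ L.T + lam * np.eye(n_sensors))
s_hat = W @ v

print("estimated peak at source index:", int(np.argmax(np.abs(s_hat))))
```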

  17. Discriminating Simulated Vocal Tremor Source Using Amplitude Modulation Spectra

    PubMed Central

    Carbonell, Kathy M.; Lester, Rosemary A.; Story, Brad H.; Lotto, Andrew J.

    2014-01-01

    Objectives/Hypothesis: Sources of vocal tremor are difficult to categorize perceptually and acoustically. This paper describes a preliminary attempt to discriminate vocal tremor sources through the use of spectral measures of the amplitude envelope. The hypothesis is that different vocal tremor sources are associated with distinct patterns of acoustic amplitude modulations. Study Design: Statistical categorization methods (discriminant function analysis) were used to discriminate signals from simulated vocal tremor with different sources using only acoustic measures derived from the amplitude envelopes. Methods: Simulations of vocal tremor were created by modulating parameters of a vocal fold model corresponding to oscillations of respiratory driving pressure (respiratory tremor), degree of vocal fold adduction (adductory tremor) and fundamental frequency of vocal fold vibration (F0 tremor). The acoustic measures were based on spectral analyses of the amplitude envelope computed across the entire signal and within select frequency bands. Results: The signals could be categorized (with accuracy well above chance) in terms of the simulated tremor source using only measures of the amplitude envelope spectrum even when multiple sources of tremor were included. Conclusions: These results supply initial support for an amplitude-envelope based approach to identify the source of vocal tremor and provide further evidence for the rich information about talker characteristics present in the temporal structure of the amplitude envelope. PMID:25532813
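
    The central acoustic measure, the spectrum of the amplitude envelope, can be computed with a Hilbert transform followed by an FFT. A sketch on a synthetic 4 Hz amplitude-modulated tone standing in for a tremulous voice signal; the band-limited envelope measures and the discriminant analysis itself are omitted:

```python
import numpy as np
from scipy.signal import hilbert

fs = 8000                                   # sample rate (Hz)
t = np.arange(0, 2.0, 1 / fs)
carrier = np.sin(2 * np.pi * 150 * t)       # stand-in for voicing at 150 Hz
signal = (1 + 0.4 * np.sin(2 * np.pi * 4 * t)) * carrier   # 4 Hz tremor

# Amplitude envelope via the analytic signal, then its spectrum.
envelope = np.abs(hilbert(signal))
env = envelope - envelope.mean()            # remove DC before the FFT
spectrum = np.abs(np.fft.rfft(env))
freqs = np.fft.rfftfreq(env.size, 1 / fs)

mod_rate = freqs[np.argmax(spectrum)]
print(f"dominant modulation rate: {mod_rate:.1f} Hz")   # ~4 Hz tremor rate
```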

  18. Introduction to Agricultural Marketing.

    ERIC Educational Resources Information Center

    Futrell, Gene; And Others

    This marketing unit focuses on the importance of forecasting in order for a farm family to develop marketing plans. It describes sources of information and includes a glossary of marketing terms and exercises using both fundamental and technical methods to predict prices in order to improve forecasting ability. The unit is organized in the…

  19. Hydrogen: A Future Energy Mediator?

    ERIC Educational Resources Information Center

    Environmental Science and Technology, 1975

    1975-01-01

    Hydrogen may be the fuel to help the United States move to a non-fossil energy source. Although hydrogen may not be widely used as a fuel until after the turn of the century, special applications may become feasible in the short term. Costs, uses, safety, and production methods are discussed. (BT)

  20. NEXT GENERATION LEACHING TESTS FOR EVALUATING LEACHING OF INORGANIC CONSTITUENTS

    EPA Science Inventory

    In the U.S. as in other countries, there is increased interest in using industrial by-products as alternative or secondary materials, helping to conserve virgin or raw materials. The LEAF and associated test methods are being used to develop the source term for leaching or any i...

  1. Remotely measuring populations during a crisis by overlaying two data sources

    PubMed Central

    Bharti, Nita; Lu, Xin; Bengtsson, Linus; Wetter, Erik; Tatem, Andrew J.

    2015-01-01

    Background: Societal instability and crises can cause rapid, large-scale movements. These movements are poorly understood and difficult to measure but strongly impact health. Data on these movements are important for planning response efforts. We retrospectively analyzed movement patterns surrounding a 2010 humanitarian crisis caused by internal political conflict in Côte d'Ivoire using two different methods. Methods: We used two remote measures, nighttime lights satellite imagery and anonymized mobile phone call detail records, to assess average population sizes as well as dynamic population changes. These data sources detect movements across different spatial and temporal scales. Results: The two data sources showed strong agreement in average measures of population sizes. Because the spatiotemporal resolution of the data sources differed, we were able to obtain measurements on long- and short-term dynamic elements of populations at different points throughout the crisis. Conclusions: Using complementary, remote data sources to measure movement shows promise for future use in humanitarian crises. We conclude with challenges of remotely measuring movement and provide suggestions for future research and methodological developments. PMID:25733558

  2. A new experimental method for the determination of the effective orifice area based on the acoustical source term

    NASA Astrophysics Data System (ADS)

    Kadem, L.; Knapp, Y.; Pibarot, P.; Bertrand, E.; Garcia, D.; Durand, L. G.; Rieu, R.

    2005-12-01

    The effective orifice area (EOA) is the most commonly used parameter to assess the severity of aortic valve stenosis as well as the performance of valve substitutes. Particle image velocimetry (PIV) may be used for in vitro estimation of valve EOA. In the present study, we propose a new and simple method based on Howe’s developments of Lighthill’s aero-acoustic theory. This method is based on an acoustical source term (AST) to estimate the EOA from the transvalvular flow velocity measurements obtained by PIV. The EOAs measured by the AST method downstream of three sharp-edged orifices were in excellent agreement with the EOAs predicted from the potential flow theory used as the reference method in this study. Moreover, the AST method was more accurate than other conventional PIV methods based on streamlines, inflexion point or vorticity to predict the theoretical EOAs. The superiority of the AST method is likely due to the nonlinear form of the AST. There was also an excellent agreement between the EOAs measured by the AST method downstream of the three sharp-edged orifices as well as downstream of a bioprosthetic valve with those obtained by the conventional clinical method based on Doppler-echocardiographic measurements of transvalvular velocity. The results of this study suggest that this new simple PIV method provides an accurate estimation of the aortic valve flow EOA. This new method may thus be used as a reference method to estimate the EOA in experimental investigation of the performance of valve substitutes and to validate Doppler-echocardiographic measurements under various physiologic and pathologic flow conditions.

  3. Power-output regularization in global sound equalization.

    PubMed

    Stefanakis, Nick; Sarris, John; Cambourakis, George; Jacobsen, Finn

    2008-01-01

    The purpose of equalization in room acoustics is to compensate for the undesired modification that an enclosure introduces to signals such as audio or speech. In this work, equalization in a large part of the volume of a room is addressed. The multiple point method is employed with an acoustic power-output penalty term instead of the traditional quadratic source effort penalty term. Simulation results demonstrate that this technique gives a smoother decline of the reproduction performance away from the control points.

  4. Past speculations of the future: a review of the methods used for forecasting emerging health technologies

    PubMed Central

    Doos, Lucy; Packer, Claire; Ward, Derek; Simpson, Sue; Stevens, Andrew

    2016-01-01

    Objectives: Forecasting can support rational decision-making around the introduction and use of emerging health technologies and prevent investment in technologies that have limited long-term potential. However, forecasting methods need to be credible. We performed a systematic search to identify the methods used in forecasting studies to predict future health technologies within a 3–20-year timeframe. Identification and retrospective assessment of such methods potentially offer a route to more reliable prediction. Design: Systematic search of the literature to identify studies reporting on methods of forecasting in healthcare. Participants: People are not needed in this study. Data sources: The authors searched MEDLINE, EMBASE, PsychINFO and grey literature sources, and included articles published in English that reported their methods and a list of identified technologies. Main outcome measure: Studies reporting methods used to predict future health technologies within a 3–20-year timeframe with an identified list of individual healthcare technologies. Commercially sponsored reviews, long-term futurology studies (with over 20-year timeframes) and speculative editorials were excluded. Results: 15 studies met our inclusion criteria. Our results showed that the majority of studies (13/15) consulted experts either alone or in combination with other methods such as literature searching. Only 2 studies used more complex forecasting tools such as scenario building. Conclusions: The methodological fundamentals of formal 3–20-year prediction are consistent but vary in details. Further research needs to be conducted to ascertain if the predictions made were accurate and whether accuracy varies by the methods used or by the types of technologies identified. PMID:26966060

  5. Recent skyshine calculations at Jefferson Lab

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Degtyarenko, P.

    1997-12-01

    New calculations of the skyshine dose distribution of neutrons and secondary photons have been performed at Jefferson Lab using the Monte Carlo method. The dose dependence on neutron energy, distance to the neutron source, polar angle of a source neutron, and azimuthal angle between the observation point and the momentum direction of a source neutron have been studied. The azimuthally asymmetric term in the skyshine dose distribution is shown to be important in the dose calculations around high-energy accelerator facilities. A parameterization formula and corresponding computer code have been developed which can be used for detailed calculations of the skyshine dose maps.

  6. Methods for the behavioral, educational, and social sciences: an R package.

    PubMed

    Kelley, Ken

    2007-11-01

    Methods for the Behavioral, Educational, and Social Sciences (MBESS; Kelley, 2007b) is an open source package for R (R Development Core Team, 2007b), an open source statistical programming language and environment. MBESS implements methods that are not widely available elsewhere, yet are especially helpful for the idiosyncratic techniques used within the behavioral, educational, and social sciences. The major categories of functions are those that relate to confidence interval formation for noncentral t, F, and chi2 parameters, confidence intervals for standardized effect sizes (which require noncentral distributions), and sample size planning issues from the power analytic and accuracy in parameter estimation perspectives. In addition, MBESS contains collections of other functions that should be helpful to substantive researchers and methodologists. MBESS is a long-term project that will continue to be updated and expanded so that important methods can continue to be made available to researchers in the behavioral, educational, and social sciences.

  7. 77 FR 19740 - Water Sources for Long-Term Recirculation Cooling Following a Loss-of-Coolant Accident

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-04-02

    ... NUCLEAR REGULATORY COMMISSION [NRC-2010-0249] Water Sources for Long-Term Recirculation Cooling... Regulatory Guide (RG) 1.82, ``Water Sources for Long-Term Recirculation Cooling Following a Loss-of-Coolant... regarding the sumps and suppression pools that provide water sources for emergency core cooling, containment...

  8. Building Large Collections of Chinese and English Medical Terms from Semi-Structured and Encyclopedia Websites

    PubMed Central

    Xu, Yan; Wang, Yining; Sun, Jian-Tao; Zhang, Jianwen; Tsujii, Junichi; Chang, Eric

    2013-01-01

    To build large collections of medical terms from semi-structured information sources (e.g. tables, lists, etc.) and encyclopedia sites on the web. The terms are classified into the three semantic categories, Medical Problems, Medications, and Medical Tests, which were used in i2b2 challenge tasks. We developed two systems, one for Chinese and another for English terms. The two systems share the same methodology and use the same software with minimum language dependent parts. We produced large collections of terms by exploiting billions of semi-structured information sources and encyclopedia sites on the Web. The standard performance metric of recall (R) is extended to three different types of Recall to take the surface variability of terms into consideration. They are Surface Recall (R(S)), Object Recall (R(O)), and Surface Head recall (R(H)). We use two test sets for Chinese. For English, we use a collection of terms in the 2010 i2b2 text. Two collections of terms, one for English and the other for Chinese, have been created. The terms in these collections are classified as either of Medical Problems, Medications, or Medical Tests in the i2b2 challenge tasks. The English collection contains 49,249 (Problems), 89,591 (Medications) and 25,107 (Tests) terms, while the Chinese one contains 66,780 (Problems), 101,025 (Medications), and 15,032 (Tests) terms. The proposed method of constructing a large collection of medical terms is both efficient and effective, and, most of all, independent of language. The collections will be made publicly available. PMID:23874426

  10. Identifying Attributes of CO2 Leakage Zones in Shallow Aquifers Using a Parametric Level Set Method

    NASA Astrophysics Data System (ADS)

    Sun, A. Y.; Islam, A.; Wheeler, M.

    2016-12-01

    Leakage through abandoned wells and geologic faults poses the greatest risk to CO2 storage permanence. For shallow aquifers, secondary CO2 plumes emanating from the leak zones may go undetected for a sustained period of time and has the greatest potential to cause large-scale and long-term environmental impacts. Identification of the attributes of leak zones, including their shape, location, and strength, is required for proper environmental risk assessment. This study applies a parametric level set (PaLS) method to characterize the leakage zone. Level set methods are appealing for tracking topological changes and recovering unknown shapes of objects. However, level set evolution using the conventional level set methods is challenging. In PaLS, the level set function is approximated using a weighted sum of basis functions and the level set evolution problem is replaced by an optimization problem. The efficacy of PaLS is demonstrated through recovering the source zone created by CO2 leakage into a carbonate aquifer. Our results show that PaLS is a robust source identification method that can recover the approximate source locations in the presence of measurement errors, model parameter uncertainty, and inaccurate initial guesses of source flux strengths. The PaLS inversion framework introduced in this work is generic and can be adapted for any reactive transport model by switching the pre- and post-processing routines.
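
    The PaLS representation itself is compact: the level set function is a weighted sum of radial basis functions, the source zone is the region where it is positive, and shape evolution becomes optimization over weights and centers. A sketch with arbitrary Gaussian bumps; the centers, weights, and width are placeholders, not the paper's configuration:

```python
import numpy as np

def pals_phi(x, y, centers, weights, width=0.15):
    """Parametric level set: weighted sum of Gaussian radial basis functions."""
    phi = np.zeros_like(x)
    for (cx, cy), w in zip(centers, weights):
        phi += w * np.exp(-((x - cx)**2 + (y - cy)**2) / width**2)
    return phi - 0.5          # constant offset places the zero level

# The source region is {phi > 0}; an outer optimizer would adjust the
# weights and centers to match observed concentration data.
xx, yy = np.meshgrid(np.linspace(0, 1, 101), np.linspace(0, 1, 101))
centers = [(0.3, 0.4), (0.45, 0.5), (0.7, 0.6)]
weights = [1.0, 0.8, 0.6]

phi = pals_phi(xx, yy, centers, weights)
indicator = phi > 0.0
print("source-zone area fraction:", round(indicator.mean(), 3))
```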

  11. Phase-and-amplitude recovery from a single phase-contrast image using partially spatially coherent x-ray radiation

    NASA Astrophysics Data System (ADS)

    Beltran, Mario A.; Paganin, David M.; Pelliccia, Daniele

    2018-05-01

    A simple method of phase-and-amplitude extraction is derived that corrects for image blurring induced by partially spatially coherent incident illumination using only a single intensity image as input. The method is based on Fresnel diffraction theory for the case of high Fresnel number, merged with the space-frequency description formalism used to quantify partially coherent fields, and assumes the object under study is composed of a single material. A priori knowledge of the object's complex refractive index and information obtained by characterizing the spatial coherence of the source are required. The algorithm was applied to propagation-based phase-contrast data measured with a laboratory-based micro-focus x-ray source. The blurring due to the finite spatial extent of the source is embedded within the algorithm as a simple correction term to the so-called Paganin algorithm and is also numerically stable in the presence of noise.
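
    The so-called Paganin algorithm that the correction term attaches to is a single Fourier-domain filter for a single-material object. The sketch below implements that baseline filter only, with the source-blur correction omitted and with arbitrary illustration values for delta, mu, pixel size, and propagation distance:

```python
import numpy as np

def paganin_thickness(intensity, pixel, dist, delta, mu):
    """Single-material phase retrieval (Paganin-type filter, unit magnification).

    intensity : flat-field-normalized image I/I0 recorded at distance `dist`
    pixel     : pixel size (m); delta, mu : refractive decrement, attenuation (1/m)
    Returns the projected thickness in meters.
    """
    ny, nx = intensity.shape
    kx = 2 * np.pi * np.fft.fftfreq(nx, d=pixel)
    ky = 2 * np.pi * np.fft.fftfreq(ny, d=pixel)
    k2 = kx[None, :]**2 + ky[:, None]**2
    filt = 1.0 + dist * delta / mu * k2            # low-pass Fourier filter
    smoothed = np.real(np.fft.ifft2(np.fft.fft2(intensity) / filt))
    return -np.log(np.clip(smoothed, 1e-8, None)) / mu

# Tiny synthetic self-test: a disk-shaped object (no blur or fringes simulated).
x = np.linspace(-1e-3, 1e-3, 256)
thickness_true = 50e-6 * (x[None, :]**2 + x[:, None]**2 < (0.4e-3)**2)
mu, delta = 50.0, 1e-7                             # plausible soft-material values
intensity = np.exp(-mu * thickness_true)
t_rec = paganin_thickness(intensity, pixel=x[1] - x[0], dist=0.5,
                          delta=delta, mu=mu)
print("max thickness recovered: %.1f um" % (t_rec.max() * 1e6))
```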

  12. Statistical Characterization of Environmental Error Sources Affecting Electronically Scanned Pressure Transducers

    NASA Technical Reports Server (NTRS)

    Green, Del L.; Walker, Eric L.; Everhart, Joel L.

    2006-01-01

    Minimization of uncertainty is essential to extend the usable range of the 15-psid Electronically Scanned Pressure (ESP) transducer measurements to the low free-stream static pressures found in hypersonic wind tunnels. Statistical characterization of environmental error sources inducing much of this uncertainty requires a well defined and controlled calibration method. Employing such a controlled calibration system, several studies were conducted that provide quantitative information detailing the required controls needed to minimize environmental and human induced error sources. Results of temperature, environmental pressure, over-pressurization, and set point randomization studies for the 15-psid transducers are presented along with a comparison of two regression methods using data acquired with both 0.36-psid and 15-psid transducers. Together these results provide insight into procedural and environmental controls required for long term high-accuracy pressure measurements near 0.01 psia in the hypersonic testing environment using 15-psid ESP transducers.

  14. Proceedings of the international meeting on thermal nuclear reactor safety. Vol. 1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None

    Separate abstracts are included for each of the papers presented concerning current issues in nuclear power plant safety; national programs in nuclear power plant safety; radiological source terms; probabilistic risk assessment methods and techniques; non-LOCA and small-break LOCA transients; safety goals; pressurized thermal shock; applications of reliability and risk methods to probabilistic risk assessment; human factors and the man-machine interface; and data bases and special applications.

  15. Information theoretic approach for assessing image fidelity in photon-counting arrays.

    PubMed

    Narravula, Srikanth R; Hayat, Majeed M; Javidi, Bahram

    2010-02-01

    The method of photon-counting integral imaging has been introduced recently for three-dimensional object sensing, visualization, recognition and classification of scenes under photon-starved conditions. This paper presents an information-theoretic model for the photon-counting imaging (PCI) method, thereby providing a rigorous foundation for the merits of PCI in terms of image fidelity. This, in turn, can facilitate our understanding of the demonstrated success of photon-counting integral imaging in compressive imaging and classification. The mutual information between the source and photon-counted images is derived in a Markov random field setting and normalized by the source-image's entropy, yielding a fidelity metric that is between zero and unity, which respectively corresponds to complete loss of information and full preservation of information. Calculations suggest that the PCI fidelity metric increases with spatial correlation in source image, from which we infer that the PCI method is particularly effective for source images with high spatial correlation; the metric also increases with the reduction in photon-number uncertainty. As an application to the theory, an image-classification problem is considered showing a congruous relationship between the fidelity metric and classifier's performance.
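
    The fidelity metric described, mutual information between source and photon-counted images normalized by the source entropy, can be estimated from a joint histogram. A sketch with a synthetic spatially correlated source and Poisson photon counting; histogram binning is a crude MI estimator, used here only for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)

# Spatially correlated source image: smoothed noise scaled to photon rates.
img = rng.random((128, 128))
for _ in range(10):                          # cheap smoothing adds correlation
    img = (img + np.roll(img, 1, 0) + np.roll(img, 1, 1)) / 3
rates = 5.0 * img / img.mean()               # mean photons per pixel
counts = rng.poisson(rates)                  # photon-counted image

def entropy(p):
    p = p[p > 0]
    return -(p * np.log2(p)).sum()

# Joint histogram of (source level, photon count) gives the MI estimate.
src_bins = np.digitize(rates, np.quantile(rates, np.linspace(0, 1, 17)[1:-1]))
joint, _, _ = np.histogram2d(src_bins.ravel(), counts.ravel(),
                             bins=(16, counts.max() + 1))
pxy = joint / joint.sum()
px, py = pxy.sum(1), pxy.sum(0)
mi = entropy(px) + entropy(py) - entropy(pxy.ravel())

fidelity = mi / entropy(px)                  # in [0, 1] by construction
print("normalized fidelity metric: %.2f" % fidelity)
```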

  16. Glossary of reference terms for alternative test methods and their validation.

    PubMed

    Ferrario, Daniele; Brustio, Roberta; Hartung, Thomas

    2014-01-01

    This glossary was developed to provide technical references to support work in the field of alternatives to animal testing. It was compiled from various existing reference documents coming from different sources and is meant to be a point of reference on alternatives to animal testing. Given the ever-increasing number of alternative test methods and approaches developed over the last decades, a combination, revision, and harmonization of earlier published collections of terms used in the validation of such methods is required. The need to update previous glossary efforts came from the acknowledgement that new words have emerged with the development of new approaches, while others have become obsolete, and the meaning of some terms has partially changed over time. With this glossary we intend to provide guidance on issues related to the validation of new or updated testing methods consistent with current approaches. Moreover, because of new developments and technologies, a glossary needs to be a living, constantly updated document. An Internet-based version of this compilation may be found at http://altweb.jhsph.edu/, allowing the addition of new material.

  17. Boosting probabilistic graphical model inference by incorporating prior knowledge from multiple sources.

    PubMed

    Praveen, Paurush; Fröhlich, Holger

    2013-01-01

    Inferring regulatory networks from experimental data via probabilistic graphical models is a popular framework to gain insights into biological systems. However, the inherent noise in experimental data coupled with a limited sample size reduces the performance of network reverse engineering. Prior knowledge from existing sources of biological information can address this low signal to noise problem by biasing the network inference towards biologically plausible network structures. Although integrating various sources of information is desirable, their heterogeneous nature makes this task challenging. We propose two computational methods to incorporate various information sources into a probabilistic consensus structure prior to be used in graphical model inference. Our first model, called Latent Factor Model (LFM), assumes a high degree of correlation among external information sources and reconstructs a hidden variable as a common source in a Bayesian manner. The second model, a Noisy-OR, picks up the strongest support for an interaction among information sources in a probabilistic fashion. Our extensive computational studies on KEGG signaling pathways as well as on gene expression data from breast cancer and yeast heat shock response reveal that both approaches can significantly enhance the reconstruction accuracy of Bayesian Networks compared to other competing methods as well as to the situation without any prior. Our framework allows for using diverse information sources, like pathway databases, GO terms and protein domain data, etc. and is flexible enough to integrate new sources, if available.
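
    Of the two schemes, the Noisy-OR combination is the simplest to write down: if source k supports an interaction with probability p_k, the consensus prior is 1 - Π_k(1 - p_k). The sketch below adds a per-source reliability weight, which is an assumption of this illustration rather than the paper's exact parameterization:

```python
import numpy as np

# Support for a candidate edge from three heterogeneous sources
# (e.g. a pathway database, GO term similarity, protein domain data).
support = np.array([0.70, 0.20, 0.50])
reliability = np.array([0.9, 0.6, 0.8])   # assumed trust in each source

def noisy_or(p, r):
    """Consensus edge prior: the edge is absent only if every source 'fails'."""
    return 1.0 - np.prod(1.0 - r * p)

prior = noisy_or(support, reliability)
print("consensus prior for the edge: %.3f" % prior)
# This prior would then bias the structure score of candidate networks.
```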

  18. Radiological analysis of plutonium glass batches with natural/enriched boron

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rainisch, R.

    2000-06-22

    The disposition of surplus plutonium inventories by the US Department of Energy (DOE) includes the immobilization of certain plutonium materials in a borosilicate glass matrix, also referred to as vitrification. This paper addresses source terms of plutonium masses immobilized in a borosilicate glass matrix where the glass components include both natural boron and enriched boron. The calculated source terms pertain to neutron and gamma source strength (particles per second), and source spectrum changes. The calculated source terms corresponding to natural boron and enriched boron are compared to determine the benefits (decrease in radiation source terms) for to the use ofmore » enriched boron. The analysis of plutonium glass source terms shows that a large component of the neutron source terms is due to (a, n) reactions. The Americium-241 and plutonium present in the glass emit alpha particles (a). These alpha particles interact with low-Z nuclides like B-11, B-10, and O-17 in the glass to produce neutrons. The low-Z nuclides are referred to as target particles. The reference glass contains 9.4 wt percent B{sub 2}O{sub 3}. Boron-11 was found to strongly support the (a, n) reactions in the glass matrix. B-11 has a natural abundance of over 80 percent. The (a, n) reaction rates for B-10 are lower than for B-11 and the analysis shows that the plutonium glass neutron source terms can be reduced by artificially enriching natural boron with B-10. The natural abundance of B-10 is 19.9 percent. Boron enriched to 96-wt percent B-10 or above can be obtained commercially. Since lower source terms imply lower dose rates to radiation workers handling the plutonium glass materials, it is important to know the achievable decrease in source terms as a result of boron enrichment. Plutonium materials are normally handled in glove boxes with shielded glass windows and the work entails both extremity and whole-body exposures. Lowering the source terms of the plutonium batches will make the handling of these materials less difficult and will reduce radiation exposure to operating workers.« less

  19. Analysis of CO2 trapping capacities and long-term migration for geological formations in the Norwegian North Sea using MRST-co2lab

    NASA Astrophysics Data System (ADS)

    Møll Nilsen, Halvor; Lie, Knut-Andreas; Andersen, Odd

    2015-06-01

    MRST-co2lab is a collection of open-source computational tools for modeling large-scale and long-time migration of CO2 in conductive aquifers, combining ideas from basin modeling, computational geometry, hydrology, and reservoir simulation. Herein, we employ the methods of MRST-co2lab to study long-term CO2 storage on the scale of hundreds of megatonnes. We consider public data sets of two aquifers from the Norwegian North Sea and use geometrical methods for identifying structural traps, percolation-type methods for identifying potential spill paths, and vertical-equilibrium methods for efficient simulation of structural, residual, and solubility trapping in a thousand-year perspective. In particular, we investigate how data resolution affects estimates of storage capacity and discuss workflows for identifying good injection sites and optimizing injection strategies.

  20. Assessment of Technologies for the Space Shuttle External Tank Thermal Protection System and Recommendations for Technology Improvement - Part III: Material Property Characterization, Analysis, and Test Methods

    NASA Technical Reports Server (NTRS)

    Gates, Thomas S.; Johnson, Theodore F.; Whitley, Karen S.

    2005-01-01

    The objective of this report is to contribute to the independent assessment of the Space Shuttle External Tank Foam Material. This report specifically addresses material modeling, characterization testing, data reduction methods, and data pedigree. A brief description of the External Tank foam materials, locations, and standard failure modes is provided to develop suitable background information. A review of mechanics based analysis methods from the open literature is used to provide an assessment of the state-of-the-art in material modeling of closed cell foams. Further, this report assesses the existing material property database and investigates sources of material property variability. The report presents identified deficiencies in testing methods and procedures, recommendations for additional testing as required, identification of near-term improvements that should be pursued, and long-term capabilities or enhancements that should be developed.

  1. Magnetic potential, vector and gradient tensor fields of a tesseroid in a geocentric spherical coordinate system

    NASA Astrophysics Data System (ADS)

    Du, Jinsong; Chen, Chao; Lesur, Vincent; Lane, Richard; Wang, Huilin

    2015-06-01

    We examined the mathematical and computational aspects of the magnetic potential, vector and gradient tensor fields of a tesseroid in a geocentric spherical coordinate system (SCS). This work is relevant for 3-D modelling that is performed with lithospheric vertical scales and global, continental or large regional horizontal scales. The curvature of the Earth is significant at these scales and hence, a SCS is more appropriate than the usual Cartesian coordinate system (CCS). The 3-D arrays of spherical prisms (SP; 'tesseroids') can be used to model the response of volumes with variable magnetic properties. Analytical solutions do not exist for these model elements and numerical or mixed numerical and analytical solutions must be employed. We compared various methods for calculating the response in terms of accuracy and computational efficiency. The methods were (1) the spherical coordinate magnetic dipole method (MD), (2) variants of the 3-D Gauss-Legendre quadrature integration method (3-D GLQI) with (i) different numbers of nodes in each of the three directions, and (ii) models where we subdivided each SP into a number of smaller tesseroid volume elements, (3) a procedure that we term revised Gauss-Legendre quadrature integration (3-D RGLQI) where the magnetization direction which is constant in a SCS is assumed to be constant in a CCS and equal to the direction at the geometric centre of each tesseroid, (4) the Taylor's series expansion method (TSE) and (5) the rectangular prism method (RP). In any realistic application, both the accuracy and the computational efficiency factors must be considered to determine the optimum approach to employ. In all instances, accuracy improves with increasing distance from the source. It is higher in percentage terms for potential than the vector or tensor response. The tensor errors are the largest, but they decrease more quickly with distance from the source. In our comparisons of relative computational efficiency, we found that the magnetic potential takes less time to compute than the vector response, which in turn takes less time to compute than the tensor gradient response. The MD method takes less time to compute than either the TSE or RP methods. The efficiency of the (GLQI and) RGLQI methods depends on the polynomial order, but the response typically takes longer to compute than it does for the other methods. The optimum method is a complex function of the desired accuracy, the size of the volume elements, the element latitude and the distance between the source and the observation. For a model of global extent with typical model element size (e.g. 1 degree horizontally and 10 km radially) and observations at altitudes of 10s to 100s of km, a mixture of methods based on the horizontal separation of the source and observation would be the optimum approach. To demonstrate the RGLQI method described within this paper, we applied it to the computation of the response for a global magnetization model for observations at 300 and 30 km altitude.
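
    The 3-D GLQI evaluation compared above is a tensor-product Gauss-Legendre rule mapped to the tesseroid's radial and angular extents, with the spherical volume element r² sin θ as the Jacobian. A generic sketch is given below; with the integrand set to 1 it should reproduce the analytic tesseroid volume, which makes a convenient self-check:

```python
import numpy as np

def glq_tesseroid(f, r_lim, theta_lim, phi_lim, n=(3, 3, 3)):
    """Tensor-product Gauss-Legendre integration over a spherical prism.

    f : integrand f(r, theta, phi), with theta the colatitude (rad).
    """
    axes = []
    for (a, b), ni in zip((r_lim, theta_lim, phi_lim), n):
        x, w = np.polynomial.legendre.leggauss(ni)   # nodes/weights on [-1, 1]
        axes.append((0.5 * (b - a) * x + 0.5 * (a + b), 0.5 * (b - a) * w))
    total = 0.0
    for ri, wr in zip(*axes[0]):
        for ti, wt in zip(*axes[1]):
            for pi_, wp in zip(*axes[2]):
                total += wr * wt * wp * f(ri, ti, pi_) * ri**2 * np.sin(ti)
    return total

# 1-degree x 1-degree x 10-km tesseroid near the equator.
deg = np.pi / 180
vol = glq_tesseroid(lambda r, t, p: 1.0,
                    (6361e3, 6371e3), (89.5 * deg, 90.5 * deg), (0.0, deg))
exact = ((6371e3**3 - 6361e3**3) / 3
         * (np.cos(89.5 * deg) - np.cos(90.5 * deg)) * deg)
print("GLQ volume / exact volume: %.6f" % (vol / exact))
```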

  2. QCD sum rules study of meson-baryon sigma terms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Erkol, Gueray; Oka, Makoto; Turan, Guersevil

    2008-11-01

    The pion-baryon sigma terms and the strange-quark condensates of the octet and the decuplet baryons are calculated by employing the method of QCD sum rules. We evaluate the vacuum-to-vacuum transition matrix elements of two baryon interpolating fields in an external isoscalar-scalar field and use a Monte Carlo-based approach to systematically analyze the sum rules and the uncertainties in the results. We extract the ratios of the sigma terms, which have rather high accuracy and minimal dependence on QCD parameters. We discuss the sources of uncertainties and comment on possible strangeness content of the nucleon and the Delta.

  3. On epicardial potential reconstruction using regularization schemes with the L1-norm data term.

    PubMed

    Shou, Guofa; Xia, Ling; Liu, Feng; Jiang, Mingfeng; Crozier, Stuart

    2011-01-07

    The electrocardiographic (ECG) inverse problem is ill-posed and usually solved by regularization schemes. These regularization methods, such as the Tikhonov method, are often based on L2-norm data and constraint terms. However, L2-norm-based methods inherently provide smoothed inverse solutions that are sensitive to measurement errors, and also lack the capability of localizing and distinguishing multiple proximal cardiac electrical sources. This paper presents alternative regularization schemes employing the L1-norm data term for the reconstruction of epicardial potentials (EPs) from measured body surface potentials (BSPs). During numerical implementation, the iteratively reweighted norm algorithm was applied to solve the L1-norm-related schemes, and measurement noise was considered in the BSP data. The proposed L1-norm data term-based regularization schemes (with L1 and L2 penalty terms of the normal derivative constraint, labelled L1TV and L1L2) were compared with the L2-norm data term schemes (Tikhonov with zero-order and normal derivative constraints, labelled ZOT and FOT, and the total variation method, labelled L2TV). The studies demonstrated that, with average measurement noise, the inverse solutions provided by the L1L2 and FOT algorithms have lower relative errors. However, when larger noise occurred in some electrodes (for example, signal loss during measurement), the L1TV and L1L2 methods obtained more accurate EPs in a robust manner. Therefore the L1-norm data term-based solutions are generally less perturbed by measurement noise, suggesting that the new regularization scheme is promising for providing practical ECG inverse solutions.
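
    The abstract names an iteratively reweighted norm algorithm; the sketch below is a generic iteratively reweighted least-squares loop for min ||Ax - b||_1 + lam*||Lx||_2^2, which only approximates the published scheme. The smoothing constant eps, the value of lam, and all names are illustrative assumptions.

      import numpy as np

      def irls_l1_data(A, b, L, lam=1e-2, iters=50, eps=1e-6):
          """Minimize ||Ax - b||_1 + lam*||Lx||_2^2 by iteratively
          reweighted least squares (IRLS)."""
          x = np.linalg.lstsq(A, b, rcond=None)[0]      # plain L2 start
          for _ in range(iters):
              r = A @ x - b
              W = 1.0/np.sqrt(r**2 + eps)               # weights ~ 1/|r| give the L1 norm
              AtWA = A.T @ (W[:, None]*A) + lam*(L.T @ L)
              x = np.linalg.solve(AtWA, A.T @ (W*b))    # weighted normal equations
          return x

    Because the weights downweight large residuals only mildly compared with squaring them, single grossly corrupted electrodes perturb the solution far less than in an L2 data-term fit, which is the robustness property the paper reports.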

  4. Fission Product Appearance Rate Coefficients in Design Basis Source Term Determinations - Past and Present

    NASA Astrophysics Data System (ADS)

    Perez, Pedro B.; Hamawi, John N.

    2017-09-01

    Nuclear power plant radiation protection design features are based on radionuclide source terms derived from conservative assumptions that envelope expected operating experience. Two parameters that significantly affect the radionuclide concentrations in the source term are the failed fuel fraction and the effective fission product appearance rate coefficients. The failed fuel fraction may be a regulatory assumption, such as in the U.S. Appearance rate coefficients are not specified in regulatory requirements, but have been referenced to experimental data that is over 50 years old. No doubt the source terms are conservative, as demonstrated by operating experience that has included failed fuel, but they may be too conservative, leading, for example, to over-designed shielding for normal operations. Design basis source term methodologies for normal operations had not advanced until EPRI published an updated ANSI/ANS-18.1 source term basis document in 2015. Our paper revisits the fission product appearance rate coefficients as applied in the derivation of source terms following the original U.S. NRC NUREG-0017 methodology. New coefficients have been calculated based on recent EPRI results, which demonstrate the conservatism in nuclear power plant shielding design.

  5. A consistent modelling methodology for secondary settling tanks: a reliable numerical method.

    PubMed

    Bürger, Raimund; Diehl, Stefan; Farås, Sebastian; Nopens, Ingmar; Torfs, Elena

    2013-01-01

    The consistent modelling methodology for secondary settling tanks (SSTs) leads to a partial differential equation (PDE) of nonlinear convection-diffusion type as a one-dimensional model for the solids concentration as a function of depth and time. This PDE includes a flux that depends discontinuously on spatial position, modelling hindered settling and bulk flows; a singular source term describing the feed mechanism; a degenerating term accounting for sediment compressibility; and a dispersion term for turbulence. In addition, the solution itself is discontinuous. A consistent, reliable and robust numerical method that properly handles these difficulties is presented. Many constitutive relations for hindered settling, compression and dispersion can be used within the model, allowing the user to switch effects of interest on and off depending on the modelling goal, as well as to investigate the suitability of certain constitutive expressions. Simulations show the effect of the dispersion term on effluent suspended solids and total sludge mass in the SST. The focus is on correct implementation, whereas calibration and validation are not pursued.
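
    For orientation only, a minimal explicit finite-volume step containing some of the listed ingredients (a hindered-settling flux, a diffusion term, and a singular feed source smeared over one cell); it is not the authors' reliable scheme, it omits the bulk flows and the degenerate compression term, and the Vesilind parameters and all names are our assumptions.

      import numpy as np

      def settler_step(C, dz, dt, Qf_area, Cf, feed_idx, v0=1.9e-3, rv=0.4, Dc=1e-5):
          """One explicit step of a 1-D settler sketch: hindered settling
          (Vesilind flux), constant diffusion, and a feed source in one cell.
          dt must satisfy the usual CFL/diffusion stability limits."""
          v = v0*np.exp(-rv*C)                       # hindered settling velocity
          F = v*C                                    # downward batch flux
          Fface = np.concatenate(([0.0], F))         # upwind faces; no flux in at top
          conv = (Fface[1:] - Fface[:-1])/dz
          lap = np.zeros_like(C)
          lap[1:-1] = (C[2:] - 2*C[1:-1] + C[:-2])/dz**2
          src = np.zeros_like(C)
          src[feed_idx] = Qf_area*Cf/dz              # singular feed smeared over one cell
          return C + dt*(-conv + Dc*lap + src)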

  6. A Curriculum for Teaching Human Sexuality to Mentally Impaired Adolescents.

    ERIC Educational Resources Information Center

    Rinckey, David Jason

    Presented is a developmentally sequenced curriculum designed for teaching human sexuality to mentally impaired adolescents. A brief objective is presented, teaching methods are listed, and materials needed are described (in terms of author, title, source, and price) for each of the following topic areas: vocabulary of sexuality; fact vs. myths;…

  7. A Language without Borders: English Slang and Bulgarian Learners of English

    ERIC Educational Resources Information Center

    Charkova, Krassimira D.

    2007-01-01

    This study investigated the acquisition of English slang in a foreign language context. The participants were 101 Bulgarian learners of English, 58 high school students, and 43 university students. The instrument included knowledge tests of English slang terms and questions about attitudes, sources, reasons, and methods employed in learning…

  8. Medicare Part D and the Nursing Home Setting

    ERIC Educational Resources Information Center

    Stevenson, David G.; Huskamp, Haiden A.; Newhouse, Joseph P.

    2008-01-01

    Purpose: The purpose of this article is to explore how the introduction of Medicare Part D is changing the operations of long-term-care pharmacies (LTCPs) and nursing homes, as well as implications of those changes for nursing home residents. Design and Methods: We reviewed existing sources of information and interviewed stakeholders across…

  9. CONTRIBUTIONS OF CURRENT YEAR PHOTOSYNTHATE TO FINE ROOTS ESTIMATED USING A 13C-DEPLETED CO2 SOURCE

    EPA Science Inventory

    The quantification of root turnover is necessary for a complete understanding of plant carbon (C) budgets, especially in terms of impacts of global climate change. To improve estimates of root turnover, we present a method to distinguish current- from prior-year allocation of ca...

  10. COMPARATIVE POTENCY METHOD FOR CANCER RISK ASSESSMENT: APPLICATION TO THE QUANTITATIVE ASSESSMENT OF THE CONTRIBUTION OF COMBUSTION EMISSIONS TO LUNG CANCER RISK

    EPA Science Inventory

    Combustion sources emit soot particles containing carcinogenic polycyclic organic compounds which are mutagenic in short-term genetic bioassays in microbial and mammalian cells and are tumorigenic in animals. Although soot is considered to be a human carcinogen, soots from differ...

  11. Sleep Disorders as a Risk to Language Learning and Use. EBP Briefs. Volume 10, Issue 1

    ERIC Educational Resources Information Center

    McGregor, Karla K.; Alper, Rebecca M.

    2015-01-01

    Clinical Question: Are people with sleep disorders at higher risk for language learning deficits than healthy sleepers? Method: Scoping Review. Study Sources: PubMed, Google Scholar, Trip Database, ClinicalTrials.gov. Search Terms: sleep disorders AND language AND learning; sleep disorders language learning--deprivation--epilepsy; sleep disorders…

  12. Long-Term Stability of Radio Sources in VLBI Analysis

    NASA Technical Reports Server (NTRS)

    Engelhardt, Gerald; Thorandt, Volkmar

    2010-01-01

    Positional stability of radio sources is an important requirement for modeling only one source position for the complete span of VLBI data, presently more than 20 years. The stability of radio sources can be verified by analyzing time series of radio source coordinates. One approach is a statistical test for normal distribution of the residuals to the weighted mean for each radio source component of the time series. Systematic phenomena in the time series can thus be detected. Nevertheless, an inspection of rate estimates and weighted root-mean-square (WRMS) variations about the mean is also necessary. On the basis of the time series computed by the BKG group in the frame of the ICRF2 working group, 226 stable radio sources with an axis stability of 10 μas could be identified. They include 100 ICRF2 axes-defining sources which were determined independently of the method applied in the ICRF2 working group. The 29 stable radio sources with a source structure index of less than 3.0 can also be used to increase the number of 295 ICRF2 defining sources.
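
    A sketch of the described stability screening for one coordinate component, assuming a Shapiro-Wilk test for normality (the abstract does not name the specific test) and a weighted linear fit for the rate; the threshold and all names are illustrative.

      import numpy as np
      from scipy import stats

      def stable_component(t, x, sigma, alpha=0.05):
          """Screen one coordinate time series: residuals about the weighted
          mean should be normally distributed for a stable source; the rate
          and WRMS are returned for the additional inspection step."""
          w = 1.0/sigma**2
          mean = np.sum(w*x)/np.sum(w)
          resid = x - mean
          _, p_norm = stats.shapiro(resid)               # normality of residuals
          rate, _ = np.polyfit(t, x, 1, w=np.sqrt(w))    # weighted linear rate
          wrms = np.sqrt(np.sum(w*resid**2)/np.sum(w))
          return p_norm > alpha, rate, wrms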

  13. Comparison of three methods of solution to the inverse problem of groundwater hydrology for multiple pumping stimulation

    NASA Astrophysics Data System (ADS)

    Giudici, Mauro; Casabianca, Davide; Comunian, Alessandro

    2015-04-01

    The basic classical inverse problem of groundwater hydrology aims at determining aquifer transmissivity (T) from measurements of hydraulic head (h), estimates or measurements of source terms, and the least possible knowledge of hydraulic transmissivity. The theory of inverse problems shows that this is an example of an ill-posed problem, for which non-uniqueness and instability (or at least ill-conditioning) might preclude the computation of a physically acceptable solution. One of the methods to reduce the problems with non-uniqueness, ill-conditioning and instability is a tomographic approach, i.e., the use of data corresponding to independent flow situations. The latter might correspond to different hydraulic stimulations of the aquifer, i.e., to different pumping schedules and flux rates. Three inverse methods have been analyzed and tested to profit from the use of multiple data sets: the Differential System Method (DSM), the Comparison Model Method (CMM) and the Double Constraint Method (DCM). DSM and CMM need h over the whole domain, so the first step in their application is the interpolation of measurements of h at sparse points. Moreover, they also need knowledge of the source terms (aquifer recharge, well pumping rates) over the whole aquifer. DSM is intrinsically based on the use of multiple data sets, which permit writing a first-order partial differential equation for T, whereas CMM and DCM were originally proposed to invert a single data set and have been extended to work with multiple data sets in this work. CMM and DCM are based on Darcy's law, which is used to update an initial guess of the T field with formulas based on a comparison of different hydraulic gradients. In particular, the CMM algorithm corrects the T estimate with the ratio of the observed hydraulic gradient to that obtained with a comparison model, which shares the same boundary conditions and source terms as the model to be calibrated but uses a tentative T field. The DCM algorithm, on the other hand, applies the ratio of the hydraulic gradients obtained for two different forward models: one with the same boundary conditions and source terms as the model to be calibrated, and the other with prescribed head at the positions where in- or out-flow is known and h is measured. For DCM and CMM, multiple stimulation is used by updating the T field separately for each data set and then combining the resulting updated fields with different possible statistics (arithmetic, geometric or harmonic mean, median, least change, etc.). The three algorithms are tested, and their characteristics and results compared, with a field data set provided by Prof. Fritz Stauffer (ETH) corresponding to a pumping test in a thin alluvial aquifer in northern Switzerland. Three data sets are available, corresponding to the undisturbed state, to the flow field created by a single pumping well, and to the situation created by a 'hydraulic dipole', i.e., an extraction well and an injection well. These data sets permit testing the three inverse methods and the different options that can be chosen for their use.
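
    A minimal sketch of the CMM update and one multi-data-set combination option. By Darcy's law at fixed flux, the corrective factor applied here is the comparison-model gradient magnitude over the observed one; that orientation is our reading of the abstract, and all names are illustrative.

      import numpy as np

      def cmm_update(T, grad_h_obs, grad_h_cm):
          """Comparison Model Method: scale the tentative T field by the ratio
          of hydraulic-gradient magnitudes (comparison model over observed),
          which follows from Darcy's law when the flux is held fixed."""
          ratio = (np.linalg.norm(grad_h_cm, axis=-1)
                   / np.linalg.norm(grad_h_obs, axis=-1))
          return T*ratio

      def combine_multiset(T_updates, how="geometric"):
          """Combine per-data-set updated T fields; geometric mean and median
          are two of the statistics listed in the abstract."""
          stack = np.stack(T_updates)
          if how == "geometric":
              return np.exp(np.mean(np.log(stack), axis=0))
          return np.median(stack, axis=0)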

  14. Gingival Retraction Methods: A Systematic Review.

    PubMed

    Tabassum, Sadia; Adnan, Samira; Khan, Farhan Raza

    2017-12-01

    The aim of this systematic review was to assess gingival retraction methods in terms of the amount of gingival retraction achieved and the changes observed in various clinical parameters: gingival index (GI), plaque index (PI), probing depth (PD), and attachment loss (AL). Data sources included three major databases, PubMed, CINAHL Plus (EBSCO), and Cochrane, along with a hand search. The search was made using the key terms in different permutations of gingival retraction* AND displacement method* OR technique* OR agents OR material* OR medicament*. The initial search yielded 145 articles, which were narrowed down to 10 articles using strict eligibility criteria: clinical trials or experimental studies on gingival retraction methods, with the amount of tooth structure gained and the assessment of clinical parameters as the outcomes, conducted on human permanent teeth only. Gingival retraction was measured in 6/10 studies, whereas the clinical parameters were assessed in 5/10 studies. The total number of teeth assessed in the 10 included studies was 400. The most common method used for gingival retraction was chemomechanical. The results were heterogeneous with regard to the outcome variables. No method seemed to be significantly superior to the others in terms of gingival retraction achieved. Clinical parameters were not significantly affected by the gingival retraction method. © 2016 by the American College of Prosthodontists.

  15. Numerical investigation of a modified family of centered schemes applied to multiphase equations with nonconservative sources

    NASA Astrophysics Data System (ADS)

    Crochet, M. W.; Gonthier, K. A.

    2013-12-01

    Systems of hyperbolic partial differential equations are frequently used to model the flow of multiphase mixtures. These equations often contain sources, referred to as nozzling terms, that cannot be posed in divergence form, and have proven to be particularly challenging in the development of finite-volume methods. Upwind schemes have recently shown promise in properly resolving the steady wave solution of the associated multiphase Riemann problem. However, these methods require a full characteristic decomposition of the system eigenstructure, which may be either unavailable or computationally expensive. Central schemes, such as the Kurganov-Tadmor (KT) family of methods, require minimal characteristic information, which makes them easily applicable to systems with an arbitrary number of phases. However, the proper implementation of nozzling terms in these schemes has been mathematically ambiguous. The primary objectives of this work are twofold: first, an extension of the KT family of schemes is proposed that formally accounts for the nonconservative nozzling sources. This modification results in a semidiscrete form that retains the simplicity of its predecessor and introduces little additional computational expense. Second, this modified method is applied to multiple, but equivalent, forms of the multiphase equations to perform a numerical study by solving several one-dimensional test problems. Both ideal and Mie-Grüneisen equations of state are used, with the results compared to an analytical solution. This study demonstrates that the magnitudes of the resulting numerical errors are sensitive to the form of the equations considered, and suggests an optimal form to minimize these errors. Finally, a separate modification of the wave propagation speeds used in the KT family is also suggested that can reduce the extent of numerical diffusion in multiphase flows.
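
    For reference, a scalar semidiscrete Kurganov-Tadmor right-hand side with a pointwise source term; the paper's actual contribution, the treatment of nonconservative nozzling terms in multiphase systems, is not reproduced here. The minmod limiting and local speed estimate follow the standard KT construction, and all names are ours.

      import numpy as np

      def kt_rhs(u, dx, f, fprime, s):
          """Semidiscrete Kurganov-Tadmor RHS for u_t + f(u)_x = s(u),
          scalar sketch with minmod-limited piecewise-linear reconstruction."""
          du = np.diff(u)
          slope = np.zeros_like(u)
          slope[1:-1] = np.where(du[:-1]*du[1:] > 0,
                                 np.sign(du[1:])*np.minimum(np.abs(du[:-1]),
                                                            np.abs(du[1:])),
                                 0.0)
          uL = u[:-1] + 0.5*slope[:-1]     # left state at face i+1/2
          uR = u[1:] - 0.5*slope[1:]       # right state at face i+1/2
          a = np.maximum(np.abs(fprime(uL)), np.abs(fprime(uR)))   # local speeds
          H = 0.5*(f(uL) + f(uR)) - 0.5*a*(uR - uL)                # KT numerical flux
          rhs = np.zeros_like(u)
          rhs[1:-1] = -(H[1:] - H[:-1])/dx + s(u[1:-1])
          return rhs

    Note that only f, its derivative, and the source are needed, which is the minimal-characteristic-information property that makes central schemes attractive for systems with many phases.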

  16. The aromatic amino acids biosynthetic pathway: A core platform for products

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lievense, J.C.; Frost, J.W.

    The aromatic amino acids biosynthetic pathway is conventionally viewed primarily as the source of the amino acids L-tyrosine, L-phenylalanine, and L-tryptophan. The authors have recognized the expanded role of the pathway as the major source of aromatic raw materials on earth. With the development of metabolic engineering approaches, it is now possible to biosynthesize a wide variety of aromatic compounds from inexpensive, clean, abundant, renewable sugars using fermentation methods. Examples of already and soon-to-be commercialized biosynthesis of such compounds are described. The long-term prospects are also assessed.

  17. 26 CFR 1.737-3 - Basis adjustments; Recovery rules.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... Properties A1, A2, and A3 is long-term, U.S.-source capital gain or loss. The character of gain on Property A4 is long-term, foreign-source capital gain. B contributes Property B, nondepreciable real property...-term, foreign-source capital gain ($3,000 total gain under section 737 × $2,000 net long-term, foreign...

  18. 26 CFR 1.737-3 - Basis adjustments; Recovery rules.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... Properties A1, A2, and A3 is long-term, U.S.-source capital gain or loss. The character of gain on Property A4 is long-term, foreign-source capital gain. B contributes Property B, nondepreciable real property...-term, foreign-source capital gain ($3,000 total gain under section 737 × $2,000 net long-term, foreign...

  19. 26 CFR 1.737-3 - Basis adjustments; Recovery rules.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... Properties A1, A2, and A3 is long-term, U.S.-source capital gain or loss. The character of gain on Property A4 is long-term, foreign-source capital gain. B contributes Property B, nondepreciable real property...-term, foreign-source capital gain ($3,000 total gain under section 737 × $2,000 net long-term, foreign...

  20. 26 CFR 1.737-3 - Basis adjustments; Recovery rules.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... Properties A1, A2, and A3 is long-term, U.S.-source capital gain or loss. The character of gain on Property A4 is long-term, foreign-source capital gain. B contributes Property B, nondepreciable real property...-term, foreign-source capital gain ($3,000 total gain under section 737 × $2,000 net long-term, foreign...

  1. 26 CFR 1.737-3 - Basis adjustments; Recovery rules.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... Properties A1, A2, and A3 is long-term, U.S.-source capital gain or loss. The character of gain on Property A4 is long-term, foreign-source capital gain. B contributes Property B, nondepreciable real property...-term, foreign-source capital gain ($3,000 total gain under section 737 × $2,000 net long-term, foreign...

  2. Numerical solutions of the complete Navier-Stokes equations

    NASA Technical Reports Server (NTRS)

    Hassan, H. A.

    1993-01-01

    The objective of this study is to compare the use of assumed pdf (probability density function) approaches for modeling supersonic turbulent reacting flowfields with the more elaborate approach in which the pdf evolution equation is solved. Assumed pdf approaches for averaging the chemical source terms require modest increases in CPU time, typically of the order of 20 percent above treating the source terms as 'laminar.' However, it is difficult to assume a form for these pdfs a priori that correctly mimics the behavior of the actual pdf governing the flow. Solving the evolution equation for the pdf is a theoretically sound approach, but because of the large dimensionality of this function, its solution requires a Monte Carlo method, which is computationally expensive and slow to converge. Preliminary results show both pdf approaches yield similar solutions for the mean flow variables.
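
    A sketch of the assumed-pdf averaging of a chemical source term, using a beta pdf of temperature and a single Arrhenius rate; the pdf shape, rate constants, and temperature bounds are illustrative assumptions, not the paper's model.

      import numpy as np
      from scipy import stats

      def mean_rate_assumed_pdf(Tmean, Tvar, A=1e8, Ta=15000.0,
                                Tlo=300.0, Thi=2500.0, n=200):
          """Average an Arrhenius rate over an assumed beta pdf of temperature.
          Requires Tvar < m*(1-m)*(Thi-Tlo)**2 for valid beta parameters."""
          m = (Tmean - Tlo)/(Thi - Tlo)            # normalized mean
          v = Tvar/(Thi - Tlo)**2                  # normalized variance
          g = m*(1.0 - m)/v - 1.0
          a, b = m*g, (1.0 - m)*g                  # beta shape parameters
          T = np.linspace(Tlo, Thi, n + 2)[1:-1]   # interior nodes only
          pdf = stats.beta.pdf((T - Tlo)/(Thi - Tlo), a, b)/(Thi - Tlo)
          w = A*np.exp(-Ta/T)                      # 'laminar' rate at each T
          return np.trapz(w*pdf, T)                # pdf-averaged source term

      print(mean_rate_assumed_pdf(1500.0, 200.0**2))

    The quadrature above is the entire extra cost of the assumed-pdf approach for one cell, consistent with the modest CPU overhead quoted in the abstract.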

  3. Evaluation of a Consistent LES/PDF Method Using a Series of Experimental Spray Flames

    NASA Astrophysics Data System (ADS)

    Heye, Colin; Raman, Venkat

    2012-11-01

    A consistent method for the evolution of the joint-scalar probability density function (PDF) transport equation is proposed for application to large eddy simulation (LES) of turbulent reacting flows containing evaporating spray droplets. PDF transport equations provide the benefit of including the chemical source term in closed form; however, additional terms describing LES subfilter mixing must be modeled. The recent availability of detailed experimental measurements provides model validation data for a wide range of evaporation rates and combustion regimes, as is well known to occur in spray flames. In this work, the experimental data will be used to investigate the impact of droplet mass loading and evaporation rates on the subfilter scalar PDF shape in comparison with conventional flamelet models. In addition, existing model term closures in the PDF transport equations are evaluated with a focus on their validity in the presence of regime changes.

  4. A Computer Program for the Computation of Running Gear Temperatures Using Green's Function

    NASA Technical Reports Server (NTRS)

    Koshigoe, S.; Murdock, J. W.; Akin, L. S.; Townsend, D. P.

    1996-01-01

    A new technique has been developed to study two-dimensional heat transfer problems in gears. This technique consists of transforming the heat equation into a line integral equation with the use of Green's theorem. The equation is then expressed in terms of eigenfunctions that satisfy the Helmholtz equation, and their corresponding eigenvalues, for an arbitrarily shaped region of interest. The eigenfunctions are obtained by solving an integral equation. Once the eigenfunctions are found, the temperature is expanded in terms of the eigenfunctions with unknown time-dependent coefficients that can be solved for using Runge-Kutta methods. The time integration is extremely efficient. Therefore, any changes in the time-dependent coefficients or source terms in the boundary conditions do not impose a great computational burden on the user. The method is demonstrated by applying it to a sample gear tooth. Temperature histories at representative surface locations are given.
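
    The time integration described reduces to a small ODE system for the modal coefficients; below is a sketch using SciPy's Runge-Kutta integrator, where the eigenvalues, source function, and names are illustrative, not those of the gear problem.

      import numpy as np
      from scipy.integrate import solve_ivp

      def temperature_coeffs(lam, s_of_t, c0, t_span, alpha=1.0):
          """Integrate dc_n/dt = -alpha*lam_n*c_n + s_n(t) for the modal
          coefficients of an eigenfunction expansion of the temperature."""
          def rhs(t, c):
              return -alpha*lam*c + s_of_t(t)
          return solve_ivp(rhs, t_span, c0, method="RK45", dense_output=True)

      # usage: three modes, a periodic frictional source driving the first mode
      lam = np.array([1.0, 4.0, 9.0])
      src = lambda t: np.array([np.sin(t), 0.0, 0.0])
      sol = temperature_coeffs(lam, src, np.zeros(3), (0.0, 10.0))

    Because each mode evolves through a cheap scalar ODE, changing the source term mid-run costs almost nothing, which matches the efficiency claim in the abstract.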

  5. Relevance analysis and short-term prediction of PM2.5 concentrations in Beijing based on multi-source data

    NASA Astrophysics Data System (ADS)

    Ni, X. Y.; Huang, H.; Du, W. P.

    2017-02-01

    The PM2.5 problem is proving to be a major public crisis and is of great public concern, requiring an urgent response. Information about, and prediction of, PM2.5 from the perspective of atmospheric dynamic theory is still limited due to the complexity of the formation and development of PM2.5. In this paper, we attempted to realize the relevance analysis and short-term prediction of PM2.5 concentrations in Beijing, China, using multi-source data mining. A correlation analysis model relating PM2.5 to physical data (meteorological data, including regional average rainfall, daily mean temperature, average relative humidity, average wind speed and maximum wind speed, and other pollutant concentrations, including CO, NO2, SO2 and PM10) and social media data (microblog data) was proposed, based on the multivariate statistical analysis method. The study found that among these factors, the average wind speed, the concentrations of CO, NO2 and PM10, and the daily number of microblog entries with the key words 'Beijing; Air pollution' show high correlation with PM2.5 concentrations. The correlation analysis was further studied with a machine learning model, the Back Propagation Neural Network (BPNN), which was found to perform better in correlation mining. Finally, an Autoregressive Integrated Moving Average (ARIMA) time series model was applied to explore short-term prediction of PM2.5. The predicted results were in good agreement with the observed data. This study is useful for realizing real-time monitoring, analysis and pre-warning of PM2.5, and it also helps to broaden the application of big data and multi-source data mining methods.
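
    A minimal sketch of the ARIMA step on a synthetic series (the paper fits observed PM2.5 data); the (p, d, q) order here is illustrative, not the order identified in the study.

      import numpy as np
      import pandas as pd
      from statsmodels.tsa.arima.model import ARIMA

      # illustrative daily PM2.5-like series; stand-in for observed data
      rng = np.random.default_rng(0)
      pm25 = pd.Series(60 + 20*np.sin(np.arange(200)/7) + rng.normal(0, 10, 200))

      model = ARIMA(pm25, order=(1, 1, 1))    # (p, d, q) chosen for illustration
      fit = model.fit()
      forecast = fit.forecast(steps=7)        # short-term (one week) prediction
      print(forecast)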

  6. Method for measuring multiple scattering corrections between liquid scintillators

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Verbeke, J. M.; Glenn, A. M.; Keefer, G. J.

    2016-04-11

    In this study, a time-of-flight method is proposed to experimentally quantify the fractions of neutrons scattering between scintillators. An array of scintillators is characterized in terms of crosstalk with this method by measuring a californium source for different neutron energy thresholds. The spectral information recorded by the scintillators can be used to estimate the fractions of neutrons that scatter multiple times. With the help of a correction to Feynman's point model theory to account for multiple scattering, these fractions can in turn improve the mass reconstruction of fissile materials under investigation.
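
    One plausible reading of the time-of-flight classification, sketched below: a detector-pair coincidence is counted as crosstalk when the time of flight over the known separation implies a physical scattered-neutron energy. The energy window and constants are illustrative; the published method also uses the recorded spectral information and a point-model correction.

      import numpy as np

      M_N = 939.565e6   # neutron rest mass energy, eV
      C = 29.9792458    # speed of light, cm/ns

      def crosstalk_fraction(dt_ns, d_cm, e_lo_ev=5e5, e_hi_ev=1e7):
          """Fraction of detector-pair coincidences whose inter-detector time
          of flight implies a physical neutron energy (nonrelativistic)."""
          v = d_cm/np.abs(dt_ns)                 # implied speed, cm/ns
          e = 0.5*M_N*(v/C)**2                   # kinetic energy, eV
          return np.mean((e > e_lo_ev) & (e < e_hi_ev))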

  7. Probing the nature of AX J0043-737: Not an 87 ms pulsar in the Small Magellanic Cloud

    NASA Astrophysics Data System (ADS)

    Maitra, C.; Ballet, J.; Esposito, P.; Haberl, F.; Tiengo, A.; Filipović, M. D.; Acero, F.

    2018-05-01

    Aims: AX J0043-737 is a source in the ASCA catalogue whose nature is uncertain. It is most commonly classified as a Crab-like pulsar in the Small Magellanic Cloud (SMC) following apparent detection of pulsations at 87 ms from a single ASCA observation. A follow-up ASCA observation was not able to confirm this, and the X-ray detection of the source has not been reported since. Methods: We studied the nature of the source with a dedicated XMM-Newton observation. We ascertained the source position, searched for the most probable counterpart, and studied the X-ray spectrum. We also analysed other archival observations with the source in the field of view to study its long-term variability. Results: With the good position localisation capability of XMM-Newton, we identify the counterpart of the source as MQS J004241.66-734041.3, an active galactic nucleus (AGN) behind the SMC at a redshift of 0.95. The X-ray spectrum can be fitted with an absorbed power law with a photon-index of Γ = 1.7, which is consistent with that expected from AGNs. By comparing the current XMM-Newton observation with an archival XMM-Newton and two other ASCA observations of the source, we find signatures of long-term variability, another common phenomenon in AGNs. All of the above are consistent with AX J0043-737 being an AGN behind the SMC.

  8. Novel techniques for characterization of hydrocarbon emission sources in the Barnett Shale

    NASA Astrophysics Data System (ADS)

    Nathan, Brian Joseph

    Changes in ambient atmospheric hydrocarbon concentrations can have both short-term and long-term effects on the atmosphere and on human health. Thus, accurate characterization of emission sources is critically important. The recent boom in shale gas production has led to an increase in hydrocarbon emissions from associated processes, though the exact extent is uncertain. As an original quantification technique, a model airplane equipped with a specially designed, open-path methane sensor was flown multiple times over a natural gas compressor station in the Barnett Shale in October 2013. A linear optimization was introduced to a standard Gaussian plume model in an effort to determine the most probable emission rate coming from the station. This is shown to be a suitable approach given an ideal source with a single, central plume. Separately, an analysis was performed to characterize the nonmethane hydrocarbons in the Barnett during the same period. Starting with ambient hourly concentration measurements of forty-six hydrocarbon species, Lagrangian air parcel trajectories were implemented in a meteorological model to extend the resolution of these measurements and achieve domain-fillings of the region for the period of interest. A self-organizing map (a type of unsupervised classification) was then utilized to reduce the dimensionality of the total multivariate set of grids into characteristic one-dimensional signatures. By also introducing a self-organizing map classification of the contemporaneous wind measurements, the spatial hydrocarbon characterizations were analyzed for periods with similar wind conditions. The accuracy of the classification is verified through assessment of observed spatial mixing ratio enhancements of key species, through site comparisons with a related long-term study, and through a random forest analysis (an ensemble learning method of supervised classification) to determine the most important species for defining key classes. The hydrocarbon classification is shown to have performed very well in identifying expected signatures near and downwind of oil and gas facilities with active permits, which showcases this method's usefulness for future regional hydrocarbon source-apportionment analyses.
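
    Because a Gaussian plume concentration is linear in the emission rate Q, the 'linear optimization' reduces to a one-parameter least-squares fit; a sketch under that assumption, with the dispersion widths sy and sz assumed precomputed (e.g. from a stability class) and all names ours.

      import numpy as np

      def plume_unit_conc(y, z, u, H, sy, sz):
          """Gaussian plume concentration for unit emission rate: crosswind
          offset y, height z, wind speed u, effective stack height H, with a
          ground-reflection image term."""
          return (1.0/(2*np.pi*u*sy*sz)
                  * np.exp(-0.5*(y/sy)**2)
                  * (np.exp(-0.5*((z - H)/sz)**2) + np.exp(-0.5*((z + H)/sz)**2)))

      def best_emission_rate(c_obs, c_unit):
          """Least-squares emission rate: since c = Q*c_unit, the optimum is a
          simple projection of the observations onto the unit-rate model."""
          return float(c_obs @ c_unit/(c_unit @ c_unit))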

  9. Long-Term Deflection Prediction from Computer Vision-Measured Data History for High-Speed Railway Bridges

    PubMed Central

    Lee, Jaebeom; Lee, Young-Joo

    2018-01-01

    Management of the vertical long-term deflection of a high-speed railway bridge is a crucial factor to guarantee traffic safety and passenger comfort. Therefore, there have been efforts to predict the vertical deflection of a railway bridge based on physics-based models representing various influential factors to vertical deflection such as concrete creep and shrinkage. However, it is not an easy task because the vertical deflection of a railway bridge generally involves several sources of uncertainty. This paper proposes a probabilistic method that employs a Gaussian process to construct a model to predict the vertical deflection of a railway bridge based on actual vision-based measurement and temperature. To deal with the sources of uncertainty which may cause prediction errors, a Gaussian process is modeled with multiple kernels and hyperparameters. Once the hyperparameters are identified through the Gaussian process regression using training data, the proposed method provides a 95% prediction interval as well as a predictive mean about the vertical deflection of the bridge. The proposed method is applied to an arch bridge under operation for high-speed trains in South Korea. The analysis results obtained from the proposed method show good agreement with the actual measurement data on the vertical deflection of the example bridge, and the prediction results can be utilized for decision-making on railway bridge maintenance. PMID:29747421

  10. Long-Term Deflection Prediction from Computer Vision-Measured Data History for High-Speed Railway Bridges.

    PubMed

    Lee, Jaebeom; Lee, Kyoung-Chan; Lee, Young-Joo

    2018-05-09

    Management of the vertical long-term deflection of a high-speed railway bridge is a crucial factor to guarantee traffic safety and passenger comfort. Therefore, there have been efforts to predict the vertical deflection of a railway bridge based on physics-based models representing various influential factors to vertical deflection such as concrete creep and shrinkage. However, it is not an easy task because the vertical deflection of a railway bridge generally involves several sources of uncertainty. This paper proposes a probabilistic method that employs a Gaussian process to construct a model to predict the vertical deflection of a railway bridge based on actual vision-based measurement and temperature. To deal with the sources of uncertainty which may cause prediction errors, a Gaussian process is modeled with multiple kernels and hyperparameters. Once the hyperparameters are identified through the Gaussian process regression using training data, the proposed method provides a 95% prediction interval as well as a predictive mean about the vertical deflection of the bridge. The proposed method is applied to an arch bridge under operation for high-speed trains in South Korea. The analysis results obtained from the proposed method show good agreement with the actual measurement data on the vertical deflection of the example bridge, and the prediction results can be utilized for decision-making on railway bridge maintenance.
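
    A minimal sketch of the described Gaussian-process regression with multiple kernels and a 95% interval, on synthetic stand-in data (day index and temperature as inputs); the kernel choices and hyperparameters here are illustrative, not those identified in the paper.

      import numpy as np
      from sklearn.gaussian_process import GaussianProcessRegressor
      from sklearn.gaussian_process.kernels import RBF, WhiteKernel, ConstantKernel

      # X: [day index, temperature]; y: vertical deflection (synthetic stand-in)
      rng = np.random.default_rng(1)
      X = np.column_stack([np.arange(100.0),
                           15 + 10*np.sin(np.arange(100.0)/30)])
      y = 0.02*X[:, 0] + 0.1*X[:, 1] + rng.normal(0, 0.2, 100)

      # multiple kernels: smooth trend plus observation noise; hyperparameters
      # are identified automatically during fitting (marginal likelihood)
      kernel = ConstantKernel()*RBF(length_scale=[10.0, 5.0]) + WhiteKernel()
      gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, y)
      mean, std = gp.predict(X, return_std=True)
      lower, upper = mean - 1.96*std, mean + 1.96*std   # 95% prediction interval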

  11. Enhancements to the MCNP6 background source

    DOE PAGES

    McMath, Garrett E.; McKinney, Gregg W.

    2015-10-19

    The particle transport code MCNP has been used to produce a background radiation data file on a worldwide grid that can easily be sampled as a source in the code. Location-dependent cosmic showers were modeled by Monte Carlo methods to produce the resulting neutron and photon background flux at 2054 locations around Earth. An improved galactic-cosmic-ray feature was used to model the source term, as well as data from multiple sources to model the transport environment through atmosphere, soil, and seawater. A new elevation scaling feature was also added to the code to increase the accuracy of the cosmic neutron background for user locations with off-grid elevations. Furthermore, benchmarking has shown the neutron integral flux values to be within experimental error.

  12. Explosion localization and characterization via infrasound using numerical modeling

    NASA Astrophysics Data System (ADS)

    Fee, D.; Kim, K.; Iezzi, A. M.; Matoza, R. S.; Jolly, A. D.; De Angelis, S.; Diaz Moreno, A.; Szuberla, C.

    2017-12-01

    Numerous methods have been applied to locate, detect, and characterize volcanic and anthropogenic explosions using infrasound. Far-field localization techniques typically use back-azimuths from multiple arrays (triangulation) or Reverse Time Migration (RTM, or back-projection). At closer ranges, networks surrounding a source may use Time Difference of Arrival (TDOA), semblance, station-pair double difference, etc. However, at volcanoes and in regions with topography or obstructions that block the direct path of sound, recent studies have shown that numerical modeling is necessary to provide an accurate source location. A heterogeneous and moving atmosphere (winds) may also affect the location. The time reversal mirror (TRM) application of Kim et al. (2015) back-propagates the wavefield using a Finite Difference Time Domain (FDTD) algorithm, with the source corresponding to the location of peak convergence. Although it provides high-resolution source localization and can account for complex wave propagation, TRM is computationally expensive and limited to individual events. Here we present a new technique, termed RTM-FDTD, which integrates RTM and FDTD. Travel time and transmission loss information is computed from each station to the entire potential source grid from 3-D Green's functions derived via FDTD. The wave energy is then back-projected and stacked at each grid point, with the maximum corresponding to the likely source. We apply our method to detect and characterize thousands of explosions from Yasur Volcano, Vanuatu, and Etna Volcano, Italy, both of which provide complex wave propagation and multiple source locations. We compare our results with those from more traditional methods (e.g. semblance), and suggest our method is preferred as it is computationally less expensive than TRM but still integrates numerical modeling. RTM-FDTD could be applied to volcanic and other anthropogenic sources at a wide variety of ranges and scenarios. Kim, K., Lees, J.M., 2015. Imaging volcanic infrasound sources using time reversal mirror algorithm. Geophysical Journal International 202, 1663-1676.
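
    A sketch of the RTM-FDTD stacking step, assuming the travel times and transmission losses have already been computed from FDTD Green's functions; the records are assumed longer than the largest travel time, and all names are ours.

      import numpy as np

      def rtm_stack(waveforms, dt, travel_time, tloss):
          """Back-project station envelopes onto a source grid: shift each
          trace by its precomputed travel time, undo the transmission loss,
          stack, and score each grid node by the peak of the stack."""
          nsta, nt = waveforms.shape
          ngrid = travel_time.shape[1]            # travel_time: (nsta, ngrid), s
          best = np.zeros(ngrid)
          for g in range(ngrid):
              kmax = int(np.ceil(travel_time[:, g].max()/dt))
              nwin = nt - kmax
              if nwin <= 0:
                  continue                        # records too short for this node
              aligned = np.zeros(nwin)
              for s_ in range(nsta):
                  k = int(round(travel_time[s_, g]/dt))
                  aligned += waveforms[s_, k:k + nwin]/tloss[s_, g]
              best[g] = aligned.max()
          return best   # the node with the largest stack is the likely source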

  13. Background field removal using a region adaptive kernel for quantitative susceptibility mapping of human brain

    NASA Astrophysics Data System (ADS)

    Fang, Jinsheng; Bao, Lijun; Li, Xu; van Zijl, Peter C. M.; Chen, Zhong

    2017-08-01

    Background field removal is an important MR phase preprocessing step for quantitative susceptibility mapping (QSM). It separates the local field induced by tissue magnetic susceptibility sources from the background field generated by sources outside a region of interest, e.g. the brain, such as the air-tissue interface. In the vicinity of air-tissue boundaries, e.g. the skull and paranasal sinuses, where large susceptibility variations exist, present background field removal methods are usually insufficient, and these regions often need to be excluded by brain mask erosion at the expense of losing information on the local field, and thus susceptibility measures, in these regions. In this paper, we propose an extension to the variable-kernel sophisticated harmonic artifact reduction for phase data (V-SHARP) background field removal method using a region adaptive kernel (R-SHARP), in which a scalable spherical Gaussian kernel (SGK) is employed with its kernel radius and weights adjustable according to an energy "functional" reflecting the magnitude of field variation. Such an energy functional is defined in terms of a contour and two fitting functions incorporating regularization terms, from which a curve evolution model in level set formulation is derived for energy minimization. We utilize it to detect regions with a large field gradient caused by strong susceptibility variation. In such regions, the SGK will have a small radius and high weight at the sphere center, in a manner adaptive to the voxel energy of the field perturbation. Using the proposed method, the background field generated from external sources can be effectively removed to obtain a more accurate estimation of the local field and thus of the QSM dipole inversion to map local tissue susceptibility sources. Numerical simulation, phantom and in vivo human brain data demonstrate improved performance of R-SHARP compared to the V-SHARP and RESHARP (regularization enabled SHARP) methods, even when the whole paranasal sinus regions are preserved in the brain mask. Shadow artifacts due to strong susceptibility variations in the derived QSM maps could also be largely eliminated using the R-SHARP method, leading to more accurate QSM reconstruction.

  14. Background field removal using a region adaptive kernel for quantitative susceptibility mapping of human brain.

    PubMed

    Fang, Jinsheng; Bao, Lijun; Li, Xu; van Zijl, Peter C M; Chen, Zhong

    2017-08-01

    Background field removal is an important MR phase preprocessing step for quantitative susceptibility mapping (QSM). It separates the local field induced by tissue magnetic susceptibility sources from the background field generated by sources outside a region of interest, e.g. the brain, such as the air-tissue interface. In the vicinity of air-tissue boundaries, e.g. the skull and paranasal sinuses, where large susceptibility variations exist, present background field removal methods are usually insufficient, and these regions often need to be excluded by brain mask erosion at the expense of losing information on the local field, and thus susceptibility measures, in these regions. In this paper, we propose an extension to the variable-kernel sophisticated harmonic artifact reduction for phase data (V-SHARP) background field removal method using a region adaptive kernel (R-SHARP), in which a scalable spherical Gaussian kernel (SGK) is employed with its kernel radius and weights adjustable according to an energy "functional" reflecting the magnitude of field variation. Such an energy functional is defined in terms of a contour and two fitting functions incorporating regularization terms, from which a curve evolution model in level set formulation is derived for energy minimization. We utilize it to detect regions with a large field gradient caused by strong susceptibility variation. In such regions, the SGK will have a small radius and high weight at the sphere center, in a manner adaptive to the voxel energy of the field perturbation. Using the proposed method, the background field generated from external sources can be effectively removed to obtain a more accurate estimation of the local field and thus of the QSM dipole inversion to map local tissue susceptibility sources. Numerical simulation, phantom and in vivo human brain data demonstrate improved performance of R-SHARP compared to the V-SHARP and RESHARP (regularization enabled SHARP) methods, even when the whole paranasal sinus regions are preserved in the brain mask. Shadow artifacts due to strong susceptibility variations in the derived QSM maps could also be largely eliminated using the R-SHARP method, leading to more accurate QSM reconstruction. Copyright © 2017. Published by Elsevier Inc.
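
    A schematic of the region-adaptive kernel idea only: local field-variation energy is mapped to a spherical Gaussian kernel radius (high energy gives a small, center-weighted kernel). The paper's level-set energy functional is not reproduced; the mapping and parameters below are our assumptions.

      import numpy as np

      def adaptive_radius(energy, r_min=2, r_max=12):
          """Map a local field-variation energy map to per-voxel kernel radii:
          high energy (near air-tissue boundaries) -> small radius."""
          e = (energy - energy.min())/(np.ptp(energy) + 1e-12)
          return np.clip(np.round(r_max*(1.0 - e)), r_min, r_max).astype(int)

      def gaussian_sphere(radius, sigma_frac=0.5):
          """Unit-sum Gaussian weights on a sphere of the given voxel radius,
          weighted most heavily at the sphere center."""
          ax = np.arange(-radius, radius + 1)
          X, Y, Z = np.meshgrid(ax, ax, ax, indexing="ij")
          R2 = X**2 + Y**2 + Z**2
          w = np.exp(-R2/(2*(sigma_frac*radius)**2))
          w[R2 > radius**2] = 0.0                 # restrict support to the sphere
          return w/w.sum()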

  15. Short-term microbial release during rain events from on-site sewers and cattle in a surface water source.

    PubMed

    Aström, Johan; Pettersson, Thomas J R; Reischer, Georg H; Hermansson, Malte

    2013-09-01

    The protection of drinking water from pathogens such as Cryptosporidium and Giardia requires an understanding of the short-term microbial release from faecal contamination sources in the catchment. Flow-weighted samples were collected during two rainfall events in a stream draining an area with on-site sewers and during two rainfall events in surface runoff from a bovine cattle pasture. Samples were analysed for human (BacH) and ruminant (BacR) Bacteroidales genetic markers through quantitative polymerase chain reaction (qPCR) and for sorbitol-fermenting bifidobacteria through culturing as a complement to traditional faecal indicator bacteria, somatic coliphages and the parasitic protozoa Cryptosporidium spp. and Giardia spp. analysed by standard methods. Significant positive correlations were observed between BacH, Escherichia coli, intestinal enterococci, sulphite-reducing Clostridia, turbidity, conductivity and UV254 in the stream contaminated by on-site sewers. For the cattle pasture, no correlation was found between any of the genetic markers and the other parameters. Although parasitic protozoa were not detected, the analysis for genetic markers provided baseline data on the short-term faecal contamination due to these potential sources of parasites. Background levels of BacH and BacR makers in soil emphasise the need to including soil reference samples in qPCR-based analyses for Bacteroidales genetic markers.

  16. Diabetic Macular Edema: What is Focal and What is Diffuse?

    PubMed Central

    Browning, David J.; Altaweel, Michael M.; Bressler, Neil M.; Bressler, Susan B.; Scott, Ingrid U.

    2009-01-01

    Purpose To review the available information on classification of diabetic macular edema (DME) as focal or diffuse. Design Interpretive essay. Methods Literature review and interpretation. Results The terms focal and diffuse diabetic macular edema are frequently used without clear definitions. Published definitions often use different examination modalities and are often inconsistent. Evaluating published information on prevalence of focal and diffuse DME, response of focal and diffuse DME to treatments, and importance of focal and diffuse DME in assessing prognosis is hindered because the terms are inconsistently employed. A newer vocabulary may be more constructive, one that describes discrete components of the concepts such as extent and location of macular thickening, involvement of the center of the macula, quantity and pattern of lipid exudates, source of fluorescein leakage, and regional variation in macular thickening, and that distinguishes these terms from the use of the term focal when describing one type of photocoagulation technique. Developing methods for assessing component variables that can be used in clinical practice and establishing reproducibility of the methods will be important tasks. Conclusion Little evidence exists that characteristics of DME described by the terms focal and diffuse help to explain variation in visual acuity or response to treatment. It is unresolved whether a concept of focal and diffuse DME will prove clinically useful despite frequent usage of the terms when describing management of DME. Further studies to address the issues are needed. PMID:18774122

  17. Generalizing Observational Study Results: Applying Propensity Score Methods to Complex Surveys

    PubMed Central

    DuGoff, Eva H; Schuler, Megan; Stuart, Elizabeth A

    2014-01-01

    Objective: To provide a tutorial for using propensity score methods with complex survey data. Data Sources: Simulated data and the 2008 Medical Expenditure Panel Survey. Study Design: Using simulation, we compared the following methods for estimating the treatment effect: a naïve estimate (ignoring both survey weights and propensity scores), survey weighting, propensity score methods (nearest neighbor matching, weighting, and subclassification), and propensity score methods in combination with survey weighting. Methods are compared in terms of bias and 95 percent confidence interval coverage. In Example 2, we used these methods to estimate the effect on health care spending of having a generalist versus a specialist as a usual source of care. Principal Findings: In general, combining a propensity score method and survey weighting is necessary to achieve unbiased treatment effect estimates that are generalizable to the original survey target population. Conclusions: Propensity score methods are an essential tool for addressing confounding in observational studies. Ignoring survey weights may lead to results that are not generalizable to the survey target population. This paper clarifies the appropriate inferences for different propensity score methods and suggests guidelines for selecting an appropriate propensity score method based on a researcher's goal. PMID:23855598
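
    A minimal sketch of the recommended combination, propensity score weighting multiplied by survey design weights; logistic-regression propensity scores and the simple weighted difference in means are our illustrative choices, not the only methods the tutorial covers.

      import numpy as np
      from sklearn.linear_model import LogisticRegression

      def pw_survey_ate(X, treat, y, survey_w):
          """Estimate a population average treatment effect by combining
          inverse-probability-of-treatment weights with survey design weights."""
          ps = LogisticRegression(max_iter=1000).fit(X, treat).predict_proba(X)[:, 1]
          ipw = np.where(treat == 1, 1.0/ps, 1.0/(1.0 - ps))   # propensity weights
          w = ipw*survey_w                                     # combine with design weights
          mu1 = np.sum(w[treat == 1]*y[treat == 1])/np.sum(w[treat == 1])
          mu0 = np.sum(w[treat == 0]*y[treat == 0])/np.sum(w[treat == 0])
          return mu1 - mu0

    Dropping survey_w from the product reproduces the usual sample-only estimate, which is exactly the estimate the paper warns may not generalize to the survey target population.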

  18. MIXOPTIM: A tool for the evaluation and the optimization of the electricity mix in a territory

    NASA Astrophysics Data System (ADS)

    Bonin, Bernard; Safa, Henri; Laureau, Axel; Merle-Lucotte, Elsa; Miss, Joachim; Richet, Yann

    2014-09-01

    This article presents a method for calculating the generation cost of a mix of electricity sources by means of a Monte Carlo simulation of the production output, taking into account fluctuations in demand and the stochastic nature of the availability of the various power sources that compose the mix. This evaluation shows that, for a given electricity mix, the cost has a non-linear dependence on the demand level. In the second part of the paper, we develop some considerations on the management of intermittency. We present a method based on the spectral decomposition of the imposed power fluctuations to calculate the minimal amount of controlled power sources needed to follow these fluctuations. This can be converted into a viability criterion of the mix, included in the MIXOPTIM software. In the third part of the paper, the MIXOPTIM cost evaluation method is applied to the multi-criteria optimization of the mix according to three main criteria: the cost of the mix; its impact on climate in terms of CO2 production; and the security of supply.
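
    A schematic Monte Carlo cost evaluation in the spirit described: sample demand and plant availability, dispatch in merit order, and price unserved energy. This is a strong simplification of MIXOPTIM, and every parameter and name is illustrative.

      import numpy as np

      def mix_cost(capacity, cost_per_mwh, avail_p, demand_mean, demand_sd,
                   n=100_000, voll=3000.0):
          """Expected generation cost of a mix: Monte Carlo over stochastic
          demand and plant availability, with merit-order dispatch and a
          value-of-lost-load penalty for unserved demand."""
          rng = np.random.default_rng(0)
          order = np.argsort(cost_per_mwh)        # dispatch cheapest first
          total = 0.0
          for _ in range(n):
              demand = max(rng.normal(demand_mean, demand_sd), 0.0)
              cost = 0.0
              for i in order:
                  avail = capacity[i]*(rng.random() < avail_p[i])
                  used = min(avail, demand)
                  cost += used*cost_per_mwh[i]
                  demand -= used
              cost += demand*voll                 # penalty for unserved demand
              total += cost
          return total/n

    The unserved-demand penalty is one simple way the non-linear dependence of cost on demand level emerges: once the cheap, highly available sources are exhausted, each additional MWh becomes disproportionately expensive.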

  19. C-arm based cone-beam CT using a two-concentric-arc source trajectory: system evaluation

    NASA Astrophysics Data System (ADS)

    Zambelli, Joseph; Zhuang, Tingliang; Nett, Brian E.; Riddell, Cyril; Belanger, Barry; Chen, Guang-Hong

    2008-03-01

    The current x-ray source trajectory for C-arm based cone-beam CT is a single arc. Reconstruction from data acquired with this trajectory yields cone-beam artifacts for regions other than the central slice. In this work we present the preliminary evaluation of reconstruction from a source trajectory of two concentric arcs using a flat-panel detector equipped C-arm gantry (GE Healthcare Innova 4100 system, Waukesha, Wisconsin). The reconstruction method employed is a summation of FDK-type reconstructions from the two individual arcs. For the angle between arcs studied here, 30°, this method offers a significant reduction in the visibility of cone-beam artifacts, with the additional advantages of simplicity and ease of implementation due to the fact that it is a direct extension of the reconstruction method currently implemented on commercial systems. Reconstructed images from data acquired from the two arc trajectory are compared to those reconstructed from a single arc trajectory and evaluated in terms of spatial resolution, low contrast resolution, noise, and artifact level.

  20. C-arm based cone-beam CT using a two-concentric-arc source trajectory: system evaluation.

    PubMed

    Zambelli, Joseph; Zhuang, Tingliang; Nett, Brian E; Riddell, Cyril; Belanger, Barry; Chen, Guang-Hong

    2008-01-01

    The current x-ray source trajectory for C-arm based cone-beam CT is a single arc. Reconstruction from data acquired with this trajectory yields cone-beam artifacts for regions other than the central slice. In this work we present the preliminary evaluation of reconstruction from a source trajectory of two concentric arcs using a flat-panel detector equipped C-arm gantry (GE Healthcare Innova 4100 system, Waukesha, Wisconsin). The reconstruction method employed is a summation of FDK-type reconstructions from the two individual arcs. For the angle between arcs studied here, 30°, this method offers a significant reduction in the visibility of cone-beam artifacts, with the additional advantages of simplicity and ease of implementation due to the fact that it is a direct extension of the reconstruction method currently implemented on commercial systems. Reconstructed images from data acquired from the two arc trajectory are compared to those reconstructed from a single arc trajectory and evaluated in terms of spatial resolution, low contrast resolution, noise, and artifact level.
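
    The combination step itself is simple; a sketch assuming some single-arc FDK implementation is available as a callable, with an unweighted average standing in for the paper's summation (the actual weighting used is not stated in the abstract).

      import numpy as np

      def two_arc_reconstruction(proj_a, proj_b, fdk):
          """Combine FDK-type reconstructions from two concentric arcs by
          summation; `fdk` is any single-arc FDK implementation returning a
          volume as a NumPy array."""
          vol_a = fdk(proj_a)    # reconstruction from arc 1
          vol_b = fdk(proj_b)    # reconstruction from arc 2
          return 0.5*(vol_a + vol_b)

    The appeal noted in the abstract is precisely this simplicity: the method reuses the single-arc reconstruction already implemented on commercial systems.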

  1. Yield Determination of Underground and Near Surface Explosions

    NASA Astrophysics Data System (ADS)

    Pasyanos, M.

    2015-12-01

    As seismic coverage of the earth's surface continues to improve, we are faced with signals from a wide variety of explosions, ranging from oil train and ordnance explosions to military and terrorist attacks, as well as underground nuclear tests. We present a method for determining the yield of underground and near surface explosions, which should be applicable to many of these. We first review the regional envelope method that was developed for underground explosions (Pasyanos et al., 2012) and more recently modified for near surface explosions (Pasyanos and Ford, 2015). The technique models the waveform envelope templates as a product of source, propagation (geometrical spreading and attenuation), and site terms, while near surface explosions include an additional surface effect. Yields and depths are determined by comparing the observed envelopes to the templates and minimizing the misfit. We then apply the method to nuclear and chemical explosions spanning a range of yields, depths, and distances. We will review some results from previous work, and show new examples from ordnance explosions in Scandinavia, nuclear explosions in Eurasia, and chemical explosions in Nevada associated with the Source Physics Experiments (SPE).
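
    A sketch of the final fitting step, assuming a callable that evaluates an envelope template for a trial yield and depth (the source-propagation-site product described above); the grid search and log-amplitude misfit are our illustrative choices.

      import numpy as np

      def estimate_yield(env_obs, template, yields, depths):
          """Grid search over (yield, depth): template(W, h) returns a
          predicted envelope; minimize the misfit between the logarithms of
          the observed and predicted envelopes."""
          best, arg = np.inf, None
          for W in yields:
              for h in depths:
                  pred = template(W, h)
                  m = np.nanmean((np.log10(env_obs) - np.log10(pred))**2)
                  if m < best:
                      best, arg = m, (W, h)
          return arg, best

    The trade-off the abstract alludes to is visible here: an overburied or underburied shot moves the minimum along the depth axis, so fixing depth to a standard burial value biases the recovered yield.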

  2. Well water quality in rural Nicaragua using a low-cost bacterial test and microbial source tracking.

    PubMed

    Weiss, Patricia; Aw, Tiong Gim; Urquhart, Gerald R; Galeano, Miguel Ruiz; Rose, Joan B

    2016-04-01

    Water-related diseases, particularly diarrhea, are major contributors to morbidity and mortality in developing countries. Monitoring water quality on a global scale is crucial to making progress in terms of population health. Traditional analytical methods are difficult to use in many regions of the world in low-resource settings that face severe water quality issues due to the inaccessibility of laboratories. This study aimed to evaluate a new low-cost method (the compartment bag test (CBT)) in rural Nicaragua. The CBT was used to quantify the presence of Escherichia coli in drinking water wells and aimed to determine the source(s) of any microbial contamination. Results indicate that the CBT is a viable method for use in remote rural regions. The overall quality of well water in Pueblo Nuevo, Nicaragua was deemed unsafe, and results led to the conclusion that animal fecal wastes may be one of the leading causes of well contamination. Elevation and depth of wells were not found to impact overall water quality. However rope-pump wells had a 64.1% reduction in contamination when compared with simple wells.

  3. The use of the virtual source technique in computing scattering from periodic ocean surfaces.

    PubMed

    Abawi, Ahmad T

    2011-08-01

    In this paper the virtual source technique is used to compute scattering of a plane wave from a periodic ocean surface. The virtual source technique is a method of imposing boundary conditions using virtual sources with initially unknown complex amplitudes. These amplitudes are then determined by applying the boundary conditions. The fields due to these virtual sources are given by the environment Green's function. In principle, satisfying boundary conditions on an infinite surface requires an infinite number of sources. In this paper, the periodic nature of the surface is employed to populate a single period of the surface with virtual sources, and m surface periods are added to obtain scattering from the entire surface. The use of an accelerated sum formula makes it possible to obtain a convergent sum with a relatively small number of terms (∼40). The accuracy of the technique is verified by comparing its results with those obtained using the integral equation technique.
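
    A sketch of the core linear solve for the virtual-source amplitudes on one surface period, using the 2-D free-space Green's function as a stand-in for the environment Green's function and omitting the accelerated periodic image sum; a pressure-release surface is assumed, and all names are ours.

      import numpy as np
      from scipy.special import hankel1

      def virtual_source_amplitudes(src_pts, bnd_pts, u_inc, k):
          """Solve for complex virtual-source amplitudes c so the total field
          vanishes on a pressure-release surface: G c = -u_inc at the boundary
          points. Sources are placed below the surface, so R > 0 everywhere."""
          dx = bnd_pts[:, None, 0] - src_pts[None, :, 0]
          dz = bnd_pts[:, None, 1] - src_pts[None, :, 1]
          R = np.hypot(dx, dz)
          G = 0.25j*hankel1(0, k*R)                # 2-D free-space Green's function
          c, *_ = np.linalg.lstsq(G, -u_inc, rcond=None)
          return c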

  4. A reconstruction method of intra-ventricular blood flow using color flow ultrasound: a simulation study

    NASA Astrophysics Data System (ADS)

    Jang, Jaeseong; Ahn, Chi Young; Jeon, Kiwan; Choi, Jung-il; Lee, Changhoon; Seo, Jin Keun

    2015-03-01

    A reconstruction method is proposed here to quantify the distribution of blood flow velocity fields inside the left ventricle from color Doppler echocardiography measurements. From the 3D incompressible Navier-Stokes equation, a 2D incompressible Navier-Stokes equation with a mass source term is derived to utilize the measurable color flow ultrasound data in a plane, along with the moving boundary condition. The proposed model reflects out-of-plane blood flows on the imaging plane through the mass source term. To demonstrate the feasibility of the proposed method, we performed numerical simulations of the forward problem and numerical analysis of the reconstruction method. First, we constructed a 3D moving LV region having a specific stroke volume. To obtain synthetic intra-ventricular flows, we performed a numerical simulation of the forward problem of the Navier-Stokes equation inside the 3D moving LV, computed 3D intra-ventricular velocity fields as a solution of the forward problem, projected the 3D velocity fields onto the imaging plane, and took the inner product of the 2D velocity fields on the imaging plane with the scanline directions to obtain synthetic scanline-projected velocities at each position. The proposed method utilized the 2D synthetic projected velocity data for reconstructing LV blood flow. By computing the difference between the synthetic and reconstructed flow fields, we obtained averaged point-wise errors of 0.06 m/s and 0.02 m/s for the u- and v-components, respectively.

  5. 10 CFR 50.67 - Accident source term.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... occupancy of the control room under accident conditions without personnel receiving radiation exposures in... 10 Energy 1 2014-01-01 2014-01-01 false Accident source term. 50.67 Section 50.67 Energy NUCLEAR... Conditions of Licenses and Construction Permits § 50.67 Accident source term. (a) Applicability. The...

  6. 10 CFR 50.67 - Accident source term.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... occupancy of the control room under accident conditions without personnel receiving radiation exposures in... 10 Energy 1 2012-01-01 2012-01-01 false Accident source term. 50.67 Section 50.67 Energy NUCLEAR... Conditions of Licenses and Construction Permits § 50.67 Accident source term. (a) Applicability. The...

  7. 10 CFR 50.67 - Accident source term.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... occupancy of the control room under accident conditions without personnel receiving radiation exposures in... 10 Energy 1 2010-01-01 2010-01-01 false Accident source term. 50.67 Section 50.67 Energy NUCLEAR... Conditions of Licenses and Construction Permits § 50.67 Accident source term. (a) Applicability. The...

  8. 10 CFR 50.67 - Accident source term.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... occupancy of the control room under accident conditions without personnel receiving radiation exposures in... 10 Energy 1 2013-01-01 2013-01-01 false Accident source term. 50.67 Section 50.67 Energy NUCLEAR... Conditions of Licenses and Construction Permits § 50.67 Accident source term. (a) Applicability. The...

  9. 10 CFR 50.67 - Accident source term.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... occupancy of the control room under accident conditions without personnel receiving radiation exposures in... 10 Energy 1 2011-01-01 2011-01-01 false Accident source term. 50.67 Section 50.67 Energy NUCLEAR... Conditions of Licenses and Construction Permits § 50.67 Accident source term. (a) Applicability. The...

  10. A mass spectrometry method for the determination of the species of origin of gelatine in foods and pharmaceutical products.

    PubMed

    Grundy, H H; Reece, P; Buckley, M; Solazzo, C M; Dowle, A A; Ashford, D; Charlton, A J; Wadsley, M K; Collins, M J

    2016-01-01

    Gelatine is a component of a wide range of foods. It is manufactured as a by-product of the meat industry from bone and hide, mainly from bovine and porcine sources. Accurate food labelling enables consumers to make informed decisions about the food they buy. Since labelling currently relies heavily on due diligence involving a paper trail, a reliable test method for determining the species origin of gelatine could benefit the consumer industries. We present a method to determine the species origin of gelatines using peptide mass spectrometry. An evaluative comparison is also made with ELISA and PCR technologies. Commercial gelatines were found to contain undeclared species. Furthermore, undeclared bovine peptides were observed in commercial injection matrices. This analytical method could therefore support the food industry in determining the species authenticity of gelatine in foods. Crown Copyright © 2015. Published by Elsevier Ltd. All rights reserved.

  11. How Big Was It? Getting at Yield

    NASA Astrophysics Data System (ADS)

    Pasyanos, M.; Walter, W. R.; Ford, S. R.

    2013-12-01

    One of the most coveted pieces of information in the wake of a nuclear test is the explosive yield. Determining the yield from remote observations, however, is not trivial. For instance, recorded seismic amplitudes, used to estimate the yield, are significantly modified by the intervening media, which vary widely and need to be properly accounted for. Even after correcting for propagation effects such as geometrical spreading, attenuation, and station site terms, getting from the resulting source term to a yield depends on the specifics of the explosion source model, including material properties and depth. Some formulas assume a standard depth of burial, and observed amplitudes can vary if the actual test is significantly overburied or underburied. We will consider the complications and challenges of making these determinations using a number of standard, more traditional methods and a more recent method that we have developed using regional waveform envelopes. We will make this comparison for recent declared nuclear tests from the DPRK. We will also compare the methods using older explosions at the Nevada Test Site with announced yields, materials and depths, so that actual performance can be measured. In all cases, we also strive to quantify realistic uncertainties on the yield estimates.
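
    Magnitude-yield formulas of the kind alluded to here generally take the form mb = a + b*log10(Y); the sketch below inverts such a relation for yield. The coefficients are illustrative placeholders only: real values depend on the emplacement medium, depth of burial, and regional calibration, none of which are given in this abstract.

        def yield_from_mb(mb, a=4.45, b=0.75):
            # Invert a generic magnitude-yield relation mb = a + b*log10(Y).
            # a and b are placeholder coefficients, not calibrated values.
            return 10.0 ** ((mb - a) / b)

        print(yield_from_mb(4.9))  # ~4 kt under the assumed coefficients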

  12. Parameterized source term in the diffusion approximation for enhanced near-field modeling of collimated light

    NASA Astrophysics Data System (ADS)

    Jia, Mengyu; Wang, Shuang; Chen, Xueying; Gao, Feng; Zhao, Huijuan

    2016-03-01

    Most analytical methods for describing light propagation in turbid media exhibit low effectiveness in the near field of a collimated source. Motivated by the Charge Simulation Method in electromagnetic theory as well as established discrete-source-based modeling, we have reported an improved explicit model, referred to as the "Virtual Source" (VS) diffuse approximation (DA), which inherits the mathematical simplicity of the DA while considerably extending its validity in modeling near-field photon migration in low-albedo media. In this model, the collimated light in the standard DA is approximated as multiple isotropic point sources (VS) distributed along the incident direction. For performance enhancement, a fitting procedure between the calculated and realistic reflectances is adopted in the near field to optimize the VS parameters (intensities and locations). To be practically applicable, an explicit 2VS-DA model is established based on closed-form derivations of the VS parameters for the typical ranges of the optical parameters. The proposed VS-DA model is validated by comparison with Monte Carlo simulations, and further applied to image reconstruction in a Laminar Optical Tomography system.
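
    A minimal sketch of the underlying idea, assuming the standard infinite-medium diffusion-approximation Green's function: the diffuse fluence is modeled as a superposition of isotropic point sources placed along the incident axis. The source positions and amplitudes stand in for the fitted VS parameters, which the paper derives in closed form.

        import numpy as np

        def fluence_vs_da(r_obs, src_pos, src_amp, mua, musp):
            # Diffuse fluence from multiple isotropic point sources
            # (infinite homogeneous medium assumed).
            # r_obs: (N, 3) observation points; src_pos: (M, 3) virtual source
            # locations along the incident axis; src_amp: (M,) VS intensities.
            D = 1.0 / (3.0 * (mua + musp))   # diffusion coefficient
            mu_eff = np.sqrt(mua / D)        # effective attenuation
            d = np.linalg.norm(r_obs[:, None, :] - src_pos[None, :, :], axis=-1)
            return np.sum(src_amp * np.exp(-mu_eff * d) / (4.0 * np.pi * D * d),
                          axis=1)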

  13. Unsupervised method for automatic construction of a disease dictionary from a large free text collection.

    PubMed

    Xu, Rong; Supekar, Kaustubh; Morgan, Alex; Das, Amar; Garber, Alan

    2008-11-06

    Concept specific lexicons (e.g. diseases, drugs, anatomy) are a critical source of background knowledge for many medical language-processing systems. However, the rapid pace of biomedical research and the lack of constraints on usage ensure that such dictionaries are incomplete. Focusing on disease terminology, we have developed an automated, unsupervised, iterative pattern learning approach for constructing a comprehensive medical dictionary of disease terms from randomized clinical trial (RCT) abstracts, and we compared different ranking methods for automatically extracting contextual patterns and concept terms. When used to identify disease concepts from 100 randomly chosen, manually annotated clinical abstracts, our disease dictionary shows significant performance improvement (F1 increased by 35-88%) over available, manually created disease terminologies.

  14. Unsupervised Method for Automatic Construction of a Disease Dictionary from a Large Free Text Collection

    PubMed Central

    Xu, Rong; Supekar, Kaustubh; Morgan, Alex; Das, Amar; Garber, Alan

    2008-01-01

    Concept specific lexicons (e.g. diseases, drugs, anatomy) are a critical source of background knowledge for many medical language-processing systems. However, the rapid pace of biomedical research and the lack of constraints on usage ensure that such dictionaries are incomplete. Focusing on disease terminology, we have developed an automated, unsupervised, iterative pattern learning approach for constructing a comprehensive medical dictionary of disease terms from randomized clinical trial (RCT) abstracts, and we compared different ranking methods for automatically extracting contextual patterns and concept terms. When used to identify disease concepts from 100 randomly chosen, manually annotated clinical abstracts, our disease dictionary shows significant performance improvement (F1 increased by 35–88%) over available, manually created disease terminologies. PMID:18999169

  15. Minimal-Drift Heading Measurement using a MEMS Gyro for Indoor Mobile Robots.

    PubMed

    Hong, Sung Kyung; Park, Sungsu

    2008-11-17

    To meet the challenges of making low-cost MEMS yaw rate gyros for the precise self-localization of indoor mobile robots, this paper examines a practical and effective method of minimizing drift in the heading angle that relies solely on integration of rate signals from a gyro. The main idea of the proposed approach consists of two parts: 1) self-identification of calibration coefficients that affect long-term performance, and 2) a threshold filter to reject the broadband noise component that affects short-term performance. Experimental results with the proposed phased method applied to the Epson XV3500 gyro demonstrate that it effectively yields minimal-drift heading angle measurements, overcoming the major error sources in the MEMS gyro output.
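
    A minimal sketch of the two-part idea under stated assumptions: pre-identified bias and scale-factor coefficients correct each raw sample, and a simple dead-band threshold rejects the broadband noise floor before integration. The function name, coefficient values, and threshold are hypothetical, not the paper's calibration procedure.

        def integrate_heading(rates, dt, bias, scale, threshold):
            # Integrate gyro rate into heading with calibration and a
            # threshold (dead-band) filter.
            # rates: raw rate samples [deg/s]; bias: identified zero-rate
            # offset; scale: identified scale factor; threshold: corrected
            # rates below this are treated as noise and zeroed.
            heading = 0.0
            for r in rates:
                corrected = scale * (r - bias)
                if abs(corrected) < threshold:  # reject broadband noise floor
                    corrected = 0.0
                heading += corrected * dt
            return heading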

  16. Comparison of TLD calibration methods for  192Ir dosimetry

    PubMed Central

    Butler, Duncan J.; Wilfert, Lisa; Ebert, Martin A.; Todd, Stephen P.; Hayton, Anna J.M.; Kron, Tomas

    2013-01-01

    For the purpose of dose measurement using a high-dose-rate 192Ir source, four methods of thermoluminescent dosimeter (TLD) calibration were investigated. Three of the four calibration methods used the 192Ir source. Dwell times were calculated to deliver 1 Gy to the TLDs irradiated either in air or water. Dwell time calculations were confirmed by direct measurement using an ionization chamber. The fourth method of calibration used 6 MV photons from a medical linear accelerator, and an energy correction factor was applied to account for the difference in sensitivity of the TLDs in 192Ir and 6 MV beams. The results of the four TLD calibration methods are presented in the context of a brachytherapy audit in which seven Australian centers irradiated three sets of TLDs in a water phantom. The results were in agreement within estimated uncertainties when the TLDs were calibrated with the 192Ir source. Calibrating TLDs in a phantom similar to that used for the audit proved to be the most practical method and provided the greatest confidence in the measured dose. When calibrated using 6 MV photons, the TLD results were consistently higher than the 192Ir-calibrated TLDs, suggesting this method does not fully correct for the response of the TLDs when irradiated in the audit phantom. PACS number: 87 PMID:23318392

  17. A solution to the water resources crisis in wetlands: development of a scenario-based modeling approach with uncertain features.

    PubMed

    Lv, Ying; Huang, Guohe; Sun, Wei

    2013-01-01

    A scenario-based interval two-phase fuzzy programming (SITF) method was developed for water resources planning in a wetland ecosystem. The SITF approach incorporates two-phase fuzzy programming, interval mathematical programming, and scenario analysis within a general framework. It can tackle fuzzy and interval uncertainties in cost coefficients, resource availabilities, water demands, hydrological conditions and other parameters within a multi-source supply and multi-sector consumption context. The SITF method has the advantage of effectively improving the membership degrees of the system objective and all fuzzy constraints, so that both a higher satisfaction grade of the objective and more efficient utilization of system resources can be guaranteed. With systematic consideration of the water demands of the ecosystem, the SITF method was successfully applied to Baiyangdian Lake, the largest wetland in North China. Multi-source supplies (including the inter-basin water sources of Yuecheng Reservoir and the Yellow River) and multiple water users (including agricultural, industrial and domestic sectors) were taken into account. The results indicated that the SITF approach generates useful solutions for identifying long-term water allocation and transfer schemes under multiple economic, environmental, ecological, and system-security targets. It enables a comparative analysis of the satisfaction degrees of decisions under various policy scenarios. Moreover, it is of significance for quantifying the relationship between hydrological change and human activities, such that a scheme for ecologically sustainable water supply to Baiyangdian Lake can be achieved. Copyright © 2012 Elsevier B.V. All rights reserved.

  18. Annual Rates on Seismogenic Italian Sources with Models of Long-Term Predictability for the Time-Dependent Seismic Hazard Assessment In Italy

    NASA Astrophysics Data System (ADS)

    Murru, Maura; Falcone, Giuseppe; Console, Rodolfo

    2016-04-01

    The present study is carried out in the framework of the Center for Seismic Hazard (CPS) of INGV, under the agreement signed in 2015 with the Department of Civil Protection to develop a new seismic hazard model for the country that can update the current reference (MPS04-S1; zonesismiche.mi.ingv.it and esse1.mi.ingv.it) released between 2004 and 2006. In this initiative, we participate with the Long-Term Stress Transfer (LTST) Model to provide the annual occurrence rate of a seismic event over the entire Italian territory, from a minimum magnitude of Mw 4.5, in bins of 0.1 magnitude units on geographical cells of 0.1° x 0.1°. Our methodology fuses a statistical time-dependent renewal model (Brownian Passage Time, BPT; Matthews et al., 2002) with a physical model that accounts for the permanent stress change a seismogenic source undergoes as a result of earthquakes occurring on surrounding sources. For each considered catalog (historical, instrumental and individual seismogenic sources) we determined a distinct rate value for each 0.1° x 0.1° cell for the next 50 yrs. If a cell falls within one of the sources in question, we adopted the corresponding rate value, which refers only to the magnitude of the characteristic event. This rate value is divided by the number of grid cells that fall on the horizontal projection of the source. If instead a cell falls outside any seismogenic source, we used the average rate obtained from the historical and instrumental catalogs, following the method of Frankel (1995). The annual occurrence rate was computed for each of the three considered distributions (Poisson, BPT, and BPT with inclusion of stress transfer).
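
    For illustration, the conditional event probability under a BPT renewal model can be computed from the inverse-Gaussian distribution. A minimal sketch assuming SciPy's invgauss parameterization (shape mu, scale); the mean recurrence time, aperiodicity, and elapsed time below are purely hypothetical values, not those of any Italian source.

        from scipy.stats import invgauss

        def bpt_conditional_prob(t_elapsed, dt, mean_rt, alpha):
            # BPT(mean_rt, alpha) is the inverse Gaussian with mean mean_rt
            # and aperiodicity alpha; mapped to SciPy's parameterization:
            # shape mu = alpha**2, scale = mean_rt / alpha**2.
            dist = invgauss(mu=alpha**2, scale=mean_rt / alpha**2)
            F = dist.cdf
            # P(event in (t, t+dt] | no event up to t)
            return (F(t_elapsed + dt) - F(t_elapsed)) / (1.0 - F(t_elapsed))

        # e.g. mean recurrence 450 yr, alpha 0.5, 200 yr elapsed, next 50 yr:
        print(bpt_conditional_prob(200.0, 50.0, 450.0, 0.5))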

  19. A real-time laser feedback control method for the three-wave laser source used in the polarimeter-interferometer diagnostic on Joint-TEXT tokamak

    NASA Astrophysics Data System (ADS)

    Xiong, C. Y.; Chen, J.; Li, Q.; Liu, Y.; Gao, L.

    2014-12-01

    A three-wave laser polarimeter-interferometer, equipped with three independent far-infrared laser sources, has been developed on Joint-TEXT (J-TEXT) tokamak. The diagnostic system is capable of high-resolution temporal and phase measurement of the Faraday angle and line-integrated density. However, for long-term operation (>10 min), the free-running lasers can lead to large drifts of the intermediate frequencies (˜100-˜500 kHz/10 min) and decay of laser power (˜10%-˜20%/10 min), which act to degrade diagnostic performance. In addition, these effects lead to increased maintenance cost and limit measurement applicability to long pulse/steady state experiments. To solve this problem, a real-time feedback control method of the laser source is proposed. By accurately controlling the length of each laser cavity, both the intermediate frequencies and laser power can be simultaneously controlled: the intermediate frequencies are controlled according to the pre-set values, while the laser powers are maintained at an optimal level. Based on this approach, a real-time feedback control system has been developed and applied on J-TEXT polarimeter-interferometer. Long-term (theoretically no time limit) feedback of intermediate frequencies (maximum change less than ±12 kHz) and laser powers (maximum relative power change less than ±7%) has been successfully achieved.

  20. A real-time laser feedback control method for the three-wave laser source used in the polarimeter-interferometer diagnostic on Joint-TEXT tokamak.

    PubMed

    Xiong, C Y; Chen, J; Li, Q; Liu, Y; Gao, L

    2014-12-01

    A three-wave laser polarimeter-interferometer, equipped with three independent far-infrared laser sources, has been developed on Joint-TEXT (J-TEXT) tokamak. The diagnostic system is capable of high-resolution temporal and phase measurement of the Faraday angle and line-integrated density. However, for long-term operation (>10 min), the free-running lasers can lead to large drifts of the intermediate frequencies (∼100-∼500 kHz/10 min) and decay of laser power (∼10%-∼20%/10 min), which act to degrade diagnostic performance. In addition, these effects lead to increased maintenance cost and limit measurement applicability to long pulse/steady state experiments. To solve this problem, a real-time feedback control method of the laser source is proposed. By accurately controlling the length of each laser cavity, both the intermediate frequencies and laser power can be simultaneously controlled: the intermediate frequencies are controlled according to the pre-set values, while the laser powers are maintained at an optimal level. Based on this approach, a real-time feedback control system has been developed and applied on J-TEXT polarimeter-interferometer. Long-term (theoretically no time limit) feedback of intermediate frequencies (maximum change less than ±12 kHz) and laser powers (maximum relative power change less than ±7%) has been successfully achieved.
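
    The cavity-length control loop is not specified in detail in the abstract; as an illustrative sketch only, a proportional-integral update that drives the measured intermediate frequency toward its pre-set value via a cavity-length actuator. The gains, time step, and the single-objective simplification (the real system also maintains laser power at an optimal level) are all assumptions.

        def cavity_pi_step(if_measured, if_setpoint, state, kp=0.1, ki=0.02,
                           dt=0.01):
            # One PI update of a laser-cavity length actuator (illustrative).
            # Drives the measured intermediate frequency toward its pre-set
            # value; state is a dict holding the integrator memory.
            error = if_setpoint - if_measured
            state["integral"] = state.get("integral", 0.0) + error * dt
            return kp * error + ki * state["integral"]  # length correction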

  1. Backward renormalization-group inference of cortical dipole sources and neural connectivity efficacy

    NASA Astrophysics Data System (ADS)

    Amaral, Selene da Rocha; Baccalá, Luiz A.; Barbosa, Leonardo S.; Caticha, Nestor

    2017-06-01

    Proper neural connectivity inference has become essential for understanding cognitive processes associated with human brain function. Its efficacy is often hampered by the curse of dimensionality. In the case of electroencephalography, a noninvasive electrophysiological monitoring technique for recording the electrical activity of the brain, a possible way around this is to replace multichannel electrode information with dipole-reconstructed data. We use a method based on maximum entropy and the renormalization group to infer the position of the sources, whose success hinges on transmitting information from low- to high-resolution representations of the cortex. The performance of this method compares favorably to other available source inference algorithms, which are ranked here by their performance on directed connectivity inference using artificially generated dynamic data. We examine some representative scenarios comprising different numbers of dynamically connected dipoles over distinct cortical surface positions and under different sensor noise levels. The overall conclusion is that inverse problem solutions do not affect the correct inference of the direction of the flow of information as long as the equivalent dipole sources are correctly found.

  2. Moment tensor analysis of very shallow sources

    DOE PAGES

    Chiang, Andrea; Dreger, Douglas S.; Ford, Sean R.; ...

    2016-10-11

    An issue for moment tensor (MT) inversion of shallow seismic sources is that some components of the Green's functions have vanishing amplitudes at the free surface, which can result in bias in the MT solution. The effects of the free surface on the stability of the MT method become important as we continue to investigate and improve the capabilities of regional full MT inversion for source-type identification and discrimination. It is important to understand free-surface effects on discriminating shallow explosive sources for nuclear monitoring purposes. It may also be important in natural systems that have very shallow seismicity, such as volcanic and geothermal systems. We examine the effects of the free surface on the MT via synthetic testing and apply the MT-based discrimination method to three quarry blasts from the HUMMING ALBATROSS experiment. These shallow chemical explosions, at ~10 m depth and recorded at up to several kilometers distance, represent a rather severe source-station geometry in terms of free-surface effects. We show that the method is capable of recovering a predominantly explosive source mechanism, and that the combined waveform and first-motion method enables the unique discrimination of these events. Furthermore, recovering the design yield using seismic moment estimates from MT inversion remains challenging, but we can begin to put error bounds on our moment estimates using the network sensitivity solution technique.

  3. Depth to the bottom of magnetic sources (DBMS) from aeromagnetic data of Central India using modified centroid method for fractal distribution of sources

    NASA Astrophysics Data System (ADS)

    Bansal, A. R.; Anand, S.; Rajaram, M.; Rao, V.; Dimri, V. P.

    2012-12-01

    The depth to the bottom of the magnetic sources (DBMS) may be used as an estimate of the Curie-point depth. DBMS values can also be interpreted in terms of the thermal structure of the crust. The thermal structure of the crust is a sensitive parameter that depends on many properties of the crust, e.g., modes of deformation, depths of brittle and ductile deformation zones, regional heat flow variations, seismicity, subsidence/uplift patterns and the maturity of organic matter in sedimentary basins. The conventional centroid method of DBMS estimation assumes a random, uniform, uncorrelated distribution of sources; to overcome this limitation, a modified centroid method based on a fractal distribution has been proposed. We applied this modified centroid method to aeromagnetic data of the central Indian region, selecting 29 half-overlapping blocks of dimension 200 km x 200 km covering different parts of central India. Shallower DBMS values are found for the western and southern portions of the Indian shield. The DBMS values are as shallow as the middle crust in the southwest Deccan trap, and probably deeper than the Moho in the Chhattisgarh basin. In a few places the DBMS is close to the Moho depth found from seismic studies, and in other places it is shallower than the Moho. The DBMS values indicate the complex nature of the Indian crust.
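
    In the centroid method, the depth to the top (Zt) and the centroid depth (Z0) of the magnetic layer are estimated from linear fits to the radially averaged power spectrum, and the bottom depth follows as Zb = 2*Z0 - Zt. A minimal sketch, with the wavenumber bands and the fractal exponent beta as user-chosen assumptions (beta = 0 recovers the conventional random-source model):

        import numpy as np

        def dbms_centroid(k, power, band_top, band_cent, beta=0.0):
            # k: radial wavenumbers (> 0) of the azimuthally averaged spectrum.
            # power: radially averaged power spectrum P(k).
            # band_top, band_cent: (kmin, kmax) fit bands for top and centroid.
            # beta: fractal scaling exponent used to de-fractal the spectrum.
            p = power * k**beta
            def slope(band, y):
                m = (k >= band[0]) & (k <= band[1])
                return np.polyfit(k[m], y[m], 1)[0]
            z_top = -slope(band_top, np.log(np.sqrt(p)))        # depth to top
            z_cent = -slope(band_cent, np.log(np.sqrt(p) / k))  # centroid depth
            return 2.0 * z_cent - z_top                         # Zb = 2*Z0 - Zt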

  4. Developing a comprehensive time series of GDP per capita for 210 countries from 1950 to 2015

    PubMed Central

    2012-01-01

    Background: Income has been extensively studied and utilized as a determinant of health. There are several sources of income expressed as gross domestic product (GDP) per capita, but there are no time series that are complete for the years between 1950 and 2015 for the 210 countries for which data exist. It is in the interest of population health research to establish a global time series that is complete from 1950 to 2015. Methods: We collected GDP per capita estimates expressed in either constant US dollar terms or international dollar terms (corrected for purchasing power parity) from seven sources. We applied several stages of models, including ordinary least-squares regressions and mixed effects models, to complete each of the seven source series from 1950 to 2015. The three US dollar and four international dollar series were each averaged to produce two new GDP per capita series. Results and discussion: Nine complete series from 1950 to 2015 for 210 countries are available for use. These series can serve various analytical purposes and can illustrate myriad economic trends and features. The derivation of the two new series allows researchers to avoid any series-specific biases that may exist. The modeling approach used is flexible and will allow for yearly updating as new estimates are produced by the source series. Conclusion: GDP per capita is a necessary tool in population health research, and our development and implementation of a new method has allowed for the most comprehensive known time series to date. PMID:22846561

  5. Genomic comparison of multi-drug resistant invasive and colonizing Acinetobacter baumannii isolated from diverse human body sites reveals genomic plasticity.

    PubMed

    Sahl, Jason W; Johnson, J Kristie; Harris, Anthony D; Phillippy, Adam M; Hsiao, William W; Thom, Kerri A; Rasko, David A

    2011-06-04

    Acinetobacter baumannii has recently emerged as a significant global pathogen, with a surprisingly rapid acquisition of antibiotic resistance and spread within hospitals and health care institutions. This study examines the genomic content of three A. baumannii strains isolated from distinct body sites. Isolates from blood, peri-anal, and wound sources were examined in an attempt to identify genetic features that could be correlated with each isolation source. Pulsed-field gel electrophoresis, multi-locus sequence typing and antibiotic resistance profiles demonstrated genotypic and phenotypic variation. Each isolate was sequenced to high-quality draft status, which allowed for comparative genomic analyses with existing A. baumannii genomes. A high-resolution, whole-genome alignment method detailed the phylogenetic relationships of sequenced A. baumannii strains and found no correlation between phylogeny and body site of isolation. This method identified genomic regions unique both to isolates found on the surface of the skin or in wounds, termed colonization isolates, and to those identified from body fluids, termed invasive isolates; these regions may play a role in the pathogenesis and spread of this important pathogen. A PCR-based screen of 74 A. baumannii isolates demonstrated that these unique genes are not exclusive to either phenotype or isolation source; however, a conserved genomic region exclusive to all sequenced A. baumannii was identified and verified. The results of the comparative genome analysis and PCR assay show that A. baumannii is a diverse and genomically variable pathogen that appears to have the potential to cause a range of human disease regardless of the isolation source.

  6. Finite-amplitude, pulsed, ultrasonic beams

    NASA Astrophysics Data System (ADS)

    Coulouvrat, François; Frøysa, Kjell-Eivind

    An analytical, approximate solution of the inviscid KZK equation for a nonlinear pulsed sound beam radiated by an acoustic source with a Gaussian velocity distribution is obtained by means of the renormalization method. This method involves two steps. First, the transient, weakly nonlinear field is computed. However, because of cumulative nonlinear effects, that expansion is non-uniform and breaks down at some distance from the source. So, in order to extend its validity, it is re-written in a new frame of coordinates, better suited to following the nonlinear distortion of the wave profile. Basically, the nonlinear coordinate transform introduces additional terms in the expansion, which are chosen so as to counterbalance the non-uniform ones. Special care is devoted to the treatment of shock waves. Finally, comparisons with the results of a finite-difference scheme prove favorable, and show the efficiency of the method for a rather large range of parameters.

  7. Infrared and visible image fusion method based on saliency detection in sparse domain

    NASA Astrophysics Data System (ADS)

    Liu, C. H.; Qi, Y.; Ding, W. R.

    2017-06-01

    Infrared and visible image fusion is a key problem in the field of multi-sensor image fusion. To better preserve the significant information of the infrared and visible images in the final fused image, saliency maps of the source images are introduced into the fusion procedure. Firstly, under the framework of the joint sparse representation (JSR) model, the global and local saliency maps of the source images are obtained from the sparse coefficients. Then, a saliency detection model is proposed that combines the global and local saliency maps to generate an integrated saliency map. Finally, a weighted fusion algorithm based on the integrated saliency map is developed to perform the fusion. The experimental results show that our method is superior to state-of-the-art methods in terms of several universal quality evaluation indexes, as well as in visual quality.
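
    The final weighted-fusion step admits a very compact illustration: each output pixel is a convex combination of the two sources, weighted by the integrated saliency maps. A sketch under that assumption (the paper's exact weighting rule may differ):

        import numpy as np

        def saliency_weighted_fusion(ir, vis, sal_ir, sal_vis, eps=1e-8):
            # ir, vis: co-registered source images (same shape, float).
            # sal_ir, sal_vis: integrated saliency maps of the two sources.
            w = sal_ir / (sal_ir + sal_vis + eps)  # per-pixel IR weight
            return w * ir + (1.0 - w) * vis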

  8. Origins of Contemporary Feminism: Source of Difficulty for the Equal Rights Amendment.

    ERIC Educational Resources Information Center

    Foss, Karen A.

    A survey of the methods of three feminist organizations offers general explanations for the failure of the Equal Rights Amendment (ERA). Limited to the emergence phase (1966-70) of the organizations, the survey examines the National Organization of Women (NOW), the Feminists, and the Women's Equity Action League (WEAL) in terms of their definition…

  9. Field measurements and modeling to resolve m2 to km2 CH4 emissions for a complex urban source: An Indiana landfill study

    USDA-ARS?s Scientific Manuscript database

    Large uncertainties for landfill CH4 emissions due to spatial and temporal variabilities remain unresolved by short-term field campaigns and historic GHG inventory models. Using four field methods (aircraft-based mass balance, tracer correlation, vertical radial plume mapping, and static chambers) ...

  10. Nonlinear forecasting as a way of distinguishing chaos from measurement error in time series

    NASA Astrophysics Data System (ADS)

    Sugihara, George; May, Robert M.

    1990-04-01

    An approach is presented for making short-term predictions about the trajectories of chaotic dynamical systems. The method is applied to data on measles, chickenpox, and marine phytoplankton populations, to show how apparent noise associated with deterministic chaos can be distinguished from sampling error and other sources of externally induced environmental noise.
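
    As an illustrative sketch of this style of nonlinear forecasting: a nearest-neighbor predictor in a delay embedding, in the spirit of simplex projection. The embedding dimension, exponential distance weighting, and indexing choices below are assumptions, not the authors' exact algorithm.

        import numpy as np

        def simplex_forecast(series, E=3, tp=1):
            # One-step nonlinear forecast via nearest neighbors in a
            # delay embedding of dimension E, at prediction horizon tp.
            x = np.asarray(series, dtype=float)
            emb = np.array([x[i:i + E] for i in range(len(x) - E + 1)])
            target, library = emb[-1], emb[:-tp - 1]  # exclude unseen futures
            d = np.linalg.norm(library - target, axis=1)
            idx = np.argsort(d)[:E + 1]                   # E+1 neighbors
            w = np.exp(-d[idx] / max(d[idx][0], 1e-12))   # distance weights
            futures = x[idx + E - 1 + tp]                 # observed futures
            return np.sum(w * futures) / np.sum(w)

        # Example: forecast the next value of a chaotic logistic-map series.
        x = [0.4]
        for _ in range(200):
            x.append(3.9 * x[-1] * (1.0 - x[-1]))
        print(simplex_forecast(x, E=3, tp=1))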

  11. An Open Source Agenda for Research Linking Text and Image Content Features.

    ERIC Educational Resources Information Center

    Goodrum, Abby A.; Rorvig, Mark E.; Jeong, Ki-Tai; Suresh, Chitturi

    2001-01-01

    Proposes methods to utilize image primitives to support term assignment for image classification. Proposes to release code for image analysis in a common tool set for other researchers to use. Of particular focus is the expansion of work by researchers in image indexing to include image content-based feature extraction capabilities in their work.…

  12. Formaldehyde emission from particleboard and plywood paneling : measurement, mechanism, and product standards

    Treesearch

    George E. Myers

    1983-01-01

    A number of commercial panel products, primarily particleboard and hardwood plywood, were tested for their formaldehyde emission behavior using desiccator, perforator, and dynamic chamber methods. The results were analyzed in terms of the source of formaldehyde observed in the tests (free vs. hydrolytically produced) and the potential utility of the tests as product...

  13. A review of methods for predicting air pollution dispersion

    NASA Technical Reports Server (NTRS)

    Mathis, J. J., Jr.; Grose, W. L.

    1973-01-01

    Air pollution modeling and problem areas in air pollution dispersion modeling are surveyed. Emission source inventories, meteorological data, and turbulent diffusion are discussed in terms of developing a dispersion model. Existing mathematical models of urban air pollution, and highway and airport models, are discussed along with their limitations. Recommendations for improving modeling capabilities are included.

  14. Boosting Probabilistic Graphical Model Inference by Incorporating Prior Knowledge from Multiple Sources

    PubMed Central

    Praveen, Paurush; Fröhlich, Holger

    2013-01-01

    Inferring regulatory networks from experimental data via probabilistic graphical models is a popular framework to gain insights into biological systems. However, the inherent noise in experimental data coupled with a limited sample size reduces the performance of network reverse engineering. Prior knowledge from existing sources of biological information can address this low signal-to-noise problem by biasing the network inference towards biologically plausible network structures. Although integrating various sources of information is desirable, their heterogeneous nature makes this task challenging. We propose two computational methods to incorporate various information sources into a probabilistic consensus structure prior to be used in graphical model inference. Our first model, called the Latent Factor Model (LFM), assumes a high degree of correlation among external information sources and reconstructs a hidden variable as a common source in a Bayesian manner. The second model, a Noisy-OR, picks up the strongest support for an interaction among information sources in a probabilistic fashion. Our extensive computational studies on KEGG signaling pathways as well as on gene expression data from breast cancer and yeast heat shock response reveal that both approaches can significantly enhance the reconstruction accuracy of Bayesian Networks compared to other competing methods as well as to the situation without any prior. Our framework allows for using diverse information sources, like pathway databases, GO terms and protein domain data, etc. and is flexible enough to integrate new sources, if available. PMID:23826291
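
    The Noisy-OR combination has a standard closed form: an interaction is supported unless every source independently fails to support it. A minimal sketch (the reliability weights are hypothetical, and the paper's exact formulation may differ):

        import numpy as np

        def noisy_or_prior(support, reliability):
            # support: per-source support for the interaction, each in [0, 1].
            # reliability: trust assigned to each source, each in [0, 1].
            support = np.asarray(support, dtype=float)
            reliability = np.asarray(reliability, dtype=float)
            return 1.0 - np.prod(1.0 - reliability * support)

        print(noisy_or_prior([0.9, 0.2, 0.7], [0.8, 0.5, 0.6]))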

  15. Biomarkers and isotopic fingerprinting to track sediment origin and connectivity at Baldegg Lake (Switzerland)

    NASA Astrophysics Data System (ADS)

    Lavrieux, Marlène; Meusburger, Katrin; Birkholz, Axel; Alewell, Christine

    2017-04-01

    Slope destabilization and associated sediment transfer are among the major causes of impairment of aquatic ecosystems and surface water quality. Through land use and agricultural practices, human activities modify the soil erosion risk and catchment connectivity, becoming a key factor in sediment dynamics. Hence, restoration and management plans for water bodies can only be efficient if the sediment sources, and the proportions attributable to different land uses and agricultural practices, are identified. Several sediment fingerprinting methods, based on geochemical (elemental composition), color, magnetic or isotopic (137Cs) sediment properties, are currently in use. However, these tools are not suitable for land-use-based fingerprinting. New organic geochemical approaches have been developed to discriminate source-soil contributions under different land uses: (i) the compound-specific stable isotope (CSSI) technique, based on the variability of the biomarker isotopic signature (here, fatty acid δ13C) among plant species, and (ii) the analysis of highly specific (i.e., source-family- or even source-species-specific) biomarker assemblages, whose use has until now been mainly restricted to palaeoenvironmental reconstructions but which also offers promising prospects for tracing current sediment origin. The approach was applied to reconstruct the spatio-temporal variability of the main sediment sources of Baldegg Lake (Lucerne Canton, Switzerland), which suffers from substantial eutrophication despite several restoration attempts during the last 40 years. The sediment-supplying areas and the exported volumes were identified using the CSSI technique and highly specific biomarkers, coupled to a sediment connectivity model. The variability of sediment origin was assessed through the analysis of suspended river sediments sampled at high-flow conditions (short term), and through the analysis of a lake sediment core covering the last 130 years (long term). The results show the utility of biomarkers and CSSI to track organic sources in contrasting land-use settings. Combined with other fingerprinting methods, this approach could become a decision support tool for catchment management.

  16. Characterization of dynamic changes of current source localization based on spatiotemporal fMRI constrained EEG source imaging

    NASA Astrophysics Data System (ADS)

    Nguyen, Thinh; Potter, Thomas; Grossman, Robert; Zhang, Yingchun

    2018-06-01

    Objective. Neuroimaging has been employed as a promising approach to advance our understanding of brain networks in both basic and clinical neuroscience. Electroencephalography (EEG) and functional magnetic resonance imaging (fMRI) represent two neuroimaging modalities with complementary features; EEG has high temporal resolution and low spatial resolution while fMRI has high spatial resolution and low temporal resolution. Multimodal EEG inverse methods have attempted to capitalize on these properties but have been subjected to localization error. The dynamic brain transition network (DBTN) approach, a spatiotemporal fMRI constrained EEG source imaging method, has recently been developed to address these issues by solving the EEG inverse problem in a Bayesian framework, utilizing fMRI priors in a spatial and temporal variant manner. This paper presents a computer simulation study to provide a detailed characterization of the spatial and temporal accuracy of the DBTN method. Approach. Synthetic EEG data were generated in a series of computer simulations, designed to represent realistic and complex brain activity at superficial and deep sources with highly dynamical activity time-courses. The source reconstruction performance of the DBTN method was tested against the fMRI-constrained minimum norm estimates algorithm (fMRIMNE). The performances of the two inverse methods were evaluated both in terms of spatial and temporal accuracy. Main results. In comparison with the commonly used fMRIMNE method, results showed that the DBTN method produces results with increased spatial and temporal accuracy. The DBTN method also demonstrated the capability to reduce crosstalk in the reconstructed cortical time-course(s) induced by neighboring regions, mitigate depth bias and improve overall localization accuracy. Significance. The improved spatiotemporal accuracy of the reconstruction allows for an improved characterization of complex neural activity. This improvement can be extended to any subsequent brain connectivity analyses used to construct the associated dynamic brain networks.

  17. Piecewise synonyms for enhanced UMLS source terminology integration.

    PubMed

    Huang, Kuo-Chuan; Geller, James; Halper, Michael; Cimino, James J

    2007-10-11

    The UMLS contains more than 100 source vocabularies and is growing via the integration of others. When integrating a new source, the source terms already in the UMLS must first be found. The easiest approach to this is simple string matching. However, string matching usually does not find all concepts that should be found. A new methodology, based on the notion of piecewise synonyms, for enhancing the process of concept discovery in the UMLS is presented. This methodology is supported by first creating a general synonym dictionary based on the UMLS. Each multi-word source term is decomposed into its component words, allowing for the generation of separate synonyms for each word from the general synonym dictionary. The recombination of these synonyms into new terms creates an expanded pool of matching candidates for terms from the source. The methodology is demonstrated with respect to an existing UMLS source. It shows a 34% improvement over simple string matching.
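
    The recombination step lends itself to a compact illustration: decompose a multi-word term, look up per-word synonyms, and take the Cartesian product to build the expanded pool of matching candidates. A minimal sketch with a hypothetical two-entry synonym dictionary:

        from itertools import product

        def piecewise_synonym_candidates(term, synonym_dict):
            # Decompose a multi-word term and recombine per-word synonyms
            # into an expanded pool of candidate matches.
            words = term.lower().split()
            choices = [synonym_dict.get(w, [w]) for w in words]
            return {" ".join(combo) for combo in product(*choices)}

        syns = {"kidney": ["kidney", "renal"], "stones": ["stones", "calculi"]}
        print(piecewise_synonym_candidates("kidney stones", syns))
        # {'kidney stones', 'kidney calculi', 'renal stones', 'renal calculi'}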

  18. A Study of Regional Waveform Calibration in the Eastern Mediterranean Region.

    NASA Astrophysics Data System (ADS)

    di Luccio, F.; Pino, A.; Thio, H.

    2002-12-01

    We modeled Pnl phases from several moderate-magnitude events in the eastern Mediterranean to test methods and to develop path calibrations for source determination. The study region, spanning from the eastern part of the Hellenic arc to the eastern Anatolian fault, is mostly affected by moderate earthquakes, which can produce significant damage. The selected area comprises several tectonic environments, which increases the difficulty of waveform modeling. The results of this study are useful for the analysis of regional seismicity and for seismic hazard as well, in particular because very few broadband seismic stations are available in the selected area. The obtained velocity model gives a 30 km crustal thickness and low upper mantle velocities. The inversion procedure applied to determine the source mechanism was successful, including the discrimination of depth, for the entire range of selected paths. We conclude that, using a true calibration of the seismic structure and high-quality broadband data, it is possible to determine the seismic source mechanism even with a single station.

  19. Analysis of neutron and gamma-ray streaming along the maze of NRCAM thallium production target room.

    PubMed

    Raisali, G; Hajiloo, N; Hamidi, S; Aslani, G

    2006-08-01

    The shielding performance of a thallium-203 production target room is investigated in this work. Neutron and gamma-ray equivalent dose rates at various points of the maze are calculated by simulating the transport of streaming neutrons and photons using the Monte Carlo method. To determine the neutron and gamma-ray source intensities and their energy spectra, we applied the SRIM 2003 and ALICE91 computer codes to the Tl target and its Cu substrate for a 145 microA beam of 28.5 MeV protons. The MCNP/4C code was run with the neutron source term in mode n p to consider both prompt neutrons and secondary gamma-rays. The code was then run with the prompt gamma-rays as the source term. The neutron-flux energy spectrum and the equivalent dose rates for neutrons and gamma-rays at various positions in the maze have been calculated. The deviation between calculated and measured dose values along the maze is found to be less than 20%.

  20. PCB remediation in schools: a review.

    PubMed

    Brown, Kathleen W; Minegishi, Taeko; Cummiskey, Cynthia Campisano; Fragala, Matt A; Hartman, Ross; MacIntosh, David L

    2016-02-01

    Growing awareness of polychlorinated biphenyls (PCBs) in legacy caulk and other construction materials of schools has created a need for information on best practices to control human exposures and comply with applicable regulations. A concise review of approaches and techniques for management of building-related PCBs is the focus of this paper. Engineering and administrative controls that block pathways of PCB transport, dilute concentrations of PCBs in indoor air or other exposure media, or establish uses of building space that mitigate exposure can be effective initial responses to identification of PCBs in a building. Mitigation measures also provide time for school officials to plan a longer-term remediation strategy and to secure the necessary resources. These longer-term strategies typically involve removal of caulk or other primary sources of PCBs as well as nearby masonry or other materials contaminated with PCBs by the primary sources. The costs of managing PCB-containing building materials from assessment through ultimate disposal can be substantial. Optimizing the efficacy and cost-effectiveness of remediation programs requires aligning a thorough understanding of sources and exposure pathways with the most appropriate mitigation and abatement methods.

  1. DEEP WIDEBAND SINGLE POINTINGS AND MOSAICS IN RADIO INTERFEROMETRY: HOW ACCURATELY DO WE RECONSTRUCT INTENSITIES AND SPECTRAL INDICES OF FAINT SOURCES?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rau, U.; Bhatnagar, S.; Owen, F. N., E-mail: rurvashi@nrao.edu

    Many deep wideband wide-field radio interferometric surveys are being designed to accurately measure intensities, spectral indices, and polarization properties of faint source populations. In this paper, we compare various wideband imaging methods to evaluate the accuracy to which intensities and spectral indices of sources close to the confusion limit can be reconstructed. We simulated a wideband single-pointing (C-array, L-Band (1–2 GHz)) and 46-pointing mosaic (D-array, C-Band (4–8 GHz)) JVLA observation using a realistic brightness distribution ranging from 1 μJy to 100 mJy and time-, frequency-, polarization-, and direction-dependent instrumental effects. The main results from these comparisons are (a) errors in the reconstructed intensities and spectral indices are larger for weaker sources even in the absence of simulated noise, (b) errors are systematically lower for joint reconstruction methods (such as Multi-Term Multi-Frequency-Synthesis (MT-MFS)) along with A-Projection for accurate primary beam correction, and (c) use of MT-MFS for image reconstruction eliminates clean bias (which is present otherwise). Auxiliary tests include solutions for deficiencies of data partitioning methods (e.g., the use of masks to remove clean bias and hybrid methods to remove sidelobes from sources left un-deconvolved), the effect of sources not at pixel centers, and the consequences of various other numerical approximations within software implementations. This paper also demonstrates the level of detail at which such simulations must be done in order to reflect reality, enable one to systematically identify specific reasons for every trend that is observed, and to estimate scientifically defensible imaging performance metrics and the associated computational complexity of the algorithms/analysis procedures.

  2. A recursive algorithm for the three-dimensional imaging of brain electric activity: Shrinking LORETA-FOCUSS.

    PubMed

    Liu, Hesheng; Gao, Xiaorong; Schimpf, Paul H; Yang, Fusheng; Gao, Shangkai

    2004-10-01

    Estimation of intracranial electric activity from the scalp electroencephalogram (EEG) requires a solution to the EEG inverse problem, which is known to be ill-conditioned. In order to yield a unique solution, weighted minimum norm least squares (MNLS) inverse methods are generally used. This paper proposes a recursive algorithm, termed Shrinking LORETA-FOCUSS, which combines and expands upon the central features of two well-known weighted MNLS methods: LORETA and FOCUSS. This recursive algorithm makes iterative adjustments to the solution space as well as the weighting matrix, thereby dramatically reducing the computational load and increasing local source resolution. Simulations are conducted on a 3-shell spherical head model registered to the Talairach human brain atlas. A comparative study of four inverse methods (standard weighted minimum norm, L1-norm, LORETA-FOCUSS, and Shrinking LORETA-FOCUSS) is presented. The results demonstrate that Shrinking LORETA-FOCUSS is able to reconstruct a three-dimensional source distribution with smaller localization and energy errors compared to the other methods.
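
    The weighted MNLS estimate that underlies both LORETA and FOCUSS has a standard closed form, J = W L^T (L W L^T + lam*I)^(-1) v. A minimal sketch; the regularization parameter and the weighting matrix W are method-specific choices, not values taken from this paper.

        import numpy as np

        def weighted_minimum_norm(L, v, W, lam=1e-6):
            # Weighted minimum-norm least-squares source estimate.
            # L: (n_sensors, n_sources) lead field matrix.
            # v: (n_sensors,) measured scalp potentials.
            # W: (n_sources, n_sources) weighting matrix.
            G = L @ W @ L.T
            return W @ L.T @ np.linalg.solve(G + lam * np.eye(G.shape[0]), v)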

  3. Bioassay selection, experimental design and quality control/assurance for use in effluent assessment and control.

    PubMed

    Johnson, Ian; Hutchings, Matt; Benstead, Rachel; Thain, John; Whitehouse, Paul

    2004-07-01

    In the UK Direct Toxicity Assessment Programme, carried out in 1998-2000, a series of internationally recognised short-term toxicity test methods for algae, invertebrates and fishes, and rapid methods (ECLOX and Microtox) were used extensively. Abbreviated versions of conventional tests (algal growth inhibition tests, Daphnia magna immobilisation test and the oyster embryo-larval development test) were valuable for toxicity screening of effluent discharges and the identification of causes and sources of toxicity. Rapid methods based on chemiluminescence and bioluminescence were not generally useful in this programme, but may have a role where the rapid test has been shown to be an acceptable surrogate for a standardised test method. A range of quality assurance and control measures were identified. Requirements for quality control/assurance are most stringent when deriving data for characterising the toxic hazards of effluents and monitoring compliance against a toxicity reduction target. Lower quality control/assurance requirements can be applied to discharge screening and the identification of causes and sources of toxicity.

  4. Low birth weight and air pollution in California: Which sources and components drive the risk?

    PubMed

    Laurent, Olivier; Hu, Jianlin; Li, Lianfa; Kleeman, Michael J; Bartell, Scott M; Cockburn, Myles; Escobedo, Loraine; Wu, Jun

    2016-01-01

    Intrauterine growth restriction has been associated with exposure to air pollution, but there is a need to clarify which sources and components are most likely responsible. This study investigated the associations between low birth weight (LBW, <2500g) in term born infants (≥37 gestational weeks) and air pollution by source and composition in California, over the period 2001-2008. Complementary exposure models were used: an empirical Bayesian kriging model for the interpolation of ambient pollutant measurements, a source-oriented chemical transport model (using California emission inventories) that estimated fine and ultrafine particulate matter (PM2.5 and PM0.1, respectively) mass concentrations (4km×4km) by source and composition, a line-source roadway dispersion model at fine resolution, and traffic index estimates. Birth weight was obtained from California birth certificate records. A case-cohort design was used. Five controls per term LBW case were randomly selected (without covariate matching or stratification) from among term births. The resulting datasets were analyzed by logistic regression with a random effect by hospital, using generalized additive mixed models adjusted for race/ethnicity, education, maternal age and household income. In total 72,632 singleton term LBW cases were included. Term LBW was positively and significantly associated with interpolated measurements of ozone but not total fine PM or nitrogen dioxide. No significant association was observed between term LBW and primary PM from all sources grouped together. A positive significant association was observed for secondary organic aerosols. Exposure to elemental carbon (EC), nitrates and ammonium were also positively and significantly associated with term LBW, but only for exposure during the third trimester of pregnancy. Significant positive associations were observed between term LBW risk and primary PM emitted by on-road gasoline and diesel or by commercial meat cooking sources. Primary PM from wood burning was inversely associated with term LBW. Significant positive associations were also observed between term LBW and ultrafine particle numbers modeled with the line-source roadway dispersion model, traffic density and proximity to roadways. This large study based on complementary exposure metrics suggests that not only primary pollution sources (traffic and commercial meat cooking) but also EC and secondary pollutants are risk factors for term LBW. Copyright © 2016 Elsevier Ltd. All rights reserved.

  5. Tsunami Source Identification on the 1867 Tsunami Event Based on the Impact Intensity

    NASA Astrophysics Data System (ADS)

    Wu, T. R.

    2014-12-01

    The 1867 Keelung tsunami event has drawn significant attention in Taiwan, not only because the location is very close to three nuclear power plants that are only about 20 km away from Taipei city, but also because the tsunami sources remain ambiguous. This event is unique in many respects. First, it was documented in many accounts, in several languages, with similar descriptions. Second, the tsunami deposit was discovered recently. According to these accounts, an earthquake, a 7-meter tsunami height, volcanic smoke, and oceanic smoke were observed. Previous studies concluded that this tsunami was generated by an earthquake with a magnitude around Mw 7.0 along the Shanchiao Fault. However, numerical results showed that even a Mw 8.0 earthquake was not able to generate a 7-meter tsunami. Considering the steep bathymetry and intense volcanic activity along the Keelung coast, one reasonable hypothesis is that different types of tsunami sources coexisted, such as a submarine landslide or a volcanic eruption. In order to examine this scenario, last year we proposed the Tsunami Reverse Tracing Method (TRTM) to find the possible locations of the tsunami sources. This method helped us rule out impossible far-field tsunami sources. However, the near-field sources remained unclear. This year, we further developed a new method named 'Impact Intensity Analysis' (IIA). In the IIA method, the study area is divided into a sequence of tsunami sources, and numerical simulations of each source are conducted with COMCOT (Cornell Multi-grid Coupled Tsunami Model). After that, the resulting wave height from each source to the study site is collected and plotted. This method successfully helped us identify the impact factor of the near-field potential sources. The IIA result (Fig. 1) shows that the 1867 tsunami was a multi-source event: a mild tsunami was triggered by a Mw 7.0 earthquake, followed by a submarine landslide or volcanic event. A near-field submarine landslide and a landslide at Mien-Hwa Canyon are the most plausible scenarios. As for the volcanic scenarios, an eruption located about 10 km away from Keelung with a disturbed water volume of 2.5x10^8 m^3 might be a candidate. The detailed scenario results will be presented in the full paper.

  6. Standardization of terminology in field of ionizing radiations and their measurements

    NASA Astrophysics Data System (ADS)

    Yudin, M. F.; Karaveyev, F. M.

    1984-03-01

    A new standard terminology covering ionizing radiations and their measurements was introduced on 1 January 1982 by the Scientific-Technical Commission on All-Union State Standards. It is based on earlier standards such as GOST 15484-74/81, 18445-70/73, 19849-74, and 22490-77, as well as the latest recommendations of international committees. It contains 186 terms and definitions in 14 paragraphs, covering fundamental concepts, sources and forms of ionizing radiations, characteristics and parameters of ionizing radiations, and methods of measuring these characteristics and parameters. New terms have been added to existing ones. The equivalent English, French, and German terms are also given. The terms 'measurement of ionizing radiation' and 'transfer of ionizing particles' (equivalents of particle fluence and energy fluence) are still under discussion.

  7. Assessing and measuring wetland hydrology

    USGS Publications Warehouse

    Rosenberry, Donald O.; Hayashi, Masaki; Anderson, James T.; Davis, Craig A.

    2013-01-01

    Virtually all ecological processes that occur in wetlands are influenced by the water that flows to, from, and within these wetlands. This chapter provides the “how-to” information for quantifying the various source and loss terms associated with wetland hydrology. The chapter is organized from a water-budget perspective, with sections associated with each of the water-budget components that are common in most wetland settings. Methods for quantifying the water contained within the wetland are presented first, followed by discussion of each separate component. Measurement accuracy and sources of error are discussed for each of the methods presented, and a separate section discusses the cumulative error associated with determining a water budget for a wetland. Exercises and field activities will provide hands-on experience that will facilitate greater understanding of these processes.
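
    The water-budget organization described here reduces to a single balance: the change in stored water equals the sum of the source terms minus the sum of the loss terms. A minimal sketch of the residual computation; the component names and units are assumptions, not the chapter's notation.

        def wetland_storage_change(P, ET, SWin, SWout, GWin, GWout):
            # Residual water-budget estimate of storage change, dV/dt.
            # P, ET: precipitation and evapotranspiration over the wetland.
            # SWin, SWout: surface-water inflow and outflow.
            # GWin, GWout: groundwater inflow and outflow.
            # All terms in consistent units, e.g. m^3/day.
            return P - ET + SWin - SWout + GWin - GWout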

  8. Two-relaxation-time lattice Boltzmann method for the anisotropic dispersive Henry problem

    NASA Astrophysics Data System (ADS)

    Servan-Camas, Borja; Tsai, Frank T.-C.

    2010-02-01

    This study develops a lattice Boltzmann method (LBM) with a two-relaxation-time collision operator (TRT) to cope with anisotropic heterogeneous hydraulic conductivity and anisotropic velocity-dependent hydrodynamic dispersion in the saltwater intrusion problem. The directional-speed-of-sound technique is further developed to address anisotropic hydraulic conductivity and dispersion tensors. Forcing terms are introduced in the LBM to correct numerical errors that arise during the recovery procedure and to describe the sink/source terms in the flow and transport equations. In order to facilitate the LBM implementation, the forcing terms are combined with the equilibrium distribution functions (EDFs) to create pseudo-EDFs. This study performs linear stability analysis and derives LBM stability domains to solve the anisotropic advection-dispersion equation. The stability domains are used to select the time step at which the lattice Boltzmann method provides stable solutions to the numerical examples. The LBM was implemented for the anisotropic dispersive Henry problem with high ratios of longitudinal to transverse dispersivities, and the results compared well to the solutions in the work of Abarca et al. (2007).

  9. Optically stimulated luminescence of borate glasses containing magnesia, quicklime, lithium and potassium carbonates

    NASA Astrophysics Data System (ADS)

    Valença, J. V. B.; Silveira, I. S.; Silva, A. C. A.; Dantas, N. O.; Antonio, P. L.; Caldas, L. V. E.; d'Errico, F.; Souza, S. O.

    2017-11-01

    The OSL characteristics of three different borate glass matrices containing magnesia (LMB), quicklime (LCB) or potassium carbonate (LKB) were examined. Five different formulations for each composition were produced using a melt-quenching method and analyzed in terms of both dose-response curves and OSL shape decay. The samples were irradiated using a 90Sr/90Y beta source with doses up to 30 Gy. Dose-response curves were plotted using the initial OSL intensity as the chosen parameter. The OSL analysis showed that LKB glasses are the most sensitive to beta irradiation. For the most sensitive LKB composition, the irradiation process was also done using a 60Co gamma source in a dose range from 200 to 800 Gy. In all cases, no saturation was observed. A fitting process using a three-term exponential function was performed for the most sensitive formulations of each composition, which suggested a similar behavior in the OSL decay.
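
    A minimal sketch of the kind of three-term exponential fit mentioned above, using scipy's curve_fit on a synthetic OSL decay curve; the model form follows the abstract, but the data and all parameter values are made up for illustration.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def osl_decay(t, a1, k1, a2, k2, a3, k3):
        """Three-term exponential model for the OSL decay curve."""
        return a1 * np.exp(-k1 * t) + a2 * np.exp(-k2 * t) + a3 * np.exp(-k3 * t)

    # Synthetic decay data standing in for a measured OSL curve.
    t = np.linspace(0, 60, 300)                      # stimulation time, s
    true = osl_decay(t, 50, 0.8, 20, 0.15, 5, 0.02)  # hypothetical parameters
    rng = np.random.default_rng(0)
    signal = true + rng.normal(0, 0.5, t.size)

    # Fit; initial guesses separate the fast, medium, and slow components.
    p0 = [40, 1.0, 15, 0.1, 3, 0.01]
    popt, _ = curve_fit(osl_decay, t, signal, p0=p0, maxfev=10000)
    print("fitted (a_i, k_i):", np.round(popt, 3))
    ```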

  10. Solving ill-posed control problems by stabilized finite element methods: an alternative to Tikhonov regularization

    NASA Astrophysics Data System (ADS)

    Burman, Erik; Hansbo, Peter; Larson, Mats G.

    2018-03-01

    Tikhonov regularization is one of the most commonly used methods for the regularization of ill-posed problems. In the setting of finite element solutions of elliptic partial differential control problems, Tikhonov regularization amounts to adding suitably weighted least squares terms of the control variable, or derivatives thereof, to the Lagrangian determining the optimality system. In this note we show that the stabilization methods for discretely ill-posed problems, developed in the setting of convection-dominated convection-diffusion problems, can be highly suitable for stabilizing optimal control problems, and that Tikhonov regularization will lead to less accurate discrete solutions. We consider some inverse problems for Poisson's equation as an illustration and derive new error estimates both for the reconstruction of the solution from the measured data and for the reconstruction of the source term from the measured data. These estimates include both the effect of the discretization error and the error in the measurements.
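
    For readers unfamiliar with the baseline being criticized, the following sketch shows plain Tikhonov regularization on a small discretely ill-posed problem, with a smoothing-kernel operator standing in for the elliptic control problem; the operator, noise level, and regularization weight are illustrative assumptions.

    ```python
    import numpy as np

    # Ill-conditioned forward operator (discretized smoothing kernel) as a
    # stand-in for the inverse source problem; sizes and kernel are illustrative.
    n = 50
    x_grid = np.linspace(0, 1, n)
    A = np.exp(-((x_grid[:, None] - x_grid[None, :]) ** 2) / (2 * 0.05 ** 2))
    A /= A.sum(axis=1, keepdims=True)

    x_true = np.sin(2 * np.pi * x_grid)          # source term to reconstruct
    rng = np.random.default_rng(1)
    y = A @ x_true + rng.normal(0, 1e-3, n)      # noisy measurements

    # Tikhonov: minimize ||A x - y||^2 + alpha * ||x||^2 via normal equations.
    alpha = 1e-4
    x_hat = np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ y)
    print("relative reconstruction error:",
          np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
    ```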

  11. A discontinuous Galerkin approach for conservative modeling of fully nonlinear and weakly dispersive wave transformations

    NASA Astrophysics Data System (ADS)

    Sharifian, Mohammad Kazem; Kesserwani, Georges; Hassanzadeh, Yousef

    2018-05-01

    This work extends a robust second-order Runge-Kutta Discontinuous Galerkin (RKDG2) method to solve the fully nonlinear and weakly dispersive flows, within a scope to simultaneously address accuracy, conservativeness, cost-efficiency and practical needs. The mathematical model governing such flows is based on a variant form of the Green-Naghdi (GN) equations decomposed as a hyperbolic shallow water system with an elliptic source term. Practical features of relevance (i.e. conservative modeling over irregular terrain with wetting and drying and local slope limiting) have been restored from an RKDG2 solver to the Nonlinear Shallow Water (NSW) equations, alongside new considerations to integrate elliptic source terms (i.e. via a fourth-order local discretization of the topography) and to enable local capturing of breaking waves (i.e. via adding a detector for switching off the dispersive terms). Numerical results are presented, demonstrating the overall capability of the proposed approach in achieving realistic prediction of nearshore wave processes involving both nonlinearity and dispersion effects within a single model.

  12. Antimatter Requirements and Energy Costs for Near-Term Propulsion Applications

    NASA Technical Reports Server (NTRS)

    Schmidt, G. R.; Gerrish, H. P.; Martin, J. J.; Smith, G. A.; Meyer, K. J.

    1999-01-01

    The superior energy density of antimatter annihilation has often been pointed to as the ultimate source of energy for propulsion. However, the limited capacity and very low efficiency of present-day antiproton production methods suggest that antimatter may be too costly to consider for near-term propulsion applications. We address this issue by assessing the antimatter requirements for six different types of propulsion concepts, including two in which antiprotons are used to drive energy release from combined fission/fusion. These requirements are compared against the capacity of both the current antimatter production infrastructure and the improved capabilities that could exist within the early part of next century. Results show that although it may be impractical to consider systems that rely on antimatter as the sole source of propulsive energy, the requirements for propulsion based on antimatter-assisted fission/fusion do fall within projected near-term production capabilities. In fact, a new facility designed solely for antiproton production but based on existing technology could feasibly support interstellar precursor missions and omniplanetary spaceflight with antimatter costs ranging up to $6.4 million per mission.

  13. Ultra-Sensitive Elemental Analysis Using Plasmas 4.Application of Inductively Coupled Plasma Mass Spectrometry to the Study of Environmental Radioactivity

    NASA Astrophysics Data System (ADS)

    Yoshida, Satoshi

    Applications of inductively coupled plasma mass spectrometry (ICP-MS) to the determination of long-lived radionuclides in environmental samples are summarized. In order to predict the long-term behavior of the radionuclides, related stable elements were also determined. Compared with radioactivity measurements, the ICP-MS method has advantages in terms of its simple analytical procedures, prompt measurement time, and capability of determining isotope ratios such as 240Pu/239Pu, which cannot be resolved by radiation measurements. Concentrations of U and Th in Japanese surface soils were determined in order to establish the background level of these natural radionuclides. The 235U/238U ratio was successfully used to detect the release of enriched U from reconversion facilities to the environment and to understand the source term. The 240Pu/239Pu ratios in environmental samples varied widely depending on the Pu sources. Applications of ICP-MS to the measurement of I and Tc isotopes are also described. The ratio between radiocesium and stable Cs is useful for judging the equilibrium of deposited radiocesium in a forest ecosystem.

  14. Direct-location versus verbal report methods for measuring auditory distance perception in the far field.

    PubMed

    Etchemendy, Pablo E; Spiousas, Ignacio; Calcagno, Esteban R; Abregú, Ezequiel; Eguia, Manuel C; Vergara, Ramiro O

    2018-06-01

    In this study we evaluated whether a method of direct location is an appropriate response method for measuring auditory distance perception of far-field sound sources. We designed an experimental set-up that allows participants to indicate the distance at which they perceive the sound source by moving a visual marker. We termed this method Cross-Modal Direct Location (CMDL) since the response procedure involves the visual modality while the stimulus is presented through the auditory modality. Three experiments were conducted with sound sources located from 1 to 6 m. The first compared the perceived distances obtained using either the CMDL device or verbal report (VR), which is the response method most frequently used for reporting auditory distance in the far field, and found differences in response compression and bias. In Experiment 2, participants reported visual distance estimates to the visual marker, which were found to be highly accurate. We then asked the same group of participants to report VR estimates of auditory distance and found that the spatial visual information obtained from the previous task did not influence their reports. Finally, Experiment 3 compared the same responses as Experiment 1, but with the two methods interleaved, showing a weak but complex mutual influence. However, the estimates obtained with each method remained statistically different. Our results show that the auditory distance psychophysical functions obtained with the CMDL method are less susceptible to the previously reported underestimation for distances over 2 m.

  15. ON THE CONNECTION OF THE APPARENT PROPER MOTION AND THE VLBI STRUCTURE OF COMPACT RADIO SOURCES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moor, A.; Frey, S.; Lambert, S. B.

    2011-06-15

    Many of the compact extragalactic radio sources that are used as fiducial points to define the celestial reference frame are known to have proper motions detectable with long-term geodetic/astrometric very long baseline interferometry (VLBI) measurements. These changes can be as high as several hundred microarcseconds per year for certain objects. When imaged with VLBI at milliarcsecond (mas) angular resolution, these sources (radio-loud active galactic nuclei) typically show structures dominated by a compact, often unresolved 'core' and a one-sided 'jet'. The positional instability of compact radio sources is believed to be connected with changes in their brightness distribution structure. For the first time, we test this assumption in a statistical sense on a large sample rather than on only individual objects. We investigate a sample of 62 radio sources for which reliable long-term time series of astrometric positions as well as detailed 8 GHz VLBI brightness distribution models are available. We compare the characteristic direction of their extended jet structure and the direction of their apparent proper motion. We present our data and analysis method, and conclude that there is indeed a correlation between the two characteristic directions. However, there are cases where the {approx}1-10 mas scale VLBI jet directions are significantly misaligned with respect to the apparent proper motion direction.

  16. 10-fs-level synchronization of photocathode laser with RF-oscillator for ultrafast electron and X-ray sources

    PubMed Central

    Yang, Heewon; Han, Byungheon; Shin, Junho; Hou, Dong; Chung, Hayun; Baek, In Hyung; Jeong, Young Uk; Kim, Jungwon

    2017-01-01

    Ultrafast electron-based coherent radiation sources, such as free-electron lasers (FELs), ultrafast electron diffraction (UED) and Thomson-scattering sources, are becoming more important sources in today’s ultrafast science. Photocathode laser is an indispensable common subsystem in these sources that generates ultrafast electron pulses. To fully exploit the potentials of these sources, especially for pump-probe experiments, it is important to achieve high-precision synchronization between the photocathode laser and radio-frequency (RF) sources that manipulate electron pulses. So far, most of precision laser-RF synchronization has been achieved by using specially designed low-noise Er-fibre lasers at telecommunication wavelength. Here we show a modular method that achieves long-term (>1 day) stable 10-fs-level synchronization between a commercial 79.33-MHz Ti:sapphire laser oscillator and an S-band (2.856-GHz) RF oscillator. This is an important first step toward a photocathode laser-based femtosecond RF timing and synchronization system that is suitable for various small- to mid-scale ultrafast X-ray and electron sources. PMID:28067288

  17. 10-fs-level synchronization of photocathode laser with RF-oscillator for ultrafast electron and X-ray sources

    NASA Astrophysics Data System (ADS)

    Yang, Heewon; Han, Byungheon; Shin, Junho; Hou, Dong; Chung, Hayun; Baek, In Hyung; Jeong, Young Uk; Kim, Jungwon

    2017-01-01

    Ultrafast electron-based coherent radiation sources, such as free-electron lasers (FELs), ultrafast electron diffraction (UED) and Thomson-scattering sources, are becoming more important sources in today’s ultrafast science. Photocathode laser is an indispensable common subsystem in these sources that generates ultrafast electron pulses. To fully exploit the potentials of these sources, especially for pump-probe experiments, it is important to achieve high-precision synchronization between the photocathode laser and radio-frequency (RF) sources that manipulate electron pulses. So far, most of precision laser-RF synchronization has been achieved by using specially designed low-noise Er-fibre lasers at telecommunication wavelength. Here we show a modular method that achieves long-term (>1 day) stable 10-fs-level synchronization between a commercial 79.33-MHz Ti:sapphire laser oscillator and an S-band (2.856-GHz) RF oscillator. This is an important first step toward a photocathode laser-based femtosecond RF timing and synchronization system that is suitable for various small- to mid-scale ultrafast X-ray and electron sources.

  18. 10-fs-level synchronization of photocathode laser with RF-oscillator for ultrafast electron and X-ray sources.

    PubMed

    Yang, Heewon; Han, Byungheon; Shin, Junho; Hou, Dong; Chung, Hayun; Baek, In Hyung; Jeong, Young Uk; Kim, Jungwon

    2017-01-09

    Ultrafast electron-based coherent radiation sources, such as free-electron lasers (FELs), ultrafast electron diffraction (UED) and Thomson-scattering sources, are becoming more important sources in today's ultrafast science. Photocathode laser is an indispensable common subsystem in these sources that generates ultrafast electron pulses. To fully exploit the potentials of these sources, especially for pump-probe experiments, it is important to achieve high-precision synchronization between the photocathode laser and radio-frequency (RF) sources that manipulate electron pulses. So far, most of precision laser-RF synchronization has been achieved by using specially designed low-noise Er-fibre lasers at telecommunication wavelength. Here we show a modular method that achieves long-term (>1 day) stable 10-fs-level synchronization between a commercial 79.33-MHz Ti:sapphire laser oscillator and an S-band (2.856-GHz) RF oscillator. This is an important first step toward a photocathode laser-based femtosecond RF timing and synchronization system that is suitable for various small- to mid-scale ultrafast X-ray and electron sources.

  19. Drinking water quality standards and standard tests: Worldwide. (Latest citations from the Food Science and Technology Abstracts database). Published Search

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1993-06-01

    The bibliography contains citations concerning standards and standard tests for water quality in drinking water sources, reservoirs, and distribution systems. Standards from domestic and international sources are presented. Glossaries and vocabularies concerning water quality analysis, testing, and evaluation are included. Standard test methods for individual elements, selected chemicals, sensory properties, radioactivity, and other chemical and physical properties are described. Proposed standards for new pollutant materials are briefly discussed. (Contains a minimum of 203 citations and includes a subject term index and title list.)

  20. MICROBIAL LABORATORY GUIDANCE MANUAL FOR THE ...

    EPA Pesticide Factsheets

    The Long-Term 2 Enhanced Surface Water Treatment Rule Laboratory Instruction Manual will be a compilation of all information needed by laboratories and field personnel to collect, analyze, and report the microbiological data required under the rule. The manual will provide laboratories with a single source of information that is currently scattered across various sources, including the latest versions of Methods 1622 and 1623, with all approved, equivalent modifications; the procedures for E. coli methods approved for use under the LT2ESWTR; lists of vendor sources; data recording forms; data reporting requirements; information on the Laboratory Quality Assurance Evaluation Program for the Analysis of Cryptosporidium in Water; and sample collection procedures. Although most of this information is available elsewhere, a single, comprehensive compendium is needed to aid utilities and laboratories performing the sampling and analysis activities required under the LT2 rule. This manual will serve as an instruction manual for laboratories to use when collecting data for Cryptosporidium, E. coli, and turbidity.

  1. Null stream analysis of Pulsar Timing Array data: localisation of resolvable gravitational wave sources

    NASA Astrophysics Data System (ADS)

    Goldstein, Janna; Veitch, John; Sesana, Alberto; Vecchio, Alberto

    2018-04-01

    Super-massive black hole binaries are expected to produce a gravitational wave (GW) signal in the nano-Hertz frequency band which may be detected by pulsar timing arrays (PTAs) in the coming years. The signal is composed of both stochastic and individually resolvable components. Here we develop a generic Bayesian method for the analysis of resolvable sources based on the construction of `null-streams' which cancel the part of the signal held in common for each pulsar (the Earth-term). For an array of N pulsars there are N - 2 independent null-streams that cancel the GW signal from a particular sky location. This method is applied to the localisation of quasi-circular binaries undergoing adiabatic inspiral. We carry out a systematic investigation of the scaling of the localisation accuracy with signal strength and number of pulsars in the PTA. Additionally, we find that source sky localisation with the International PTA data release one is vastly superior to what is achieved by its constituent regional PTAs.
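
    A minimal numerical sketch of the null-stream construction: for a trial sky location, the Earth-term signal seen by N pulsars lies in the two-dimensional subspace spanned by the plus and cross response columns, so the N - 2 directions orthogonal to that subspace cancel it. The response matrix and time series below are random placeholders, not real PTA data.

    ```python
    import numpy as np
    from scipy.linalg import null_space

    rng = np.random.default_rng(2)
    n_pulsars, n_samples = 10, 500

    # Antenna-pattern responses of each pulsar to the two GW polarisations for
    # a trial sky location (illustrative random values in place of F+, Fx).
    F = rng.normal(size=(n_pulsars, 2))

    # Earth-term signal: every pulsar sees the same two polarisation time
    # series, weighted by its responses, plus white noise.
    s = rng.normal(size=(2, n_samples))            # s_plus(t), s_cross(t)
    noise = 0.1 * rng.normal(size=(n_pulsars, n_samples))
    residuals = F @ s + noise

    # Null streams: the (N-2)-dimensional left null space of F cancels the
    # common Earth-term signal from that sky location.
    W = null_space(F.T).T                          # shape (N-2, N); W @ F = 0
    null_streams = W @ residuals

    print("variance of raw residuals :", np.var(residuals))
    print("variance of null streams  :", np.var(null_streams))  # ~noise level
    ```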

  2. The effect of barriers on wave propagation phenomena: With application for aircraft noise shielding

    NASA Technical Reports Server (NTRS)

    Mgana, C. V. M.; Chang, I. D.

    1982-01-01

    The frequency spectrum was divided into high and low frequency regimes, and two separate methods were developed and applied to account for physical factors associated with flight conditions. For long-wavelength propagation, the acoustic field due to a point source near a solid obstacle was treated in terms of an inner region, where the fluid motion is essentially incompressible, and an outer region, which is a linear acoustic field generated by hydrodynamic disturbances in the inner region. This method was applied to the case of a finite slotted plate modelled to represent a wing with an extended flap, for both stationary and moving media. Ray acoustics, the Kirchhoff integral formulation, and the stationary phase approximation were combined to study short-wavelength propagation in many limiting cases, as well as in the case of a semi-infinite plate in a uniform flow with a point source above the plate embedded in a different flow velocity to simulate an engine exhaust jet stream surrounding the source.

  3. Source Term Model for Vortex Generator Vanes in a Navier-Stokes Computer Code

    NASA Technical Reports Server (NTRS)

    Waithe, Kenrick A.

    2004-01-01

    A source term model for an array of vortex generators was implemented into a non-proprietary Navier-Stokes computer code, OVERFLOW. The source term models the side force created by a vortex generator vane. The model is obtained by introducing a side force to the momentum and energy equations that can adjust its strength automatically based on the local flow. The model was tested and calibrated by comparing data from numerical simulations and experiments of a single low profile vortex generator vane on a flat plate. In addition, the model was compared to experimental data of an S-duct with 22 co-rotating, low profile vortex generators. The source term model allowed a grid reduction of about seventy percent when compared with the numerical simulations performed on a fully gridded vortex generator on a flat plate without adversely affecting the development and capture of the vortex created. The source term model was able to predict the shape and size of the stream-wise vorticity and velocity contours very well when compared with both numerical simulations and experimental data. The peak vorticity and its location were also predicted very well when compared to numerical simulations and experimental data. The circulation predicted by the source term model matches the prediction of the numerical simulation. The source term model predicted the engine fan face distortion and total pressure recovery of the S-duct with 22 co-rotating vortex generators very well. The source term model allows a researcher to quickly investigate different locations of individual or a row of vortex generators. The researcher is able to conduct a preliminary investigation with minimal grid generation and computational time.

  4. A rapid phospholipase A2 bioassay using 14C-oleate-labelled E. coli bacterias.

    PubMed

    Meyer, T; von Wichert, P; Weins, D

    1989-02-01

    Two methods of phospholipase A2 determination using 14C-labelled E. coli bacteria as substrate were compared. One method uses a filter membrane to separate cleaved 14C-oleate from the remaining phospholipids; the other uses well-known thin-layer chromatography for lipid analysis. Some features of human serum phospholipase A2 regarding pH and Ca2+ dependency were investigated, and possible sources of error are discussed. It was shown that either method can differentiate between normal and pathologically elevated phospholipase A2 levels, but that the filter method is superior in terms of sensitivity and workload.

  5. Single Crystal Diffuse Neutron Scattering

    DOE PAGES

    Welberry, Richard; Whitfield, Ross

    2018-01-11

    Diffuse neutron scattering has become a valuable tool for investigating local structure in materials ranging from organic molecular crystals containing only light atoms to piezo-ceramics that frequently contain heavy elements. Although neutron sources will never be able to compete with X-rays in terms of available flux, the special properties of neutrons, viz. the ability to explore inelastic scattering events, the fact that scattering lengths do not vary systematically with atomic number, and their ability to scatter from magnetic moments, provide strong motivation for developing neutron diffuse scattering methods. Here, we compare three different instruments that have been used by us to collect neutron diffuse scattering data. Two of these are on a spallation source and one on a reactor source.

  6. Single Crystal Diffuse Neutron Scattering

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Welberry, Richard; Whitfield, Ross

    Diffuse neutron scattering has become a valuable tool for investigating local structure in materials ranging from organic molecular crystals containing only light atoms to piezo-ceramics that frequently contain heavy elements. Although neutron sources will never be able to compete with X-rays in terms of available flux, the special properties of neutrons, viz. the ability to explore inelastic scattering events, the fact that scattering lengths do not vary systematically with atomic number, and their ability to scatter from magnetic moments, provide strong motivation for developing neutron diffuse scattering methods. Here, we compare three different instruments that have been used by us to collect neutron diffuse scattering data. Two of these are on a spallation source and one on a reactor source.

  7. Calibration of Photon Sources for Brachytherapy

    NASA Astrophysics Data System (ADS)

    Rijnders, Alex

    Source calibration has to be considered an essential part of the quality assurance program in a brachytherapy department. Not only will it ensure that the source strength value used for dose calculation agrees within predetermined limits with the value stated on the source certificate, it will also ensure traceability to international standards. At present, calibration is most often still given in terms of reference air kerma rate, although calibration in terms of absorbed dose to water would be closer to the user's interest. It can be expected that in the near future several standards laboratories will be able to offer this latter service, and dosimetry protocols will have to be adapted accordingly. In-air measurement using ionization chambers (e.g. a Baldwin-Farmer ionization chamber for 192Ir high dose rate (HDR) or pulsed dose rate (PDR) sources) is still considered the method of choice for high-energy source calibration, but because of their ease of use and reliability, well-type chambers are becoming more popular and are nowadays often recommended as the standard equipment. For low-energy sources, well-type chambers are in practice the only equipment available for calibration. Care should be taken that the chamber is calibrated at the standards laboratory for the same source type and model as used in the clinic, and using the same measurement conditions and setup. Several standards laboratories have difficulty providing these calibration facilities, especially for the low-energy seed sources (125I and 103Pd). Should a user not be able to obtain properly calibrated equipment to verify the brachytherapy sources used in his department, then, at least for sources that are replaced on a regular basis, a consistency check program should be set up to ensure a minimal level of quality control before these sources are used for patient treatment.
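
    As a numerical illustration of a routine well-chamber consistency check (not a substitute for a protocol such as those referenced above), the sketch below converts an electrometer reading to reference air kerma rate and compares it with the certificate value; every number, including the calibration coefficient, is hypothetical.

    ```python
    # Well-chamber consistency check; all values are hypothetical, not vendor
    # or standards-laboratory data.
    reading_A = 9.85e-8        # electrometer current, A
    N_rakr = 4.10e11           # calibration coefficient, (uGy/h at 1 m) per A
    T, P = 22.0, 101.1         # room temperature (C) and pressure (kPa)
    T0, P0 = 20.0, 101.325     # reference conditions of the calibration

    # Air-density (temperature-pressure) correction for a vented chamber.
    k_tp = ((273.15 + T) / (273.15 + T0)) * (P0 / P)
    rakr = reading_A * N_rakr * k_tp        # measured reference air kerma rate

    certificate = 40.8e3       # uGy/h at 1 m, decay-corrected certificate value
    deviation = 100 * (rakr - certificate) / certificate
    print(f"measured RAKR = {rakr:.4g} uGy/h at 1 m ({deviation:+.1f}% vs certificate)")
    ```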

  8. 3D synthetic aperture for controlled-source electromagnetics

    NASA Astrophysics Data System (ADS)

    Knaak, Allison

    Locating hydrocarbon reservoirs has become more challenging with smaller, deeper or shallower targets in complicated environments. Controlled-source electromagnetics (CSEM) is a geophysical electromagnetic method used to detect and derisk hydrocarbon reservoirs in marine settings, but it is limited by the size of the target, low spatial resolution, and depth of the reservoir. To reduce the impact of complicated settings and improve the detecting capabilities of CSEM, I apply synthetic aperture to CSEM responses, which virtually increases the length and width of the CSEM source by combining the responses from multiple individual sources. Applying a weight to each source steers or focuses the synthetic aperture source array in the inline and crossline directions. To evaluate the benefits of a 2D source distribution, I test steered synthetic aperture on 3D diffusive fields and view the changes with a new visualization technique. Then I apply 2D steered synthetic aperture to 3D noisy synthetic CSEM fields, which increases the detectability of the reservoir significantly. With more general weighting, I develop an optimization method to find the optimal weights for synthetic aperture arrays that adapts to the information in the CSEM data. The application of optimally weighted synthetic aperture to noisy, simulated electromagnetic fields reduces the presence of noise, increases detectability, and better defines the lateral extent of the target. I then modify the optimization method to include a term that minimizes the variance of random, independent noise. With the application of the modified optimization method, the weighted synthetic aperture responses amplify the anomaly from the reservoir, lower the noise floor, and reduce noise streaks in noisy CSEM responses from sources offset kilometers from the receivers. Even with changes to the location of the reservoir and perturbations to the physical properties, synthetic aperture is still able to highlight targets correctly, which allows use of the method in locations where the subsurface models are built from only estimates. In addition to the technical work in this thesis, I explore the interface between science, government, and society by examining the controversy over hydraulic fracturing and by suggesting a process to aid the debate and possibly other future controversies.
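
    A toy sketch of the core synthetic-aperture idea: responses of many individual sources are combined with complex weights, and a linear phase gradient across the source positions steers the virtual array. The field model, noise, and steering parameter below are illustrative stand-ins, not the thesis' actual CSEM modeling or optimized weights.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    n_sources = 20
    src_x = np.linspace(-2000.0, 2000.0, n_sources)  # inline source positions, m

    # Complex responses recorded at one receiver for each individual source
    # (placeholder: decaying diffusive field plus noise).
    k = (1 + 1j) / 800.0                             # diffusive wavenumber, 1/m
    response = np.exp(-k * np.abs(src_x)) + 0.01 * rng.normal(size=n_sources)

    # Steering weights: a linear phase shift across the sources tilts the
    # virtual array; delta sets the steering in the inline direction.
    delta = 1.0e-3                                   # phase gradient, rad/m
    weights = np.exp(1j * delta * src_x)

    steered = np.sum(weights * response)             # synthetic-aperture sum
    plain = np.sum(response)                         # unweighted sum
    print(f"unsteered |sum| = {abs(plain):.3f}, steered |sum| = {abs(steered):.3f}")
    ```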

  9. A subgradient approach for constrained binary optimization via quantum adiabatic evolution

    NASA Astrophysics Data System (ADS)

    Karimi, Sahar; Ronagh, Pooya

    2017-08-01

    An outer approximation method has been proposed in the literature for solving the Lagrangian dual of a constrained binary quadratic programming problem via quantum adiabatic evolution. This would be an efficient prescription for solving the Lagrangian dual problem given an ideally noise-free quantum adiabatic system. However, current implementations of quantum annealing systems demand methods that are efficient at handling possible sources of noise. In this paper, we consider a subgradient method for finding an optimal primal-dual pair for the Lagrangian dual of a constrained binary polynomial programming problem. We then study the quadratic stable set (QSS) problem as a case study. We see that this method applied to the QSS problem can be viewed as an instance-dependent penalty-term approach that avoids large penalty coefficients. Finally, we report our experimental results of using the D-Wave 2X quantum annealer and conclude that our approach helps this quantum processor succeed more often in solving these problems compared to the usual penalty-term approaches.
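
    A minimal sketch of a subgradient iteration on the Lagrangian dual of a toy constrained binary quadratic program; brute-force enumeration stands in for the annealer's inner minimization, and the problem data are random placeholders.

    ```python
    import numpy as np
    from itertools import product

    # Toy problem: min x^T Q x subject to sum(x) == 2, x in {0,1}^n.
    rng = np.random.default_rng(4)
    n = 6
    Q = rng.normal(size=(n, n))
    Q = (Q + Q.T) / 2

    def inner_min(lam):
        """Minimize the Lagrangian x^T Q x + lam*(sum(x) - 2) over binary x.
        Enumeration stands in for the quantum annealer call."""
        best_x, best_val = None, np.inf
        for bits in product([0, 1], repeat=n):
            x = np.array(bits)
            val = x @ Q @ x + lam * (x.sum() - 2)
            if val < best_val:
                best_x, best_val = x, val
        return best_x

    lam, x = 0.0, None
    for k in range(1, 51):
        x = inner_min(lam)
        g = x.sum() - 2           # subgradient of the dual at lam
        lam += (1.0 / k) * g      # diminishing step size
    print("multiplier:", round(lam, 3), "constraint violation:", int(x.sum() - 2))
    ```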

  10. A Method Based on Wavelet Transforms for Source Detection in Photon-counting Detector Images. II. Application to ROSAT PSPC Images

    NASA Astrophysics Data System (ADS)

    Damiani, F.; Maggio, A.; Micela, G.; Sciortino, S.

    1997-07-01

    We apply to the specific case of images taken with the ROSAT PSPC detector our wavelet-based X-ray source detection algorithm presented in a companion paper. Such images are characterized by the presence of detector ``ribs,'' strongly varying point-spread function, and vignetting, so that their analysis provides a challenge for any detection algorithm. First, we apply the algorithm to simulated images of a flat background, as seen with the PSPC, in order to calibrate the number of spurious detections as a function of significance threshold and to ascertain that the spatial distribution of spurious detections is uniform, i.e., unaffected by the ribs; this goal was achieved using the exposure map in the detection procedure. Then, we analyze simulations of PSPC images with a realistic number of point sources; the results are used to determine the efficiency of source detection and the accuracy of output quantities such as source count rate, size, and position, upon a comparison with input source data. It turns out that sources with 10 photons or less may be confidently detected near the image center in medium-length (~10^4 s), background-limited PSPC exposures. The positions of sources detected near the image center (off-axis angles < 15') are accurate to within a few arcseconds. Output count rates and sizes are in agreement with the input quantities, within a factor of 2 in 90% of the cases. The errors on position, count rate, and size increase with off-axis angle and for detections of lower significance. We have also checked that the upper limits computed with our method are consistent with the count rates of undetected input sources. Finally, we have tested the algorithm by applying it on various actual PSPC images, among the most challenging for automated detection procedures (crowded fields, extended sources, and nonuniform diffuse emission). The performance of our method in these images is satisfactory and outperforms those of other current X-ray detection techniques, such as those employed to produce the MPE and WGA catalogs of PSPC sources, in terms of both detection reliability and efficiency. We have also investigated the theoretical limit for point-source detection, with the result that even sources with only 2-3 photons may be reliably detected using an efficient method in images with sufficiently high resolution and low background.

  11. Information sources in biomedical science and medical journalism: methodological approaches and assessment.

    PubMed

    Miranda, Giovanna F; Vercellesi, Luisa; Bruno, Flavia

    2004-09-01

    Throughout the world the public is showing increasing interest in medical and scientific subjects, and journalists largely spread this information, with an important impact on knowledge and health. Clearly, therefore, the relationship between the journalist and his sources is delicate: freedom and independence of information depend on the independence and truthfulness of the sources. The new "precision journalism" holds that scientific methods should be applied to journalism, so authoritative sources are a common need for journalists and scientists. We therefore compared the individual classifications and methods of assessing sources in biomedical science and medical journalism to try to extrapolate scientific methods of evaluation to journalism. In journalism and science, the terms used to classify sources of information show some similarities, but their meanings are different. In science, primary and secondary classes of information, for instance, refer to the levels of processing, but in journalism to the official nature of the source itself. Scientists and journalists must both always consult as many sources as possible and check their authoritativeness, reliability, completeness, up-to-dateness, and balance. In journalism, however, there are some important differences and limits: too many sources can sometimes diminish the quality of the information. The sources serve as a first filter between the event and the journalist, who provides the reader not with the fact itself, but with its projection. Journalists have time constraints and lack the objective criteria for searching, the specific background knowledge, and the expertise to fully assess sources. To assist in understanding the wealth of sources of information in journalism, we have prepared a checklist of items and questions. There are at least four fundamental points that a good journalist, like any scientist, should know: how to find the latest information (the sources), how to assess it (the quality and authoritativeness), how to analyse and filter it (selection), and how to deal with too many sources of information, sometimes biased by conflicting interests (balance). The journalist must, in addition, know how to translate it to render it accessible and useful to the general public (dissemination), and how to use it best.

  12. An Annotated Bibliography of Literature Showing the Importance of the Process of Writing in the Language Arts Curriculum.

    ERIC Educational Resources Information Center

    Hook, Julie C.

    This bibliography contains lengthy annotations of 29 sources which address the issues of prewriting; revision; the emotions involved in the writing process; different methods of writing instruction; and the relationships among reading, writing, and reasoning. Also included are a glossary of terms, a summary of the issues raised by the works cited,…

  13. The implementation of the new Kentucky nitrogen and phosphorus index to reduce agricultural nonpoint source pollution

    USDA-ARS?s Scientific Manuscript database

    A new study released in September 2011 by the USDA found that all of three best management practices (BMPs) for nitrogen in terms of application rate, time, and method, are done for only about a third of U.S. cropland (http://www.ers.usda.gov/Publications/ERR127/). Without BMPs, the potential for ni...

  14. Improvements to Passive Acoustic Tracking Methods for Marine Mammal Monitoring

    DTIC Science & Technology

    2016-05-02

    Subject terms: marine mammal; passive acoustic monitoring; localization; tracking; multiple source; sparse array. The project improves passive acoustic tracking of individual animals by inverting for sound speed profiles, hydrophone position, and hydrophone timing offset in addition to animal position, extending earlier methods [… 2004; Thode 2005; Nosal 2007] to localize animals in situations where the straight-line propagation assumptions made by conventional marine mammal tracking do not hold.

  15. In-vivo characterization of 2D residence time maps in the left ventricle

    NASA Astrophysics Data System (ADS)

    Rossini, Lorenzo; Martinez-Legazpi, Pablo; Bermejo, Javier; Benito, Yolanda; Alhama, Marta; Yotti, Raquel; Perez Del Villar, Candelas; Gonzalez-Mansilla, Ana; Barrio, Alicia; Fernandez-Aviles, Francisco; Shadden, Shawn; Del Alamo, Juan Carlos

    2014-11-01

    Thrombus formation is a multifactorial process involving biology and hemodynamics. Blood stagnation and wall shear stress are linked to thrombus formation. The quantification of residence time of blood in the left ventricle (LV) is relevant for patients affected by ventricular contractility dysfunction. We use a continuum formulation to compute 2D blood residence time (TR) maps in the LV using in-vivo 2D velocity fields in the apical long axis plane obtained from Doppler-echocardiography images of healthy and dilated hearts. The TR maps are generated integrating in time an advection-diffusion equation of a passive scalar with a time-source term. This equation represents the Eulerian translation of DT_R/Dt = 1 and is solved numerically with a finite volume method on a Cartesian grid using an immersed boundary for the LV wall. Changing the source term and the boundary conditions allows us to track blood transport (direct and retained flow) in the LV and the topology of early (E) and atrial (A) filling waves. This method has been validated against a Lagrangian Coherent Structures analysis, is computationally inexpensive and observer independent, making it a potential diagnostic tool in clinical settings.
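
    A one-dimensional sketch of the residence-time computation: the Eulerian equation dT/dt + u dT/dx = 1 + nu d2T/dx2 is advanced with first-order upwind advection and a fresh-fluid (T = 0) inflow condition. The 2D immersed-boundary machinery of the study is omitted and all parameters are illustrative.

    ```python
    import numpy as np

    # Grid and parameters (illustrative).
    nx, L = 200, 1.0
    dx = L / nx
    u, nu = 0.5, 1e-3
    dt = 0.4 * min(dx / u, dx * dx / (2 * nu))   # conservative stable step

    T = np.zeros(nx)                             # residence time field
    for _ in range(5000):                        # march to steady state
        adv = -u * (T - np.roll(T, 1)) / dx      # first-order upwind (u > 0)
        dif = nu * (np.roll(T, -1) - 2 * T + np.roll(T, 1)) / dx ** 2
        T += dt * (adv + dif + 1.0)              # unit source: fluid "ages"
        T[0] = 0.0                               # fresh fluid at the inlet

    # Interior cells near the outlet are meaningful (periodic roll pollutes
    # only the last cell); steady value should approach the transit time L/u.
    print("residence time near outlet ~", round(T[-2], 3), "s; L/u =", L / u, "s")
    ```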

  16. Surveillance system for air pollutants by combination of the decision support system COMPAS and optical remote sensing systems

    NASA Astrophysics Data System (ADS)

    Flassak, Thomas; de Witt, Helmut; Hahnfeld, Peter; Knaup, Andreas; Kramer, Lothar

    1995-09-01

    COMPAS is a decision support system designed to assist in assessing the consequences of accidental releases of toxic and flammable substances. One of its key elements is a feedback algorithm that allows the source term to be calculated with the aid of concentration measurements. Until now, the feedback technique has been applied to concentration measurements made with test tubes or conventional point sensors. In this paper, an extension of the current method is presented: the combination of COMPAS with an optical remote sensing system such as the KAYSER-THREDE K300 FTIR system. Active remote sensing methods based on FTIR are, among other applications, ideal for so-called fence-line monitoring of diffuse emissions and accidental releases from industrial facilities, since averaged concentration levels along the measurement path can be obtained from the FTIR spectra. These line-averaged concentrations are ideally suited as on-line input for the COMPAS feedback technique, reducing uncertainties in the assessment of the source term that stem both from shortcomings of the dispersion model itself and from the problems of a feedback strategy based on point measurements.

  17. On the numerical calculation of hydrodynamic shock waves in atmospheres by an FCT method

    NASA Astrophysics Data System (ADS)

    Schmitz, F.; Fleck, B.

    1993-11-01

    The numerical calculation of vertically propagating hydrodynamic shock waves in a plane atmosphere by the ETBFCT version of the Flux-Corrected Transport (FCT) method of Boris and Book is discussed. The results are compared with results obtained by a characteristic method with shock fitting. We show that using the internal energy density as a dependent variable instead of the total energy density can give very inaccurate results. Consistent discretization rules for the gravitational source terms are derived, and the improvement of the results by an additional iteration step is discussed. The FCT method proves to be an excellent method for the accurate calculation of shock waves in an atmosphere.

  18. 26 CFR 1.737-1 - Recognition of precontribution gain.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... Property A1 and Property A2 is long-term, U.S.-source capital gain or loss. The character of gain on Property A3 is long-term, foreign-source capital gain. B contributes Property B, nondepreciable real... long-term, U.S.-source capital gain ($10,000 gain on Property A1 and $8,000 loss on Property A2) and $1...

  19. A multi-scalar PDF approach for LES of turbulent spray combustion

    NASA Astrophysics Data System (ADS)

    Raman, Venkat; Heye, Colin

    2011-11-01

    A comprehensive joint-scalar probability density function (PDF) approach is proposed for large eddy simulation (LES) of turbulent spray combustion, and tests are conducted to analyze its validity and modeling requirements. The PDF method has the advantage that the chemical source term appears closed, but it requires models for the small-scale mixing process. A stable and consistent numerical algorithm for the LES/PDF approach is presented. To understand the modeling issues in the PDF method, direct numerical simulations of a spray flame at three different fuel droplet Stokes numbers and of an equivalent gaseous flame are carried out. Assumptions in closing the subfilter conditional diffusion term in the filtered PDF transport equation are evaluated for various model forms. In addition, the validity of evaporation rate models in high Stokes number flows is analyzed.

  20. Source term model evaluations for the low-level waste facility performance assessment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yim, M.S.; Su, S.I.

    1995-12-31

    The estimation of release of radionuclides from various waste forms to the bottom boundary of the waste disposal facility (source term) is one of the most important aspects of LLW facility performance assessment. In this work, several currently used source term models are comparatively evaluated for the release of carbon-14 based on a test case problem. The models compared include PRESTO-EPA-CPG, IMPACTS, DUST and NEFTRAN-II. Major differences in assumptions and approaches between the models are described and key parameters are identified through sensitivity analysis. The source term results from different models are compared and other concerns or suggestions are discussed.

  1. A Statistical Review of Alternative Zinc and Copper Extraction from Mineral Fertilizers and Industrial By-Products.

    PubMed

    Cenciani de Souza, Camila Prado; Aparecida de Abreu, Cleide; Coscione, Aline Renée; Alberto de Andrade, Cristiano; Teixeira, Luiz Antonio Junqueira; Consolini, Flavia

    2018-01-01

    Rapid, accurate, and low-cost alternative analytical methods for micronutrient quantification in fertilizers are fundamental for quality control. The purpose of this study was to evaluate whether zinc (Zn) and copper (Cu) contents in mineral fertilizers and industrial by-products determined by the alternative methods USEPA 3051a, 10% HCl, and 10% H2SO4 are statistically equivalent to the standard method, consisting of hot-plate digestion using concentrated HCl. The commercially marketed Zn and Cu sources in Brazil consisted of oxide, carbonate, and sulfate fertilizers and by-products consisting of galvanizing ash, galvanizing sludge, brass ash, and brass or scrap slag. The contents of the sources ranged from 15 to 82% for Zn and 10 to 45% for Cu, referring to the variation of the elements found in the different sources evaluated with the concentrated HCl method, as shown in Table 1. A protocol based on the following criteria was used for the statistical assessment of the methods: the F-test modified by Graybill, the t-test for the mean error, and linear correlation coefficient analysis. In terms of equivalence, 10% HCl extraction was equivalent to the standard method for Zn, and the results of the USEPA 3051a and 10% HCl methods indicated that these methods were equivalent for Cu. Therefore, these methods can be considered viable alternatives to the standard method of determination for Cu and Zn in mineral fertilizers and industrial by-products, pending future research for their complete validation.
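
    A sketch of two of the three criteria in the statistical protocol, the t-test on the mean error of paired determinations and the linear correlation coefficient, applied to hypothetical paired Zn contents; the Graybill-modified F-test is omitted and the data are invented for illustration.

    ```python
    import numpy as np
    from scipy import stats

    # Hypothetical paired Zn contents (%) for ten fertilizer sources, measured
    # by the standard concentrated-HCl digestion and an alternative extraction.
    standard = np.array([15.2, 22.4, 35.1, 40.8, 52.3, 60.7, 68.9, 74.2, 78.8, 81.6])
    alternative = np.array([15.0, 22.9, 34.6, 41.3, 51.8, 61.2, 68.1, 74.9, 78.2, 82.0])

    # t-test on the mean error: paired differences should not differ from zero.
    t_stat, p_val = stats.ttest_rel(alternative, standard)

    # Linear correlation between the two methods.
    r, p_r = stats.pearsonr(standard, alternative)

    print(f"mean-error t-test: t = {t_stat:.2f}, p = {p_val:.3f}")
    print(f"correlation: r = {r:.4f} (p = {p_r:.2e})")
    ```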

  2. An Empirical Temperature Variance Source Model in Heated Jets

    NASA Technical Reports Server (NTRS)

    Khavaran, Abbas; Bridges, James

    2012-01-01

    An acoustic analogy approach is implemented that models the sources of jet noise in heated jets. The equivalent sources of turbulent mixing noise are recognized as the differences between the fluctuating and Favre-averaged Reynolds stresses and enthalpy fluxes. While in a conventional acoustic analogy only Reynolds stress components are scrutinized for their noise generation properties, it is now accepted that a comprehensive source model should include the additional entropy source term. Following Goldstein's generalized acoustic analogy, the set of Euler equations is divided into two sets of equations that govern a non-radiating base flow plus its residual components. When the base flow is considered as a locally parallel mean flow, the residual equations may be rearranged to form an inhomogeneous third-order wave equation. A general solution is subsequently written using a Green's function method, while all non-linear terms are treated as the equivalent sources of aerodynamic sound and are modeled accordingly. In a previous study, a specialized Reynolds-averaged Navier-Stokes (RANS) solver was implemented to compute the variance of thermal fluctuations that determines the enthalpy flux source strength. The main objective here is to present an empirical model capable of providing a reasonable estimate of the stagnation temperature variance in a jet. Such a model is parameterized as a function of the mean stagnation temperature gradient in the jet, and is evaluated using commonly available RANS solvers. The ensuing thermal source distribution is compared with measurements as well as computational results from a dedicated RANS solver that employs an enthalpy variance and dissipation rate model. Turbulent mixing noise predictions are presented for a wide range of jet temperature ratios from 1.0 to 3.20.

  3. Observation-based source terms in the third-generation wave model WAVEWATCH

    NASA Astrophysics Data System (ADS)

    Zieger, Stefan; Babanin, Alexander V.; Erick Rogers, W.; Young, Ian R.

    2015-12-01

    Measurements collected during the AUSWEX field campaign at Lake George (Australia) resulted in new insights into the processes of wind wave interaction and whitecapping dissipation, and consequently new parameterizations of the input and dissipation source terms. The newly developed nonlinear wind input term accounts for the dependence of growth on wave steepness, airflow separation, and negative growth rate under adverse winds. The new dissipation terms feature the inherent breaking term, a cumulative dissipation term, and a term due to production of turbulence by waves, which is particularly relevant for decaying seas and for swell. The latter is consistent with the observed decay rate of ocean swell. This paper describes these source terms implemented in WAVEWATCH III® and evaluates their performance against existing source terms in academic duration-limited tests, against buoy measurements for windsea-dominated conditions, under conditions of extreme wind forcing (Hurricane Katrina), and against altimeter data in global hindcasts. Results show agreement by means of growth curves as well as integral and spectral parameters in the simulations and hindcast.

  4. Sensitivity of new detection method for ultra-low frequency gravitational waves with pulsar spin-down rate statistics

    NASA Astrophysics Data System (ADS)

    Yonemaru, Naoyuki; Kumamoto, Hiroki; Takahashi, Keitaro; Kuroyanagi, Sachiko

    2018-04-01

    A new detection method for ultra-low-frequency gravitational waves (GWs) with frequencies much lower than the observational range of pulsar timing arrays (PTAs) was suggested in Yonemaru et al. (2016). In the PTA analysis, ultra-low-frequency GWs (≲ 10^-10 Hz), which evolve only linearly during the observation time span, are absorbed into the pulsar spin-down rates, since both have the same effect on the pulse arrival time. Therefore, such GWs cannot be detected by the conventional PTA method. However, the bias on the observed spin-down rates depends on the relative direction of the pulsar and the GW source and shows a quadrupole pattern in the sky. Thus, if we divide the pulsars according to their position in the sky and examine the difference in the statistics of the spin-down rates, ultra-low-frequency GWs from a single source can be detected. In this paper, we evaluate the potential of this method by Monte Carlo simulations and estimate its sensitivity, considering only the "Earth term", while the "pulsar term" acts like random noise for GW frequencies of 10^-13 - 10^-10 Hz. We find that with 3,000 millisecond pulsars, which are expected to be discovered by a future survey with the Square Kilometre Array, GWs with an amplitude derivative of about 3 × 10^-19 s^-1 can in principle be detected. Implications for possible supermassive binary black holes in Sgr A* and M87 are also given.
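
    A Monte Carlo caricature of the sky-division test: a bias with a quadrupolar sky pattern is added to simulated intrinsic spin-down rates, and the statistics of pulsars in opposite-sign regions are compared. The pattern function, amplitudes, and two-sample test below are simplified placeholders for those used in the paper.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(5)
    n_psr = 3000

    # Random pulsar positions; GW source placed along the z-axis for simplicity.
    cos_theta = rng.uniform(-1, 1, n_psr)
    phi = rng.uniform(0, 2 * np.pi, n_psr)
    pattern = (1 - cos_theta) * np.cos(2 * phi)      # toy quadrupolar pattern

    intrinsic = -10 ** rng.normal(-15.5, 0.5, n_psr) # intrinsic spin-down, s^-2
    bias_amp = 3e-16                                 # GW-induced bias amplitude
    observed = intrinsic + bias_amp * pattern

    # Divide the sky by the sign of the expected pattern and compare the
    # spin-down distributions of the two groups.
    pos, neg = observed[pattern > 0], observed[pattern < 0]
    ks, p = stats.ks_2samp(pos, neg)
    print(f"KS statistic = {ks:.3f}, p = {p:.2e}")
    ```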

  5. The uncertainty of nitrous oxide emissions from grazed grasslands: A New Zealand case study

    NASA Astrophysics Data System (ADS)

    Kelliher, Francis M.; Henderson, Harold V.; Cox, Neil R.

    2017-01-01

    Agricultural soils emit nitrous oxide (N2O), a greenhouse gas and the primary source of nitrogen oxides which deplete stratospheric ozone. Agriculture has been estimated to be the largest anthropogenic N2O source. In New Zealand (NZ), pastoral agriculture uses half the land area. To estimate the annual N2O emissions from NZ's agricultural soils, the nitrogen (N) inputs have been determined and multiplied by an emission factor (EF), the mass fraction of N inputs emitted as N2O-N. To estimate the associated uncertainty, we developed an analytical method. For comparison, another estimate was determined by Monte Carlo numerical simulation. For both methods, expert judgement was used to estimate the N input uncertainty. The EF uncertainty was estimated by meta-analysis of the results from 185 NZ field trials. For the analytical method, assuming a normal distribution and independence of the terms used to calculate the emissions (correlation = 0), the estimated 95% confidence limit was ±57%. When there was a normal distribution and an estimated correlation of 0.4 between N input and EF, the latter inferred from experimental data involving six NZ soils, the analytical method estimated a 95% confidence limit of ±61%. The EF data from 185 NZ field trials had a logarithmic normal distribution. For the Monte Carlo method, assuming a logarithmic normal distribution for EF, a normal distribution for the other terms and independence of all terms, the estimated 95% confidence limits were -32% and +88% or ±60% on average. When there were the same distribution assumptions and a correlation of 0.4 between N input and EF, the Monte Carlo method estimated 95% confidence limits were -34% and +94% or ±64% on average. For the analytical and Monte Carlo methods, EF uncertainty accounted for 95% and 83% of the emissions uncertainty when the correlation between N input and EF was 0 and 0.4, respectively. As the first uncertainty analysis of an agricultural soils N2O emissions inventory using "country-specific" field trials to estimate EF uncertainty, this can be a potentially informative case study for the international scientific community.
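
    The Monte Carlo calculation described above reduces, in outline, to sampling emissions = N input × EF with a log-normally distributed EF; the sketch below reproduces the kind of asymmetric confidence limits that arise from the log-normal tail, using illustrative (non-inventory) distribution parameters.

    ```python
    import numpy as np

    rng = np.random.default_rng(6)
    n = 100_000

    # Relative N input: normal with a 10% standard deviation (illustrative).
    n_input = rng.normal(1.0, 0.10, n)

    # Emission factor: log-normal with median 0.01 and geometric sd ~1.8
    # (illustrative, not the NZ meta-analysis values).
    ef = rng.lognormal(mean=np.log(0.01), sigma=np.log(1.8), size=n)

    emissions = n_input * ef
    lo, mid, hi = np.percentile(emissions, [2.5, 50, 97.5])
    print(f"95% interval: {100*(lo/mid - 1):+.0f}% to {100*(hi/mid - 1):+.0f}%"
          " about the median (note the asymmetry)")
    ```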

  6. Harmonic source wavefront aberration correction for ultrasound imaging

    PubMed Central

    Dianis, Scott W.; von Ramm, Olaf T.

    2011-01-01

    A method is proposed which uses a lower-frequency transmit to create a known harmonic acoustical source in tissue suitable for wavefront correction without a priori assumptions about the target or requiring a transponder. The measurement and imaging steps of this method were implemented on the Duke phased array system with a two-dimensional (2-D) array. The method was tested with multiple electronic aberrators [0.39π to 1.16π radians root-mean-square (rms) at 4.17 MHz] and with a physical aberrator (0.17π radians rms at 4.17 MHz) in a variety of imaging situations. Corrections were quantified in terms of peak beam amplitude compared to the unaberrated case, with restoration of between 0.6 and 36.6 dB of peak amplitude with a single correction. Standard phantom images before and after correction were obtained and showed both visible improvement and 14 dB contrast improvement after correction. This method, when combined with previous phase correction methods, may be an important step toward improved clinical images. PMID:21303031

  7. Improved methods for the measurement and analysis of stellar magnetic fields

    NASA Technical Reports Server (NTRS)

    Saar, Steven H.

    1988-01-01

    The paper presents several improved methods for the measurement of magnetic fields on cool stars which take into account simple radiative transfer effects and the exact Zeeman patterns. Using these methods, high-resolution, low-noise data can be fitted with theoretical line profiles to determine the mean magnetic field strength in stellar active regions and a model-dependent fraction of the stellar surface (filling factor) covered by these regions. Random errors in the derived field strength and filling factor are parameterized in terms of signal-to-noise ratio, wavelength, spectral resolution, stellar rotation rate, and the magnetic parameters themselves. Weak line blends, if left uncorrected, can have significant systematic effects on the derived magnetic parameters, and thus several methods are developed to compensate partially for them. The magnetic parameters determined by previous methods likely have systematic errors because of such line blends and because of line saturation effects. Other sources of systematic error are explored in detail. These sources of error currently make it difficult to determine the magnetic parameters of individual stars to better than about ±20 percent.

  8. A novel method for assessing chronic cortisol concentrations in dogs using the nail as a source.

    PubMed

    Mack, Z; Fokidis, H B

    2017-04-01

    Cortisol, a glucocorticoid secreted in response to stress, is used to assess adrenal function and mental health in clinical settings. Current methods assess cortisol from sources that reflect short-term secretion, which can vary with the current stress state. Here, we present a novel method for the extraction and quantification of cortisol from the dog nail using solid-phase extraction coupled to an enzyme-linked immunosorbent assay. Validation experiments demonstrated accuracy (r = 0.836, P < 0.001), precision (15.1% coefficient of variation), and repeatability (14.4% coefficient of variation) with this method. Furthermore, nail cortisol concentrations were positively correlated with an established hair cortisol method (r = 0.736, P < 0.001). Nail cortisol concentrations did not differ with dog sex, breed, age, or weight; however, sample size limitations may preclude statistical significance. Nail cortisol may provide information on cortisol secretion integrated over the time corresponding to nail growth and may be useful as a tool for diagnosing stress and adrenal disorders in dogs.

  9. Analysis of point source size on measurement accuracy of lateral point-spread function of confocal Raman microscopy

    NASA Astrophysics Data System (ADS)

    Fu, Shihang; Zhang, Li; Hu, Yao; Ding, Xiang

    2018-01-01

    Confocal Raman Microscopy (CRM) has matured to become one of the most powerful instruments in analytical science because of its molecular sensitivity and high spatial resolution. Compared with conventional Raman microscopy, CRM can perform three-dimensional mapping of tiny samples and achieves high spatial resolution thanks to its unique pinhole. With the wide application of the instrument, there is a growing requirement for evaluating the imaging performance of the system. The point-spread function (PSF) is an important approach to evaluating the imaging capability of an optical instrument. Among the various ways of measuring the PSF, the point source method has been widely used because it is easy to operate and its results approximate the true PSF. In the point source method, the point source size has a significant impact on the final measurement accuracy. In this paper, the influence of point source size on the measurement accuracy of the PSF is analyzed and verified experimentally. A theoretical model of the lateral PSF for CRM is established, and the effect of point source size on the full-width at half maximum of the lateral PSF is simulated. For long-term preservation and measurement convenience, a PSF measurement phantom using polydimethylsiloxane resin doped with polystyrene microspheres of different sizes is designed. The PSFs of the CRM with different microsphere sizes are measured, and the results are compared with the simulation results. The results provide a guide for measuring the PSF of CRM systems.
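
    The point-source-size effect can be illustrated by convolving an assumed true PSF with the finite microsphere profile: the apparent FWHM inflates as the sphere diameter grows. The Gaussian PSF and top-hat sphere profile below are simplifications of the paper's model, with made-up widths.

    ```python
    import numpy as np

    def fwhm(x, y):
        """Full width at half maximum of a unimodal profile y(x)."""
        above = x[y >= y.max() / 2]
        return above[-1] - above[0]

    x = np.linspace(-2, 2, 4001)                  # lateral position, um
    dx = x[1] - x[0]
    true_sigma = 0.25                             # assumed true PSF width, um
    psf = np.exp(-x**2 / (2 * true_sigma**2))

    # Measured profile = true PSF convolved with the microsphere profile.
    for diameter in [0.1, 0.3, 0.5, 1.0]:         # microsphere diameters, um
        sphere = (np.abs(x) <= diameter / 2).astype(float)
        measured = np.convolve(psf, sphere, mode="same") * dx
        print(f"d = {diameter:.1f} um -> apparent FWHM = {fwhm(x, measured):.3f} um"
              f" (true PSF FWHM = {fwhm(x, psf):.3f} um)")
    ```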

  10. A quality assurance program for clinical PDT

    NASA Astrophysics Data System (ADS)

    Dimofte, Andreea; Finlay, Jarod; Ong, Yi Hong; Zhu, Timothy C.

    2018-02-01

    Successful outcome of photodynamic therapy (PDT) depends on accurate delivery of the prescribed light dose. A quality assurance program is necessary to ensure that light dosimetry is correctly measured. We have instituted a QA program that includes examination of the long-term calibration uncertainty of isotropic detectors for light fluence rate, power meter head intercomparison for laser power, stability of the light-emitting diode (LED) light source integrating sphere as a light fluence standard, laser output, and calibration of in-vivo reflective fluorescence and absorption spectrometers. We examined the long-term calibration uncertainty of isotropic detector sensitivity, defined as fluence rate per voltage. We calibrate the detector using the known calibrated light fluence rate of the LED light source built into an internally baffled 4-inch integrating sphere. LED light sources were examined using a 1 mm diameter isotropic detector calibrated in a collimated beam. Wavelengths varying from 632 nm to 690 nm were used. The internal LED method gives an overall calibration accuracy of ±4%. Intercomparison among power meters was performed to determine the consistency of laser power and light fluence rate measured among different power meters. Power and fluence readings were measured and compared among detectors. A comparison of power and fluence readings among several power heads shows long-term consistency for power and light fluence rate calibration to within 3% regardless of wavelength. The standard LED light source is used to calibrate the transmission difference between different channels for the diffuse reflective absorption and fluorescence contact probe as well as for the isotropic detectors used in the PDT dose dosimeter.

  11. Bayesian source term determination with unknown covariance of measurements

    NASA Astrophysics Data System (ADS)

    Belal, Alkomiet; Tichý, Ondřej; Šmídl, Václav

    2017-04-01

    Determination of the source term of a release of hazardous material into the atmosphere is a very important task for emergency response. We are concerned with the problem of estimating the source term in the conventional linear inverse problem, y = Mx, where the relationship between the vector of observations y and the unknown source term x is described by the source-receptor-sensitivity (SRS) matrix M. Since the system is typically ill-conditioned, the problem is recast as the optimization problem min_{R,B} (y - Mx)^T R^{-1} (y - Mx) + x^T B^{-1} x. The first term minimizes the error of the measurements with covariance matrix R, and the second term is a regularization of the source term. Different types of regularization arise for different choices of the matrices R and B; for example, Tikhonov regularization takes the covariance matrix B to be the identity matrix multiplied by a scalar parameter. In this contribution, we adopt a Bayesian approach to make inference on the unknown source term x as well as the unknown R and B. We assume the prior on x to be Gaussian with zero mean and unknown diagonal covariance matrix B. The covariance matrix of the likelihood, R, is also unknown. We consider two potential choices for the structure of the matrix R: first, a diagonal matrix, and second, a locally correlated structure using information on the topology of the measuring network. Since inference in the model is intractable, an iterative variational Bayes algorithm is used for simultaneous estimation of all model parameters. The practical usefulness of our contribution is demonstrated by applying the resulting algorithm to real data from the European Tracer Experiment (ETEX). This research is supported by the EEA/Norwegian Financial Mechanism under project MSMT-28477/2014 Source-Term Determination of Radionuclide Releases by Inverse Atmospheric Dispersion Modelling (STRADI).
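
    For fixed covariances R and B, the optimization problem above has the closed-form minimizer x = (M^T R^{-1} M + B^{-1})^{-1} M^T R^{-1} y; the variational Bayes algorithm of the abstract iterates estimates of R and B around this Gaussian core. The sketch below shows only that fixed-covariance special case on toy data, not the full iterative method.

      # Minimal sketch of the fixed-covariance (Tikhonov-like) special case of
      # the stated optimization problem; M, R, B, and the data are toy values.
      import numpy as np

      def map_source_term(M, y, R, B):
          Ri = np.linalg.inv(R)
          Bi = np.linalg.inv(B)
          return np.linalg.solve(M.T @ Ri @ M + Bi, M.T @ Ri @ y)

      rng = np.random.default_rng(0)
      M = rng.random((40, 10))           # toy source-receptor-sensitivity matrix
      x_true = np.abs(rng.normal(size=10))
      y = M @ x_true + 0.01 * rng.normal(size=40)
      x_hat = map_source_term(M, y, R=0.01**2 * np.eye(40), B=np.eye(10))
      print(np.round(x_hat, 2))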

  12. [Classification of Priority Area for Soil Environmental Protection Around Water Sources: Method Proposed and Case Demonstration].

    PubMed

    Li, Lei; Wang, Tie-yu; Wang, Xiaojun; Xiao, Rong-bo; Li, Qi-feng; Peng, Chi; Han, Cun-liang

    2016-04-15

    Based on comprehensive consideration of soil environmental quality, the pollution status of the river, environmental vulnerability, and the stress of pollution sources, a technical method was established for classifying priority areas for soil environmental protection around river-style water sources. The Shunde channel, an important drinking water source of Foshan City, Guangdong Province, was studied as a case, and a classification evaluation system was set up for it. In detail, several evaluation factors were selected according to the local natural, social, and economic conditions, including the pollution degree of heavy metals in soil and sediment, soil characteristics, groundwater sensitivity, vegetation coverage, and the type and location of pollution sources. Data were mainly obtained by means of field survey, sampling analysis, and remote sensing interpretation. Afterwards, the Analytical Hierarchy Process (AHP) was adopted to decide the weight of each factor. The basic spatial data layers were set up and overlaid based on a weighted summation assessment model in a Geographical Information System (GIS), resulting in a classification map of soil environmental protection levels in the priority area of the Shunde channel. Accordingly, the area was classified into three levels, named polluted zone, risky zone, and safe zone, which respectively accounted for 6.37%, 60.90% and 32.73% of the whole study area. Polluted and risky zones were mainly distributed in Lecong, Longjiang and Leliu towns, with pollution mainly resulting from the long-term development of aquaculture and from industries including furniture, plastic construction materials, and textiles and clothing. In accordance with the main pollution sources of soil, targeted and differentiated strategies were put forward. The newly established evaluation method can be referenced for the protection and sustainable utilization of the soil environment around water sources.
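
    The weighted-summation overlay at the core of this method is straightforward to reproduce: each evaluation factor becomes a scored raster layer, the AHP weights combine the layers into a single score, and thresholds split the score into protection levels. The factor names, weights, and thresholds in the sketch below are illustrative assumptions, not values from the paper.

      # Hedged sketch of an AHP-weighted GIS overlay on toy raster layers.
      import numpy as np

      rng = np.random.default_rng(1)
      shape = (100, 100)                       # toy raster grid
      layers = {                               # each layer scored 0 (safe) .. 1 (poor)
          "soil_heavy_metals": rng.random(shape),
          "sediment_quality":  rng.random(shape),
          "groundwater_sens":  rng.random(shape),
          "vegetation_cover":  rng.random(shape),
          "pollution_sources": rng.random(shape),
      }
      weights = {"soil_heavy_metals": 0.35, "sediment_quality": 0.20,
                 "groundwater_sens": 0.20, "vegetation_cover": 0.10,
                 "pollution_sources": 0.15}    # hypothetical AHP weights (sum to 1)

      score = sum(weights[k] * layers[k] for k in layers)
      levels = np.digitize(score, bins=[0.45, 0.65])  # 0 safe, 1 risky, 2 polluted
      for i, name in enumerate(["safe", "risky", "polluted"]):
          print(f"{name:8s}: {np.mean(levels == i):5.1%} of area")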

  13. A Subspace Pursuit–based Iterative Greedy Hierarchical Solution to the Neuromagnetic Inverse Problem

    PubMed Central

    Babadi, Behtash; Obregon-Henao, Gabriel; Lamus, Camilo; Hämäläinen, Matti S.; Brown, Emery N.; Purdon, Patrick L.

    2013-01-01

    Magnetoencephalography (MEG) is an important non-invasive method for studying activity within the human brain. Source localization methods can be used to estimate spatiotemporal activity from MEG measurements with high temporal resolution, but the spatial resolution of these estimates is poor due to the ill-posed nature of the MEG inverse problem. Recent developments in source localization methodology have emphasized temporal as well as spatial constraints to improve source localization accuracy, but these methods can be computationally intensive. Solutions emphasizing spatial sparsity hold tremendous promise, since the underlying neurophysiological processes generating MEG signals are often sparse in nature, whether in the form of focal sources, or distributed sources representing large-scale functional networks. Recent developments in the theory of compressed sensing (CS) provide a rigorous framework to estimate signals with sparse structure. In particular, a class of CS algorithms referred to as greedy pursuit algorithms can provide both high recovery accuracy and low computational complexity. Greedy pursuit algorithms are difficult to apply directly to the MEG inverse problem because of the high-dimensional structure of the MEG source space and the high spatial correlation in MEG measurements. In this paper, we develop a novel greedy pursuit algorithm for sparse MEG source localization that overcomes these fundamental problems. This algorithm, which we refer to as the Subspace Pursuit-based Iterative Greedy Hierarchical (SPIGH) inverse solution, exhibits very low computational complexity while achieving very high localization accuracy. We evaluate the performance of the proposed algorithm using comprehensive simulations, as well as the analysis of human MEG data during spontaneous brain activity and somatosensory stimuli. These studies reveal substantial performance gains provided by the SPIGH algorithm in terms of computational complexity, localization accuracy, and robustness. PMID:24055554

  14. Source Term Model for Steady Micro Jets in a Navier-Stokes Computer Code

    NASA Technical Reports Server (NTRS)

    Waithe, Kenrick A.

    2005-01-01

    A source term model for steady micro jets was implemented into a non-proprietary Navier-Stokes computer code, OVERFLOW. The source term models the mass flow and momentum created by a steady blowing micro jet. The model is obtained by adding the momentum and mass flow created by the jet to the Navier-Stokes equations. The model was tested by comparison with data from numerical simulations of a single, steady micro jet on a flat plate in two and three dimensions. The source term model predicted the velocity distribution well when compared to a two-dimensional flat plate simulation that used a steady mass flow boundary condition to represent the micro jet. The model was also compared to two three-dimensional flat plate cases that used a steady mass flow boundary condition to simulate a steady micro jet. The three-dimensional comparison included a case with a grid generated to capture the circular shape of the jet and a case without a grid generated for the micro jet. The case without the jet grid mimics the intended application of the source term. The source term model compared well with both of the three-dimensional cases. Comparisons of the velocity distribution were made upstream and downstream of the jet, and Mach and vorticity contours were examined. The source term model allows a researcher to quickly investigate different locations of individual or multiple steady micro jets, enabling a preliminary investigation with minimal grid generation and computational time.
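
    The essence of the approach is that the jet never appears in the grid; its effect enters only as extra mass and momentum source terms in the discrete conservation equations of the cells it occupies. The one-dimensional toy update below sketches that idea under stated assumptions; it is not the OVERFLOW implementation, and all values are hypothetical.

      # Minimal 1-D finite-volume toy: a steady micro jet represented as mass
      # and momentum sources added to the cells containing the jet (flux terms
      # omitted to isolate the source update). Not the OVERFLOW implementation.
      import numpy as np

      n, dt = 100, 1e-4
      rho = np.ones(n)                 # density
      mom = np.zeros(n)                # momentum density rho*u

      jet_cells = slice(48, 50)        # cells covered by the jet (hypothetical)
      mdot = 0.5                       # jet mass flow per unit volume
      ujet = 2.0                       # jet injection velocity

      S_mass = np.zeros(n); S_mass[jet_cells] = mdot
      S_mom  = np.zeros(n); S_mom[jet_cells] = mdot * ujet

      for _ in range(100):
          rho += dt * S_mass           # continuity:  d(rho)/dt   = ... + S_mass
          mom += dt * S_mom            # momentum:    d(rho u)/dt = ... + S_mom

      print(rho[jet_cells], mom[jet_cells])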

  15. Comment on "An Efficient and Stable Hydrodynamic Model With Novel Source Term Discretization Schemes for Overland Flow and Flood Simulations" by Xilin Xia et al.

    NASA Astrophysics Data System (ADS)

    Lu, Xinhua; Mao, Bing; Dong, Bingjiang

    2018-01-01

    Xia et al. (2017) proposed a novel, fully implicit method for discretizing the bed friction terms when solving the shallow-water equations. The friction terms contain h^{-7/3} (h denotes water depth), which may become extremely large and introduce machine error as h approaches zero. To address this problem, Xia et al. (2017) introduce auxiliary variables (their equations (37) and (38)) so that h^{-4/3} rather than h^{-7/3} is calculated, and solve a transformed equation (their equation (39)). The introduced auxiliary variables require extra storage. We analyzed the magnitude of the friction terms and found that, taken as a whole, they do not exceed the range of machine floating-point numbers; we therefore propose a simple-to-implement technique that splits h^{-7/3} across different parts of the friction terms to avoid introducing machine error. This technique needs no extra storage and no transformed equation, and is thus more efficient for simulations. We also showed that the surface reconstruction method proposed by Xia et al. (2017) may lead to predictions with spurious wiggles because the reconstructed Riemann states may misrepresent the water gravitational effect.
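
    The splitting idea can be demonstrated in a few lines: forming h^{-7/3} on its own overflows for very small depths even when the full friction term is finite, whereas distributing the power of h across the factors keeps every intermediate within floating-point range. The sketch below is an assumed reading of that technique with toy numbers, not the authors' code.

      # Naive evaluation overflows; splitting the power of h across the two
      # factors of q keeps each intermediate finite. Values are illustrative.
      import numpy as np

      n2 = 0.03**2                       # Manning coefficient squared
      q = np.float64(1e-15)              # unit-width discharge near a wetting front
      h = np.float64(1e-140)             # near-dry water depth

      with np.errstate(over="ignore"):
          naive = n2 * q * np.abs(q) * h**(-7.0 / 3.0)   # h**(-7/3) -> inf

      qh = q * h**(-7.0 / 6.0)           # attach h**(-7/6) to each factor of q
      split = n2 * qh * np.abs(qh)       # finite: partial factors stay in range

      print(naive, split)                # inf vs. a finite value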

  16. Sources of method bias in social science research and recommendations on how to control it.

    PubMed

    Podsakoff, Philip M; MacKenzie, Scott B; Podsakoff, Nathan P

    2012-01-01

    Despite the concern that has been expressed about potential method biases, and the pervasiveness of research settings with the potential to produce them, there is disagreement about whether they really are a problem for researchers in the behavioral sciences. Therefore, the purpose of this review is to explore the current state of knowledge about method biases. First, we explore the meaning of the terms "method" and "method bias" and then we examine whether method biases influence all measures equally. Next, we review the evidence of the effects that method biases have on individual measures and on the covariation between different constructs. Following this, we evaluate the procedural and statistical remedies that have been used to control method biases and provide recommendations for minimizing method bias.

  17. C-Depth Method to Determine Diffusion Coefficient and Partition Coefficient of PCB in Building Materials.

    PubMed

    Liu, Cong; Kolarik, Barbara; Gunnarsen, Lars; Zhang, Yinping

    2015-10-20

    Polychlorinated biphenyls (PCBs) have been found to be persistent in the environment and possibly harmful. Many buildings are characterized by high PCB concentrations. Knowledge about partitioning between primary sources and building materials is critical for exposure assessment and practical remediation of PCB contamination. This study develops a C-depth method to determine the diffusion coefficient (D) and partition coefficient (K), two key parameters governing the partitioning process. For concrete, the primary material studied here, relative standard deviations of results among five data sets are 5-22% for K and 42-66% for D. Compared with existing methods, the C-depth method overcomes the inability of nonlinear regression to yield unique estimates and does not require assumed correlations of D and K among congeners. Comparison with a more sophisticated two-term approach implies significant uncertainty for D and smaller uncertainty for K. However, considering the uncertainties associated with sampling and chemical analysis and the impact of environmental factors, the results are acceptable for engineering applications, as supported by good agreement between model prediction and measurement. Sensitivity analysis indicated that the effective diffusion distance, the contact time of materials with primary sources, and the depth of the measured concentrations are critical for determining D, while the PCB concentration in primary sources is critical for K.
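
    To make the roles of D and K concrete, the sketch below fits both parameters to a synthetic concentration-depth profile, assuming the classical semi-infinite constant-source diffusion solution C(x, t) = K C0 erfc(x / (2 sqrt(D t))). This is a plausible stand-in for the material-side profile; the paper's C-depth formulation may differ in detail, and all numbers are invented.

      # Hedged sketch: estimate D and K from a depth profile under an assumed
      # semi-infinite constant-source diffusion model.
      import numpy as np
      from scipy.optimize import curve_fit
      from scipy.special import erfc

      t = 30 * 365 * 24 * 3600.0           # ~30 years of exposure, seconds
      C0 = 50.0                            # PCB concentration in primary source

      def profile(x, D, K):
          return K * C0 * erfc(x / (2.0 * np.sqrt(D * t)))

      # synthetic "measured" depths (m) and concentrations with 10% noise
      x_obs = np.array([0.002, 0.005, 0.01, 0.02, 0.03, 0.05])
      rng = np.random.default_rng(2)
      C_obs = profile(x_obs, 1e-12, 0.8) * (1 + 0.1 * rng.normal(size=x_obs.size))

      (D_fit, K_fit), _ = curve_fit(profile, x_obs, C_obs, p0=(1e-11, 1.0))
      print(f"D = {D_fit:.2e} m^2/s, K = {K_fit:.2f}")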

  18. Comparison of ESI- and APCI-LC-MS/MS methods: A case study of levonorgestrel in human plasma.

    PubMed

    Wang, Rulin; Zhang, Lin; Zhang, Zunjian; Tian, Yuan

    2016-12-01

    Electrospray ionization (ESI) and atmospheric pressure chemical ionization (APCI) techniques for the liquid chromatography-tandem mass spectrometry (LC-MS/MS) determination of levonorgestrel were evaluated. In view of the difference in ionization mechanism, the two ionization sources were compared in terms of LC conditions, MS parameters, and method performance. The detection limit for levonorgestrel was 0.25 ng/mL with ESI, lower (i.e., more sensitive) than the 1 ng/mL achieved with APCI. Matrix effects were evaluated for levonorgestrel and canrenone (internal standard, IS) in human plasma, and the results showed that the APCI source appeared slightly less liable to matrix effects than the ESI source. With an overall consideration, ESI was chosen as the better ionization technique for rapid and sensitive quantification of levonorgestrel. The optimized LC-ESI-MS/MS method was validated over a linear range of 0.25-50 ng/mL with a correlation coefficient ≥0.99. The intra- and inter-batch precision and accuracy were within 11.72% and 6.58%, respectively. The application of this method was demonstrated in a bioequivalence study following a single oral administration of 1.5 mg levonorgestrel tablets in 21 healthy Chinese female volunteers.

  19. Improved tomographic reconstructions using adaptive time-dependent intensity normalization.

    PubMed

    Titarenko, Valeriy; Titarenko, Sofya; Withers, Philip J; De Carlo, Francesco; Xiao, Xianghui

    2010-09-01

    The first processing step in synchrotron-based micro-tomography is the normalization of the projection images against the background, also referred to as a white field. Owing to time-dependent variations in illumination and defects in detection sensitivity, the white field is different from the projection background. In this case standard normalization methods introduce ring and wave artefacts into the resulting three-dimensional reconstruction. In this paper the authors propose a new adaptive technique accounting for these variations and allowing one to obtain cleaner normalized data and to suppress ring and wave artefacts. The background is modelled by the product of two time-dependent terms representing the illumination and detection stages. These terms are written as unknown functions, one scaled and shifted along a fixed direction (describing the illumination term) and one translated by an unknown two-dimensional vector (describing the detection term). The proposed method is applied to two sets (a stem Salix variegata and a zebrafish Danio rerio) acquired at the parallel beam of the micro-tomography station 2-BM at the Advanced Photon Source showing significant reductions in both ring and wave artefacts. In principle the method could be used to correct for time-dependent phenomena that affect other tomographic imaging geometries such as cone beam laboratory X-ray computed tomography.
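
    The principle of adapting the white field to each projection can be shown with a toy model: estimate a per-frame illumination factor from a region known to contain no sample, rescale the white field accordingly, and normalize. The sketch below uses a single scalar scale per frame, a simplification of the scaled, shifted, and translated terms in the authors' model; all data are synthetic.

      # Toy adaptive white-field normalization: per-frame illumination scale
      # estimated from an air region, then used to normalize each projection.
      import numpy as np

      rng = np.random.default_rng(3)
      n_proj, ny, nx = 180, 64, 64
      white = 1000.0 + 50.0 * rng.random((ny, nx))     # measured white field

      drift = 1.0 + 0.05 * np.sin(np.linspace(0, 6, n_proj))   # illumination drift
      sample = np.ones((ny, nx)); sample[16:48, 16:48] = 0.6   # toy transmission
      proj = drift[:, None, None] * white * sample             # raw projections

      air = (slice(0, 8), slice(0, nx))      # rows known to see no sample
      norm = np.empty_like(proj)
      for i in range(n_proj):
          scale = np.median(proj[i][air] / white[air])  # per-frame illumination
          norm[i] = proj[i] / (scale * white)           # adaptive normalization

      print(norm[:, 32, 32].std())           # near-constant: drift removed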

  20. Automated Gait Analysis Through Hues and Areas (AGATHA): a method to characterize the spatiotemporal pattern of rat gait

    PubMed Central

    Kloefkorn, Heidi E.; Pettengill, Travis R.; Turner, Sara M. F.; Streeter, Kristi A.; Gonzalez-Rothi, Elisa J.; Fuller, David D.; Allen, Kyle D.

    2016-01-01

    While rodent gait analysis can quantify the behavioral consequences of disease, significant methodological differences exist between analysis platforms and little validation has been performed to understand or mitigate these sources of variance. By providing the algorithms used to quantify gait, open-source gait analysis software can be validated and used to explore methodological differences. Our group is introducing, for the first time, a fully-automated, open-source method for the characterization of rodent spatiotemporal gait patterns, termed Automated Gait Analysis Through Hues and Areas (AGATHA). This study describes how AGATHA identifies gait events, validates AGATHA relative to manual digitization methods, and utilizes AGATHA to detect gait compensations in orthopaedic and spinal cord injury models. To validate AGATHA against manual digitization, results from videos of rodent gait, recorded at 1000 frames per second (fps), were compared. To assess one common source of variance (the effects of video frame rate), these 1000 fps videos were re-sampled to mimic several lower fps and compared again. While spatial variables were indistinguishable between AGATHA and manual digitization, low video frame rates resulted in temporal errors for both methods. At frame rates over 125 fps, AGATHA achieved a comparable accuracy and precision to manual digitization for all gait variables. Moreover, AGATHA detected unique gait changes in each injury model. These data demonstrate AGATHA is an accurate and precise platform for the analysis of rodent spatiotemporal gait patterns. PMID:27554674

  1. Sensitivity Analysis for Some Water Pollution Problems

    NASA Astrophysics Data System (ADS)

    Le Dimet, François-Xavier; Tran Thu, Ha; Hussaini, Yousuff

    2014-05-01

    Sensitivity analysis employs some response function and the variable with respect to which its sensitivity is evaluated. If the state of the system is retrieved through a variational data assimilation process, then the observation appears only in the Optimality System (OS). In many cases, observations have errors and it is important to estimate their impact. Therefore, sensitivity analysis has to be carried out on the OS, and in that sense sensitivity analysis is a second-order property. The OS can be considered as a generalized model because it contains all the available information. This presentation proposes a general method for carrying out such sensitivity analysis. The method is demonstrated with an application to a water pollution problem. The model involves the shallow-water equations and an equation for the pollutant concentration. These equations are discretized using a finite volume method. The response function depends on the pollutant source, and its sensitivity with respect to the source term of the pollutant is studied. Specifically, we consider the identification of unknown parameters, and the identification of sources of pollution and the sensitivity with respect to those sources. We also use a Singular Evolutive Interpolated Kalman Filter to study this problem. The presentation includes a comparison of the results from these two methods.

  2. Automated Gait Analysis Through Hues and Areas (AGATHA): A Method to Characterize the Spatiotemporal Pattern of Rat Gait.

    PubMed

    Kloefkorn, Heidi E; Pettengill, Travis R; Turner, Sara M F; Streeter, Kristi A; Gonzalez-Rothi, Elisa J; Fuller, David D; Allen, Kyle D

    2017-03-01

    While rodent gait analysis can quantify the behavioral consequences of disease, significant methodological differences exist between analysis platforms and little validation has been performed to understand or mitigate these sources of variance. By providing the algorithms used to quantify gait, open-source gait analysis software can be validated and used to explore methodological differences. Our group is introducing, for the first time, a fully-automated, open-source method for the characterization of rodent spatiotemporal gait patterns, termed Automated Gait Analysis Through Hues and Areas (AGATHA). This study describes how AGATHA identifies gait events, validates AGATHA relative to manual digitization methods, and utilizes AGATHA to detect gait compensations in orthopaedic and spinal cord injury models. To validate AGATHA against manual digitization, results from videos of rodent gait, recorded at 1000 frames per second (fps), were compared. To assess one common source of variance (the effects of video frame rate), these 1000 fps videos were re-sampled to mimic several lower fps and compared again. While spatial variables were indistinguishable between AGATHA and manual digitization, low video frame rates resulted in temporal errors for both methods. At frame rates over 125 fps, AGATHA achieved a comparable accuracy and precision to manual digitization for all gait variables. Moreover, AGATHA detected unique gait changes in each injury model. These data demonstrate AGATHA is an accurate and precise platform for the analysis of rodent spatiotemporal gait patterns.

  3. The impact of the form of the Euler equations for radial flow in cylindrical and spherical coordinates on numerical conservation and accuracy

    NASA Astrophysics Data System (ADS)

    Crittenden, P. E.; Balachandar, S.

    2018-07-01

    The radial one-dimensional Euler equations are often rewritten in what is known as the geometric source form. The differential operator is identical to the Cartesian case, but source terms result. Since the theory and numerical methods for the Cartesian case are well-developed, they are often applied without modification to cylindrical and spherical geometries. However, numerical conservation is lost. In this article, AUSM^+-up is applied to a numerically conservative (discrete) form of the Euler equations labeled the geometric form, a nearly conservative variation termed the geometric flux form, and the geometric source form. The resulting numerical methods are compared analytically and numerically through three types of test problems: subsonic, smooth, steady-state solutions, Sedov's similarity solution for point or line-source explosions, and shock tube problems. Numerical conservation is analyzed for all three forms in both spherical and cylindrical coordinates. All three forms result in constant enthalpy for steady flows. The spatial truncation errors have essentially the same order of convergence, but the rate constants are superior for the geometric and geometric flux forms for the steady-state solutions. Only the geometric form produces the correct shock location for Sedov's solution, and a direct connection between the errors in the shock locations and energy conservation is found. The shock tube problems are evaluated with respect to feature location using an approximation with a very fine discretization as the benchmark. Extensions to second order appropriate for cylindrical and spherical coordinates are also presented and analyzed numerically. Conclusions are drawn, and recommendations are made. A derivation of the steady-state solution is given in the Appendix.
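
    The conservation contrast between the geometric source form and the geometric form can be seen even in a toy radial advection problem, sketched below under stated assumptions (first-order upwind fluxes, constant velocity, spherical symmetry, a domain away from the origin): the area-weighted flux differencing telescopes and so conserves the discrete mass integral, while the source-term form does not. This is an illustration of the distinction only, not the paper's AUSM^+-up scheme.

      # Toy radial advection: "geometric source form" vs. conservative
      # "geometric form" (area-weighted fluxes) for d(rho)/dt + (1/r^2) d(r^2 rho u)/dr = 0.
      import numpy as np

      n, dr, dt, u = 200, 0.005, 2e-3, 1.0
      rf = 1.0 + np.arange(n + 1) * dr              # cell-face radii (domain r >= 1)
      r = 0.5 * (rf[:-1] + rf[1:])                  # cell-centre radii
      rho0 = np.exp(-((r - 1.5) / 0.08) ** 2)       # compact density bump
      rho_src, rho_geo = rho0.copy(), rho0.copy()

      def upwind_faces(rho):                        # u > 0: take the left state
          return u * np.concatenate(([rho[0]], rho))

      for _ in range(100):
          f = upwind_faces(rho_src)                 # Cartesian-style flux
          rho_src = rho_src - dt * (np.diff(f) / dr + 2.0 * u * rho_src / r)

          F = rf**2 * upwind_faces(rho_geo)         # area-weighted flux
          rho_geo = rho_geo - dt * np.diff(F) / (r**2 * dr)

      mass = lambda q: np.sum(q * r**2 * dr)        # discrete integral of rho r^2 dr
      print(f"source form mass:  {mass(rho_src):.6f}")
      print(f"geometric form:    {mass(rho_geo):.6f}  (initial {mass(rho0):.6f})")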

  4. The impact of the form of the Euler equations for radial flow in cylindrical and spherical coordinates on numerical conservation and accuracy

    NASA Astrophysics Data System (ADS)

    Crittenden, P. E.; Balachandar, S.

    2018-03-01

    The radial one-dimensional Euler equations are often rewritten in what is known as the geometric source form. The differential operator is identical to the Cartesian case, but source terms result. Since the theory and numerical methods for the Cartesian case are well-developed, they are often applied without modification to cylindrical and spherical geometries. However, numerical conservation is lost. In this article, AUSM^+-up is applied to a numerically conservative (discrete) form of the Euler equations labeled the geometric form, a nearly conservative variation termed the geometric flux form, and the geometric source form. The resulting numerical methods are compared analytically and numerically through three types of test problems: subsonic, smooth, steady-state solutions, Sedov's similarity solution for point or line-source explosions, and shock tube problems. Numerical conservation is analyzed for all three forms in both spherical and cylindrical coordinates. All three forms result in constant enthalpy for steady flows. The spatial truncation errors have essentially the same order of convergence, but the rate constants are superior for the geometric and geometric flux forms for the steady-state solutions. Only the geometric form produces the correct shock location for Sedov's solution, and a direct connection between the errors in the shock locations and energy conservation is found. The shock tube problems are evaluated with respect to feature location using an approximation with a very fine discretization as the benchmark. Extensions to second order appropriate for cylindrical and spherical coordinates are also presented and analyzed numerically. Conclusions are drawn, and recommendations are made. A derivation of the steady-state solution is given in the Appendix.

  5. Assessing the short-term clock drift of early broadband stations with burst events of the 26 s persistent and localized microseism

    NASA Astrophysics Data System (ADS)

    Xie, Jun; Ni, Sidao; Chu, Risheng; Xia, Yingjie

    2018-01-01

    Accurate seismometer clocks play an important role in seismological studies, including earthquake location and tomography. However, some seismic stations may have clock drift larger than 1 s (e.g. GSC in 1992), especially in the early days of global seismic networks. The 26 s Persistent Localized (PL) microseism source in the Gulf of Guinea sometimes excites strong and coherent signals, and can be used as a repeating source for assessing the stability of seismometer clocks. Taking stations GSC, PAS and PFO of the TERRAscope network as examples, the 26 s PL signal can easily be observed in the ambient noise cross-correlation functions between these stations and the remote station OBN, at an interstation distance of about 9700 km. The travel-time variation of this 26 s signal in the ambient noise cross-correlation function is used to infer clock error. A drastic clock error is detected during June 1992 for station GSC, but not for stations PAS and PFO. This short-term clock error, with a magnitude of about 25 s, is confirmed by both teleseismic and local earthquake records. Averaged over the three stations, the accuracy of the ambient noise cross-correlation function method with the 26 s source is about 0.3-0.5 s. Using this PL source, the clock can be validated for historical records of sparsely distributed stations, where the usual cross-correlation of short-period (<20 s) ambient noise might be less effective because of its attenuation over long interstation distances. However, this method suffers from a cycle-skipping problem and should be verified against teleseismic/local P waves. Further studies are also needed to investigate whether the 26 s source moves spatially and how that affects clock drift detection.
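
    The measurement itself is a lag estimate: cross-correlate the 26 s signal window from a reference epoch against a later epoch and read off the shift of the correlation peak. The toy sketch below illustrates this, and its 26 s carrier also makes the cycle-skipping ambiguity visible (side peaks one period away); all signals are synthetic.

      # Toy clock-drift measurement from a shifted 26 s cross-correlation signal.
      import numpy as np

      dt = 1.0                                  # sample interval, s
      t = np.arange(0, 600, dt)
      envelope = np.exp(-((t - 300) / 60.0) ** 2)
      ccf_ref = envelope * np.cos(2 * np.pi * t / 26.0)     # reference-epoch CCF

      clock_error = 5.0                          # seconds of drift (hypothetical)
      ccf_new = np.interp(t - clock_error, t, ccf_ref)      # shifted CCF

      lags = np.arange(-50, 51) * dt
      xc = [np.dot(ccf_new, np.roll(ccf_ref, int(k))) for k in range(-50, 51)]
      print("measured shift:", lags[int(np.argmax(xc))], "s")   # -> 5.0 s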

  6. In Situ NAPL Modification for Contaminant Source-Zone Passivation, Mass Flux Reduction, and Remediation

    NASA Astrophysics Data System (ADS)

    Mateas, D. J.; Tick, G.; Carroll, K. C.

    2016-12-01

    A remediation method was developed to reduce the aqueous solubility and mass flux of target NAPL contaminants through the in-situ creation of a NAPL-mixture source zone. The method was tested in the laboratory using equilibrium batch tests and two-dimensional flow-cell experiments. The creation of two different NAPL-mixture source zones was tested, in which 1) volumes of relatively insoluble n-hexadecane (HEX) or vegetable oil (VO) were injected into a trichloroethene (TCE) contaminant source zone; and 2) pre-determined HEX-TCE and VO-TCE mixture-ratio source zones were emplaced into the flow cell prior to water flushing. NAPL-aqueous phase batch tests were conducted prior to the flow-cell experiments to evaluate the effects of various NAPL mixture ratios on equilibrium aqueous-phase concentrations of TCE and toluene (TOL) and to design optimal NAPL (HEX or VO) injection volumes for the flow-cell experiments. Uniform NAPL-mixture source zones were able to quickly decrease contaminant mass flux, as demonstrated by the emplaced source-zone experiments. The success of the HEX and VO injections in also decreasing mass flux depended on the ability of these injectants to mix homogeneously with the TCE source zone. Upon injection, both HEX and VO migrated away from the source zone to some extent. However, the lack of a steady-state dissolution phase and the inefficient mass-flux-reduction/mass-removal behavior produced after VO injection suggest that VO was more effective than HEX at mixing and partitioning within the source-zone region to form a more homogeneous NAPL mixture with TCE. VO appears to be a promising source-zone injectant NAPL because of its negligible long-term toxicity and lower mobilization potential.
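
    The batch-test logic rests on mixture thermodynamics: diluting TCE with a low-solubility NAPL lowers its mole fraction and hence its equilibrium aqueous concentration. The sketch below assumes ideal Raoult's-law partitioning (C = x * S, with mole fraction x and single-component solubility S); the actual batch tests quantify this empirically and may show non-ideal behavior.

      # Ideal Raoult's-law illustration of TCE dilution by hexadecane.
      # Molar masses (g/mol) and an approximate pure-TCE solubility (mg/L):
      M_TCE, M_HEX = 131.4, 226.4
      S_TCE = 1100.0

      def tce_concentration(mass_frac_tce):
          """Equilibrium aqueous TCE concentration for a TCE/hexadecane mixture."""
          n_tce = mass_frac_tce / M_TCE
          n_hex = (1.0 - mass_frac_tce) / M_HEX
          x_tce = n_tce / (n_tce + n_hex)
          return x_tce * S_TCE

      for f in (1.0, 0.75, 0.5, 0.25, 0.1):
          print(f"TCE mass fraction {f:4.2f}: ~{tce_concentration(f):7.1f} mg/L")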

  7. Naturally occurring 32Si and low-background silicon dark matter detectors

    DOE PAGES

    Orrell, John L.; Arnquist, Isaac J.; Bliss, Mary; ...

    2018-02-10

    Here, the naturally occurring radioisotope 32Si represents a potentially limiting background in future dark matter direct-detection experiments. We investigate sources of 32Si and the vectors by which it comes to reside in silicon crystals used for fabrication of radiation detectors. We infer that the 32Si concentration in commercial single-crystal silicon is likely variable, dependent upon the specific geologic and hydrologic history of the source (or sources) of silicon “ore” and the details of the silicon-refinement process. The silicon production industry is large, highly segmented by refining step, and multifaceted in terms of final product type, from which we conclude that production of 32Si-mitigated crystals requires both targeted silicon material selection and a dedicated refinement-through-crystal-production process. We review options for source material selection, including quartz from an underground source and silicon isotopically reduced in 32Si. To quantitatively evaluate the 32Si content in silicon metal and precursor materials, we propose analytic methods employing chemical processing and radiometric measurements. Ultimately, it appears feasible to produce silicon detectors with low levels of 32Si, though significant assay method development is required to validate this claim and thereby enable a quality assurance program during an actual controlled silicon-detector production cycle.

  8. Naturally occurring 32Si and low-background silicon dark matter detectors

    NASA Astrophysics Data System (ADS)

    Orrell, John L.; Arnquist, Isaac J.; Bliss, Mary; Bunker, Raymond; Finch, Zachary S.

    2018-05-01

    The naturally occurring radioisotope 32Si represents a potentially limiting background in future dark matter direct-detection experiments. We investigate sources of 32Si and the vectors by which it comes to reside in silicon crystals used for fabrication of radiation detectors. We infer that the 32Si concentration in commercial single-crystal silicon is likely variable, dependent upon the specific geologic and hydrologic history of the source (or sources) of silicon "ore" and the details of the silicon-refinement process. The silicon production industry is large, highly segmented by refining step, and multifaceted in terms of final product type, from which we conclude that production of 32Si-mitigated crystals requires both targeted silicon material selection and a dedicated refinement-through-crystal-production process. We review options for source material selection, including quartz from an underground source and silicon isotopically reduced in 32Si. To quantitatively evaluate the 32Si content in silicon metal and precursor materials, we propose analytic methods employing chemical processing and radiometric measurements. Ultimately, it appears feasible to produce silicon detectors with low levels of 32Si, though significant assay method development is required to validate this claim and thereby enable a quality assurance program during an actual controlled silicon-detector production cycle.

  9. Naturally occurring 32Si and low-background silicon dark matter detectors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Orrell, John L.; Arnquist, Isaac J.; Bliss, Mary

    Here, the naturally occurring radioisotope 32Si represents a potentially limiting background in future dark matter direct-detection experiments. We investigate sources of 32Si and the vectors by which it comes to reside in silicon crystals used for fabrication of radiation detectors. We infer that the 32Si concentration in commercial single-crystal silicon is likely variable, dependent upon the specific geologic and hydrologic history of the source (or sources) of silicon “ore” and the details of the silicon-refinement process. The silicon production industry is large, highly segmented by refining step, and multifaceted in terms of final product type, from which we conclude that production of 32Si-mitigated crystals requires both targeted silicon material selection and a dedicated refinement-through-crystal-production process. We review options for source material selection, including quartz from an underground source and silicon isotopically reduced in 32Si. To quantitatively evaluate the 32Si content in silicon metal and precursor materials, we propose analytic methods employing chemical processing and radiometric measurements. Ultimately, it appears feasible to produce silicon detectors with low levels of 32Si, though significant assay method development is required to validate this claim and thereby enable a quality assurance program during an actual controlled silicon-detector production cycle.

  10. Naturally occurring 32Si and low-background silicon dark matter detectors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Orrell, John L.; Arnquist, Isaac J.; Bliss, Mary

    The naturally occurring radioisotope Si-32 represents a potentially limiting background in future dark matter direct-detection experiments. We investigate sources of Si-32 and the vectors by which it comes to reside in silicon crystals used for fabrication of radiation detectors. We infer that the Si-32 concentration in commercial single-crystal silicon is likely variable, dependent upon the specific geologic and hydrologic history of the source (or sources) of silicon “ore” and the details of the silicon-refinement process. The silicon production industry is large, highly segmented by refining step, and multifaceted in terms of final product type, from which we conclude that production of Si-32-mitigated crystals requires both targeted silicon material selection and a dedicated refinement-through-crystal-production process. We review options for source material selection, including quartz from an underground source and silicon isotopically reduced in Si-32. To quantitatively evaluate the Si-32 content in silicon metal and precursor materials, we propose analytic methods employing chemical processing and radiometric measurements. Ultimately, it appears feasible to produce silicon-based detectors with low levels of Si-32, though significant assay method development is required to validate this claim and thereby enable a quality assurance program during an actual controlled silicon-detector production cycle.

  11. Source reconstruction via the spatiotemporal Kalman filter and LORETA from EEG time series with 32 or fewer electrodes.

    PubMed

    Hamid, Laith; Al Farawn, Ali; Merlet, Isabelle; Japaridze, Natia; Heute, Ulrich; Stephani, Ulrich; Galka, Andreas; Wendling, Fabrice; Siniatchkin, Michael

    2017-07-01

    The clinical routine of non-invasive electroencephalography (EEG) is usually performed with 8-40 electrodes, especially in long-term monitoring, infants, or emergency care. There is a need in clinical and scientific brain imaging for inverse solution methods that can reconstruct brain sources from such low-density EEG recordings. In this proof-of-principle paper we investigate the performance of the spatiotemporal Kalman filter (STKF) in EEG source reconstruction with 9, 19 and 32 electrodes. We used simulated EEG data of epileptic spikes generated from lateral frontal and lateral temporal brain sources using state-of-the-art neuronal population models. To validate the source reconstruction, we compared the STKF results to the location of the simulated source and to the results of the standard low-resolution brain electromagnetic tomography (LORETA) inverse solution. The STKF consistently showed less localization bias than LORETA, especially when the number of electrodes was decreased. The results encourage further research into the application of the STKF to source reconstruction of brain activity from low-density EEG recordings.

  12. Development of departmental standard for traceability of measured activity for I-131 therapy capsules used in nuclear medicine.

    PubMed

    Ravichandran, Ramamoorthy; Binukumar, Jp

    2011-01-01

    The International Basic Safety Standards (International Atomic Energy Agency, IAEA) provide guidance levels for diagnostic procedures in nuclear medicine, indicating the maximum usual activity for various diagnostic tests in terms of the activities of injected radioactive formulations. An accuracy of ±10% in the activities of administered radiopharmaceuticals is recommended for the expected outcome of diagnostic and therapeutic nuclear medicine procedures. It is recommended that the long-term stability of isotope calibrators used in nuclear medicine be checked periodically using a long-lived check source, such as Cs-137, of suitable activity. In view of the unavailability of such a radioactive source, we developed methods to maintain the traceability of these instruments for certifying measured activities for human use. Two re-entrant chambers [HDR 1000 and Selectron Source Dosimetry System (SSDS)] in the Department of Radiotherapy, with I-125 and Ir-192 calibration factors, were used to measure Iodine-131 (I-131) therapy capsules to establish traceability to the Mark V isotope calibrator of the Department of Nuclear Medicine. Special nylon jigs were fabricated to keep the I-131 capsule holder in position. Measured activities in all the chambers showed good agreement. The accuracy of the SSDS chamber in measuring Ir-192 activities over the last 5 years was within 0.5%, validating its role as a departmental standard for measuring activity. The above method was adopted because the mean energies of I-131 and Ir-192 are comparable.

  13. Reachability Analysis in Probabilistic Biological Networks.

    PubMed

    Gabr, Haitham; Todor, Andrei; Dobra, Alin; Kahveci, Tamer

    2015-01-01

    Extra-cellular molecules trigger a response inside the cell by initiating a signal at special membrane receptors (i.e., sources), which is then transmitted to reporters (i.e., targets) through various chains of interactions among proteins. Understanding whether such a signal can reach from membrane receptors to reporters is essential in studying the cell response to extra-cellular events. This problem is drastically complicated due to the unreliability of the interaction data. In this paper, we develop a novel method, called PReach (Probabilistic Reachability), that precisely computes the probability that a signal can reach from a given collection of receptors to a given collection of reporters when the underlying signaling network is uncertain. This is a very difficult computational problem with no known polynomial-time solution. PReach represents each uncertain interaction as a bi-variate polynomial. It transforms the reachability problem to a polynomial multiplication problem. We introduce novel polynomial collapsing operators that associate polynomial terms with possible paths between sources and targets as well as the cuts that separate sources from targets. These operators significantly shrink the number of polynomial terms and thus the running time. PReach has much better time complexity than the recent solutions for this problem. Our experimental results on real data sets demonstrate that this improvement leads to orders of magnitude of reduction in the running time over the most recent methods. Availability: All the data sets used, the software implemented and the alignments found in this paper are available at http://bioinformatics.cise.ufl.edu/PReach/.
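
    The quantity at stake can be stated compactly: with each interaction present independently with some probability, compute the probability that at least one source-to-target path exists. The brute-force sketch below enumerates all 2^|E| edge subsets, which is exactly the exponential cost that PReach's polynomial-collapsing operators avoid; it illustrates the problem, not the PReach algorithm itself.

      # Brute-force reachability probability over all edge subsets (exponential).
      from itertools import product

      def reach_probability(edges, probs, sources, targets):
          total = 0.0
          for present in product([False, True], repeat=len(edges)):
              p = 1.0
              adj = {}
              for (u, v), keep, pe in zip(edges, present, probs):
                  p *= pe if keep else (1.0 - pe)
                  if keep:
                      adj.setdefault(u, []).append(v)
              stack, seen = list(sources), set(sources)   # DFS from all sources
              while stack:
                  u = stack.pop()
                  for v in adj.get(u, []):
                      if v not in seen:
                          seen.add(v); stack.append(v)
              if seen & set(targets):
                  total += p
          return total

      # toy network: receptor r1, reporter t1, hypothetical edge probabilities
      edges = [("r1", "a"), ("a", "b"), ("r1", "b"), ("b", "t1")]
      print(reach_probability(edges, [0.9, 0.8, 0.5, 0.7], {"r1"}, {"t1"}))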

  14. Capture Versus Capture Zones: Clarifying Terminology Related to Sources of Water to Wells.

    PubMed

    Barlow, Paul M; Leake, Stanley A; Fienen, Michael N

    2018-03-15

    The term capture, related to the source of water derived from wells, has been used in two distinct yet related contexts by the hydrologic community. The first is a water-budget context, in which capture refers to decreases in the rates of groundwater outflow and (or) increases in the rates of recharge along head-dependent boundaries of an aquifer in response to pumping. The second is a transport context, in which capture zone refers to the specific flowpaths that define the three-dimensional, volumetric portion of a groundwater flow field that discharges to a well. A closely related issue that has become associated with the source of water to wells is streamflow depletion, which refers to the reduction in streamflow caused by pumping, and is a type of capture. Rates of capture and streamflow depletion are calculated by use of water-budget analyses, most often with groundwater-flow models. Transport models, particularly particle-tracking methods, are used to determine capture zones to wells. In general, however, transport methods are not useful for quantifying actual or potential streamflow depletion or other types of capture along aquifer boundaries. To clarify the sometimes subtle differences among these terms, we describe the processes and relations among capture, capture zones, and streamflow depletion, and provide proposed terminology to distinguish among them. Published 2018. This article is a U.S. Government work and is in the public domain in the USA. Groundwater published by Wiley Periodicals, Inc. on behalf of National Ground Water Association.

  15. Seismic envelope-based detection and location of ground-coupled airwaves from volcanoes in Alaska

    USGS Publications Warehouse

    Fee, David; Haney, Matt; Matoza, Robin S.; Szuberla, Curt A.L.; Lyons, John; Waythomas, Christopher F.

    2016-01-01

    Volcanic explosions and other infrasonic sources frequently produce acoustic waves that are recorded by seismometers. Here we explore multiple techniques to detect, locate, and characterize ground-coupled airwaves (GCA) on volcano seismic networks in Alaska. GCA waveforms are typically incoherent between stations, thus we use envelope-based techniques in our analyses. For distant sources and planar waves, we use f-k beamforming to estimate back azimuth and trace velocity parameters. For spherical waves originating within the network, we use two related time difference of arrival (TDOA) methods to detect and localize the source. We investigate a modified envelope function to enhance the signal-to-noise ratio and emphasize both high energies and energy contrasts within a spectrogram. We apply these methods to recent eruptions of Cleveland, Veniaminof, and Pavlof Volcanoes, Alaska. Array processing of GCA from Cleveland Volcano on 4 May 2013 produces robust detection and wave characterization. Our modified envelopes substantially improve the short-term average/long-term average ratios, enhancing explosion detection. We detect GCA within both the Veniaminof and Pavlof networks from the 2007 and 2013-2014 activity, indicating repeated volcanic explosions. Event clustering and forward modeling suggest that high-resolution localization is possible for GCA on typical volcano seismic networks. These results indicate that GCA can be used to help detect, locate, characterize, and monitor volcanic eruptions, particularly in difficult-to-monitor regions. We have implemented these GCA detection algorithms into our operational volcano-monitoring algorithms at the Alaska Volcano Observatory.
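
    For sources inside the network, TDOA localization reduces to finding the position whose predicted arrival-time differences best match the observed ones. The grid-search sketch below illustrates this for an acoustic wave on a toy four-station network; the station geometry and speed are assumptions, and the operational algorithms are more sophisticated.

      # Toy TDOA grid search for an acoustic source inside a station network.
      import numpy as np

      c = 340.0                                   # acoustic speed, m/s
      stations = np.array([[0, 0], [4000, 0], [0, 4000], [4000, 4000]], float)
      src_true = np.array([1200.0, 2600.0])

      t_arr = np.linalg.norm(stations - src_true, axis=1) / c
      tdoa_obs = t_arr - t_arr[0]                 # differences relative to station 0

      xs = np.linspace(0, 4000, 161)
      best, best_err = None, np.inf
      for x in xs:
          for y in xs:
              t = np.linalg.norm(stations - np.array([x, y]), axis=1) / c
              err = np.sum((t - t[0] - tdoa_obs) ** 2)
              if err < best_err:
                  best, best_err = (x, y), err
      print("estimated source:", best)            # ~ (1200, 2600)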

  16. Improving MEG source localizations: an automated method for complete artifact removal based on independent component analysis.

    PubMed

    Mantini, D; Franciotti, R; Romani, G L; Pizzella, V

    2008-03-01

    The major limitation on the acquisition of high-quality magnetoencephalography (MEG) recordings is the presence of disturbances of physiological and technical origin: eye movements, cardiac signals, muscular contractions, and environmental noise are serious problems for MEG signal analysis. In recent years, multi-channel MEG systems have undergone rapid technological development in terms of noise reduction, and many processing methods have been proposed for artifact rejection. Independent component analysis (ICA) has already been shown to be an effective and generally applicable technique for concurrently removing artifacts and noise from MEG recordings. However, no standardized automated system based on ICA has become available so far, because of the intrinsic difficulty of reliably categorizing the source signals obtained with this technique. In this work, approximate entropy (ApEn), a measure of data regularity, is successfully used to classify the signals produced by ICA, allowing for automated artifact rejection. The proposed method has been tested using MEG data sets collected during somatosensory, auditory, and visual stimulation. It was demonstrated to be effective in attenuating both biological artifacts and environmental noise, reconstructing clean signals that can be used to improve brain source localization.
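
    Approximate entropy quantifies how often similar patterns in a signal stay similar when extended by one sample; regular signals (such as cardiac artifacts) score low, irregular ones (such as broadband brain activity or noise) score high. The sketch below is a minimal textbook implementation; the parameters m and r are common defaults, not values from the paper.

      # Minimal approximate entropy (ApEn): phi(m) - phi(m+1), where phi(m) is
      # the mean log fraction of m-length templates within tolerance r.
      import numpy as np

      def apen(x, m=2, r_frac=0.2):
          x = np.asarray(x, float)
          r = r_frac * x.std()
          def phi(m):
              emb = np.lib.stride_tricks.sliding_window_view(x, m)
              # fraction of template pairs whose max distance is within r
              d = np.max(np.abs(emb[:, None, :] - emb[None, :, :]), axis=-1)
              c = (d <= r).mean(axis=1)
              return np.mean(np.log(c))
          return phi(m) - phi(m + 1)

      t = np.linspace(0, 10, 1000)
      rng = np.random.default_rng(4)
      print("sine  ApEn:", round(apen(np.sin(2 * np.pi * t)), 3))   # low: regular
      print("noise ApEn:", round(apen(rng.normal(size=1000)), 3))   # high: irregular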

  17. Mapping underwater sound noise and assessing its sources by using a self-organizing maps method.

    PubMed

    Rako, Nikolina; Vilibić, Ivica; Mihanović, Hrvoje

    2013-03-01

    This study aims to provide an objective mapping of underwater noise and its sources over an Adriatic coastal marine habitat by applying the self-organizing map (SOM) method. Systematic sampling of sea ambient noise (SAN) was carried out at ten predefined acoustic stations between 2007 and 2009. Analyses of noise levels were performed for 1/3-octave-band standard centered frequencies in terms of instantaneous sound pressure levels averaged over 300 s to calculate equivalent continuous sound pressure levels. Data on vessel presence, type, and distance from the monitoring stations were also collected at each acoustic station during the acoustic sampling. Altogether, 69 noise surveys were introduced to a predefined 2 × 2 SOM array. The overall results of the analysis distinguished two dominant underwater soundscapes, associating them mainly with seasonal changes in nautical tourism and fishing activities within the study area and with wind and wave action. The analysis identified recreational vessels as the dominant anthropogenic source of underwater noise, particularly during the tourist season. The method proved to be an efficient tool for predicting SAN levels from the vessel distribution, indicating the possibility of its wider application in marine conservation.
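
    A SOM of this size clusters the surveys onto a small grid of nodes whose weight vectors adapt toward the data while respecting the grid topology. The compact, self-contained sketch below trains a 2 × 2 map on random stand-in feature vectors; the paper's actual feature set and SOM tooling are unstated, so everything here is illustrative.

      # Compact self-organizing map (2 x 2) trained on toy survey features.
      import numpy as np

      rng = np.random.default_rng(5)
      X = rng.random((69, 10))              # 69 surveys x 10 noise-band features

      grid = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)   # 2 x 2 map
      W = rng.random((4, X.shape[1]))       # weight vector per map node

      for it in range(2000):
          lr = 0.5 * (1 - it / 2000)        # decaying learning rate
          sig = 1.0 * (1 - it / 2000) + 0.2 # decaying neighborhood radius
          x = X[rng.integers(len(X))]
          bmu = np.argmin(np.linalg.norm(W - x, axis=1))          # best match
          h = np.exp(-np.linalg.norm(grid - grid[bmu], axis=1)**2 / (2 * sig**2))
          W += lr * h[:, None] * (x - W)    # pull nodes toward the sample

      labels = np.argmin(np.linalg.norm(X[:, None, :] - W[None], axis=2), axis=1)
      print(np.bincount(labels, minlength=4))   # surveys per map node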

  18. Seismoelectric imaging of shallow targets

    USGS Publications Warehouse

    Haines, S.S.; Pride, S.R.; Klemperer, S.L.; Biondi, B.

    2007-01-01

    We have undertaken a series of controlled field experiments to develop seismoelectric experimental methods for near-surface applications and to improve our understanding of seismoelectric phenomena. In a set of off-line geometry surveys (source separated from the receiver line), we place seismic sources and electrode array receivers on opposite sides of a man-made target (two sand-filled trenches) to record separately two previously documented seismoelectric modes: (1) the electromagnetic interface response signal created at the target and (2) the coseismic electric fields located within a compressional seismic wave. With the seismic source point in the center of a linear electrode array, we identify the previously undocumented seismoelectric direct field, and the Lorentz field of the metal hammer plate moving in the earth's magnetic field. We place the seismic source in the center of a circular array of electrodes (radial and circumferential orientations) to analyze the source-related direct and Lorentz fields and to establish that these fields can be understood in terms of simple analytical models. Using an off-line geometry, we create a multifold, 2D image of our trenches as dipping layers, and we also produce a complementary synthetic image through numerical modeling. These images demonstrate that off-line geometry (e.g., crosswell) surveys offer a particularly promising application of the seismoelectric method because they effectively separate the interface response signal from the (generally much stronger) coseismic and source-related fields. © 2007 Society of Exploration Geophysicists.

  19. The influence of cross-order terms in interface mobilities for structure-borne sound source characterization

    NASA Astrophysics Data System (ADS)

    Bonhoff, H. A.; Petersson, B. A. T.

    2010-08-01

    For the characterization of structure-borne sound sources with multi-point or continuous interfaces, substantial simplifications and physical insight can be obtained by incorporating the concept of interface mobilities. The applicability of interface mobilities, however, relies upon the admissibility of neglecting the so-called cross-order terms. Hence, the objective of the present paper is to clarify the importance and significance of cross-order terms for the characterization of vibrational sources. From previous studies, four conditions have been identified under which the cross-order terms can become more influential: non-circular interface geometries, structures with distinctly differing transfer paths, suppression of the zero-order motion, and cases where the contact forces are either in phase or out of phase. In a theoretical study, these four conditions are investigated with regard to the frequency range and magnitude of a possible strengthening of the cross-order terms. For the experimental analysis, two source-receiver installations are selected, suitably designed to produce strong cross-order terms. The transmitted power and the source descriptors are predicted using the approximations of the interface mobility approach and compared with the complete calculations. Neglecting the cross-order terms can result in large misinterpretations at certain frequencies. On average, however, the cross-order terms are found to be insignificant and can be neglected to a good approximation. The general applicability of interface mobilities for structure-borne sound source characterization, and for describing the transmission process, is thereby confirmed.

  20. A comparison of high-resolution specific conductance-based end-member mixing analysis and a graphical method for baseflow separation of four streams in hydrologically challenging agricultural watersheds

    USGS Publications Warehouse

    Kronholm, Scott C.; Capel, Paul D.

    2015-01-01

    Quantifying the relative contributions of different sources of water to a stream hydrograph is important for understanding the hydrology and water quality dynamics of a given watershed. To compare the performance of two methods of hydrograph separation, a graphical program [baseflow index (BFI)] and an end-member mixing analysis that used high-resolution specific conductance measurements (SC-EMMA) were used to estimate daily and average long-term slowflow additions of water to four small, primarily agricultural streams with different dominant sources of water (natural groundwater, overland flow, subsurface drain outflow, and groundwater from irrigation). Because the result of hydrograph separation by SC-EMMA is strongly related to the choice of slowflow and fastflow end-member values, a sensitivity analysis was conducted based on the various approaches reported in the literature to inform the selection of end-members. There were substantial discrepancies between BFI and SC-EMMA, and neither method produced reasonable results for all four streams. Streams that had a small difference in the SC of slowflow compared with fastflow, or that did not have a monotonic relationship between streamflow and stream SC, posed a challenge to the SC-EMMA method. The utility of the graphical BFI program was limited in the stream that had only gradual changes in streamflow. The results of this comparison suggest that the two methods may be quantifying different sources of water. Even though both methods are easy to apply, they should be applied with consideration of the streamflow and/or SC characteristics of a stream, especially where anthropogenic water sources (irrigation and subsurface drainage) are present.
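
    The SC-EMMA calculation itself is a two-component mass balance: with slowflow and fastflow end-member conductances fixed, the slowflow fraction of each daily streamflow follows directly as f_slow = (SC_stream - SC_fast) / (SC_slow - SC_fast). The sketch below uses hypothetical end-member values and a few days of toy data; real applications must grapple with exactly the end-member sensitivity discussed above.

      # Two-component SC end-member mixing on toy daily data.
      import numpy as np

      SC_slow, SC_fast = 650.0, 120.0        # uS/cm, hypothetical end-members

      def slowflow(Q, SC_stream):
          f = (SC_stream - SC_fast) / (SC_slow - SC_fast)
          return np.clip(f, 0.0, 1.0) * Q    # clip: SC outside end-member range

      Q = np.array([0.8, 1.5, 6.0, 3.2])           # daily mean discharge, m^3/s
      SC = np.array([630.0, 540.0, 210.0, 400.0])  # daily mean SC, uS/cm
      qs = slowflow(Q, SC)
      print("slowflow m^3/s:", qs.round(2), " index:", (qs.sum() / Q.sum()).round(2))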

  1. Internet search and krokodil in the Russian Federation: an infoveillance study.

    PubMed

    Zheluk, Andrey; Quinn, Casey; Meylakhs, Peter

    2014-09-18

    Krokodil is an informal term for a cheap injectable illicit drug domestically prepared from codeine-containing medication (CCM). The method of krokodil preparation may produce desomorphine as well as toxic reactants that cause extensive tissue necrosis. The first confirmed report of krokodil use in Russia dates to 2004. In 2012, reports of krokodil-related injection injuries began to appear beyond Russia, in Western Europe and the United States. This exploratory study had two main objectives: (1) to determine whether Internet search patterns could detect regularities in behavioral responses to Russian CCM policy at the population level, and (2) to determine whether complementary data sources could explain the regularities we observed. First, we obtained krokodil-related search pattern data for each Russian subregion (oblast) between 2011 and 2012. Second, we analyzed several complementary data sources, including krokodil-related court cases and related search terms on both Google and Yandex, to evaluate the characteristics of terms accompanying krokodil-related search queries. In the 6 months preceding CCM sales restrictions, 21 of Russia's 83 oblasts had search rates higher than the national average (mean) of 16.67 searches per 100,000 population for terms associated with krokodil. In the 6 months following the restrictions, the mean national search rate dropped to 9.65 per 100,000, and the number of oblasts recording a higher-than-average search rate dropped from 30 to 16. We also found that krokodil-related court appearances were moderately positively correlated (Spearman correlation=.506, P≤.001) with behaviors consistent with an interest in the production and use of krokodil across Russia. Finally, Google Trends and Google and Yandex related terms suggested consistent public interest in the production and use of krokodil, as well as in CCM as an analgesic medication, during the date range covered by this study. Illicit drug use data are generally regarded as difficult to obtain through traditional survey methods. Our analysis suggests it is plausible that Yandex search behavior served as a proxy for patterns of krokodil production and use during the date range we investigated. More generally, this study demonstrates the application of novel methods recently used by policy makers both to monitor illicit drug use and to influence drug policy decision making.

  2. Test methods for environment-assisted cracking

    NASA Astrophysics Data System (ADS)

    Turnbull, A.

    1992-03-01

    The test methods for assessing environment-assisted cracking of metals in aqueous solution are described. The advantages and disadvantages are examined, and the interrelationship between results from different test methods is discussed. Differences in susceptibility to cracking occasionally observed between the various mechanical test methods often arise from variation in environmental parameters between the test conditions and from the lack of adequate specification, monitoring, and control of environmental variables. Time is also a significant factor when comparing results from short-term tests with long-exposure tests. In addition to these factors, intrinsic differences in the important mechanical variables, such as strain rate, associated with the various mechanical test methods can change the apparent sensitivity of the material to stress corrosion cracking. The increasing economic pressure for more accelerated testing is in conflict with the characteristic time dependence of corrosion processes. Unreliable results may be inevitable in some cases, but improved understanding of mechanisms and the development of mechanistically based models of environment-assisted cracking, which incorporate the key mechanical, material, and environmental variables, can provide the framework for a more realistic interpretation of short-term data.

  3. Theoretical and Experimental Aspects of Acoustic Modelling of Engine Exhaust Systems with Applications to a Vacuum Pump

    NASA Astrophysics Data System (ADS)

    Sridhara, Basavapatna Sitaramaiah

    In an internal combustion engine, the engine is the noise source and the exhaust pipe is the main transmitter of noise. Mufflers are often used to reduce the engine noise level in the exhaust pipe. To optimize a muffler design, a series of experiments could be conducted using various mufflers installed in the exhaust pipe, with the radiated sound pressure measured for each configuration. However, this is not a very efficient method. A second approach is to develop a scheme involving only a few measurements that can predict the radiated sound pressure at a specified distance from the open end of the exhaust pipe. In this work, the engine exhaust system was modelled as a lumped source-muffler-termination system. An expression for the predicted sound pressure level was derived in terms of the source and termination impedances and the muffler geometry. The pressure source and monopole radiation models were used for the source and the open end of the exhaust pipe. The four-pole parameters were used to relate the acoustic properties at two different cross sections of the muffler and the pipe. The developed formulation was verified through a series of experiments. Two loudspeakers and a reciprocating-type vacuum pump were used as sound sources during the tests. The source impedance was measured using the direct, two-load, and four-load methods. A simple expansion chamber and a side-branch resonator were used as mufflers. Sound pressure level measurements for the prediction scheme were made for several source-muffler and source-straight pipe combinations. The predicted and measured sound pressure levels were compared for all cases considered, and the correlation was good. Predicted and measured values of the insertion loss of the mufflers were also compared, and the agreement between the two was good. In addition, an error analysis of the four-load method was performed.
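
    As a concrete illustration of the four-pole (transfer matrix) approach referenced above, the sketch below computes the textbook transmission loss of a simple expansion chamber; it is a generic plane-wave result, not the specific source-impedance formulation developed in this work.

      import numpy as np

      def expansion_chamber_tl(freq_hz, m, length_m, c=343.0):
          """Transmission loss (dB) of a simple expansion chamber with
          area ratio m and length L, a classic four-pole-method result:
          TL = 10*log10(1 + 0.25*(m - 1/m)^2 * sin^2(k*L))."""
          k = 2.0 * np.pi * freq_hz / c      # plane-wave wavenumber
          term = 0.5 * (m - 1.0 / m) * np.sin(k * length_m)
          return 10.0 * np.log10(1.0 + term ** 2)

      # Hypothetical chamber: area ratio 4, length 0.3 m
      freqs = np.array([100.0, 250.0, 500.0, 1000.0])
      print(expansion_chamber_tl(freqs, m=4.0, length_m=0.3))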

  4. PM2.5 Characterization for Time Series Studies: Organic Molecular Marker Speciation Methods and Observations from Daily Measurements in Denver

    PubMed Central

    Dutton, Steven J.; Williams, Daniel E.; Garcia, Jessica K.; Vedal, Sverre; Hannigan, Michael P.

    2009-01-01

    Particulate matter less than 2.5 microns in diameter (PM2.5) has been shown to have a wide range of adverse health effects and consequently is regulated in accordance with the US-EPA’s National Ambient Air Quality Standards. PM2.5 originates from multiple primary sources and is also formed through secondary processes in the atmosphere. It is plausible that some sources form PM2.5 that is more toxic than PM2.5 from other sources. Identifying the responsible sources could provide insight into the biological mechanisms causing the observed health effects and provide a more efficient approach to regulation. This is the goal of the Denver Aerosol Sources and Health (DASH) study, a multi-year PM2.5 source apportionment and health study. The first step in apportioning the PM2.5 to different sources is to determine the chemical make-up of the PM2.5. This paper presents the methodology used during the DASH study for organic speciation of PM2.5. Specifically, methods are covered for solvent extraction of non-polar and semi-polar organic molecular markers using gas chromatography-mass spectrometry (GC-MS). Vast reductions in detection limits were obtained through the use of a programmable temperature vaporization (PTV) inlet along with other method improvements. Results are presented for the first 1.5 years of the DASH study revealing seasonal and source-related patterns in the molecular markers and their long-term correlation structure. Preliminary analysis suggests that point sources are not a significant contributor to the organic molecular markers measured at our receptor site. Several motor vehicle emission markers help identify a gasoline/diesel split in the ambient data. Findings show both similarities and differences when compared with other cities where similar measurements and assessments have been made. PMID:20161318

  5. Internet Search and Krokodil in the Russian Federation: An Infoveillance Study

    PubMed Central

    2014-01-01

    Background Krokodil is an informal term for a cheap injectable illicit drug domestically prepared from codeine-containing medication (CCM). The method of krokodil preparation may produce desomorphine as well as toxic reactants that cause extensive tissue necrosis. The first confirmed report of krokodil use in Russia took place in 2004. In 2012, reports of krokodil-related injection injuries began to appear beyond Russia in Western Europe and the United States. Objective This exploratory study had two main objectives: (1) to determine if Internet search patterns could detect regularities in behavioral responses to Russian CCM policy at the population level, and (2) to determine if complementary data sources could explain the regularities we observed. Methods First, we obtained krokodil-related search pattern data for each Russian subregion (oblast) between 2011 and 2012. Second, we analyzed several complementary data sources, including krokodil-related court cases and related search terms on both Google and Yandex, to evaluate the characteristics of terms accompanying krokodil-related search queries. Results In the 6 months preceding CCM sales restrictions, 21 of Russia's 83 oblasts had search rates higher than the national average (mean) of 16.67 searches per 100,000 population for terms associated with krokodil. In the 6 months following restrictions, mean national searches dropped to 9.65 per 100,000. Further, the number of oblasts recording a higher than average search rate dropped from 30 to 16. Second, we found krokodil-related court appearances were moderately positively correlated (Spearman correlation=.506, P≤.001) with behaviors consistent with an interest in the production and use of krokodil across Russia. Finally, Google Trends and Google and Yandex related terms suggested consistent public interest in the production and use of krokodil, as well as in CCM as an analgesic medication, during the date range covered by this study. Conclusions Illicit drug use data are generally regarded as difficult to obtain through traditional survey methods. Our analysis suggests it is plausible that Yandex search behavior served as a proxy for patterns of krokodil production and use during the date range we investigated. More generally, this study demonstrates the application of novel methods recently used by policy makers to both monitor illicit drug use and influence drug policy decision making. PMID:25236385

  6. The evolution of methods for noise prediction of high speed rotors and propellers in the time domain

    NASA Technical Reports Server (NTRS)

    Farassat, F.

    1986-01-01

    Linear wave equation models that have been used over the years at NASA Langley for describing noise emissions from high-speed rotating blades are summarized. The noise sources are assumed to lie on a moving surface, and the analysis is based on the Ffowcs Williams-Hawkings (FW-H) equation. Although the equation accounts for two surface sources and one volume source, the NASA analyses have considered only the surface terms. Several variations on the FW-H model are delineated for various types of applications, noting the computational benefits of removing the frequency dependence of the calculations. Formulations are also provided for compact and noncompact sources, and features of Long's subsonic integral equation and Farassat's high-speed integral equation are discussed. The selection of the subsonic or high-speed model depends on the Mach number of the blade surface where the source is located.
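
    For reference, the FW-H equation referred to above can be written in its standard differential form, with the two surface source terms (thickness and loading) and the single volume (quadrupole) term:

      \left(\frac{1}{c^{2}}\frac{\partial^{2}}{\partial t^{2}}-\nabla^{2}\right)p'(\mathbf{x},t)
        = \frac{\partial}{\partial t}\left[\rho_{0}v_{n}\,\delta(f)\right]
        - \frac{\partial}{\partial x_{i}}\left[\ell_{i}\,\delta(f)\right]
        + \frac{\partial^{2}}{\partial x_{i}\,\partial x_{j}}\left[T_{ij}\,H(f)\right],

    where f = 0 defines the moving blade surface, v_n is the local normal velocity of the surface, \ell_i is the force per unit area exerted on the fluid, T_{ij} is the Lighthill stress tensor, and \delta and H are the Dirac delta and Heaviside functions. Dropping the T_{ij} term recovers the surface-only analyses described here.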

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Frayce, D.; Khayat, R.E.; Derdouri, A.

    The dual reciprocity boundary element method (DRBEM) is implemented to solve three-dimensional transient heat conduction problems in the presence of arbitrary sources, typically as these problems arise in materials processing. The DRBEM has a major advantage over conventional BEM, since it avoids the computation of volume integrals. These integrals stem from transient, nonlinear, and/or source terms. Thus there is no need to discretize the inner domain, since only a number of internal points are needed for the computation. The validity of the method is assessed upon comparison with results from benchmark problems where analytical solutions exist. There is generally good agreement. Comparison against finite element results is also favorable. Calculations are carried out in order to assess the influence of the number and location of internal nodes. The influence of the ratio of the numbers of internal to boundary nodes is also examined.
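
    The core of the dual reciprocity idea can be sketched as follows (standard DRBEM formulation, not specific to this paper). The source or transient term b(\mathbf{x}) is expanded in radial basis functions f_j with known particular solutions \hat{u}_j:

      b(\mathbf{x}) \approx \sum_{j} \alpha_{j}\, f_{j}(\mathbf{x}), \qquad \nabla^{2}\hat{u}_{j} = f_{j},

    so that Green's second identity converts each volume integral into boundary integrals plus a free term,

      \int_{\Omega} f_{j}\,u^{*}\,d\Omega = c(\boldsymbol{\xi})\,\hat{u}_{j}(\boldsymbol{\xi}) + \int_{\Gamma}\hat{u}_{j}\,q^{*}\,d\Gamma - \int_{\Gamma}\hat{q}_{j}\,u^{*}\,d\Gamma,

    where u^{*} is the fundamental solution and q denotes the normal derivative. This is why only boundary elements and a set of internal collocation points are needed.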

  8. Supercontinuum Fourier transform spectrometry with balanced detection on a single photodiode

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Goncharov, Vasily; Hall, Gregory

    Here, we have developed phase-sensitive signal detection and processing algorithms for Fourier transform spectrometers fitted with supercontinuum sources for applications requiring ultimate sensitivity. Similar to the well-established approach of source noise cancellation through balanced detection of monochromatic light, our method is capable of reducing the relative intensity noise of polychromatic light by 40 dB. Unlike conventional balanced detection, which relies on differential absorption measured with a well-matched pair of photodetectors, our algorithm utilizes phase-sensitive differential detection on a single photodiode and is capable of real-time correction for instabilities in supercontinuum spectral structure over a broad range of wavelengths. The resulting method is universal in terms of applicable wavelengths and compatible with commercial spectrometers. We present a proof-of-principle experimental…

  9. Supercontinuum Fourier transform spectrometry with balanced detection on a single photodiode

    DOE PAGES

    Goncharov, Vasily; Hall, Gregory

    2016-08-25

    Here, we have developed phase-sensitive signal detection and processing algorithms for Fourier transform spectrometers fitted with supercontinuum sources for applications requiring ultimate sensitivity. Similar to the well-established approach of source noise cancellation through balanced detection of monochromatic light, our method is capable of reducing the relative intensity noise of polychromatic light by 40 dB. Unlike conventional balanced detection, which relies on differential absorption measured with a well-matched pair of photodetectors, our algorithm utilizes phase-sensitive differential detection on a single photodiode and is capable of real-time correction for instabilities in supercontinuum spectral structure over a broad range of wavelengths. The resulting method is universal in terms of applicable wavelengths and compatible with commercial spectrometers. We present a proof-of-principle experimental…

  10. Kernel temporal enhancement approach for LORETA source reconstruction using EEG data.

    PubMed

    Torres-Valencia, Cristian A; Santamaria, M Claudia Joana; Alvarez, Mauricio A

    2016-08-01

    Reconstruction of brain sources from magnetoencephalography and electroencephalography (M/EEG) data is a well-known problem in the neuroengineering field. An inverse problem must be solved, and several methods have been proposed. Low Resolution Electromagnetic Tomography (LORETA) and its proposed variations, standardized LORETA (sLORETA) and standardized weighted LORETA (swLORETA), solve the inverse problem following a non-parametric approach, that is, by setting dipoles over the whole brain domain and estimating their activity from the M/EEG data under some spatial priors. Errors in source reconstruction arise from the low spatial resolution of the LORETA framework and the influence of noise in the observed data. In this work, a kernel temporal enhancement (kTE) is proposed as a preprocessing stage of the data that, in combination with the swLORETA method, improves source reconstruction. The results are quantified in terms of three dipole localization error metrics, and the swLORETA + kTE strategy obtained the best results across different signal-to-noise ratios (SNR) in random dipole simulations from synthetic EEG data.

  11. Validation of two innovative methods to measure contaminant mass flux in groundwater

    NASA Astrophysics Data System (ADS)

    Goltz, Mark N.; Close, Murray E.; Yoon, Hyouk; Huang, Junqi; Flintoft, Mark J.; Kim, Sehjong; Enfield, Carl

    2009-04-01

    The ability to quantify the mass flux of a groundwater contaminant that is leaching from a source area is critical to enable us to: (1) evaluate the risk posed by the contamination source and prioritize cleanup, (2) evaluate the effectiveness of source remediation technologies or natural attenuation processes, and (3) quantify a source term for use in models that may be applied to predict maximum contaminant concentrations in downstream wells. Recently, a number of new methods have been developed and subsequently applied to measure contaminant mass flux in groundwater in the field. However, none of these methods has been validated at larger than the laboratory scale through a comparison of measured mass flux with a known flux introduced into flowing groundwater. Two innovative flux measurement methods, the tandem circulation well (TCW) and modified integral pumping test (MIPT) methods, have recently been proposed. The TCW method can measure mass flux integrated over a large subsurface volume without extracting water, and may be implemented using two different techniques. One, the multi-dipole technique, is relatively simple and inexpensive, requiring only measurement of heads, while the second requires conducting a tracer test. The MIPT method is an easily implemented method of obtaining volume-integrated flux measurements. In the current study, flux measurements obtained using these two methods are compared with known mass fluxes in a three-dimensional artificial aquifer. Experiments in the artificial aquifer show that the TCW multi-dipole and tracer-test techniques accurately estimated flux, within 2% and 16%, respectively, although the good results obtained using the multi-dipole technique may be fortuitous. The MIPT method was not as accurate as the TCW method, underestimating flux by as much as 70%. MIPT inaccuracies may be due to the fact that the method's assumptions (two-dimensional steady groundwater flow to fully screened wells) were not well approximated. While fluxes measured using the MIPT method were consistently underestimated, the method's simplicity and field applicability may compensate for the inaccuracies observed in this artificial aquifer test.
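
    For context, the quantity being validated reduces to integrating point estimates of concentration and specific discharge over a control plane. A minimal sketch follows (hypothetical values, not data from the artificial-aquifer tests):

      import numpy as np

      def mass_discharge(conc, darcy_flux, cell_area):
          """Integrate contaminant mass flux J = q * C over a control
          plane discretized into cells; returns mass per unit time."""
          return np.sum(conc * darcy_flux * cell_area)

      # Hypothetical control-plane grid: concentration (g/m^3),
      # specific discharge (m/day), and cell area (m^2) per point
      C = np.array([0.0, 1.2, 3.5, 0.8])
      q = np.array([0.05, 0.04, 0.06, 0.05])
      A = np.full(4, 2.0)
      print(mass_discharge(C, q, A), "g/day")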

  12. Measuring Spatial Variability of Vapor Flux to Characterize Vadose-zone VOC Sources: Flow-cell Experiments

    DOE PAGES

    Mainhagu, Jon; Morrison, C.; Truex, Michael J.; ...

    2014-08-05

    A method termed vapor-phase tomography has recently been proposed to characterize the distribution of volatile organic contaminant mass in vadose-zone source areas, and to measure associated three-dimensional distributions of local contaminant mass discharge. The method is based on measuring the spatial variability of vapor flux, and thus inherent to its effectiveness is the premise that the magnitudes and temporal variability of vapor concentrations measured at different monitoring points within the interrogated area will be a function of the geospatial positions of the points relative to the source location. A series of flow-cell experiments was conducted to evaluate this premise. A well-defined source zone was created by injection and extraction of a non-reactive gas (SF6). Spatial and temporal concentration distributions obtained from the tests were compared to simulations produced with a mathematical model describing advective and diffusive transport. Tests were conducted to characterize both the areal and vertical components of the application. Decreases in concentration over time were observed for monitoring points located on the opposite side of the source zone from the local extraction point, whereas increases were observed for monitoring points located between the local extraction point and the source zone. The results illustrate that comparison of temporal concentration profiles obtained at various monitoring points gives a general indication of the source location with respect to the extraction and monitoring points.

  13. Biomedical word sense disambiguation with ontologies and metadata: automation meets accuracy

    PubMed Central

    Alexopoulou, Dimitra; Andreopoulos, Bill; Dietze, Heiko; Doms, Andreas; Gandon, Fabien; Hakenberg, Jörg; Khelif, Khaled; Schroeder, Michael; Wächter, Thomas

    2009-01-01

    Background Ontology term labels can be ambiguous and have multiple senses. While this is no problem for human annotators, it is a challenge to automated methods that identify ontology terms in text. Classical approaches to word sense disambiguation use co-occurring words or terms. However, most treat ontologies as simple terminologies, without making use of the ontology structure or the semantic similarity between terms. Another useful source of information for disambiguation is metadata. Here, we systematically compare three approaches to word sense disambiguation that use ontologies and metadata, respectively. Results The 'Closest Sense' method assumes that the ontology defines multiple senses of the term; it computes the shortest path from co-occurring terms in the document to one of these senses. The 'Term Cooc' method defines a log-odds ratio for co-occurring terms, including co-occurrences inferred from the ontology structure. The 'MetaData' approach trains a classifier on metadata; it does not require any ontology, but requires training data, which the other methods do not. To evaluate these approaches we defined a manually curated training corpus of 2600 documents for seven ambiguous terms from the Gene Ontology and MeSH. Averaged over all conditions, all approaches achieve an 80% success rate. The 'MetaData' approach performed best, with 96%, when trained on high-quality data; its performance deteriorates as the quality of the training data decreases. The 'Term Cooc' approach performs better on the Gene Ontology (92% success) than on MeSH (73% success), as MeSH is not a strict is-a/part-of hierarchy but rather a loose is-related-to hierarchy. The 'Closest Sense' approach achieves an 80% success rate on average. Conclusion Metadata is valuable for disambiguation, but requires high-quality training data. Closest Sense requires no training, but a large, consistently modelled ontology, which are two opposing conditions. Term Cooc achieves greater than 90% success given a consistently modelled ontology. Overall, the results show that well-structured ontologies can play a very important role in improving disambiguation. Availability The three benchmark datasets created for the purpose of disambiguation are available in Additional file 1. PMID:19159460
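
    As an illustration of the 'Term Cooc' idea of scoring senses by log-odds ratios of co-occurring terms, here is a minimal sketch with hypothetical co-occurrence counts; the published method additionally infers co-occurrences from the ontology structure, which is omitted here.

      import math

      def log_odds_score(context, counts_s1, counts_s2):
          """Sum per-word log-odds ratios (add-one smoothing) to choose
          between two candidate senses of an ambiguous term."""
          vocab = set(counts_s1) | set(counts_s2) | set(context)
          n1 = sum(counts_s1.values()) + len(vocab)
          n2 = sum(counts_s2.values()) + len(vocab)
          score = 0.0
          for w in context:
              p1 = (counts_s1.get(w, 0) + 1) / n1
              p2 = (counts_s2.get(w, 0) + 1) / n2
              score += math.log(p1 / p2)
          return score  # > 0 favours sense 1, < 0 favours sense 2

      # Hypothetical counts for two senses of "nucleus"
      cell_sense = {"membrane": 40, "chromatin": 25, "cytoplasm": 30}
      brain_sense = {"neuron": 35, "thalamus": 20, "axon": 15}
      print(log_odds_score(["chromatin", "membrane"], cell_sense, brain_sense))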

  14. Terms used by nurses to describe patient problems: can SNOMED III represent nursing concepts in the patient record?

    PubMed Central

    Henry, S B; Holzemer, W L; Reilly, C A; Campbell, K E

    1994-01-01

    OBJECTIVE: To analyze the terms used by nurses in a variety of data sources and to test the feasibility of using SNOMED III to represent nursing terms. DESIGN: Prospective research design with manual matching of terms to the SNOMED III vocabulary. MEASUREMENTS: The terms used by nurses to describe patient problems during 485 episodes of care for 201 patients hospitalized for Pneumocystis carinii pneumonia were identified. Problems from four data sources (nurse interview, intershift report, nursing care plan, and nurse progress note/flowsheet) were classified based on the substantive area of the problem and on the terminology used to describe the problem. A test subset of the 25 most frequently used terms from the two written data sources (nursing care plan and nurse progress note/flowsheet) was manually matched to SNOMED III terms to test the feasibility of using that existing vocabulary to represent nursing terms. RESULTS: Nurses most frequently described patient problems as signs/symptoms in the verbal nurse interview and intershift report. In the written data sources, problems were recorded as North American Nursing Diagnosis Association (NANDA) terms and signs/symptoms with similar frequencies. Of the nursing terms in the test subset, 69% were represented using one or more SNOMED III terms. PMID:7719788

  15. A multi-source data assimilation framework for flood forecasting: Accounting for runoff routing lags

    NASA Astrophysics Data System (ADS)

    Meng, S.; Xie, X.

    2015-12-01

    In flood forecasting practice, model performance is usually degraded by various sources of uncertainty, including uncertainties from input data, model parameters, model structures, and output observations. Data assimilation is a useful methodology for reducing uncertainties in flood forecasting. For short-term flood forecasting, an accurate estimate of the initial soil moisture condition improves forecasting performance. The time delay of runoff routing is another important factor in forecasting performance. Moreover, observations of hydrological variables (including ground observations and satellite observations) are becoming readily available, so the reliability of short-term flood forecasting can be improved by assimilating multi-source data. The objective of this study is to develop a multi-source data assimilation framework for real-time flood forecasting. In this framework, the first step assimilates upper-layer soil moisture observations to update the model state and generated runoff based on the ensemble Kalman filter (EnKF) method, and the second step assimilates discharge observations to update the model state and runoff within a fixed time window based on the ensemble Kalman smoother (EnKS) method. The smoothing technique is adopted to account for the runoff routing lag. Using such an assimilation framework for soil moisture and discharge observations is expected to improve flood forecasting. To evaluate the effectiveness of this dual-step assimilation framework, we designed a dual-EnKF algorithm in which the observed soil moisture and discharge are assimilated separately without accounting for the runoff routing lag. The results show that the multi-source data assimilation framework can effectively improve flood forecasting, especially when the runoff routing has a distinct time lag. Thus, this new data assimilation framework holds great potential for operational flood forecasting by merging observations from ground measurements and remote sensing retrievals.
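
    The first assimilation step rests on the standard stochastic EnKF analysis update; a minimal sketch follows (a generic update with a hypothetical three-state system, not the authors' hydrological model):

      import numpy as np

      def enkf_update(X, y, H, R, rng):
          """Stochastic EnKF analysis step. X: (n_state, n_ens) forecast
          ensemble; y: observations; H: observation operator; R: obs-error
          covariance. Returns the analysis ensemble."""
          n_ens = X.shape[1]
          A = X - X.mean(axis=1, keepdims=True)        # ensemble anomalies
          P = A @ A.T / (n_ens - 1)                    # sample covariance
          K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R) # Kalman gain
          # Perturb observations so the analysis spread stays consistent
          Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, n_ens).T
          return X + K @ (Y - H @ X)

      rng = np.random.default_rng(0)
      X = rng.normal(size=(3, 50))       # hypothetical 3 states, 50 members
      H = np.array([[1.0, 0.0, 0.0]])    # observe the first state only
      R = np.array([[0.1]])
      y = np.array([0.5])
      print(enkf_update(X, y, H, R, rng).mean(axis=1))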

  16. Characterisation of exposure to non-ionising electromagnetic fields in the Spanish INMA birth cohort: study protocol.

    PubMed

    Gallastegi, Mara; Guxens, Mònica; Jiménez-Zabala, Ana; Calvente, Irene; Fernández, Marta; Birks, Laura; Struchen, Benjamin; Vrijheid, Martine; Estarlich, Marisa; Fernández, Mariana F; Torrent, Maties; Ballester, Ferrán; Aurrekoetxea, Juan J; Ibarluzea, Jesús; Guerra, David; González, Julián; Röösli, Martin; Santa-Marina, Loreto

    2016-02-18

    Analysis of the association between exposure to electromagnetic fields of non-ionising radiation (EMF-NIR) and health in children and adolescents is hindered by the limited availability of data, mainly due to the difficulties of exposure assessment. This study protocol describes the methodologies used for characterising exposure of children to EMF-NIR in the INMA (INfancia y Medio Ambiente - Environment and Childhood) Project, a prospective cohort study. Indirect methods (proximity to emission sources, questionnaires on source use, and geospatial propagation models) and direct methods (spot and fixed longer-term measurements and personal measurements) were used to assess exposure levels of study participants aged 7 to 18 years. The methodology used varies depending on the frequency of the EMF-NIR and the environment (homes, schools, and parks). Questionnaires assessed the use of sources contributing to both Extremely Low Frequency (ELF) and Radiofrequency (RF) exposure levels. Geospatial propagation models (NISMap) are implemented and validated for environmental outdoor RF sources using spot measurements. Spot and fixed longer-term ELF and RF measurements were made in the environments where children spend most of their time. Moreover, personal measurements were taken in order to assess individual exposure to RF. The exposure data are used to explore relationships with proximity to and/or use of EMF-NIR sources. Characterisation of EMF-NIR exposure by this combination of methods is intended to overcome problems encountered in other research. The assessment of exposure of INMA cohort children and adolescents living in different regions of Spain to the full frequency range of EMF-NIR extends the characterisation of environmental exposures in this cohort. Together with other data obtained in the project, on socioeconomic and family characteristics and the development of the children and adolescents, this will enable evaluation of the complex interaction between health outcomes in children and adolescents and the various environmental factors that surround them.

  17. Numerical solutions for patterns statistics on Markov chains.

    PubMed

    Nuel, Gregory

    2006-01-01

    We propose here a review of the methods available to compute pattern statistics on text generated by a Markov source. Theoretical as well as numerical aspects are detailed for a wide range of techniques (exact, Gaussian, large deviations, binomial, and compound Poisson). The SPatt package (Statistics for Pattern, free software available at http://stat.genopole.cnrs.fr/spatt), implementing all these methods, is then used to compare the approaches in terms of computational time and reliability in the most complete pattern statistics benchmark available at the present time.
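
    As a small worked example of a pattern statistic on a Markov source, the sketch below computes the expected number of occurrences of a word under a stationary first-order chain by linearity of expectation; the exact distributional methods reviewed in the paper go well beyond this.

      import numpy as np

      def expected_pattern_count(word, P, states, n):
          """Expected occurrences of `word` in a length-n sequence from a
          stationary first-order Markov chain with transition matrix P."""
          idx = {s: i for i, s in enumerate(states)}
          # Stationary distribution: left eigenvector of P for eigenvalue 1
          vals, vecs = np.linalg.eig(P.T)
          pi = np.real(vecs[:, np.argmin(np.abs(vals - 1.0))])
          pi /= pi.sum()
          # P(word starts at a fixed position), then sum over positions
          p = pi[idx[word[0]]]
          for a, b in zip(word, word[1:]):
              p *= P[idx[a], idx[b]]
          return (n - len(word) + 1) * p

      P = np.array([[0.6, 0.4],   # hypothetical 2-letter alphabet "ab"
                    [0.3, 0.7]])
      print(expected_pattern_count("aba", P, "ab", n=1000))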

  18. LWIR pupil imaging and longer-term calibration stability

    NASA Astrophysics Data System (ADS)

    LeVan, Paul D.; Sakoglu, Ünal

    2016-09-01

    A previous paper described LWIR pupil imaging and developed an improved understanding of the behavior of this type of sensor, in which the high-sensitivity focal plane array (FPA), when operated at higher flux levels, shows a reversal in signal integration polarity. We have since considered a candidate methodology for efficient, long-term calibration stability that exploits the following two properties of pupil imaging: (1) a fixed pupil position on the FPA, and (2) signal levels from the scene imposed on significant but fixed LWIR background levels. These two properties serve to keep each pixel operating over a limited dynamic range that corresponds to its location in the pupil and to the signal levels generated at this location by the lower and upper calibration flux levels. Exploiting this property, the signal polarity reversal between low- and high-flux pixels, which occurs for a circular region of pixels near the upper edges of the pupil illumination profile, can be rectified to unipolar integration with a two-level non-uniformity correction (NUC). Images corrected in real time with standard NUC techniques are still subject to longer-term drifts in pixel offsets between recalibrations. Long-term calibration stability might then be achieved using either a scene-based non-uniformity correction approach or periodic repointing for off-source background estimation and subtraction. Either approach requires dithering of the field of view, by sub-pixel amounts for the first method, or by large off-source motions outside the 0.38-milliradian FOV for the latter. We report on the results of investigations along both these lines.
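
    For reference, a two-level (two-point) NUC of the kind mentioned amounts to a per-pixel linear map fixed by the lower and upper calibration flux levels; a minimal sketch with hypothetical flat-field data:

      import numpy as np

      def two_point_nuc(raw, r_lo, r_hi, f_lo, f_hi):
          """Two-point non-uniformity correction: map each pixel's response
          linearly so the two calibration flux levels agree across the array.
          r_lo/r_hi: per-pixel responses at the low/high calibration fluxes;
          f_lo/f_hi: scalar target values for those fluxes."""
          gain = (f_hi - f_lo) / (r_hi - r_lo)   # per-pixel gain
          return gain * (raw - r_lo) + f_lo

      rng = np.random.default_rng(1)
      r_lo = 100 + rng.normal(0, 5, (4, 4))      # hypothetical flat fields
      r_hi = 900 + rng.normal(0, 20, (4, 4))
      raw = 0.5 * (r_lo + r_hi)                  # a mid-level scene
      print(two_point_nuc(raw, r_lo, r_hi, f_lo=100.0, f_hi=900.0))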

  19. Electricity generation and health.

    PubMed

    Markandya, Anil; Wilkinson, Paul

    2007-09-15

    The provision of electricity has been a great benefit to society, particularly in health terms, but it also carries health costs. Comparison of different forms of commercial power generation by use of the fuel cycle methods developed in European studies shows the health burdens to be greatest for power stations that most pollute outdoor air (those based on lignite, coal, and oil). The health burdens are appreciably smaller for generation from natural gas, and lower still for nuclear power. This same ranking also applies in terms of greenhouse-gas emissions and thus, potentially, to long-term health, social, and economic effects arising from climate change. Nuclear power remains controversial, however, because of public concern about storage of nuclear waste, the potential for catastrophic accident or terrorist attack, and the diversion of fissionable material for weapons production. Health risks are smaller for nuclear fusion, but commercial exploitation will not be achieved in time to help the crucial near-term reduction in greenhouse-gas emissions. The negative effects on health of electricity generation from renewable sources have not been assessed as fully as those from conventional sources, but for solar, wind, and wave power, such effects seem to be small; those of biofuels depend on the type of fuel and the mode of combustion. Carbon dioxide (CO2) capture and storage is increasingly being considered for reduction of CO2 emissions from fossil fuel plants, but the health effects associated with this technology are largely unquantified and probably mixed: efficiency losses mean greater consumption of the primary fuel and accompanying increases in some waste products. This paper reviews the state of knowledge regarding the health effects of different methods of generating electricity.

  20. Immobilized aptamer paper spray ionization source for ion mobility spectrometry.

    PubMed

    Zargar, Tahereh; Khayamian, Taghi; Jafari, Mohammad T

    2017-01-05

    A selective thin-film microextraction based on an aptamer immobilized on cellulose paper was used as a paper spray ionization source for ion mobility spectrometry (PSI-IMS) for the first time. In this method, the paper is not only used as an ionization source but also serves for the selective extraction of the analyte, based on the immobilized aptamer. This combination integrates both sample preparation and analyte ionization in a Whatman paper. To that end, a sample introduction system with a novel design was constructed for the paper spray ionization source. In this system, a continuous solvent flow acts simultaneously as the elution and spray solvent. The analyte is adsorbed on a triangular paper with immobilized aptamer and is then desorbed and ionized by the elution solvent and the high voltage applied to the paper, respectively. The effects of different experimental parameters, such as the applied voltage, the angle of the paper tip, the distance between the paper tip and the counter electrode, the elution solvent type, and the solvent flow rate, were optimized. The proposed method was exhaustively validated in terms of sensitivity and reproducibility by analyzing standard solutions of codeine and acetamiprid. The analytical results obtained are promising enough to support the use of immobilized-aptamer paper spray as both the extraction and ionization technique in IMS for direct analysis of biomedicines.

  1. Methods Used to Assess the Susceptibility to Contamination of Transient, Non-Community Public Ground-Water Supplies in Indiana

    USGS Publications Warehouse

    Arihood, Leslie D.; Cohen, David A.

    2006-01-01

    The Safe Drinking Water Act of 1974, as amended in 1996, gave each State the responsibility of developing a Source-Water Assessment Plan (SWAP) that is designed to protect public-water supplies from contamination. Each SWAP must include three elements: (1) a delineation of the source-water protection area, (2) an inventory of potential sources of contaminants within the area, and (3) a determination of the susceptibility of the public-water supply to contamination from the inventoried sources. The Indiana Department of Environmental Management (IDEM) was responsible for preparing a SWAP for all public-water supplies in Indiana, including about 2,400 small public ground-water supplies that are designated transient, non-community (TNC) supplies. In cooperation with IDEM, the U.S. Geological Survey compiled information on conditions near the TNC supplies and helped IDEM complete source-water assessments for each TNC supply. The source-water protection area (called the assessment area) for each TNC ground-water supply was defined by IDEM as a circular area enclosed by a 300-foot radius centered at the TNC supply well. Contaminants of concern (COCs) were defined by IDEM as any of the 90 contaminants for which the U.S. Environmental Protection Agency has established primary drinking-water standards. Two of these, nitrate as nitrogen and total coliform bacteria, are Indiana State-regulated contaminants for TNC water supplies. IDEM representatives identified potential point and nonpoint sources of COCs within the assessment area, and computer database retrievals were used to identify potential point sources of COCs outside the assessment area. Two types of methods, subjective and subjective hybrid, were used in the SWAP to determine susceptibility to contamination. Subjective methods involve decisions based upon professional judgment, prior experience, and (or) the application of a fundamental understanding of processes without the collection and analysis of data for a specific condition. Subjective hybrid methods combine subjective methods with quantitative hydrologic analyses. The subjective methods included an inventory of potential sources and associated contaminants, and a qualitative description of the inherent susceptibility of the area around the TNC supply. The description relies on a classification of the hydrogeologic and geomorphic characteristics of the general area around the TNC supply in terms of its surficial geology, regional aquifer system, the occurrence of fine- and coarse-grained geologic materials above the screen of the TNC well, and the potential for infiltration of contaminants. The subjective hybrid method combined the results of a logistic regression analysis with a subjective analysis of susceptibility and a subjective set of definitions that classify the thickness of fine-grained geologic materials above the screen of a TNC well in terms of impedance to vertical flow. The logistic regression determined the probability of elevated concentrations of nitrate as nitrogen (greater than or equal to 3 milligrams per liter) in ground water associated with specific thicknesses of fine-grained geologic materials above the screen of a TNC well. In this report, fine-grained geologic materials are referred to as a geologic barrier that generally impedes vertical flow through an aquifer.
A geologic barrier was defined to be thin for fine-grained materials between 0 and 45 feet thick, moderate for materials between 45 and 75 feet thick, and thick if the fine-grained materials were greater than 75 feet thick. A flow chart was used to determine the susceptibility rating for each TNC supply. The flow chart indicated a susceptibility rating using (1) concentrations of nitrate as nitrogen and total coliform bacteria reported from routine compliance monitoring of the TNC supply, and (2) the presence or absence of potential sources of regulated contaminants (nitrate as nitrogen and coliform bacteria)…
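
    The logistic-regression component has the standard form below; the coefficients in this sketch are hypothetical placeholders, not values from the report.

      import math

      def p_elevated_nitrate(thickness_ft, b0=0.5, b1=-0.05):
          """Logistic model: probability that nitrate-N >= 3 mg/L given the
          thickness of fine-grained materials above the well screen.
          Coefficients b0, b1 are hypothetical, not the report's values."""
          return 1.0 / (1.0 + math.exp(-(b0 + b1 * thickness_ft)))

      for t in (0, 45, 75):   # the thin/moderate/thick breakpoints
          print(t, round(p_elevated_nitrate(t), 3))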

  2. Influence of regeneration method and tissue source on the frequency of somatic variation in Populus to infection by Septoria musiva

    Treesearch

    Michael E. Ostry; Ronald L. Hackett; Charles H. Michler; R. Serres; B. McCown

    1994-01-01

    Septoria leaf spot and canker are serious diseases of many hybrid poplar clones in plantations established for biomass production. Developing resistant clones through breeding is the best long-term strategy to minimize tree damage caused by this disease. Tissue culture and somaclonal selection techniques may reduce the time needed to develop disease resistance in...

  3. Guidelines for Analysis of Indigeneous and Private Health Care Planning in Developing Countries. International Health Planning Methods Series, Volume 6.

    ERIC Educational Resources Information Center

    Scrimshaw, Susan

    This guidebook is both a practical tool and a source book to aid health planners in assessing the importance, extent, and impact of indigenous and private-sector medical systems in developing nations. Guidelines are provided for assessment in terms of: use patterns; the meaning and importance to users of various available health services; and ways of…

  4. Altering the Prosodic Features of Motherese to Promote Joint Attention in Language-Delayed Children. EBP Briefs. Volume 12, Issue 1

    ERIC Educational Resources Information Center

    Fredman, Traci

    2017-01-01

    Clinical Question: For children ages birth to 3 years diagnosed with a language delay or disorder, to what extent does the prosodic component of motherese aid in establishing joint attention (JA)? Method: Systematic Review. Study Sources: ASHA, Web of Science, CINAHL, MEDLINE, EBSCO, PubMed, PsycINFO, and ERIC. Search Terms: motherese, infant…

  5. Effects of Voice Coding and Speech Rate on a Synthetic Speech Display in a Telephone Information System

    DTIC Science & Technology

    1988-05-01

    [Abstract garbled in source; surviving fragments reference Broadbent's (1958) limited-capacity channel model (Figure 2), an unlimited variety of human voices for digital recording sources, and analysis-synthesis methods that electronically model the human voice.]

  6. The Effects of Technology-Assisted Instruction to Improve Phonological-Awareness Skills in Children with Reading Difficulties: A Systematic Review. EBP Briefs. Volume 8, Issue 1

    ERIC Educational Resources Information Center

    Lee, Sue Ann S.; Sancibrian, Sherry; Ahlfinger, Nicole

    2013-01-01

    Clinical Question: For preschool and school-age children with or at risk for reading difficulties, does technology-assisted instruction lead to better phonological-awareness (PA) skills than instruction without technology? Method: Systematic Review Sources: ERIC, PsychInfo, CINAHL, and ASHA journal search Search Terms: phonological awareness,…

  7. Teaching Reading to Youth with Fragile X Syndrome: Should Phonemic Awareness and Phonics Instruction Be Used? EBP Briefs. Volume 9, Issue 6

    ERIC Educational Resources Information Center

    Brazendale, Allison; Adlof, Suzanne; Klusek, Jessica; Roberts, Jane

    2015-01-01

    Clinical Question: Would a child with fragile X syndrome benefit more from phonemic awareness and phonics instruction or whole-word training to increase reading skills? Method: Systematic review. Study Sources: PsycINFO. Search Terms: fragile X OR Down syndrome OR cognitive impairment OR cognitive deficit OR cognitive disability OR intellectual…

  8. Measuring the scale dependence of intrinsic alignments using multiple shear estimates

    NASA Astrophysics Data System (ADS)

    Leonard, C. Danielle; Mandelbaum, Rachel

    2018-06-01

    We present a new method for measuring the scale dependence of the intrinsic alignment (IA) contamination to the galaxy-galaxy lensing signal, which takes advantage of multiple shear estimation methods applied to the same source galaxy sample. By exploiting the resulting correlation of both shape noise and cosmic variance, our method can provide an increase in the signal-to-noise of the measured IA signal as compared to methods which rely on the difference of the lensing signal from multiple photometric redshift bins. For a galaxy-galaxy lensing measurement which uses LSST sources and DESI lenses, the signal-to-noise on the IA signal from our method is predicted to improve by a factor of ˜2 relative to the method of Blazek et al. (2012), for pairs of shear estimates which yield substantially different measured IA amplitudes and highly correlated shape noise terms. We show that statistical error necessarily dominates the measurement of intrinsic alignments using our method. We also consider a physically motivated extension of the Blazek et al. (2012) method which assumes that all nearby galaxy pairs, rather than only excess pairs, are subject to IA. In this case, the signal-to-noise of the method of Blazek et al. (2012) is improved.

  9. Detection of spatial fluctuations of non-point source fecal pollution in coral reef surrounding waters in southwestern Puerto Rico using PCR-based assays.

    PubMed

    Bonkosky, M; Hernández-Delgado, E A; Sandoz, B; Robledo, I E; Norat-Ramírez, J; Mattei, H

    2009-01-01

    Human fecal contamination of coral reefs is a major cause of concern. Conventional methods used to monitor microbial water quality cannot discriminate between different fecal pollution sources. Fecal coliforms, enterococci, and human-specific Bacteroides (HF183, HF134), general Bacteroides-Prevotella (GB32), and Clostridium coccoides group (CP) 16S rDNA PCR assays were used to test for the presence of non-point source fecal contamination across the southwestern Puerto Rico shelf. Inshore waters were highly turbid, consistently receiving fecal pollution from variable sources, and showing the highest frequency of positive molecular marker signals. Signals were also detected in offshore waters in compliance with existing microbiological quality regulations. Phylogenetic analysis showed that most isolates were of human fecal origin. The geographic extent of non-point source fecal pollution was large and impacted extensive coral reef systems. This could have deleterious long-term impacts on public health, local fisheries, and tourism potential if not adequately addressed.

  10. Multiwavelength pyrometer for gray and non-gray surfaces in the presence of interfering radiation

    NASA Technical Reports Server (NTRS)

    Ng, Daniel L. P. (Inventor)

    1994-01-01

    A method and apparatus for detecting the temperature of gray and non-gray bodies in the presence of interfering radiation are presented. A gray body has a constant emissivity less than 1, and a non-gray body has an emissivity that varies with wavelength. The emissivity and reflectivity of the surface are determined over a range of wavelengths. Spectra are also measured for the extraneous interference radiation source and for the surface of the object to be measured in the presence of that source. An auxiliary radiation source is used to determine the reflectivity of the surface and also the emissivity. The measured spectrum of the surface in the presence of the extraneous interference radiation source is set equal to the emissivity of the surface multiplied by a Planck function containing a temperature term T, plus the surface reflectivity multiplied by the spectrum of the extraneous interference radiation source. The equation is then solved for T to determine the temperature of the surface.
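
    A minimal sketch of the stated procedure, solving measured = emissivity x Planck(T) + reflectivity x interference for T by least squares (hypothetical wavelengths and spectra; the patented apparatus is not reproduced here):

      import numpy as np
      from scipy.optimize import minimize_scalar

      H, C, KB = 6.626e-34, 2.998e8, 1.381e-23

      def planck(lam, T):
          """Blackbody spectral radiance at wavelength lam (m), temp T (K)."""
          return (2 * H * C**2 / lam**5) / (np.exp(H * C / (lam * KB * T)) - 1.0)

      def fit_temperature(lam, measured, eps, refl, interference):
          """Least-squares solve of measured = eps*Planck(T) + refl*interference."""
          def sse(T):
              model = eps * planck(lam, T) + refl * interference
              return np.sum((measured - model) ** 2)
          return minimize_scalar(sse, bounds=(300.0, 3000.0), method="bounded").x

      # Hypothetical 5-wavelength example with ground truth T = 1200 K
      lam = np.linspace(2e-6, 12e-6, 5)
      eps = np.full(5, 0.8)
      refl = 1.0 - eps
      interference = 0.3 * planck(lam, 900.0)    # stray-source spectrum
      measured = eps * planck(lam, 1200.0) + refl * interference
      print(fit_temperature(lam, measured, eps, refl, interference))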

  11. A UMLS-based spell checker for natural language processing in vaccine safety

    PubMed Central

    Tolentino, Herman D; Matters, Michael D; Walop, Wikke; Law, Barbara; Tong, Wesley; Liu, Fang; Fontelo, Paul; Kohl, Katrin; Payne, Daniel C

    2007-01-01

    Background The Institute of Medicine has identified patient safety as a key goal for health care in the United States. Detecting vaccine adverse events is an important public health activity that contributes to patient safety. Reports about adverse events following immunization (AEFI) from surveillance systems contain free-text components that can be analyzed using natural language processing. To extract Unified Medical Language System (UMLS) concepts from free text and classify AEFI reports based on concepts they contain, we first needed to clean the text by expanding abbreviations and shortcuts and correcting spelling errors. Our objective in this paper was to create a UMLS-based spelling error correction tool as a first step in the natural language processing (NLP) pipeline for AEFI reports. Methods We developed spell checking algorithms using open source tools. We used de-identified AEFI surveillance reports to create free-text data sets for analysis. After expansion of abbreviated clinical terms and shortcuts, we performed spelling correction in four steps: (1) error detection, (2) word list generation, (3) word list disambiguation and (4) error correction. We then measured the performance of the resulting spell checker by comparing it to manual correction. Results We used 12,056 words to train the spell checker and tested its performance on 8,131 words. During testing, sensitivity, specificity, and positive predictive value (PPV) for the spell checker were 74% (95% CI: 74–75), 100% (95% CI: 100–100), and 47% (95% CI: 46%–48%), respectively. Conclusion We created a prototype spell checker that can be used to process AEFI reports. We used the UMLS Specialist Lexicon as the primary source of dictionary terms and the WordNet lexicon as a secondary source. We used the UMLS as a domain-specific source of dictionary terms to compare potentially misspelled words in the corpus. The prototype sensitivity was comparable to currently available tools, but the specificity was much superior. The slow processing speed may be improved by trimming it down to the most useful component algorithms. Other investigators may find the methods we developed useful for cleaning text using lexicons specific to their area of interest. PMID:17295907
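
    To make the four-step pipeline concrete, here is a toy sketch using Python's standard difflib for candidate generation and ranking; the authors' system uses the UMLS Specialist Lexicon and WordNet rather than the small stand-in lexicon assumed here.

      import difflib

      # Toy stand-in for a domain lexicon of dictionary terms
      LEXICON = {"injection", "immunization", "erythema", "fever", "site"}

      def correct_token(token, lexicon=LEXICON, cutoff=0.8):
          """(1) detect: flag tokens absent from the lexicon; (2) generate
          candidates by string similarity; (3) disambiguate by taking the
          best-ranked candidate; (4) correct, or keep the token if none."""
          if token in lexicon:
              return token                  # not an error
          candidates = difflib.get_close_matches(token, lexicon, n=3, cutoff=cutoff)
          return candidates[0] if candidates else token

      print([correct_token(t) for t in ["fevre", "imunization", "site"]])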

  12. Equivalent charge source model based iterative maximum neighbor weight for sparse EEG source localization.

    PubMed

    Xu, Peng; Tian, Yin; Lei, Xu; Hu, Xiao; Yao, Dezhong

    2008-12-01

    How to localize neural electric activity within the brain effectively and precisely from scalp electroencephalogram (EEG) recordings is a critical issue in current clinical neurology and cognitive neuroscience. In this paper, based on the charge source model and an iterative re-weighting strategy, we propose a new maximum-neighbor-weight-based iterative sparse source imaging method, termed CMOSS (Charge source model based Maximum neighbOr weight Sparse Solution). Unlike the weight used in the focal underdetermined system solver (FOCUSS), where the weight for each point in the discrete solution space is updated independently across iterations, the newly designed weight for each point in each iteration is determined by the previous iteration's source solution at both the point and its neighbors. Using such a weight, the next iteration has a better chance of rectifying the local source-location bias present in the previous solution. Simulation studies comparing CMOSS with FOCUSS and LORETA for various source configurations were conducted on a realistic three-shell head model, and the results confirmed the validity of CMOSS for sparse EEG source localization. Finally, CMOSS was applied to localize sources elicited in a visual stimulus experiment, and the result was consistent with the source areas involved in visual processing reported in previous studies.
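
    A simplified sketch of the core iteration: FOCUSS-style reweighting where each point's weight is the largest previous-iterate amplitude over its neighborhood (the CMOSS idea in miniature; details of the published algorithm differ):

      import numpy as np

      def neighbor_weight_focuss(A, y, neighbors, n_iter=20, lam=1e-3):
          """Sparse source estimate by iterative reweighting. A: lead field
          (n_sensors x n_sources); y: measurements; neighbors: per-source
          index arrays for each point's neighborhood (including itself).
          Each weight is the max |s| over the neighborhood from the previous
          iterate, so a local location bias can be rectified."""
          n = A.shape[1]
          s = np.ones(n)
          for _ in range(n_iter):
              w = np.array([np.max(np.abs(s[nb])) for nb in neighbors]) + 1e-12
              W = np.diag(w)
              G = A @ W @ A.T + lam * np.eye(A.shape[0])
              s = W @ A.T @ np.linalg.solve(G, y)   # weighted min-norm step
          return s

      rng = np.random.default_rng(2)
      A = rng.normal(size=(8, 30))                  # hypothetical lead field
      s_true = np.zeros(30); s_true[12] = 1.0       # one active source
      y = A @ s_true
      neighbors = [np.arange(max(0, i - 1), min(30, i + 2)) for i in range(30)]
      print(np.argmax(np.abs(neighbor_weight_focuss(A, y, neighbors))))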

  13. System for gathering and summarizing internet information

    DOEpatents

    Potok, Thomas E.; Elmore, Mark Thomas; Reed, Joel Wesley; Treadwell, Jim N.; Samatova, Nagiza Faridovna

    2006-07-04

    A computer method of gathering and summarizing large amounts of information comprises collecting information from a plurality of information sources (14, 51) according to respective maps (52) of the information sources (14), converting the collected information from a storage format to XML-language documents (26, 53) and storing the XML-language documents in a storage medium, searching for documents (55) according to a search query (13) having at least one term and identifying the documents (26) found in the search, and displaying the documents as nodes (33) of a tree structure (32) having links (34) and nodes (33) so as to indicate similarity of the documents to each other.

  14. SNAP 19 Pioneer F and G. Final Report

    DOE R&D Accomplishments Database

    1973-06-01

    The generator developed for the Pioneer mission evolved from the SNAP 19 RTGs launched aboard the NIMBUS III spacecraft. In order to satisfy the power requirements and environment of an Earth-escape trajectory, significant modifications were made to the thermoelectric converter, heat source, and structural configuration. Specifically, a TAGS 2N thermoelectric couple was designed to provide higher efficiency and improved long-term power performance, and the electrical circuitry was modified to yield a very low magnetic field from current flow in the RTG. A new heat source was employed to satisfy operational requirements, and its integration with the generator required alteration of the method of supporting the fuel capsule.

  15. Path spectra derived from inversion of source and site spectra for earthquakes in Southern California

    NASA Astrophysics Data System (ADS)

    Klimasewski, A.; Sahakian, V. J.; Baltay, A.; Boatwright, J.; Fletcher, J. B.; Baker, L. M.

    2017-12-01

    A large source of epistemic uncertainty in Ground Motion Prediction Equations (GMPEs) derives from the path term, currently represented as simple geometric spreading and intrinsic attenuation terms. Including additional physical relationships between path properties and predicted ground motions would produce more accurate and precise, region-specific GMPEs by reclassifying some of the random, aleatory uncertainty as epistemic. This study focuses on regions of Southern California, using data from the Anza network and the Southern California Seismic Network to create a catalog of events of magnitude 2.5 and larger from 1998 to 2016. The catalog encompasses regions of varying geology and therefore varying path and site attenuation. Within this catalog, we investigate several collections of event-to-station pairs, each of which shares similar origin locations and stations so that all events have similar paths. Compared with a simple regional GMPE, these paths consistently have high or low residuals. By working with events that have the same path, we can isolate source and site effects and focus on the remaining residual as path effects. We decompose the recordings into source and site spectra for each unique event and site in our greater Southern California regional database using the inversion method of Andrews (1986). This model represents each natural-log record spectrum as the sum of its natural-log event and site spectra, while constraining each record to a reference site or a Brune source spectrum. We estimate a regional, path-specific anelastic attenuation (Q) and site attenuation (t*) from the inversion site spectra, and corner frequency from the inversion event spectra. We then compute the residuals between the observed record data and the inversion model prediction (event times site spectra). This residual is representative of path effects, likely anelastic attenuation along the path that varies from the regional median attenuation. We examine the residuals for the different sets independently to see how path terms differ between event-to-station collections. The path-specific information gained in this way can inform the development of path terms for regional GMPEs through an understanding of these seismological phenomena.
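
    A minimal version of the Andrews-style decomposition at a single frequency is an overdetermined linear system, log record = event term + site term, made unique by a reference-site constraint; the sketch below (hypothetical amplitudes) shows the least-squares solve.

      import numpy as np

      def invert_event_site(records, n_events, n_sites, ref_site=0):
          """records: (event_idx, site_idx, log_amplitude) triples at one
          frequency. Solves log|record| = event + site with site[ref]=0,
          the constraint that removes the event/site trade-off."""
          rows, b = [], []
          for ev, st, amp in records:
              r = np.zeros(n_events + n_sites)
              r[ev] = 1.0
              r[n_events + st] = 1.0
              rows.append(r); b.append(amp)
          # Reference-site constraint as a heavily weighted equation
          r = np.zeros(n_events + n_sites); r[n_events + ref_site] = 1.0
          rows.append(1e3 * r); b.append(0.0)
          m, *_ = np.linalg.lstsq(np.array(rows), np.array(b), rcond=None)
          return m[:n_events], m[n_events:]

      # Hypothetical: 2 events recorded at 3 sites
      recs = [(0, 0, 1.2), (0, 1, 1.5), (0, 2, 1.1),
              (1, 0, 0.7), (1, 1, 1.0), (1, 2, 0.6)]
      ev, st = invert_event_site(recs, 2, 3)
      print(ev, st)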

  16. An adaptive regularization parameter choice strategy for multispectral bioluminescence tomography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Feng Jinchao; Qin Chenghu; Jia Kebin

    2011-11-15

    Purpose: Bioluminescence tomography (BLT) provides an effective tool for monitoring physiological and pathological activities in vivo. However, the measured data in bioluminescence imaging are corrupted by noise, so regularization methods are commonly used to find a regularized solution. The quality of the reconstructed bioluminescent source obtained by regularization methods depends crucially on the choice of the regularization parameters, and to date their selection remains challenging. To address these problems, the authors propose a BLT reconstruction algorithm with an adaptive parameter choice rule. Methods: The proposed reconstruction algorithm uses a diffusion equation to model the bioluminescent photon transport, solved with a finite element method. Computed tomography (CT) images provide anatomical information regarding the geometry of the small animal and its internal organs. To reduce the ill-posedness of BLT, spectral information and the optimal permissible source region are employed. The relationship between the unknown source distribution and the multiview, multispectral boundary measurements is then established based on the finite element method and the optimal permissible source region. Since the measured data are noisy, the BLT reconstruction is formulated as an l2 data-fidelity term plus a general regularization term. For choosing the regularization parameters, an efficient model function approach is proposed that does not require knowledge of the noise level; it requires only computation of the residual and regularized solution norms. With this knowledge, the model function is constructed to approximate the objective function, and the regularization parameter is updated iteratively. Results: First, a micro-CT based mouse phantom was used for simulation verification, and simulation experiments were used to illustrate why multispectral rather than monochromatic data were used. The study conducted with an adaptive regularization parameter demonstrated the ability to localize the bioluminescent source accurately: with the adaptively estimated parameter, the reconstructed center position of the source was (20.37, 31.05, 12.95) mm, at a distance of 0.63 mm from the real source. The results of the dual-source experiments further showed that the algorithm could localize the bioluminescent sources accurately. The authors then present experimental evidence that the proposed algorithm is computationally more efficient than the heuristic method; its effectiveness was also confirmed by comparison with the L-curve method. Furthermore, various initial guesses for the regularization parameter were used to illustrate the convergence of the algorithm. Finally, an in vivo mouse experiment further illustrates the effectiveness of the proposed algorithm. Conclusions: Using numerical, physical phantom, and in vivo examples, we demonstrated that bioluminescent sources can be reconstructed accurately with automatically chosen regularization parameters. The proposed algorithm exhibited performance superior to both the heuristic regularization-parameter choice method and the L-curve method in terms of computational speed and localization error.

  17. Tracking and Quantifying Developmental Processes in C. elegans Using Open-source Tools.

    PubMed

    Dutta, Priyanka; Lehmann, Christina; Odedra, Devang; Singh, Deepika; Pohl, Christian

    2015-12-16

    Quantitatively capturing developmental processes is crucial to derive mechanistic models and key to identify and describe mutant phenotypes. Here protocols are presented for preparing embryos and adult C. elegans animals for short- and long-term time-lapse microscopy and methods for tracking and quantification of developmental processes. The methods presented are all based on C. elegans strains available from the Caenorhabditis Genetics Center and on open-source software that can be easily implemented in any laboratory independently of the microscopy system used. A reconstruction of a 3D cell-shape model using the modelling software IMOD, manual tracking of fluorescently-labeled subcellular structures using the multi-purpose image analysis program Endrov, and an analysis of cortical contractile flow using PIVlab (Time-Resolved Digital Particle Image Velocimetry Tool for MATLAB) are shown. It is discussed how these methods can also be deployed to quantitatively capture other developmental processes in different models, e.g., cell tracking and lineage tracing, tracking of vesicle flow.

  18. A new hue capturing technique for the quantitative interpretation of liquid crystal images used in convective heat transfer studies

    NASA Technical Reports Server (NTRS)

    Camci, C.; Kim, K.; Hippensteele, S. A.

    1992-01-01

    A new image processing based color capturing technique for the quantitative interpretation of liquid crystal images used in convective heat transfer studies is presented. This method is highly applicable to the surfaces exposed to convective heating in gas turbine engines. It is shown that, in the single-crystal mode, many of the colors appearing on the heat transfer surface correlate strongly with the local temperature. A very accurate quantitative approach using an experimentally determined linear hue vs temperature relation is found to be possible. The new hue-capturing process is discussed in terms of the strength of the light source illuminating the heat transfer surface, the effect of the orientation of the illuminating source with respect to the surface, crystal layer uniformity, and the repeatability of the process. The present method is more advantageous than the multiple filter method because of its ability to generate many isotherms simultaneously from a single-crystal image at a high resolution in a very time-efficient manner.
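
    As an illustration of the hue-based calibration idea, the sketch below converts RGB pixels to hue and applies a linear hue-to-temperature fit. The calibration pairs are invented placeholders; a real calibration would use measured crystal colors at known surface temperatures.

    ```python
    # Hedged sketch: linear hue-vs-temperature calibration for liquid crystal
    # thermography. Calibration values are made-up placeholders, not data.
    import colorsys
    import numpy as np

    def rgb_to_hue(r, g, b):
        """Hue in [0, 1) from 8-bit RGB values."""
        return colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)[0]

    # Hypothetical calibration: observed RGB at known surface temperatures.
    calib_rgb = [(200, 40, 30), (180, 150, 40), (60, 190, 80), (40, 90, 200)]
    calib_temp = np.array([30.0, 31.5, 33.0, 34.5])     # degrees C, illustrative

    hues = np.array([rgb_to_hue(*rgb) for rgb in calib_rgb])
    slope, intercept = np.polyfit(hues, calib_temp, deg=1)  # linear hue -> T fit

    def temperature_from_pixel(r, g, b):
        return slope * rgb_to_hue(r, g, b) + intercept

    print(f"T = {temperature_from_pixel(120, 170, 60):.2f} C")
    ```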

  19. Long Term 2 Second Round Source Water Monitoring and Bin Placement Memo

    EPA Pesticide Factsheets

    The Long Term 2 Enhanced Surface Water Treatment Rule (LT2ESWTR) applies to all public water systems served by a surface water source or public water systems served by a ground water source under the direct influence of surface water.

  20. Efficient RF energy harvesting by using a fractal structured rectenna system

    NASA Astrophysics Data System (ADS)

    Oh, Sechang; Ramasamy, Mouli; Varadan, Vijay K.

    2014-04-01

    A rectenna system delivers, collects, and converts RF energy into direct current to power electronic devices or recharge batteries. It consists of an antenna for receiving RF power, an input filter for processing energy and impedance matching, a rectifier, an output filter, and a load resistor. However, conventional rectenna systems have a drawback in terms of power generation: an antenna with a single resonant frequency generates only low power compared to one with multiple resonant frequencies. A multi-band rectenna system is an optimal solution to generate more power. This paper proposes the design of a novel rectenna system, which involves developing a multi-band rectenna with a fractal structured antenna to facilitate an increase in energy harvesting from various sources like Wi-Fi, TV signals, mobile networks, and other ambient sources, eliminating the limitation of a single-band technique. Fractal antennas offer prominent advantages in terms of size and multiple resonances. Although a fractal antenna incorporates multiple resonances, controlling the resonant frequencies is an important aspect of generating power from the various desired RF sources. Hence, this paper also describes the design parameters of the fractal antenna and the methods to control the multi-band frequencies.

  1. Nonlinear synthesis of infrasound propagation through an inhomogeneous, absorbing atmosphere.

    PubMed

    de Groot-Hedlin, C D

    2012-08-01

    An accurate and efficient method to predict infrasound amplitudes from large explosions in the atmosphere is required for diverse source types, including bolides, volcanic eruptions, and nuclear and chemical explosions. A finite-difference, time-domain approach is developed to solve a set of nonlinear fluid dynamic equations for total pressure, temperature, and density fields rather than acoustic perturbations. Three key features for the purpose of synthesizing nonlinear infrasound propagation in realistic media are that it includes gravitational terms, it allows for acoustic absorption, including molecular vibration losses at frequencies well below the molecular vibration frequencies, and the environmental models are constrained to have axial symmetry, allowing a three-dimensional simulation to be reduced to two dimensions. Numerical experiments are performed to assess the algorithm's accuracy and the effect of source amplitudes and atmospheric variability on infrasound waveforms and shock formation. Results show that infrasound waveforms steepen and their associated spectra are shifted to higher frequencies for nonlinear sources, leading to enhanced infrasound attenuation. Results also indicate that nonlinear infrasound amplitudes depend strongly on atmospheric temperature and pressure variations. The solution for total field variables and insertion of gravitational terms also allows for the computation of other disturbances generated by explosions, including gravity waves.

  2. Uncertainty, variability, and earthquake physics in ground‐motion prediction equations

    USGS Publications Warehouse

    Baltay, Annemarie S.; Hanks, Thomas C.; Abrahamson, Norm A.

    2017-01-01

    Residuals between ground‐motion data and ground‐motion prediction equations (GMPEs) can be decomposed into terms representing earthquake source, path, and site effects. These terms can be cast in terms of repeatable (epistemic) residuals and the random (aleatory) components. Identifying the repeatable residuals leads to a GMPE with reduced uncertainty for a specific source, site, or path location, which in turn can yield a lower hazard level at small probabilities of exceedance. We illustrate a schematic framework for this residual partitioning with a dataset from the ANZA network, which straddles the central San Jacinto fault in southern California. The dataset consists of more than 3200 1.15≤M≤3 earthquakes and their peak ground accelerations (PGAs), recorded at close distances (R≤20  km). We construct a small‐magnitude GMPE for these PGA data, incorporating VS30 site conditions and geometrical spreading. Identification and removal of the repeatable source, path, and site terms yield an overall reduction in the standard deviation from 0.97 (in ln units) to 0.44, for a nonergodic assumption, that is, for a single‐source location, single site, and single path. We give examples of relationships between independent seismological observables and the repeatable terms. We find a correlation between location‐based source terms and stress drops in the San Jacinto fault zone region; an explanation of the site term as a function of kappa, the near‐site attenuation parameter; and a suggestion that the path component can be related directly to elastic structure. These correlations allow the repeatable source location, site, and path terms to be determined a priori using independent geophysical relationships. Those terms could be incorporated into location‐specific GMPEs for more accurate and precise ground‐motion prediction.
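
    A minimal sketch of the residual-partitioning idea: grouped averaging recovers repeatable event (source) and site terms from synthetic total residuals and shows the drop in the remaining (aleatory) scatter. This is a simplified stand-in for the mixed-effects regressions used in practice; all IDs and values are synthetic.

    ```python
    # Hedged sketch: partitioning ground-motion residuals into repeatable
    # event and site terms by grouped averaging (synthetic data).
    import numpy as np
    import pandas as pd

    rng = np.random.default_rng(1)
    n = 5000
    df = pd.DataFrame({
        "event": rng.integers(0, 300, n),    # hypothetical event IDs
        "site": rng.integers(0, 40, n),      # hypothetical station IDs
    })
    # Synthetic total residuals = event term + site term + random noise.
    ev = rng.normal(0, 0.4, 300)
    st = rng.normal(0, 0.3, 40)
    df["resid"] = ev[df.event] + st[df.site] + rng.normal(0, 0.4, n)

    print("total sigma:", df.resid.std().round(3))
    df["event_term"] = df.groupby("event").resid.transform("mean")
    within_event = df.resid - df.event_term
    df["site_term"] = within_event.groupby(df.site).transform("mean")
    remaining = within_event - df.site_term
    print("sigma after removing event and site terms:", remaining.std().round(3))
    ```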

  3. Optimal Variational Asymptotic Method for Nonlinear Fractional Partial Differential Equations.

    PubMed

    Baranwal, Vipul K; Pandey, Ram K; Singh, Om P

    2014-01-01

    We propose an optimal variational asymptotic method to solve time-fractional nonlinear partial differential equations. In the proposed method, an arbitrary number of auxiliary parameters γ₀, γ₁, γ₂, … and auxiliary functions H₀(x), H₁(x), H₂(x), … are introduced in the correction functional of the standard variational iteration method. The optimal values of these parameters are obtained by minimizing the square residual error. To test the method, we apply it to solve two important classes of nonlinear partial differential equations: (1) the fractional advection-diffusion equation with a nonlinear source term and (2) the fractional Swift-Hohenberg equation. Only a few iterations are required to achieve fairly accurate solutions of both problems.
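
    For orientation, this is the generic correction functional of the standard variational iteration method that the proposed approach augments with the auxiliary parameters γₖ and functions Hₖ(x); the exact placement of those auxiliary quantities follows the paper and is not reproduced here.

    ```latex
    % Generic VIM correction functional (schematic): L is the linear operator,
    % N the nonlinear operator, g the source term, \lambda the Lagrange
    % multiplier, and \tilde{u}_n the restricted variation.
    u_{n+1}(x,t) = u_n(x,t)
      + \int_0^t \lambda(\tau)\left[ L\,u_n(x,\tau) + N\,\tilde{u}_n(x,\tau) - g(x,\tau) \right] d\tau
    ```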

  4. Method of identification of patent trends based on descriptions of technical functions

    NASA Astrophysics Data System (ADS)

    Korobkin, D. M.; Fomenkov, S. A.; Golovanchikov, A. B.

    2018-05-01

    The use of the global patent space to determine scientific and technological priorities for the development of technical systems (identifying patent trends) makes it possible to forecast the direction of that development and, accordingly, to select patents on priority technical subjects as a source for updating the technical functions database and the physical effects database. The authors propose an original method that uses as trend terms not individual unigrams or n-grams (as is usual for existing methods and systems) but structured descriptions of technical functions in the form “Subject-Action-Object” (SAO), which in the authors’ opinion are the basis of the invention.
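
    A minimal sketch of SAO extraction with an off-the-shelf dependency parser. This is a generic illustration rather than the authors' pipeline, and it assumes spaCy with the en_core_web_sm model installed.

    ```python
    # Hedged sketch: extracting Subject-Action-Object (SAO) triples from text
    # via spaCy's dependency parse. Generic illustration, not the paper's method.
    import spacy

    nlp = spacy.load("en_core_web_sm")

    def extract_sao(text):
        triples = []
        for sent in nlp(text).sents:
            for token in sent:
                if token.pos_ != "VERB":
                    continue
                subjects = [c for c in token.children if c.dep_ in ("nsubj", "nsubjpass")]
                objects = [c for c in token.children if c.dep_ in ("dobj", "obj")]
                for s in subjects:
                    for o in objects:
                        triples.append((s.text, token.lemma_, o.text))
        return triples

    print(extract_sao("The piston compresses the gas. The valve releases pressure."))
    # e.g. [('piston', 'compress', 'gas'), ('valve', 'release', 'pressure')]
    ```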

  5. Critical analysis of active methods of ozone layer recovery

    NASA Astrophysics Data System (ADS)

    Bekker, S. Z.; Doronin, A. P.; Kozlov, S. I.

    2017-09-01

    A critical analysis is given for various methods for recovery of the ozone layer of the Earth: the emission of alkane gases; the destruction of freons by laser IR radiation and by microwave discharge; exposure to laser UV radiation and to electric discharge in the atmosphere; the use of solar radiation, laser infrared radiation, and gamma rays; and the creation of an artificial formation at high altitudes that shields the solar radiation dissociating ozone. The optimal methods are discussed in terms of their effectiveness, economic costs, and environmental consequences. These include the use of gamma-ray sources, electric discharge in the atmosphere, and microwave breakdown.

  6. Interlaboratory study of the ion source memory effect in 36Cl accelerator mass spectrometry

    NASA Astrophysics Data System (ADS)

    Pavetich, Stefan; Akhmadaliev, Shavkat; Arnold, Maurice; Aumaître, Georges; Bourlès, Didier; Buchriegler, Josef; Golser, Robin; Keddadouche, Karim; Martschini, Martin; Merchel, Silke; Rugel, Georg; Steier, Peter

    2014-06-01

    Understanding and minimizing contamination in the ion source due to cross-contamination and the long-term memory effect is one of the key issues for accurate accelerator mass spectrometry (AMS) measurements of volatile elements. The focus of this work is on the investigation of the long-term memory effect for the volatile element chlorine, and on the minimization of this effect in the ion source of the Dresden accelerator mass spectrometry facility (DREAMS). For this purpose, one of the two original HVE ion sources at the DREAMS facility was modified, allowing the use of larger sample holders having individual target apertures. Additionally, a more open geometry was used to improve the vacuum level. To evaluate this improvement in comparison to other up-to-date ion sources, an interlaboratory comparison was initiated. The long-term memory effect of the four Cs sputter ion sources at DREAMS (two sources: original and modified), ASTER (Accélérateur pour les Sciences de la Terre, Environnement, Risques) and VERA (Vienna Environmental Research Accelerator) was investigated by measuring samples with a natural 35Cl/37Cl ratio and samples highly enriched in 35Cl (35Cl/37Cl ∼ 999). Besides investigating and comparing the individual levels of long-term memory, recovery time constants could be calculated. The tests show that all four sources suffer from long-term memory, but the modified DREAMS ion source showed the lowest level of contamination. The recovery times of the four ion sources were widely spread between 61 and 1390 s; the modified DREAMS ion source, with values between 156 and 262 s, showed the fastest recovery in 80% of the measurements.
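
    The recovery time constants mentioned above can be obtained by fitting an exponential decay to the contamination signal after a sample change; the sketch below shows one such fit on synthetic data (the values are placeholders, not measurements from any of the facilities).

    ```python
    # Hedged sketch: estimating an ion-source recovery time constant by fitting
    # an exponential decay to a synthetic contamination signal.
    import numpy as np
    from scipy.optimize import curve_fit

    def decay(t, a, tau, c):
        """Contamination level: amplitude a, time constant tau, baseline c."""
        return a * np.exp(-t / tau) + c

    t = np.linspace(0, 1200, 60)                       # seconds after sample change
    rng = np.random.default_rng(2)
    y = decay(t, a=1.0, tau=200.0, c=0.02) + 0.01 * rng.standard_normal(t.size)

    (a, tau, c), _ = curve_fit(decay, t, y, p0=(1.0, 300.0, 0.0))
    print(f"recovery time constant ~ {tau:.0f} s")      # expect ~200 s
    ```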

  7. An efficient hole-filling method based on depth map in 3D view generation

    NASA Astrophysics Data System (ADS)

    Liang, Haitao; Su, Xiu; Liu, Yilin; Xu, Huaiyuan; Wang, Yi; Chen, Xiaodong

    2018-01-01

    A new virtual view is synthesized through depth-image-based rendering (DIBR) using a single color image and its associated depth map in 3D view generation. Holes are unavoidably generated in the 2D-to-3D conversion process. We propose a hole-filling method based on the depth map to address the problem. First, we improve the DIBR process by proposing a one-to-four (OTF) algorithm. The "z-buffer" algorithm is used to solve the overlap problem. Then, based on the classical patch-based algorithm of Criminisi et al., we propose a hole-filling algorithm that uses the information of the depth map to handle the image after DIBR. In order to improve the accuracy of the virtual image, inpainting starts from the background side. In the calculation of the priority, in addition to the confidence term and the data term, we add a depth term. In the search for the most similar patch in the source region, we define a depth similarity to improve the accuracy of the search. Experimental results show that the proposed method can effectively improve the quality of the 3D virtual view both subjectively and objectively.
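
    A sketch of how a depth term can enter the patch-priority computation. The multiplicative combination and the particular depth weighting below are illustrative assumptions, not the paper's exact formulas.

    ```python
    # Hedged sketch: patch priority for exemplar-based hole filling with an
    # added depth term, so inpainting proceeds from the background side first.
    import numpy as np

    def patch_priority(confidence, data_term, depth, p, half=4):
        """Priority of the patch centered at pixel p = (row, col)."""
        r, c = p
        win = depth[r - half:r + half + 1, c - half:c + half + 1]
        # Depth term: favor background (larger depth) and flat depth patches.
        depth_term = win.mean() / (1.0 + win.var())
        return confidence[r, c] * data_term[r, c] * depth_term

    rng = np.random.default_rng(3)
    H = W = 64
    confidence = rng.uniform(0.5, 1.0, (H, W))   # placeholder confidence map
    data_term = rng.uniform(0.0, 1.0, (H, W))    # placeholder isophote strength
    depth = rng.uniform(1.0, 10.0, (H, W))       # placeholder depth map
    print(patch_priority(confidence, data_term, depth, (32, 32)))
    ```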

  8. HPLC-MS/MS method for dexmedetomidine quantification with Design of Experiments approach: application to pediatric pharmacokinetic study.

    PubMed

    Szerkus, Oliwia; Struck-Lewicka, Wiktoria; Kordalewska, Marta; Bartosińska, Ewa; Bujak, Renata; Borsuk, Agnieszka; Bienert, Agnieszka; Bartkowska-Śniatkowska, Alicja; Warzybok, Justyna; Wiczling, Paweł; Nasal, Antoni; Kaliszan, Roman; Markuszewski, Michał Jan; Siluk, Danuta

    2017-02-01

    The purpose of this work was to develop and validate a rapid and robust LC-MS/MS method for the determination of dexmedetomidine (DEX) in plasma, suitable for analysis of a large number of samples. A systematic approach, Design of Experiments, was applied to optimize ESI source parameters and to evaluate method robustness; a rapid, stable, and cost-effective assay was thereby developed. The method was validated according to US FDA guidelines. The LLOQ was determined at 5 pg/ml, and the assay was linear over the examined concentration range (5-2500 pg/ml; R² > 0.98). The accuracies and the intra- and interday precisions were less than 15%. The stability data confirmed reliable behavior of DEX under the tested conditions. Application of the Design of Experiments approach allowed for fast and efficient analytical method development and validation, as well as for reduced usage of the chemicals necessary for regular method optimization. The proposed technique was applied to the determination of DEX pharmacokinetics in pediatric patients undergoing long-term sedation in the intensive care unit.

  9. Common Calibration Source for Monitoring Long-term Ozone Trends

    NASA Technical Reports Server (NTRS)

    Kowalewski, Matthew

    2004-01-01

    Accurate long-term satellite measurements are crucial for monitoring the recovery of the ozone layer. The slow pace of the recovery and the limited lifetimes of satellite monitoring instruments demand that datasets from multiple observation systems be combined to provide the long-term accuracy needed. A fundamental component of accurately monitoring long-term trends is the calibration of these various instruments. NASA's Radiometric Calibration and Development Facility at the Goddard Space Flight Center has provided resources to minimize calibration biases between multiple instruments through the use of a common calibration source and standardized procedures traceable to national standards. The Facility's 50 cm barium sulfate integrating sphere has been used as a common calibration source for both US and international satellite instruments, including the Total Ozone Mapping Spectrometer (TOMS), Solar Backscatter Ultraviolet 2 (SBUV/2) instruments, Shuttle SBUV (SSBUV), Ozone Mapping Instrument (OMI), Global Ozone Monitoring Experiment (GOME) (ESA), Scanning Imaging SpectroMeter for Atmospheric ChartographY (SCIAMACHY) (ESA), and others. We will discuss the advantages of using a common calibration source and its effects on long-term ozone data sets. In addition, sphere calibration results from various instruments will be presented to demonstrate the accuracy of the long-term characterization of the source itself.

  10. Watershed nitrogen and phosphorus balance: The upper Potomac River basin

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jaworski, N.A.; Groffman, P.M.; Keller, A.A.

    1992-01-01

    Nitrogen and phosphorus mass balances were estimated for the portion of the Potomac River basin watershed located above Washington, D.C. The total nitrogen (N) balance included seven input source terms, six sinks, and one 'change-in-storage' term, but was simplified to five input terms and three output terms. The phosphorus (P) balance had four input and three output terms. The estimated balances are based on watershed data from seven information sources. Major sources of nitrogen are animal waste and atmospheric deposition. The major sources of phosphorus are animal waste and fertilizer. The major sink for nitrogen is combined denitrification, volatilization, and change-in-storage. The major sink for phosphorus is change-in-storage. River exports of N and P were 17% and 8%, respectively, of the total N and P inputs. Over 60% of the N and P were volatilized or stored. The major input and output terms of the budget are estimated from direct measurements, but the change-in-storage term is calculated by difference. The factors regulating retention and storage processes are discussed and research needs are identified.
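
    Since the change-in-storage term is closed by difference, a tiny worked example may help; the numbers below are illustrative placeholders, not the Potomac estimates.

    ```python
    # Hedged sketch: closing a nutrient budget by difference, as done above
    # for the change-in-storage term. Values are hypothetical (kt N/yr).
    inputs = {"animal_waste": 40.0, "atmospheric_deposition": 30.0,
              "fertilizer": 20.0, "other": 10.0}
    outputs = {"river_export": 17.0, "volatilization_denitrification": 43.0}

    # Change-in-storage closes the balance: total inputs - measured outputs.
    storage = sum(inputs.values()) - sum(outputs.values())
    print(f"change-in-storage = {storage:.1f} kt/yr "
          f"({storage / sum(inputs.values()):.0%} of inputs)")
    ```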

  11. Toward multimodal signal detection of adverse drug reactions.

    PubMed

    Harpaz, Rave; DuMouchel, William; Schuemie, Martijn; Bodenreider, Olivier; Friedman, Carol; Horvitz, Eric; Ripple, Anna; Sorbello, Alfred; White, Ryen W; Winnenburg, Rainer; Shah, Nigam H

    2017-12-01

    Improving mechanisms to detect adverse drug reactions (ADRs) is key to strengthening post-marketing drug safety surveillance. Signal detection is presently unimodal, relying on a single information source. Multimodal signal detection is based on jointly analyzing multiple information sources. Building on and expanding the work done in prior studies, the aim of the article is to advance research on multimodal signal detection, explore its potential benefits, and propose methods for its construction and evaluation. Four data sources are investigated: FDA's adverse event reporting system, insurance claims, the MEDLINE citation database, and the logs of major Web search engines. Published methods are used to generate and combine signals from each data source. Two distinct reference benchmarks, corresponding to well-established and recently labeled ADRs respectively, are used to evaluate the performance of multimodal signal detection in terms of area under the ROC curve (AUC) and lead time to detection, the latter relative to labeling revision dates. Limited to our reference benchmarks, multimodal signal detection provides AUC improvements ranging from 0.04 to 0.09 based on a widely used evaluation benchmark, and a comparative added lead time of 7-22 months relative to labeling revision dates from a time-indexed benchmark. The results support the notion that utilizing and jointly analyzing multiple data sources may lead to improved signal detection. Given certain data and benchmark limitations, the early stage of development, and the complexity of ADRs, it is currently not possible to make definitive statements about the ultimate utility of the concept. Continued development of multimodal signal detection requires a deeper understanding of the data sources used, additional benchmarks, and further research on methods to generate and synthesize signals.

  12. Multi-field query expansion is effective for biomedical dataset retrieval.

    PubMed

    Bouadjenek, Mohamed Reda; Verspoor, Karin

    2017-01-01

    In the context of the bioCADDIE challenge addressing information retrieval of biomedical datasets, we propose a method for retrieval of biomedical data sets with heterogeneous schemas through query reformulation. In particular, the proposed method transforms the initial query into a multi-field query that is then enriched with terms that are likely to occur in the relevant datasets. We compare and evaluate two query expansion strategies, one based on the Rocchio method and another based on a biomedical lexicon. We then perform a comprehensive comparative evaluation of our method on the bioCADDIE dataset collection for biomedical retrieval. We demonstrate the effectiveness of our multi-field query method compared to two baselines, with MAP improved from 0.2171 and 0.2669 to 0.2996. We also show the benefits of query expansion, where the Rocchio expansion method improves the MAP for our two baselines from 0.2171 and 0.2669 to 0.335. We show that the Rocchio query expansion method slightly outperforms the one based on the biomedical lexicon as a source of terms, with an improvement of roughly 3% for MAP. However, the query expansion method based on the biomedical lexicon is much less resource intensive, since it does not require computation of any relevance feedback set or any initial execution of the query. Hence, in terms of the trade-off between efficiency, execution time, and retrieval accuracy, we argue that the query expansion method based on the biomedical lexicon offers the best performance for a prototype biomedical data search engine intended to be used at a large scale. In the official bioCADDIE challenge results, although our approach is ranked seventh in terms of the infNDCG evaluation metric, it ranks second in terms of P@10 and NDCG. Hence, the method proposed here provides overall good retrieval performance in relation to the approaches of other competitors. Consequently, the observations made in this paper should benefit the development of a Data Discovery Index prototype or the improvement of the existing one.
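
    For readers unfamiliar with it, the Rocchio expansion referenced above re-weights the query vector toward relevant documents; a minimal sketch over TF-IDF vectors follows (toy corpus and standard alpha/beta/gamma weights, not the authors' implementation).

    ```python
    # Hedged sketch: Rocchio query expansion over TF-IDF vectors.
    import numpy as np
    from sklearn.feature_extraction.text import TfidfVectorizer

    docs = ["gene expression microarray dataset",
            "protein structure dataset repository",
            "weather station climate records"]
    relevant = [0, 1]          # indices of docs assumed relevant (feedback set)
    nonrelevant = [2]

    vec = TfidfVectorizer()
    D = vec.fit_transform(docs).toarray()
    q = vec.transform(["expression dataset"]).toarray()[0]

    alpha, beta, gamma = 1.0, 0.75, 0.15
    q_new = (alpha * q
             + beta * D[relevant].mean(axis=0)
             - gamma * D[nonrelevant].mean(axis=0))

    terms = np.array(vec.get_feature_names_out())
    top = terms[np.argsort(q_new)[::-1][:4]]
    print("expanded query terms:", list(top))
    ```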

  13. Multi-field query expansion is effective for biomedical dataset retrieval

    PubMed Central

    2017-01-01

    In the context of the bioCADDIE challenge addressing information retrieval of biomedical datasets, we propose a method for retrieval of biomedical data sets with heterogeneous schemas through query reformulation. In particular, the proposed method transforms the initial query into a multi-field query that is then enriched with terms that are likely to occur in the relevant datasets. We compare and evaluate two query expansion strategies, one based on the Rocchio method and another based on a biomedical lexicon. We then perform a comprehensive comparative evaluation of our method on the bioCADDIE dataset collection for biomedical retrieval. We demonstrate the effectiveness of our multi-field query method compared to two baselines, with MAP improved from 0.2171 and 0.2669 to 0.2996. We also show the benefits of query expansion, where the Rocchio expansion method improves the MAP for our two baselines from 0.2171 and 0.2669 to 0.335. We show that the Rocchio query expansion method slightly outperforms the one based on the biomedical lexicon as a source of terms, with an improvement of roughly 3% for MAP. However, the query expansion method based on the biomedical lexicon is much less resource intensive, since it does not require computation of any relevance feedback set or any initial execution of the query. Hence, in terms of the trade-off between efficiency, execution time, and retrieval accuracy, we argue that the query expansion method based on the biomedical lexicon offers the best performance for a prototype biomedical data search engine intended to be used at a large scale. In the official bioCADDIE challenge results, although our approach is ranked seventh in terms of the infNDCG evaluation metric, it ranks second in terms of P@10 and NDCG. Hence, the method proposed here provides overall good retrieval performance in relation to the approaches of other competitors. Consequently, the observations made in this paper should benefit the development of a Data Discovery Index prototype or the improvement of the existing one. PMID:29220457

  14. Audio visual speech source separation via improved context dependent association model

    NASA Astrophysics Data System (ADS)

    Kazemi, Alireza; Boostani, Reza; Sobhanmanesh, Fariborz

    2014-12-01

    In this paper, we exploit the non-linear relation between a speech source and its associated lip video as a source of extra information to propose an improved audio-visual speech source separation (AVSS) algorithm. The audio-visual association is modeled using a neural associator which estimates the visual lip parameters from a temporal context of acoustic observation frames. We define an objective function based on the mean square error (MSE) between estimated and target visual parameters. This function is minimized to estimate the de-mixing vector/filters that separate the relevant source from linear instantaneous or time-domain convolutive mixtures. We have also proposed a hybrid criterion which uses AV coherency together with kurtosis as a non-Gaussianity measure. Experimental results are presented and compared in terms of visually relevant speech detection accuracy and the output signal-to-interference ratio (SIR) of source separation. The suggested audio-visual model significantly improves relevant speech classification accuracy compared to an existing GMM-based model, and the proposed AVSS algorithm improves speech separation quality compared to reference ICA- and AVSS-based methods.

  15. Local tsunamis and earthquake source parameters

    USGS Publications Warehouse

    Geist, Eric L.; Dmowska, Renata; Saltzman, Barry

    1999-01-01

    This chapter establishes the relationship among earthquake source parameters and the generation, propagation, and run-up of local tsunamis. In general terms, displacement of the seafloor during the earthquake rupture is modeled using the elastic dislocation theory for which the displacement field is dependent on the slip distribution, fault geometry, and the elastic response and properties of the medium. Specifically, nonlinear long-wave theory governs the propagation and run-up of tsunamis. A parametric study is devised to examine the relative importance of individual earthquake source parameters on local tsunamis, because the physics that describes tsunamis from generation through run-up is complex. Analysis of the source parameters of various tsunamigenic earthquakes have indicated that the details of the earthquake source, namely, nonuniform distribution of slip along the fault plane, have a significant effect on the local tsunami run-up. Numerical methods have been developed to address the realistic bathymetric and shoreline conditions. The accuracy of determining the run-up on shore is directly dependent on the source parameters of the earthquake, which provide the initial conditions used for the hydrodynamic models.

  16. Pollution monitoring using networks of honey bees

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bromenshenk, J.J.; Dewart, M.L.; Thomas, J.M.

    1983-08-01

    Each year thousands of chemicals in large quantities are introduced into the global environment and the need for effective methods of monitoring these substances has steadily increased. Most monitoring programs rely upon instrumentation to measure specific contaminants in air, water, or soil. However, it has become apparent that humans and their environment are exposed to complex mixtures of chemicals rather than single entities. As our ability to detect ever smaller quantities of pollutants has increased, the biological significance of these findings has become more uncertain. Also, it is clear that monitoring efforts should shift from short-term studies of easily identifiable sources in localized areas to long-term studies of multiple sources over widespread regions. Our investigations aim at providing better tools to meet these exigencies. Honey bees are discussed as an effective, long-term, self-sustaining system for monitoring environmental impacts. Our results indicate that the use of regional, and possibly national or international, capability can be realized with the aid of beekeepers in obtaining samples and conducting measurements. This approach has the added advantage of public involvement in environmental problem solving and protection of human health and environmental quality.

  17. A Promising Tool to Assess Long Term Public Health Effects of Natural Disasters: Combining Routine Health Survey Data and Geographic Information Systems to Assess Stunting after the 2001 Earthquake in Peru

    PubMed Central

    Rydberg, Henny; Marrone, Gaetano; Strömdahl, Susanne; von Schreeb, Johan

    2015-01-01

    Background Research on long-term health effects of earthquakes is scarce, especially in low- and middle-income countries, which are disproportionately affected by disasters. To date, progress in this area has been hampered by the lack of tools to accurately measure these effects. Here, we explored whether long-term public health effects of earthquakes can be assessed using a combination of readily available data sources on public health and geographic distribution of seismic activity. Methods We used childhood stunting as a proxy for public health effects. Data on stunting were obtained from Demographic and Health Surveys. Earthquake data were obtained from U.S. Geological Survey’s ShakeMaps, geographic information system-based maps that divide earthquake affected areas into different shaking intensity zones. We combined these two data sources to categorize the surveyed children into different earthquake exposure groups, based on how much their area of residence was affected by the earthquake. We assessed the feasibility of the approach using a real earthquake case – an 8.4 magnitude earthquake that hit southern Peru in 2001. Results and conclusions Our results indicate that the combination of health survey data and disaster data may offer a readily accessible and accurate method for determining the long-term public health consequences of a natural disaster. Our work allowed us to make pre- and post- earthquake comparisons of stunting, an important indicator of the well-being of a society, as well as comparisons between populations with different levels of exposure to the earthquake. Furthermore, the detailed GIS based data provided a precise and objective definition of earthquake exposure. Our approach should be considered in future public health and disaster research exploring the long-term effects of earthquakes and potentially other natural disasters. PMID:26090999

  18. Methods for assessing long-term mean pathogen count in drinking water and risk management implications.

    PubMed

    Englehardt, James D; Ashbolt, Nicholas J; Loewenstine, Chad; Gadzinski, Erik R; Ayenu-Prah, Albert Y

    2012-06-01

    Recently, pathogen counts in drinking and source waters were shown theoretically to have the discrete Weibull (DW) or the closely related discrete growth distribution (DGD). The result was demonstrated against nine short-term and three simulated long-term water quality datasets. These distributions are highly skewed, such that available datasets seldom represent the rare but important high-count events, making estimation of the long-term mean difficult. In the current work, the methods, and the data record length, required to assess the long-term mean microbial count were evaluated by simulation of representative DW and DGD waterborne pathogen count distributions. Also, microbial count data were analyzed spectrally for correlation and cycles. In general, longer data records were required for more highly skewed distributions, conceptually associated with more highly treated water. In particular, 500-1,000 random samples were required for reliable assessment of the population mean ±10%, though 50-100 samples produced an estimate within one log (45%) below the mean. A simple correlated first-order model was shown to produce count series with a 1/f signal, and such periodicity over many scales was shown in empirical microbial count data, for consideration in sampling. A tiered management strategy is recommended, including a plan for rapid response to unusual levels of routinely monitored water quality indicators.
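
    To see why so many samples are needed, the sketch below draws from a (type I) discrete Weibull distribution, whose survival function is S(k) = q^(k^β), and tracks the running sample mean; the parameters are illustrative, not fitted to any water system.

    ```python
    # Hedged sketch: sampling a type I discrete Weibull distribution and
    # watching the sample mean converge for a highly skewed count model.
    import numpy as np

    def rdweibull(q, beta, size, rng):
        """Inverse-transform sampler; survival S(k) = q**(k**beta), k = 0,1,2,..."""
        u = rng.uniform(1e-12, 1.0, size)            # avoid log(0)
        return np.ceil((np.log(u) / np.log(q)) ** (1.0 / beta)).astype(int) - 1

    rng = np.random.default_rng(4)
    x = rdweibull(q=0.5, beta=0.3, size=100_000, rng=rng)   # heavy right tail

    for n in (50, 100, 500, 1000, 100_000):
        print(f"n={n:>7}: running mean = {x[:n].mean():.1f}")
    ```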

  19. Reducing Sensor Noise in MEG and EEG Recordings Using Oversampled Temporal Projection.

    PubMed

    Larson, Eric; Taulu, Samu

    2018-05-01

    Here, we review the theory of suppression of spatially uncorrelated, sensor-specific noise in electro- and magnetoencephalography (EEG and MEG) arrays, and introduce a novel method for suppression. Our method requires only that the signals of interest are spatially oversampled, which is a reasonable assumption for many EEG and MEG systems. Our method is based on a leave-one-out procedure using overlapping temporal windows in a mathematical framework to project spatially uncorrelated noise in the temporal domain. This method, termed "oversampled temporal projection" (OTP), has four advantages over existing methods. First, sparse channel-specific artifacts are suppressed while limiting mixing with other channels, whereas existing linear, time-invariant spatial operators can spread such artifacts to other channels with a spatial distribution which can be mistaken for one produced by an electrophysiological source. Second, OTP minimizes distortion of the spatial configuration of the data. During source localization (e.g., dipole fitting), many spatial methods require corresponding modification of the forward model to avoid bias, while OTP does not. Third, noise suppression factors at the sensor level are maintained during source localization, whereas bias compensation removes the denoising benefit for spatial methods that require such compensation. Fourth, OTP uses a time-window duration parameter to control the tradeoff between noise suppression and adaptation to time-varying sensor characteristics. OTP efficiently optimizes noise suppression performance while controlling for spatial bias of the signal of interest. This is important in applications where sensor noise significantly limits the signal-to-noise ratio, such as high-frequency brain oscillations.
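
    The sketch below captures the core leave-one-out idea: each channel is projected onto the temporal subspace spanned by the remaining channels, so noise that is uncorrelated across sensors is suppressed. The published algorithm uses overlapping windows and further refinements not shown here.

    ```python
    # Hedged sketch: leave-one-out temporal projection for suppressing
    # channel-specific noise (simplified; no overlapping windows).
    import numpy as np

    def loo_temporal_projection(X):
        """X: (n_channels, n_times). Returns a denoised copy of X."""
        n_ch = X.shape[0]
        Y = np.empty_like(X)
        for i in range(n_ch):
            others = np.delete(X, i, axis=0)
            # Orthonormal basis of the others' temporal span (rows of vt).
            _, _, vt = np.linalg.svd(others, full_matrices=False)
            Y[i] = (X[i] @ vt.T) @ vt       # project channel i onto that span
        return Y

    rng = np.random.default_rng(5)
    n_ch, n_t = 16, 400
    shared = rng.standard_normal((3, n_t))                 # oversampled signals
    mix = rng.standard_normal((n_ch, 3)) @ shared
    noisy = mix + 0.5 * rng.standard_normal((n_ch, n_t))   # sensor-specific noise
    clean = loo_temporal_projection(noisy)
    print("MSE vs. clean mixture, before:", np.mean((noisy - mix) ** 2).round(3),
          "after:", np.mean((clean - mix) ** 2).round(3))
    ```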

  20. Vehicle routing for the eco-efficient collection of household plastic waste.

    PubMed

    Bing, Xiaoyun; de Keizer, Marlies; Bloemhof-Ruwaard, Jacqueline M; van der Vorst, Jack G A J

    2014-04-01

    Plastic waste is a special category of municipal solid waste. Plastic waste collection offers various alternatives for collection methods (curbside/drop-off) and separation methods (source-/post-separation). In the Netherlands, the collection routes for plastic waste are the same as those for other waste, although plastic differs from other waste in terms of its volume-to-weight ratio. This paper aims to redesign the collection routes and compares the collection options for plastic waste using eco-efficiency as the performance indicator. Eco-efficiency concerns the trade-off between environmental impacts, social issues, and costs. The collection problem is modeled as a vehicle routing problem. A tabu search heuristic is used to improve the routes. Collection alternatives are compared by a scenario study approach. Real distances between locations are calculated with MapPoint. The scenario study is conducted based on real case data from the Dutch municipality of Wageningen. Scenarios are designed according to the collection alternatives, with different assumptions for collection method, vehicle type, collection frequency, collection points, etc. Results show that the current collection routes can be improved in terms of eco-efficiency performance by using our method. The source-separation drop-off collection scenario has the best performance for plastic collection, assuming householders take the waste to the drop-off points in a sustainable manner. The model also proves to be an efficient decision support tool to investigate the impacts of future changes such as alternative vehicle types and different response rates.
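
    To illustrate the tabu-search route-improvement step, here is a minimal single-route sketch using 2-opt moves over a toy distance matrix; the paper's model is a full vehicle routing problem with eco-efficiency objectives, so this shows only the core heuristic idea.

    ```python
    # Hedged sketch: tabu search with 2-opt moves on a single collection route.
    import numpy as np

    rng = np.random.default_rng(6)
    pts = rng.uniform(0, 10, (12, 2))                  # hypothetical stops
    dist = np.linalg.norm(pts[:, None] - pts[None], axis=-1)

    def route_len(route):
        return sum(dist[route[i], route[(i + 1) % len(route)]] for i in range(len(route)))

    route = list(range(len(pts)))
    best, best_len = route[:], route_len(route)
    tabu, tenure = {}, 15                              # move -> iteration it expires

    for it in range(300):
        candidates = []
        for i in range(1, len(route) - 1):
            for j in range(i + 1, len(route)):
                new = route[:i] + route[i:j][::-1] + route[j:]   # 2-opt reversal
                move = (route[i], route[j - 1])
                length = route_len(new)
                # Skip tabu moves unless they beat the global best (aspiration).
                if tabu.get(move, -1) > it and length >= best_len:
                    continue
                candidates.append((length, move, new))
        length, move, route = min(candidates)
        tabu[move] = it + tenure
        if length < best_len:
            best, best_len = route[:], length

    print(f"best route length: {best_len:.2f}")
    ```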

  1. Assessing the Impact of Source-Zone Remediation Efforts at the Contaminant-Plume Scale Through Analysis of Contaminant Mass Discharge

    PubMed Central

    Brusseau, M. L.; Hatton, J.; DiGuiseppi, W.

    2011-01-01

    The long-term impact of source-zone remediation efforts was assessed for a large site contaminated by trichloroethene. The impact of the remediation efforts (soil vapor extraction and in-situ chemical oxidation) was assessed through analysis of plume-scale contaminant mass discharge, which was measured using a high-resolution data set obtained from 23 years of operation of a large pump-and-treat system. The initial contaminant mass discharge peaked at approximately 7 kg/d, and then declined to approximately 2 kg/d. This latter value was sustained for several years prior to the initiation of source-zone remediation efforts. The contaminant mass discharge in 2010, measured several years after completion of the two source-zone remediation actions, was approximately 0.2 kg/d, which is ten times lower than the value prior to source-zone remediation. The time-continuous contaminant mass discharge data can be used to evaluate the impact of the source-zone remediation efforts on reducing the time required to operate the pump-and-treat system, and to estimate the cost savings associated with the decreased operational period. While significant reductions have been achieved, it is evident that the remediation efforts have not completely eliminated contaminant mass discharge and associated risk. Remaining contaminant mass contributing to the current mass discharge is hypothesized to comprise poorly-accessible mass in the source zones, as well as aqueous (and sorbed) mass present in the extensive lower-permeability units located within and adjacent to the contaminant plume. The fate of these sources is an issue of critical import to the remediation of chlorinated-solvent contaminated sites, and development of methods to address these sources will be required to achieve successful long-term management of such sites and to ultimately transition them to closure. PMID:22115080

  2. Time-Limited Psychotherapy With Adolescents

    PubMed Central

    Shefler, Gaby

    2000-01-01

    Short-term dynamic therapies, characterized by abbreviated lengths (10–40 sessions) and, in many cases, preset termination dates, have become more widespread in the past three decades. Short-term therapies are based on rapid psychodynamic diagnosis, a therapeutic focus, a rapidly formed therapeutic alliance, awareness of termination and separation processes, and the directive stance of the therapist. The emotional storm of adolescence, stemming from both developmental and psychopathological sources, leaves many adolescents in need of psychotherapy. Many adolescents in need of therapy resist long-term attachment and involvement in an ambiguous relationship, which they experience as a threat to their emerging sense of independence and separateness. Short-term dynamic therapy can be the treatment of choice for many adolescents because it minimizes these threats and is more responsive to their developmental needs. The article presents treatment and follow-up of a 17-year-old youth, using James Mann's time-limited psychotherapy method. PMID:10793128

  3. Developing population models with data from marked individuals

    USGS Publications Warehouse

    Ryu, Hae Yeong; Shoemaker, Kevin T.; Kneip, Eva; Pidgeon, Anna; Heglund, Patricia; Bateman, Brooke; Thogmartin, Wayne E.; Akçakaya, Reşit

    2016-01-01

    Population viability analysis (PVA) is a powerful tool for biodiversity assessments, but its use has been limited because of the requirements for fully specified population models such as demographic structure, density-dependence, environmental stochasticity, and specification of uncertainties. Developing a fully specified population model from commonly available data sources – notably, mark–recapture studies – remains complicated due to lack of practical methods for estimating fecundity, true survival (as opposed to apparent survival), natural temporal variability in both survival and fecundity, density-dependence in the demographic parameters, and uncertainty in model parameters. We present a general method that estimates all the key parameters required to specify a stochastic, matrix-based population model, constructed using a long-term mark–recapture dataset. Unlike standard mark–recapture analyses, our approach provides estimates of true survival rates and fecundities, their respective natural temporal variabilities, and density-dependence functions, making it possible to construct a population model for long-term projection of population dynamics. Furthermore, our method includes a formal quantification of parameter uncertainty for global (multivariate) sensitivity analysis. We apply this approach to 9 bird species and demonstrate the feasibility of using data from the Monitoring Avian Productivity and Survivorship (MAPS) program. Bias-correction factors for raw estimates of survival and fecundity derived from mark–recapture data (apparent survival and juvenile:adult ratio, respectively) were non-negligible, and corrected parameters were generally more biologically reasonable than their uncorrected counterparts. Our method allows the development of fully specified stochastic population models using a single, widely available data source, substantially reducing the barriers that have until now limited the widespread application of PVA. This method is expected to greatly enhance our understanding of the processes underlying population dynamics and our ability to analyze viability and project trends for species of conservation concern.
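
    As a flavor of what a fully specified model enables, the sketch below projects a two-stage stochastic matrix model with lognormal temporal variability in the vital rates; all rates are hypothetical and density dependence is omitted for brevity.

    ```python
    # Hedged sketch: stochastic projection of a two-stage (juvenile/adult)
    # matrix population model. Vital rates are illustrative, not MAPS estimates.
    import numpy as np

    rng = np.random.default_rng(7)
    n_years, n_reps = 50, 1000
    s_juv, s_ad = 0.30, 0.55      # mean true survival rates (hypothetical)
    fec = 1.8                     # mean fecundity (female offspring per adult)
    cv = 0.15                     # temporal variability applied to all rates

    final = np.empty(n_reps)
    for r in range(n_reps):
        n = np.array([50.0, 50.0])                   # [juveniles, adults]
        for _ in range(n_years):
            # Draw this year's vital rates (lognormal noise, clipped survivals).
            f = fec * rng.lognormal(0, cv)
            sj = min(s_juv * rng.lognormal(0, cv), 1.0)
            sa = min(s_ad * rng.lognormal(0, cv), 1.0)
            A = np.array([[0.0, f],
                          [sj,  sa]])                # stage-structured matrix
            n = A @ n
        final[r] = n.sum()

    print(f"P(quasi-extinction, N<10) = {(final < 10).mean():.2f}")
    ```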

  4. Noninvasive Electromagnetic Source Imaging and Granger Causality Analysis: An Electrophysiological Connectome (eConnectome) Approach

    PubMed Central

    Sohrabpour, Abbas; Ye, Shuai; Worrell, Gregory A.; Zhang, Wenbo

    2016-01-01

    Objective Combined source imaging techniques and directional connectivity analysis can provide useful information about the underlying brain networks in a non-invasive fashion. Source imaging techniques have previously been used successfully either to determine the source of activity or to extract source time-courses for Granger causality analysis. In this work, we utilize source imaging algorithms both to find the network nodes (regions of interest) and to extract the activation time series for further Granger causality analysis. The aim of this work is to find network nodes objectively from noninvasive electromagnetic signals, extract activation time-courses, and apply Granger analysis on the extracted series to study brain networks under realistic conditions. Methods Source imaging methods are used to identify network nodes and extract time-courses, and then Granger causality analysis is applied to delineate the directional functional connectivity of the underlying brain networks. Computer simulation studies where the underlying network (nodes and connectivity pattern) is known were performed; additionally, this approach was evaluated in partial epilepsy patients to study epilepsy networks from inter-ictal and ictal signals recorded by EEG and/or MEG. Results Localization errors of network nodes were less than 5 mm, with normalized connectivity errors of ~20%, in estimating the underlying brain networks in the simulation studies. Additionally, two focal epilepsy patients were studied, and the identified nodes driving the epileptic network were concordant with clinical findings from intracranial recordings or surgical resection. Conclusion Our study indicates that combining source imaging algorithms with Granger causality analysis can identify underlying networks precisely (both in terms of network node location and internodal connectivity). Significance The combined source imaging and Granger analysis technique is an effective tool for studying normal or pathological brain conditions. PMID:27740473
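
    A minimal sketch of the Granger step on two extracted time-courses, using the standard test in statsmodels; the source-imaging part is not shown, and the coupled autoregressive data are synthetic.

    ```python
    # Hedged sketch: testing directional influence between two source
    # time-courses with a standard Granger causality test.
    import numpy as np
    from statsmodels.tsa.stattools import grangercausalitytests

    rng = np.random.default_rng(8)
    n = 500
    x = rng.standard_normal(n)
    y = np.zeros(n)
    for t in range(2, n):                  # y driven by lagged x => "x -> y"
        y[t] = 0.6 * y[t - 1] + 0.8 * x[t - 2] + 0.3 * rng.standard_normal()

    # Column order matters: tests whether the 2nd column Granger-causes the 1st.
    res = grangercausalitytests(np.column_stack([y, x]), maxlag=3, verbose=False)
    for lag, (tests, _) in res.items():
        print(f"lag {lag}: F-test p = {tests['ssr_ftest'][1]:.2e}")
    ```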

  5. Use of Online Sources of Information by Dental Practitioners: Findings from The Dental Practice-Based Research Network

    PubMed Central

    Funkhouser, Ellen; Agee, Bonita S.; Gordan, Valeria V.; Rindal, D. Brad; Fellows, Jeffrey L.; Qvist, Vibeke; McClelland, Jocelyn; Gilbert, Gregg H.

    2013-01-01

    Objectives Estimate the proportion of dental practitioners who use online sources of information for practice guidance. Methods From a survey of 657 dental practitioners in The Dental Practice Based Research Network, four indicators of online use for practice guidance were calculated: read journals online, obtained continuing education (CDE) through online sources, rated an online source as most influential, and reported frequently using an online source for guidance. Demographics, journals read, and use of various sources of information for practice guidance in terms of frequency and influence were ascertained for each. Results Overall, 21% (n=138) were classified into one of the four indicators of online use: 14% (n=89) rated an online source as most influential and 13% (n=87) reported frequently using an online source for guidance; few practitioners (5%, n=34) read journals online, fewer (3%, n=17) obtained CDE through online sources. Use of online information sources varied considerably by region and practice characteristics. In general, the 4 indicators represented practitioners with as many differences as similarities to each other and to offline users. Conclusion A relatively small proportion of dental practitioners use information from online sources for practice guidance. Variation exists regarding practitioners’ use of online source resources and how they rate the value of offline information sources for practice guidance. PMID:22994848

  6. Estimating source parameters from deformation data, with an application to the March 1997 earthquake swarm off the Izu Peninsula, Japan

    NASA Astrophysics Data System (ADS)

    Cervelli, P.; Murray, M. H.; Segall, P.; Aoki, Y.; Kato, T.

    2001-06-01

    We have applied two Monte Carlo optimization techniques, simulated annealing and random cost, to the inversion of deformation data for fault and magma chamber geometry. These techniques involve an element of randomness that permits them to escape local minima and ultimately converge to the global minimum of misfit space. We have tested the Monte Carlo algorithms on two synthetic data sets. We have also compared them to one another in terms of their efficiency and reliability. We have applied the bootstrap method to estimate confidence intervals for the source parameters, including the correlations inherent in the data. Additionally, we present methods that use the information from the bootstrapping procedure to visualize the correlations between the different model parameters. We have applied these techniques to GPS, tilt, and leveling data from the March 1997 earthquake swarm off of the Izu Peninsula, Japan. Using the two Monte Carlo algorithms, we have inferred two sources, a dike and a fault, that fit the deformation data and the patterns of seismicity and that are consistent with the regional stress field.
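
    The sketch below shows the skeleton of a simulated-annealing inversion of the kind described, on a toy two-parameter forward model standing in for the elastic dislocation model; the cooling schedule and proposal scale are illustrative choices.

    ```python
    # Hedged sketch: generic simulated annealing for fitting model parameters
    # to noisy observations (toy quadratic forward model, not a fault model).
    import numpy as np

    rng = np.random.default_rng(9)
    theta_true = np.array([2.0, -1.0])
    obs = lambda th, s: th[0] * s + th[1] * s**2       # hypothetical forward model
    s = np.linspace(0, 1, 30)
    data = obs(theta_true, s) + 0.05 * rng.standard_normal(s.size)

    def misfit(th):
        return np.sum((obs(th, s) - data) ** 2)

    theta = np.zeros(2)
    best, best_m = theta.copy(), misfit(theta)
    T = 1.0
    for step in range(5000):
        T *= 0.999                                     # geometric cooling
        cand = theta + 0.1 * rng.standard_normal(2)    # random perturbation
        dm = misfit(cand) - misfit(theta)
        if dm < 0 or rng.uniform() < np.exp(-dm / T):  # Metropolis acceptance
            theta = cand
            if misfit(theta) < best_m:
                best, best_m = theta.copy(), misfit(theta)

    print("estimated parameters:", np.round(best, 2))  # expect ~[2.0, -1.0]
    ```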

  7. Ictal and interictal electric source imaging in presurgical evaluation: a prospective study.

    PubMed

    Sharma, Praveen; Scherg, Michael; Pinborg, Lars H; Fabricius, Martin; Rubboli, Guido; Pedersen, Birthe; Leffers, Anne-Mette; Uldall, Peter; Jespersen, Bo; Brennum, Jannick; Mølby Henriksen, Otto; Beniczky, Sándor

    2018-05-11

    Accurate localization of the epileptic focus is essential for surgical treatment of patients with drug-resistant epilepsy. EEG source imaging (ESI) is increasingly used in presurgical evaluation. However, most previous studies analysed interictal discharges. Prospective studies comparing the feasibility and accuracy of interictal (II) and ictal (IC) ESI are lacking. We prospectively analysed long-term video EEG recordings (LTM) of patients admitted for presurgical evaluation. We performed ESI of II and IC signals using two methods: equivalent current dipole (ECD) and distributed source model (DSM). LTM recordings employed the standard 25-electrode array (including inferior temporal electrodes). An age-matched template head model was used for source analysis. Results were compared with intracranial recordings (ICR), conventional neuroimaging methods (MRI, PET, SPECT), and outcome one year after surgery. Eighty-seven consecutive patients were analysed. ECD gave a significantly higher proportion of patients with localised focal abnormalities (94%) compared to MRI (70%), PET (66%), and SPECT (64%). Agreement between the ESI methods and ICR was moderate to substantial (κ = 0.56-0.79). Fifty-four patients underwent surgery (47 of them more than one year ago), and 62% of them became seizure-free. Localization accuracy of II-ESI was 51% for DSM and 57% for ECD; for IC-ESI it was 51% (DSM) and 62% (ECD). The differences between the ESI methods were not significant. Differences in localization accuracy between ESI and MRI (55%), PET (33%), and SPECT (40%) were not significant. II and IC ESI of LTM data are highly feasible, and their localisation accuracy is similar to that of conventional neuroimaging methods.

  8. A comparison of three-dimensional nonequilibrium solution algorithms applied to hypersonic flows with stiff chemical source terms

    NASA Technical Reports Server (NTRS)

    Palmer, Grant; Venkatapathy, Ethiraj

    1993-01-01

    Three solution algorithms, explicit under-relaxation, point implicit, and lower-upper symmetric Gauss-Seidel (LUSGS), are used to compute nonequilibrium flow around the Apollo 4 return capsule at 62 km altitude. By varying the Mach number, the efficiency and robustness of the solution algorithms were tested for different levels of chemical stiffness. The performance of the solution algorithms degraded as the Mach number and stiffness of the flow increased. At Mach 15, 23, and 30, the LUSGS method produces an eight-order-of-magnitude drop in the L2 norm of the energy residual in one-third to one-half the Cray C-90 computer time compared to the point implicit and explicit under-relaxation methods. The explicit under-relaxation algorithm experienced convergence difficulties at Mach 23 and above. At Mach 40 the performance of the LUSGS algorithm deteriorates to the point that it is out-performed by the point implicit method. The effects of the viscous terms are investigated. Grid dependency questions are explored.

  9. VAN method of short-term earthquake prediction shows promise

    NASA Astrophysics Data System (ADS)

    Uyeda, Seiya

    Although optimism prevailed in the 1970s, the present consensus on earthquake prediction appears to be quite pessimistic. However, short-term prediction based on geoelectric potential monitoring has stood the test of time in Greece for more than a decade [Varotsos and Kulhanek, 1993; Lighthill, 1996]. The method used is called the VAN method. The geoelectric potential changes constantly due to causes such as magnetotelluric effects, lightning, rainfall, leakage from manmade sources, and electrochemical instabilities of electrodes. All of this noise must be eliminated before preseismic signals, if they exist at all, can be identified. The VAN group apparently accomplished this task for the first time. They installed multiple short (100-200 m) dipoles with different lengths in both north-south and east-west directions, and long (1-10 km) dipoles in appropriate orientations, at their stations (one of their mega-stations, Ioannina, for example, now has 137 dipoles in operation), and found that practically all of the noise could be eliminated by applying a set of criteria to the data.

  10. Deep Wideband Single Pointings and Mosaics in Radio Interferometry: How Accurately Do We Reconstruct Intensities and Spectral Indices of Faint Sources?

    NASA Astrophysics Data System (ADS)

    Rau, U.; Bhatnagar, S.; Owen, F. N.

    2016-11-01

    Many deep wideband wide-field radio interferometric surveys are being designed to accurately measure intensities, spectral indices, and polarization properties of faint source populations. In this paper, we compare various wideband imaging methods to evaluate the accuracy to which intensities and spectral indices of sources close to the confusion limit can be reconstructed. We simulated a wideband single-pointing (C-array, L-Band (1-2 GHz)) and 46-pointing mosaic (D-array, C-Band (4-8 GHz)) JVLA observation using a realistic brightness distribution ranging from 1 μJy to 100 mJy and time-, frequency-, polarization-, and direction-dependent instrumental effects. The main results from these comparisons are (a) errors in the reconstructed intensities and spectral indices are larger for weaker sources even in the absence of simulated noise, (b) errors are systematically lower for joint reconstruction methods (such as Multi-Term Multi-Frequency-Synthesis (MT-MFS)) along with A-Projection for accurate primary beam correction, and (c) use of MT-MFS for image reconstruction eliminates Clean-bias (which is present otherwise). Auxiliary tests include solutions for deficiencies of data partitioning methods (e.g., the use of masks to remove clean bias and hybrid methods to remove sidelobes from sources left un-deconvolved), the effect of sources not at pixel centers, and the consequences of various other numerical approximations within software implementations. This paper also demonstrates the level of detail at which such simulations must be done in order to reflect reality, enable one to systematically identify specific reasons for every trend that is observed, and to estimate scientifically defensible imaging performance metrics and the associated computational complexity of the algorithms/analysis procedures. The National Radio Astronomy Observatory is a facility of the National Science Foundation operated under cooperative agreement by Associated Universities, Inc.

  11. Lattice Boltzmann formulation for conjugate heat transfer in heterogeneous media.

    PubMed

    Karani, Hamid; Huber, Christian

    2015-02-01

    In this paper, we propose an approach for studying conjugate heat transfer using the lattice Boltzmann method (LBM). The approach is based on reformulating the lattice Boltzmann equation for solving the conservative form of the energy equation. This leads to the appearance of a source term, which introduces the jump conditions at the interface between two phases or components with different thermal properties. The proposed source term formulation conserves conductive and advective heat flux simultaneously, which makes it suitable for modeling conjugate heat transfer in general multiphase or multicomponent systems. The simple implementation of the source term approach avoids any correction of distribution functions neighboring the interface and provides an algorithm that is independent of the topology of the interface. Moreover, our approach is independent of the choice of lattice discretization and can be easily applied to different advection-diffusion LBM solvers. The model is tested against several benchmark problems, including steady-state convection-diffusion within two fluid layers with interfaces parallel and normal to the flow direction, unsteady conduction in a three-layer stratified domain, and steady conduction in a two-layer annulus. The LBM results are in excellent agreement with the analytical solutions. Error analysis shows that our model is first-order accurate in space, but an extension to a second-order scheme is straightforward. We apply our LBM model to heat transfer in a two-component heterogeneous medium with a random microstructure. This example highlights that the method we propose is independent of the topology of interfaces between the different phases and, as such, is ideally suited for complex natural heterogeneous media. We further validate the present LBM formulation with a study of natural convection in a porous enclosure. The results confirm the reliability of the model in simulating complex coupled fluid and thermal dynamics in complex geometries.
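
    A minimal sketch of the source-term idea in an LBM setting: a D1Q3 lattice Boltzmann solver for the 1D diffusion equation with a volumetric source added at the collision step. The paper's conjugate-heat formulation is more general; this only illustrates how a source term enters the scheme. Lattice units are assumed (dx = dt = 1, cs² = 1/3).

    ```python
    # Hedged sketch: D1Q3 LBM for 1D diffusion with a volumetric source term.
    import numpy as np

    nx, steps = 100, 2000
    tau = 0.8                            # alpha = cs^2 * (tau - 0.5) = 0.1
    w = np.array([2/3, 1/6, 1/6])        # weights for velocities 0, +1, -1
    S = np.zeros(nx); S[nx // 2] = 1e-3  # point heat source in the middle

    T = np.zeros(nx)
    f = w[:, None] * T[None, :]          # initialize at equilibrium
    for _ in range(steps):
        feq = w[:, None] * T[None, :]
        f += -(f - feq) / tau + w[:, None] * S[None, :]   # collide + source
        f[1] = np.roll(f[1], 1)          # stream right-movers
        f[2] = np.roll(f[2], -1)         # stream left-movers (periodic BCs)
        T = f.sum(axis=0)                # macroscopic temperature

    print(f"peak temperature: {T.max():.3f} at x = {T.argmax()}")
    ```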

  12. Observational data on the effects of infection by the copepod Salmincola californiensis on the short- and long-term viability of juvenile Chinook salmon (Oncorhynchus tshawytscha) implanted with telemetry tags

    USGS Publications Warehouse

    Beeman, John W.; Hansen, Amy C.; Sprando, Jamie M.

    2015-01-01

    Infection with Salmincola californiensis is common in juvenile Chinook salmon in western USA reservoirs and may affect the viability of fish used in studies of telemetered animals. Our limited assessment suggests that infection by Salmincola californiensis affects the short-term mortality of tagged fish and may affect their long-term viability after release; however, the intensity of infection in the sample population did not represent the source population, owing to the observational nature of the data. We suggest these results warrant further study into the effects of infection by Salmincola californiensis on the results obtained through active telemetry, and perhaps other methods requiring handling of infected fish.

  13. A new aerodynamic integral equation based on an acoustic formula in the time domain

    NASA Technical Reports Server (NTRS)

    Farassat, F.

    1984-01-01

    An aerodynamic integral equation for bodies moving at transonic and supersonic speeds is presented. It is based on a time-dependent acoustic formula for calculating the noise emanating from the outer portion of a propeller blade travelling at high speed (the Ffowcs Williams-Hawkings formulation), in which the loading term and a conventional thickness source term are retained. Two surface integrals and three line integrals are employed to solve an equation for the loading noise. The near-field term is regularized using the collapsing-sphere approach to obtain semiconvergence on the blade surface. A singular integral equation is thereby derived for the unknown surface pressure, which is amenable to numerical solution using Galerkin or collocation methods. The technique is useful for studying the nonuniform inflow to the propeller.
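
    The final step, discretizing an integral equation by collocation, can be illustrated on a nonsingular Fredholm equation of the second kind (the aerodynamic equation above is singular and must first be regularized, as described). The Nystrom-collocation sketch below uses a manufactured kernel and right-hand side so the exact solution is known; everything in it is illustrative.

        import numpy as np

        # Solve u(x) - \int_0^1 K(x, y) u(y) dy = f(x) by collocating at
        # Gauss-Legendre quadrature nodes (Nystrom method).
        n = 40
        x, w = np.polynomial.legendre.leggauss(n)   # nodes/weights on [-1, 1]
        x = 0.5 * (x + 1.0)                          # map to [0, 1]
        w = 0.5 * w

        K = np.outer(x, x)                           # kernel K(x, y) = x * y
        f = (2.0 / 3.0) * x                          # manufactured so u(x) = x

        A = np.eye(n) - K * w                        # A[i, j] = d_ij - K_ij * w_j
        u = np.linalg.solve(A, f)

        print("max error vs exact u(x) = x:", np.abs(u - x).max())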

  14. On recontamination and directional-bias problems in Monte Carlo simulation of PDF turbulence models

    NASA Technical Reports Server (NTRS)

    Hsu, Andrew T.

    1991-01-01

    Turbulent combustion cannot be simulated adequately by conventional moment-closure turbulence models. The difficulty lies in the fact that the reaction rate is in general an exponential function of the temperature, so the higher-order correlations in conventional moment-closure models of the chemical source term cannot be neglected, making the application of such models impractical. The probability density function (pdf) method offers an attractive alternative: in a pdf model, the chemical source terms are closed and do not require additional models. A grid-dependent Monte Carlo scheme was studied, since it is a logical alternative: the number of computer operations increases only linearly with the number of independent variables, compared with the exponential increase in a conventional finite difference scheme. A new algorithm was devised that satisfies a conservation restriction in the case of pure diffusion or uniform flow problems. Although absolute conservation seems impossible for nonuniform flows, the present scheme reduces the error considerably.
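
    The motivating claim, that the exponential temperature dependence of the reaction rate makes moment closure fail, is easy to demonstrate numerically. The sketch below (illustrative constants, not from the paper) compares the rate evaluated at the mean temperature with the mean of the rate over temperature fluctuations; the two differ substantially, and the gap widens rapidly with the fluctuation level, which is why closing the chemical source term in composition space pays off.

        import numpy as np

        rng = np.random.default_rng(0)

        # Arrhenius-type rate k(T) = A * exp(-Ta / T); illustrative constants.
        A, Ta = 1.0e10, 15000.0                  # prefactor, activation temperature [K]
        k = lambda T: A * np.exp(-Ta / T)

        Tmean, Trms = 1500.0, 200.0              # mean and rms fluctuation [K]
        T = np.clip(rng.normal(Tmean, Trms, 1_000_000), 300.0, None)  # keep T > 0

        print("k(mean T)    =", k(Tmean))        # moment-closure-style estimate
        print("mean of k(T) =", k(T).mean())     # pdf (Monte Carlo) estimate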

  15. Antimatter Production for Near-Term Propulsion Applications

    NASA Technical Reports Server (NTRS)

    Schmidt, G. R.; Gerrish, H. P.; Martin, J. J.; Smith, G. A.; Meyer, K. J.

    1999-01-01

    The superior energy density of antimatter annihilation has often been pointed to as the ultimate source of energy for propulsion. However, the limited capacity and very low efficiency of present-day antiproton production methods suggest that antimatter may be too costly to consider for near-term propulsion applications. We address this issue by assessing the antimatter requirements for six different types of propulsion concepts, including two in which antiprotons are used to drive energy release from combined fission/fusion. These requirements are compared against the capacity of both the current antimatter production infrastructure and the improved capabilities that could exist within the early part of the next century. Results show that although it may be impractical to consider systems that rely on antimatter as the sole source of propulsive energy, the requirements for propulsion based on antimatter-assisted fission/fusion do fall within projected near-term production capabilities. In fact, such systems could feasibly support interstellar precursor missions and omniplanetary spaceflight with antimatter costs ranging up to $60 million per mission.

  16. Slicer Method Comparison Using Open-source 3D Printer

    NASA Astrophysics Data System (ADS)

    Ariffin, M. K. A. Mohd; Sukindar, N. A.; Baharudin, B. T. H. T.; Jaafar, C. N. A.; Ismail, M. I. S.

    2018-01-01

    Open-source 3D printers have become a popular choice for fabricating 3D models, as the technology is easily accessible and low in cost. However, several studies have sought to improve the performance of this low-cost technology in terms of the accuracy of the finished parts. This study focuses on the choice of slicer between CuraEngine and Slic3r, whose effects are compared in terms of dimensional accuracy and surface visualization. The results show that when accuracy is the top priority, CuraEngine is the better option, as it produces more accurate parts and requires less filament than Slic3r. Slic3r may be very useful for complicated parts, such as overhanging structures, because its excess material acts as support material. The study provides a basic platform for users to decide which option to use when fabricating a 3D model.

  17. Nodal Green’s Function Method Singular Source Term and Burnable Poison Treatment in Hexagonal Geometry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    A.A. Bingham; R.M. Ferrer; A.M. Ougouag

    2009-09-01

    An accurate and computationally efficient two- or three-dimensional neutron diffusion model will be necessary for the development, safety parameter computation, and fuel cycle analysis of a prismatic Very High Temperature Reactor (VHTR) design under the Next Generation Nuclear Plant Project (NGNP). For this purpose, an analytical nodal Green's function solution for the transverse integrated neutron diffusion equation is developed in two- and three-dimensional hexagonal geometry. This scheme is incorporated into HEXPEDITE, a code first developed by Fitzpatrick and Ougouag. HEXPEDITE neglects non-physical discontinuity terms that arise in the transverse leakage when the transverse integration procedure is applied to hexagonal geometry, and it cannot account for the effects of burnable poisons across nodal boundaries. The test code developed for this work accounts for these terms by maintaining an inventory of neutrons, using the nodal balance equation as a constraint on the neutron flux equation. The method developed in this report is intended to restore neutron conservation and increase the accuracy of the code by adding these terms to the transverse integrated flux solution and applying the nodal Green's function solution to the resulting equation to derive a semi-analytical solution.

  18. What do popular Spanish women's magazines say about caesarean section? A 21-year survey

    PubMed Central

    Torloni, MR; Campos Mansilla, B; Merialdi, M; Betrán, AP

    2014-01-01

    Objectives Caesarean section (CS) rates are increasing worldwide and maternal request is cited as one of the main reasons for this trend. Women's preferences for route of delivery are influenced by popular media, including magazines. We assessed the information on CS presented in Spanish women's magazines. Design Systematic review. Setting Women's magazines printed from 1989 to 2009 with the largest national distribution. Sample Articles with any information on CS. Methods Articles were selected, read, and abstracted in duplicate. Sources of information, scientific accuracy, comprehensiveness, and women's testimonials were objectively extracted using a content analysis form designed for this study. Main outcome measures Accuracy, comprehensiveness, and sources of information. Results Most (67%) of the 1223 selected articles presented exclusively personal opinions/birth stories, 12% reported the potential benefits of CS, 26% mentioned the short-term and 10% the long-term maternal risks, and 6% highlighted the perinatal risks of CS. The most frequent short-term risks were the increased time for maternal recovery (n = 86), frustration/feelings of failure (n = 83), and increased post-surgical pain (n = 71). The most frequently cited long-term risks were uterine rupture (n = 57) and the need for another CS in any subsequent pregnancy (n = 42). Less than 5% of the selected articles reported that CS could increase the risks of infection (n = 53), haemorrhage (n = 31), or placenta praevia/accreta in future pregnancies (n = 6). The sources of information were not reported by 68% of the articles. Conclusions The portrayal of CS in Spanish women's magazines is not sufficiently comprehensive and does not provide the important information readers need to understand the real benefits and risks of this route of delivery. PMID:24467797

  19. Anomalous Low States and Long Term Variability in the Black Hole Binary LMC X-3

    NASA Technical Reports Server (NTRS)

    Smale, Alan P.; Boyd, Patricia T.

    2012-01-01

    Rossi X-ray Timing Explorer observations of the black hole binary LMC X-3 reveal an extended very low X-ray state lasting from 2003 December 13 until 2004 March 18, unprecedented both in terms of its low luminosity (>15 times fainter than ever before seen in this source) and its long duration (approximately 3 times longer than a typical low/hard state excursion). During this event little to no source variability is observed on timescales of hours to weeks, and the X-ray spectrum implies an upper limit of 1.2 × 10^35 erg/s. Five years later another extended low state occurred, lasting from 2008 December 11 until 2009 June 17. This event lasted nearly twice as long as the first, and while significant variability is observed, the source remains reliably in the low/hard spectral state for the approximately 188 day duration. These episodes share some characteristics with the "anomalous low states" in the neutron star binary Her X-1. The average period and amplitude of the variability of LMC X-3 differ between these episodes. We characterize the long-term variability of LMC X-3 before and after the two events using conventional and nonlinear time series analysis methods, and show that, as is the case in Her X-1, the characteristic amplitude of the variability is related to its characteristic timescale. Furthermore, the relation is in the same direction in both systems. This suggests that a similar mechanism gives rise to the long-term variability, which in the case of Her X-1 is reliably modeled with a tilted, warped, precessing accretion disk.

  20. LOOKING FOR GRANULATION AND PERIODICITY IMPRINTS IN THE SUNSPOT TIME SERIES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lopes, Ilídio; Silva, Hugo G., E-mail: ilidio.lopes@tecnico.ulisboa.pt, E-mail: hgsilva@uevora.pt

    2015-05-10

    The sunspot activity is the end result of the cyclic destruction and regeneration of magnetic fields by the dynamo action. We propose a new method to analyze the daily sunspot area data recorded since 1874. By computing the power spectral density of the daily data series using the Mexican hat wavelet, we found a power spectrum with a well-defined shape, characterized by three features. The first term is the 22 yr solar magnetic cycle, estimated in our work to be 18.43 yr. The second term is related to the daily volatility of sunspots. This term is most likely produced by the turbulent motions linked to the solar granulation. The last term corresponds to a periodic source associated with the solar magnetic activity, for which the maximum power spectral density occurs at 22.67 days. This value is part of the 22–27 day periodicity region that shows an above-average intensity in the power spectra. The origin of this 22.67 day periodic process is not clearly identified, and there is a possibility that it is produced by convective flows inside the star. The study clearly shows a north–south asymmetry. The 18.43 yr periodic source is correlated between the two hemispheres, but the 22.67 day one is not. It is shown that toward the large timescales an excess occurs in the northern hemisphere, especially near the previous two periodic sources. To further investigate the 22.67 day periodicity, we performed a Lomb–Scargle spectral analysis. The study suggests that this periodicity is distinct from others found nearby.
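
    Since the 22.67 day detection rests on periodogram analysis of an unevenly sampled daily record, a minimal Lomb-Scargle sketch may help; the synthetic series, gap pattern, and noise level below are invented stand-ins for the real sunspot-area data.

        import numpy as np
        from scipy.signal import lombscargle

        rng = np.random.default_rng(1)

        # Synthetic daily series: a 22.67-day modulation plus noise, sampled
        # on an irregular subset of days to mimic gaps in the record.
        t = np.sort(rng.choice(np.arange(20000.0), size=15000, replace=False))
        y = np.sin(2 * np.pi * t / 22.67) + 0.5 * rng.standard_normal(t.size)

        periods = np.linspace(15.0, 35.0, 4000)   # trial periods [days]
        omega = 2 * np.pi / periods               # angular frequencies
        power = lombscargle(t, y - y.mean(), omega)

        print("recovered period [days]:", periods[np.argmax(power)])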

  1. Efficient Development of High Fidelity Structured Volume Grids for Hypersonic Flow Simulations

    NASA Technical Reports Server (NTRS)

    Alter, Stephen J.

    2003-01-01

    A new technique for the control of grid line spacing and intersection angles of a structured volume grid, using elliptic partial differential equations (PDEs), is presented. Existing structured grid generation algorithms make use of source term hybridization to provide control of grid lines, imposing orthogonality implicitly at the boundary and explicitly on the interior of the domain. A bridging function between the two types of grid line control is typically used to blend the different orthogonality formulations. It is shown that utilizing such a bridging function with source term hybridization can result in excessive use of computational resources and diminished robustness. A new approach, Anisotropic Lagrange-Based Trans-Finite Interpolation (ALBTFI), is offered as a replacement for source term hybridization. The ALBTFI technique captures the essence of the desired grid controls while improving the convergence rate of the elliptic PDEs when compared with source term hybridization. Grid generation on a blunt cone and a Shuttle Orbiter is used to demonstrate and assess the ALBTFI technique, which is shown to be as much as 50% faster and more robust, and to produce higher quality grids, than source term hybridization.
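
    For readers unfamiliar with elliptic grid generation, the sketch below shows the baseline that techniques like source term hybridization and ALBTFI build on: Winslow-type elliptic smoothing of an algebraic initial grid, with the control functions (the "source terms" P and Q that steer spacing and orthogonality) set to zero. It is a generic illustration, not the ALBTFI algorithm; all sizes and boundary shapes are invented.

        import numpy as np

        ni, nj = 41, 21
        xi = np.linspace(0.0, 1.0, ni)
        eta = np.linspace(0.0, 1.0, nj)

        # Boundaries: unit square with a sinusoidal bump on the bottom edge.
        xb, yb = xi, 0.15 * np.sin(np.pi * xi)        # bottom
        xt, yt = xi, np.ones(ni)                      # top

        # Algebraic initial grid by simple linear transfinite interpolation.
        x = np.outer(np.ones(nj), xb)                 # shape (nj, ni)
        y = np.outer(1 - eta, yb) + np.outer(eta, yt)

        for _ in range(500):                          # Jacobi smoothing sweeps
            x_xi  = 0.5 * (x[1:-1, 2:] - x[1:-1, :-2])
            y_xi  = 0.5 * (y[1:-1, 2:] - y[1:-1, :-2])
            x_eta = 0.5 * (x[2:, 1:-1] - x[:-2, 1:-1])
            y_eta = 0.5 * (y[2:, 1:-1] - y[:-2, 1:-1])
            a = x_eta**2 + y_eta**2                   # alpha
            g = x_xi**2 + y_xi**2                     # gamma
            b = x_xi * x_eta + y_xi * y_eta           # beta
            for u in (x, y):
                # Point update of a*u_xixi - 2*b*u_xieta + g*u_etaeta = 0
                # (control functions P = Q = 0 in this sketch).
                u_x = 0.25 * (u[2:, 2:] - u[2:, :-2] - u[:-2, 2:] + u[:-2, :-2])
                u[1:-1, 1:-1] = (a * (u[1:-1, 2:] + u[1:-1, :-2])
                                 + g * (u[2:, 1:-1] + u[:-2, 1:-1])
                                 - 2 * b * u_x) / (2 * (a + g) + 1e-12)

        print("sample interior node:", x[1, 1], y[1, 1])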

  2. BWR ASSEMBLY SOURCE TERMS FOR WASTE PACKAGE DESIGN

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    T.L. Lotz

    1997-02-15

    This analysis is prepared by the Mined Geologic Disposal System (MGDS) Waste Package Development Department (WPDD) to provide boiling water reactor (BWR) assembly radiation source term data for use during Waste Package (WP) design. The BWR assembly radiation source terms are to be used for evaluation of radiolysis effects at the WP surface, and for personnel shielding requirements during assembly or WP handling operations. The objectives of this evaluation are to generate BWR assembly radiation source terms that bound selected groupings of BWR assemblies, with regard to assembly average burnup and cooling time, which comprise the anticipated MGDS BWR commercial spent nuclear fuel (SNF) waste stream. The source term data are to be provided in a form that can easily be utilized in subsequent shielding/radiation dose calculations. Since these calculations may also be used for Total System Performance Assessment (TSPA), with appropriate justification provided by TSPA, or for radionuclide release rate analysis, the grams of each element and additional cooling times out to 25 years will also be calculated and the data included in the output files.

  3. Development of a new, completely implantable intraventricular pressure meter and preliminary report of its clinical experience

    NASA Technical Reports Server (NTRS)

    Osaka, K.; Murata, T.; Okamoto, S.; Ohta, T.; Ozaki, T.; Maeda, T.; Mori, K.; Handa, H.; Matsumoto, S.; Sakaguchi, I.

    1982-01-01

    A completely implantable intracranial pressure sensor designed for long-term measurement of intraventricular pressure in hydrocephalic patients is described. The measurement principle of the device is discussed, along with the electronic and component structure and sources of instrument error. Clinical tests of the implanted pressure device, involving both humans and animals, showed it to be comparable to other methods of intracranial pressure measurement.

  4. Verification of Methods for Assessing the Sustainability of Monitored Natural Attenuation (MNA)

    DTIC Science & Technology

    2013-01-01

    ... sugars; TOC, total organic carbon; TSR, thermal source removal; USACE, U.S. Army Corps of Engineers; USEPA, U.S. Environmental Protection Agency; USGS ... the SZD function for long-term DNAPL dissolution simulations. However, the sustainability assessment was easily implemented using an alternative ... neutral sugars [THNS]). Chapelle et al. (2009) suggested THAA and THNS as measures of the bioavailability of organic carbon based on an analysis of ...

  5. Lattice properties of the Phase I BNL x-ray lithography source obtained from fits to magnetic measurement data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Blumberg, L.N.; Murphy, J.B.; Reusch, M.F.

    1991-01-01

    The orbit, tune, chromaticity, and β values for the Phase I XLS ring were computed by numerical integration of the equations of motion, using fields obtained from the coefficients of the three-dimensional solution of Laplace's equation evaluated by fits to magnetic measurements. The results are in good agreement with available data. The method has been extended to higher-order fits of TOSCA-generated fields in planes normal to the reference axis, using the coil configuration proposed for the Superconducting X-Ray Lithography Source. Agreement with results from numerical integration through fields given directly by TOSCA is excellent. The formulation of the normal multipole expansion presented by Brown and Servranckx has been extended to include skew multipole terms. The method appears appropriate for analysis of magnetic measurements of the SXLS. 8 refs., 2 figs., 2 tabs.

  6. Myxococcus xanthus Growth, Development, and Isolation.

    PubMed

    Vaksman, Zalman; Kaplan, Heidi B

    2015-11-03

    Myxobacteria are a highly social group among the delta proteobacteria that display unique multicellular behaviors during their complex life cycle and provide a rare opportunity to study the boundary between single cells and multicellularity. These organisms are also unusual as their entire life cycle is surface associated and includes a number of social behaviors: social gliding and rippling motility, 'wolf-pack'-like predation, and self-organizing complex biostructures, termed fruiting bodies, which are filled with differentiated environmentally resistant spores. Here we present methods for the growth, maintenance, and storage of Myxococcus xanthus, the most commonly studied of the myxobacteria. We also include methods to examine various developmental and social behaviors (fruiting body and spore formation, predation, and rippling motility). As the myxobacteria, similar to the streptomycetes, are excellent sources of many characterized and uncharacterized antibiotics and other natural products, we have provided a protocol for obtaining natural isolates from a variety of environmental sources. Copyright © 2015 John Wiley & Sons, Inc.

  7. The organic analysis and carbon chemistry of lunar samples: Their significance for exobiology; Proceedings of the Conference, University of Maryland, College Park, Md., October 26-28, 1971.

    NASA Technical Reports Server (NTRS)

    1972-01-01

    Various methods used in the organic analysis of lunar samples are reviewed. The scope, advantages, and limitations of these methods are discussed, with particular emphasis on possible sources of contamination and experimental artifacts inherent in their use. A broad survey of the organogenic elements and compounds found in lunar samples covers the search for biogenic structures and viable organisms; the abundance and isotopic composition of various elements and compounds; the search for porphyrins, amino acids, or amino acid precursors; and the presence of heterocyclics, aromatic hydrocarbons, and other organic compounds. The sources of the organogenic elements and compounds detected in lunar samples are discussed. The significance of the lunar organic analysis for exobiology is discussed in terms of its relevance to and implications for studies of chemical evolution and terrestrial organic geochemistry. Individual items are announced in this issue.

  8. CSI-EPT in Presence of RF-Shield for MR-Coils.

    PubMed

    Arduino, Alessandro; Zilberti, Luca; Chiampi, Mario; Bottauscio, Oriano

    2017-07-01

    Contrast source inversion electric properties tomography (CSI-EPT) is a recently developed technique for electric properties tomography that recovers the electric properties distribution from measurements performed by magnetic resonance imaging scanners. This method is an optimal control approach based on the contrast source inversion technique, which distinguishes itself from other electric properties tomography techniques by its capability to recover also the local specific absorption rate distribution, essential for online dosimetry. Up to now, CSI-EPT has only been described in terms of integral equations, limiting its applicability to a homogeneous unbounded background. In order to extend the method to the presence of a shield in the domain, as in the recurring case of shielded radio-frequency coils, a more general formulation of CSI-EPT, based on a functional viewpoint, is introduced here. Two different implementations of CSI-EPT are proposed for a 2-D transverse magnetic model problem, one dealing with an unbounded domain and one considering the presence of a perfectly conductive shield. The two implementations are applied to the same virtual measurements obtained by numerically simulating a shielded radio-frequency coil. The results are compared in terms of both electric properties recovery and local specific absorption rate estimation, in order to investigate the requirement of accurate modeling of the underlying physical problem.

  9. NEXT GENERATION LEACHING TESTS FOR EVALUATING ...

    EPA Pesticide Factsheets

    In the U.S., as in other countries, there is increased interest in using industrial by-products as alternative or secondary materials, helping to conserve virgin or raw materials. The LEAF and associated test methods are being used to develop the source term for leaching of any inorganic constituents of potential concern (COPC) in determining what is environmentally acceptable. The leaching test methods include batch equilibrium, percolation column, and semi-dynamic mass transport tests for monolithic and compacted granular materials. By testing over a range of values for pH, liquid/solid ratio, and physical form of the material, this approach allows one data set to be used to evaluate a range of management scenarios for a material, representing different environmental conditions (e.g., disposal or beneficial use). The results from these tests may be interpreted individually or integrated to identify a solid material's characteristic leaching behavior. Furthermore, the LEAF approach provides the ability to make meaningful comparisons of leaching between similar and dissimilar materials of national and worldwide origins. The purpose here is to present EPA's research under SHC to implement validated leaching tests, referred to as the Leaching Environmental Assessment Framework (LEAF). The primary focus will be on the guidance for implementation of LEAF, describing three case studies for developing source terms for evaluating inorganic constituents.

  10. On the renormalisation of the diffusion asymptotics in the problem of reflection of a narrow optical beam from a biological medium

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Appanov, A Yu; Barabanenkov, Yu N

    2005-12-31

    An analytic hybrid method is considered for solving the stationary radiation transfer equation in the problem of reflection of a narrow laser beam from biological media, such as a 2% aqueous solution of intralipid and an erythrocyte suspension with a volume concentration (hematocrit) H = 0.41. The method is based on the reciprocity of the Green function in radiation transfer theory and on the iterative solution of the integral equation for this function. As a result, the ray intensity is represented as a sum of two terms. The first of them describes the contribution of finite-order scattering to the intensity of a beam diffusely reflected from the medium. The second term contains the explicit analytic expression for a spatially distributed effective source of diffuse radiation emerging from the deep layers of the medium to the surface. This approach substantially improves the diffusion approximation for the problem under study and allows one to obtain the uniform asymptotics of the reflection coefficient over the specified interval of distances between the radiation source and detector on the medium surface, with a relative error within ±6% for the 2% intralipid emulsion and the erythrocyte suspension (H = 0.41).

  11. Village-Level Identification of Nitrate Sources: Collaboration of Experts and Local Population in Benin, Africa

    NASA Astrophysics Data System (ADS)

    Crane, P.; Silliman, S. E.; Boukari, M.; Atoro, I.; Azonsi, F.

    2005-12-01

    Deteriorating groundwater quality, as indicated by high nitrate levels, in the Colline province of Benin, West Africa, was identified by the Benin national water agency, Direction Hydraulique. For unknown reasons, the Colline province had consistently higher nitrate levels than any other region of the country. In an effort to address this water quality issue, a collaborative team was created that incorporated professionals from the Universite d'Abomey-Calavi (Benin), the University of Notre Dame (USA), Direction Hydraulique (a government water agency in Benin), Centre Afrika Obota (an educational NGO in Benin), and the local population of the village of Adourekoman. The goals of the project were to: (i) identify the source of nitrates, (ii) test field techniques for long-term, local monitoring, and (iii) identify possible solutions to the high levels of groundwater nitrates. In order to accomplish these goals, the following methods were utilized: regional sampling of groundwater quality, field methods that allowed the local population to regularly monitor village groundwater quality, isotopic analysis, and the sociological methods of surveys, focus groups, and observations. It is through the combination of these multi-disciplinary methods that all three goals were successfully addressed, leading to preliminary identification of the sources of nitrates in the village of Adourekoman, confirmation of the utility of the field techniques, and an initial assessment of possible solutions to the contamination problem.

  12. Analysis of the application of selected physico-chemical methods in eliminating odor nuisance of municipal facilities

    NASA Astrophysics Data System (ADS)

    Miller, Urszula; Grzelka, Agnieszka; Romanik, Elżbieta; Kuriata, Magdalena

    2018-01-01

    Operation of municipal management facilities is inseparable from the problem of emissions of malodorous compounds to the atmospheric air. In this case, odor nuisance is related to the chemical composition of the waste, sewage, and sludge, as well as to the activity of microorganisms whose metabolic products can themselves be odorous compounds. A significant reduction of odorant emission from many sources can be achieved by optimizing process parameters and conditions. However, it is not always possible to limit the formation of odorants; in such cases it is best to use appropriate deodorization methods. The choice of an appropriate method is based on the physical parameters of the polluted gases, the intensity of their emission, and their composition, where this can be determined. Among the solutions used in municipal management, physico-chemical methods such as sorption and oxidation can be distinguished. In cases where the emission source is not enclosed, odor-masking techniques are used, which consist of spraying preparations that neutralize unpleasant odors. The paper presents the characteristics of selected methods of eliminating odor nuisance and evaluates their applicability in municipal management facilities.

  13. The "Overdrive" Mode in the "Complete Vocal Technique": A Preliminary Study.

    PubMed

    Sundberg, Johan; Bitelli, Maddalena; Holmberg, Annika; Laaksonen, Ville

    2017-09-01

    "Complete Vocal Technique," or CVT, is an internationally widespread method for teaching voice. It classifies voicing into four types, referred to as "vocal modes," one of which is called "Overdrive." The physiological correlates of these types are unclear. This study presents an attempt to analyze its voice source and formant frequency characteristics. A male and a female expert of CVT sang a set of "Overdrive" and falsetto tones on the syllable /pᴂ/. The voice source could be analyzed by inverse filtering in the case of the male subject. Results showed that subglottal pressure, measured as the oral pressure during /p/ occlusion, was low in falsetto and high in "Overdrive", and it was strongly correlated with each of the voice source parameters. These correlations could be described in terms of equations. The deviations from these equations of the different voice source parameters for the various voice samples suggested that "Overdrive" phonation was produced with stronger vocal fold adduction than the falsetto tones. Further, the subject was also found to tune the first formant to the second partial in "Overdrive" tones. The results support the conclusion that the method used, to compensate for the influence of subglottal pressure on the voice source, seems promising to use for analyses of other CVT vocal modes and also for other types of phonation. Copyright © 2017 The Voice Foundation. Published by Elsevier Inc. All rights reserved.

  14. 26 CFR 31.3401(a)(14)-1 - Group-term life insurance.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 26 Internal Revenue 15 2010-04-01 2010-04-01 false Group-term life insurance. 31.3401(a)(14)-1... SOURCE Collection of Income Tax at Source § 31.3401(a)(14)-1 Group-term life insurance. (a) The cost of group-term life insurance on the life of an employee is excepted from wages, and hence is not subject to...

  15. 26 CFR 31.3401(a)(14)-1 - Group-term life insurance.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 26 Internal Revenue 15 2011-04-01 2011-04-01 false Group-term life insurance. 31.3401(a)(14)-1... SOURCE Collection of Income Tax at Source § 31.3401(a)(14)-1 Group-term life insurance. (a) The cost of group-term life insurance on the life of an employee is excepted from wages, and hence is not subject to...

  16. 26 CFR 31.3401(a)(14)-1 - Group-term life insurance.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 26 Internal Revenue 15 2012-04-01 2012-04-01 false Group-term life insurance. 31.3401(a)(14)-1... SOURCE Collection of Income Tax at Source § 31.3401(a)(14)-1 Group-term life insurance. (a) The cost of group-term life insurance on the life of an employee is excepted from wages, and hence is not subject to...

  17. 26 CFR 31.3401(a)(14)-1 - Group-term life insurance.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 26 Internal Revenue 15 2014-04-01 2014-04-01 false Group-term life insurance. 31.3401(a)(14)-1... SOURCE Collection of Income Tax at Source § 31.3401(a)(14)-1 Group-term life insurance. (a) The cost of group-term life insurance on the life of an employee is excepted from wages, and hence is not subject to...

  18. 26 CFR 31.3401(a)(14)-1 - Group-term life insurance.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 26 Internal Revenue 15 2013-04-01 2013-04-01 false Group-term life insurance. 31.3401(a)(14)-1... SOURCE Collection of Income Tax at Source § 31.3401(a)(14)-1 Group-term life insurance. (a) The cost of group-term life insurance on the life of an employee is excepted from wages, and hence is not subject to...

  19. Fish-Eye Observing with Phased Array Radio Telescopes

    NASA Astrophysics Data System (ADS)

    Wijnholds, S. J.

    The radio astronomical community is currently developing and building several new radio telescopes based on phased array technology. These telescopes provide a large field of view that may, in principle, span a full hemisphere. This makes calibration and imaging very challenging tasks, due to the complex source structures and direction-dependent radio wave propagation effects. In this thesis, calibration and imaging methods are developed based on least squares estimation of instrument and source parameters. Monte Carlo simulations and actual observations with several prototypes show that this model-based approach provides statistically and computationally efficient solutions. The error analysis provides a rigorous mathematical framework to assess the imaging performance of current and future radio telescopes in terms of the effective noise, which is the combined effect of propagated calibration errors, noise in the data, and source confusion.
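
    A minimal example of the least-squares instrument calibration referred to here: per-antenna complex gains estimated from visibilities by alternating linear least squares (StEFCal-style updates). The array size, gain model, and point-source sky below are invented for illustration.

        import numpy as np

        rng = np.random.default_rng(2)
        nant = 8

        # Simulated measurement R = G M G^H (+ noise), where G = diag(g) and
        # M holds the model visibilities of a unit point source (all ones).
        g_true = ((1 + 0.2 * rng.standard_normal(nant))
                  * np.exp(1j * rng.uniform(-np.pi, np.pi, nant)))
        M = np.ones((nant, nant), dtype=complex)
        R = np.outer(g_true, g_true.conj()) * M
        R += 0.01 * (rng.standard_normal(R.shape)
                     + 1j * rng.standard_normal(R.shape))

        # Alternating least squares: each antenna's gain is a 1-D linear
        # least-squares solve against the current model; 0.5 relaxation
        # between iterations aids convergence.
        g = np.ones(nant, dtype=complex)
        for _ in range(50):
            Z = g[:, None] * M                     # Z[:, p] = g * M[:, p]
            g_new = (R.conj() * Z).sum(axis=0) / (np.abs(Z) ** 2).sum(axis=0)
            g = 0.5 * (g + g_new)

        # Gains are recoverable only up to a common phase; reference them to
        # antenna 0 before comparing with the truth.
        g *= np.exp(-1j * np.angle(g[0])) * np.exp(1j * np.angle(g_true[0]))
        print("max |g - g_true|:", np.abs(g - g_true).max())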

  20. Emission of Sound from Turbulence Convected by a Parallel Mean Flow in the Presence of a Confining Duct

    NASA Technical Reports Server (NTRS)

    Goldstein, Marvin E.; Leib, Stewart J.

    1999-01-01

    An approximate method for calculating the noise generated by a turbulent flow within a semi-infinite duct of arbitrary cross section is developed. It is based on a previously derived high-frequency solution to Lilley's equation, which describes the sound propagation in a transversely sheared mean flow. The source term is simplified by assuming the turbulence to be axisymmetric about the mean flow direction. Numerical results are presented for the special case of a ring source in a circular duct with an axisymmetric mean flow. They show that the internally generated noise is suppressed at sufficiently large upstream angles in a hard-walled duct, and that acoustic liners can significantly reduce the sound radiated in both the upstream and downstream regions, depending upon the source location and the Mach number of the flow.
