Sample records for source term model

  1. Source Term Model for Vortex Generator Vanes in a Navier-Stokes Computer Code

    NASA Technical Reports Server (NTRS)

    Waithe, Kenrick A.

    2004-01-01

    A source term model for an array of vortex generators was implemented into a non-proprietary Navier-Stokes computer code, OVERFLOW. The source term models the side force created by a vortex generator vane. The model is obtained by introducing into the momentum and energy equations a side force that adjusts its strength automatically based on the local flow. The model was tested and calibrated by comparing data from numerical simulations and experiments of a single low-profile vortex generator vane on a flat plate. In addition, the model was compared to experimental data of an S-duct with 22 co-rotating, low-profile vortex generators. The source term model allowed a grid reduction of about seventy percent compared with numerical simulations of a fully gridded vortex generator on a flat plate, without adversely affecting the development and capture of the vortex created. The source term model predicted the shape and size of the streamwise vorticity and velocity contours very well when compared with both numerical simulations and experimental data. The peak vorticity and its location were also predicted very well when compared to numerical simulations and experimental data. The circulation predicted by the source term model matches the prediction of the numerical simulation. The source term model predicted the engine fan face distortion and total pressure recovery of the S-duct with 22 co-rotating vortex generators very well. The source term model allows a researcher to quickly investigate different locations of an individual vortex generator or a row of vortex generators, and to conduct a preliminary investigation with minimal grid generation and computational time.
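
    The general idea of such a model can be sketched in a few lines. Below is a minimal illustration (not Waithe's actual OVERFLOW implementation; the function name, calibration constant, and geometry handling are assumptions for illustration): each cell occupied by the modeled vane receives a side-force contribution that scales with the local dynamic pressure and the local incidence of the flow onto the vane, which is what lets the model adjust its strength automatically.

      import numpy as np

      def vane_side_force(rho, u, vane_normal, vane_tangent, area, c_vg=10.0):
          """Side-force source term for one cell occupied by a modeled vane.

          rho          : local density
          u            : local velocity vector, shape (3,)
          vane_normal  : unit normal of the vane surface
          vane_tangent : unit vector along the vane chord
          area         : vane planform area apportioned to this cell
          c_vg         : empirical calibration constant (assumed value)
          """
          speed = np.linalg.norm(u)
          if speed == 0.0:
              return np.zeros(3)
          u_hat = u / speed
          # Strength follows the local incidence of the flow onto the vane,
          # so the source adapts automatically to the local flow.
          incidence = np.dot(u_hat, vane_normal)
          f = c_vg * area * rho * speed**2 * incidence * np.cross(u_hat, vane_tangent)
          return f  # added to the momentum equations; np.dot(u, f) feeds the energy equation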

  2. INEEL Subregional Conceptual Model Report Volume 3: Summary of Existing Knowledge of Natural and Anthropogenic Influences on the Release of Contaminants to the Subsurface Environment from Waste Source Terms at the INEEL

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Paul L. Wichlacz

    2003-09-01

    This source-term summary document is intended to describe the current understanding of contaminant source terms and the conceptual model for potential source-term release to the environment at the Idaho National Engineering and Environmental Laboratory (INEEL), as presented in published INEEL reports. The document presents a generalized conceptual model of the sources of contamination and describes the general categories of source terms, primary waste forms, and factors that affect the release of contaminants from the waste form into the vadose zone and Snake River Plain Aquifer. Where the information has previously been published and is readily available, summaries of the inventory of contaminants are also included. Uncertainties that affect the estimation of the source term release are also discussed where they have been identified by the Source Term Technical Advisory Group. Areas in which additional information is needed (i.e., research needs) are also identified.

  3. Source term model evaluations for the low-level waste facility performance assessment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yim, M.S.; Su, S.I.

    1995-12-31

    The estimation of release of radionuclides from various waste forms to the bottom boundary of the waste disposal facility (source term) is one of the most important aspects of LLW facility performance assessment. In this work, several currently used source term models are comparatively evaluated for the release of carbon-14 based on a test case problem. The models compared include PRESTO-EPA-CPG, IMPACTS, DUST and NEFTRAN-II. Major differences in assumptions and approaches between the models are described and key parameters are identified through sensitivity analysis. The source term results from different models are compared and other concerns or suggestions are discussed.

  4. Source Term Model for Steady Micro Jets in a Navier-Stokes Computer Code

    NASA Technical Reports Server (NTRS)

    Waithe, Kenrick A.

    2005-01-01

    A source term model for steady micro jets was implemented into a non-proprietary Navier-Stokes computer code, OVERFLOW. The source term models the mass flow and momentum created by a steady blowing micro jet. The model is obtained by adding the momentum and mass flow created by the jet to the Navier-Stokes equations. The model was tested by comparing with data from numerical simulations of a single, steady micro jet on a flat plate in two and three dimensions. The source term model predicted the velocity distribution well when compared with a two-dimensional flat plate simulation that used a steady mass flow boundary condition to represent the micro jet. The model was also compared with two three-dimensional flat plate cases using a steady mass flow boundary condition to simulate a steady micro jet. The three-dimensional comparison included a case with a grid generated to capture the circular shape of the jet and a case without a grid generated for the micro jet; the case without the jet grid mimics the application of the source term. The source term model compared well with both of the three-dimensional cases. Comparisons of the velocity distribution were made upstream and downstream of the jet, and Mach and vorticity contours were examined. The source term model allows a researcher to quickly investigate different locations of individual or several steady micro jets, and to conduct a preliminary investigation with minimal grid generation and computational time.
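
    A minimal sketch of the bookkeeping this kind of model performs (illustrative only; the function and variable names are assumptions, and a real implementation would add these terms inside the flow solver's residual): the jet's mass flow and momentum are converted to per-unit-volume sources spread over the cells the jet occupies.

      import numpy as np

      def micro_jet_sources(mdot, u_jet, cell_volumes):
          """Per-unit-volume source terms representing a steady blowing micro jet.

          mdot         : jet mass flow rate [kg/s]
          u_jet        : jet exit velocity vector, shape (3,) [m/s]
          cell_volumes : volumes of the cells receiving the source [m^3]
          """
          u_jet = np.asarray(u_jet, dtype=float)
          v_total = float(np.sum(cell_volumes))
          s_mass = mdot / v_total                         # continuity source [kg/(m^3 s)]
          s_momentum = s_mass * u_jet                     # momentum source [N/m^3]
          s_energy = 0.5 * s_mass * np.dot(u_jet, u_jet)  # kinetic-energy contribution [W/m^3]
          return s_mass, s_momentum, s_energy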

  5. Low birth weight and air pollution in California: Which sources and components drive the risk?

    PubMed

    Laurent, Olivier; Hu, Jianlin; Li, Lianfa; Kleeman, Michael J; Bartell, Scott M; Cockburn, Myles; Escobedo, Loraine; Wu, Jun

    2016-01-01

    Intrauterine growth restriction has been associated with exposure to air pollution, but there is a need to clarify which sources and components are most likely responsible. This study investigated the associations between low birth weight (LBW, <2500 g) in term born infants (≥37 gestational weeks) and air pollution by source and composition in California, over the period 2001-2008. Complementary exposure models were used: an empirical Bayesian kriging model for the interpolation of ambient pollutant measurements, a source-oriented chemical transport model (using California emission inventories) that estimated fine and ultrafine particulate matter (PM2.5 and PM0.1, respectively) mass concentrations (4 km × 4 km) by source and composition, a line-source roadway dispersion model at fine resolution, and traffic index estimates. Birth weight was obtained from California birth certificate records. A case-cohort design was used. Five controls per term LBW case were randomly selected (without covariate matching or stratification) from among term births. The resulting datasets were analyzed by logistic regression with a random effect by hospital, using generalized additive mixed models adjusted for race/ethnicity, education, maternal age and household income. In total 72,632 singleton term LBW cases were included. Term LBW was positively and significantly associated with interpolated measurements of ozone but not total fine PM or nitrogen dioxide. No significant association was observed between term LBW and primary PM from all sources grouped together. A positive significant association was observed for secondary organic aerosols. Exposure to elemental carbon (EC), nitrates, and ammonium was also positively and significantly associated with term LBW, but only for exposure during the third trimester of pregnancy. Significant positive associations were observed between term LBW risk and primary PM emitted by on-road gasoline and diesel or by commercial meat cooking sources. Primary PM from wood burning was inversely associated with term LBW. Significant positive associations were also observed between term LBW and ultrafine particle numbers modeled with the line-source roadway dispersion model, traffic density and proximity to roadways. This large study based on complementary exposure metrics suggests that not only primary pollution sources (traffic and commercial meat cooking) but also EC and secondary pollutants are risk factors for term LBW.
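
    The sampling-and-regression step described above can be sketched as follows (a simplified stand-in: the synthetic data, column names, and the plain fixed-effects logit are assumptions; the published analysis used generalized additive mixed models with a random effect by hospital and additional covariates).

      import numpy as np
      import pandas as pd
      import statsmodels.formula.api as smf

      rng = np.random.default_rng(0)
      n = 20000
      births = pd.DataFrame({                       # synthetic term births
          "lbw": rng.binomial(1, 0.03, n),
          "pm25_source": rng.gamma(2.0, 2.0, n),    # exposure from one source
          "maternal_age": rng.normal(29.0, 6.0, n),
          "education": rng.integers(1, 5, n),
          "income": rng.normal(55.0, 20.0, n),
      })

      cases = births[births["lbw"] == 1]
      controls = births[births["lbw"] == 0].sample(n=5 * len(cases), random_state=0)
      sample = pd.concat([cases, controls])         # 5 controls per case

      model = smf.logit(
          "lbw ~ pm25_source + maternal_age + education + income", data=sample
      ).fit()
      print(model.summary())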

  6. Bayesian estimation of a source term of radiation release with approximately known nuclide ratios

    NASA Astrophysics Data System (ADS)

    Tichý, Ondřej; Šmídl, Václav; Hofman, Radek

    2016-04-01

    We are concerned with estimation of a source term in the case of an accidental release from a known location, e.g. a power plant. Usually, the source term of an accidental release of radiation comprises a mixture of nuclides. The gamma dose rate measurements do not provide direct information on the source term composition. However, physical properties of the respective nuclides (deposition properties, decay half-life) can be used when uncertain information on nuclide ratios is available, e.g. from the known reactor inventory. The proposed method is based on a linear inverse model in which the observation vector y arises as a linear combination y = Mx of a source-receptor-sensitivity (SRS) matrix M and the source term x. The task is to estimate the unknown source term x. The problem is ill-conditioned, and further regularization is needed to obtain a reasonable solution. In this contribution, we assume that the nuclide ratios of the release are known with some degree of uncertainty. This knowledge is used to form the prior covariance matrix of the source term x. Due to uncertainty in the ratios, the diagonal elements of the covariance matrix are considered to be unknown. Positivity of the source term estimate is guaranteed by using a multivariate truncated Gaussian distribution. Following a Bayesian approach, we estimate all parameters of the model from the data, so that y, M, and the known ratios are the only inputs of the method. Since the inference of the model is intractable, we follow the variational Bayes method, yielding an iterative algorithm for estimation of all model parameters. Performance of the method is studied on a simulated 6-hour power plant release in which 3 nuclides are released and 2 nuclide ratios are approximately known. A comparison with a method using unknown nuclide ratios is given to demonstrate the usefulness of the proposed approach. This research is supported by EEA/Norwegian Financial Mechanism under project MSMT-28477/2014 Source-Term Determination of Radionuclide Releases by Inverse Atmospheric Dispersion Modelling (STRADI).
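
    To make the linear inverse step concrete, here is a toy stand-in for the estimation problem (all numbers invented; a nonnegative Tikhonov-style solve replaces the paper's truncated-Gaussian variational Bayes inference, and the prior weights built from the approximately known ratios play the role of the prior covariance):

      import numpy as np
      from scipy.optimize import lsq_linear

      rng = np.random.default_rng(1)
      M = rng.uniform(0.0, 1.0, size=(40, 9))            # SRS matrix (stand-in)
      x_true = np.array([5.0, 2.5, 1.0, 4.0, 2.0, 0.8, 3.0, 1.5, 0.6])
      y = M @ x_true + rng.normal(0.0, 0.05, size=40)    # observations

      ratios = x_true / x_true[0] * rng.normal(1.0, 0.2, size=9)  # "approximately known"
      weights = 1.0 / np.abs(ratios)                     # prior weights from the ratios
      lam = 0.1                                          # regularization strength

      # min ||y - M x||^2 + lam ||diag(weights) x||^2  subject to  x >= 0,
      # solved by stacking the regularization rows under the data rows.
      A = np.vstack([M, np.sqrt(lam) * np.diag(weights)])
      rhs = np.concatenate([y, np.zeros(9)])
      x_hat = lsq_linear(A, rhs, bounds=(0.0, np.inf)).x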

  7. High-order scheme for the source-sink term in a one-dimensional water temperature model

    PubMed Central

    Jing, Zheng; Kang, Ling

    2017-01-01

    The source-sink term in water temperature models represents the net heat absorbed or released by a water system. This term is very important because it accounts for solar radiation that can significantly affect water temperature, especially in lakes. However, existing numerical methods for discretizing the source-sink term are very simplistic, causing significant deviations between simulation results and measured data. To address this problem, we present a numerical method specific to the source-sink term. A vertical one-dimensional heat conduction equation was chosen to describe water temperature changes. A two-step operator-splitting method was adopted as the numerical solution. In the first step, using the undetermined coefficient method, a high-order scheme was adopted for discretizing the source-sink term. In the second step, the diffusion term was discretized using the Crank-Nicolson scheme. The effectiveness and capability of the numerical method were assessed by performing numerical tests. Then, the proposed numerical method was applied to a simulation of Guozheng Lake (located in central China). The modeling results were in excellent agreement with measured data. PMID:28264005
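
    A minimal 1-D version of the two-step splitting reads as follows (illustrative: the paper's high-order, undetermined-coefficients scheme for the source-sink term is replaced here by a plain explicit update, and zero-flux boundaries are assumed):

      import numpy as np
      from scipy.linalg import solve_banded

      def split_step(T, S, dt, dz, kappa):
          """One operator-split step of T_t = kappa * T_zz + S.

          Step 1 applies the source-sink term; step 2 applies
          Crank-Nicolson diffusion via a tridiagonal solve.
          """
          T = T + dt * S                        # step 1: source-sink term
          n = len(T)
          r = kappa * dt / (2.0 * dz**2)
          ab = np.zeros((3, n))                 # banded (I - r*L) for solve_banded
          ab[0, 1:] = -r                        # superdiagonal
          ab[1, :] = 1.0 + 2.0 * r              # main diagonal
          ab[2, :-1] = -r                       # subdiagonal
          ab[1, 0] = ab[1, -1] = 1.0 + r        # zero-flux boundaries
          rhs = T.copy()                        # right-hand side: (I + r*L) T
          rhs[1:-1] += r * (T[2:] - 2.0 * T[1:-1] + T[:-2])
          rhs[0] += r * (T[1] - T[0])
          rhs[-1] += r * (T[-2] - T[-1])
          return solve_banded((1, 1), ab, rhs)  # step 2: Crank-Nicolson diffusion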

  8. High-order scheme for the source-sink term in a one-dimensional water temperature model.

    PubMed

    Jing, Zheng; Kang, Ling

    2017-01-01

    The source-sink term in water temperature models represents the net heat absorbed or released by a water system. This term is very important because it accounts for solar radiation that can significantly affect water temperature, especially in lakes. However, existing numerical methods for discretizing the source-sink term are very simplistic, causing significant deviations between simulation results and measured data. To address this problem, we present a numerical method specific to the source-sink term. A vertical one-dimensional heat conduction equation was chosen to describe water temperature changes. A two-step operator-splitting method was adopted as the numerical solution. In the first step, using the undetermined coefficient method, a high-order scheme was adopted for discretizing the source-sink term. In the second step, the diffusion term was discretized using the Crank-Nicolson scheme. The effectiveness and capability of the numerical method were assessed by performing numerical tests. Then, the proposed numerical method was applied to a simulation of Guozheng Lake (located in central China). The modeling results were in excellent agreement with measured data.

  9. A Well-Balanced Path-Integral f-Wave Method for Hyperbolic Problems with Source Terms

    PubMed Central

    2014-01-01

    Systems of hyperbolic partial differential equations with source terms (balance laws) arise in many applications where it is important to compute accurate time-dependent solutions modeling small perturbations of equilibrium solutions in which the source terms balance the hyperbolic part. The f-wave version of the wave-propagation algorithm is one approach, but requires the use of a particular averaged value of the source terms at each cell interface in order to be “well balanced” and exactly maintain steady states. A general approach to choosing this average is developed using the theory of path conservative methods. A scalar advection equation with a decay or growth term is introduced as a model problem for numerical experiments. PMID:24563581
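
    For the scalar model problem named above, q_t + u q_x = -lambda q with u > 0, the f-wave construction can be written in a few lines. The source average used below (a logarithmic mean) is one choice that makes the interface wave vanish exactly on the steady state q(x) ~ exp(-lambda x / u); it illustrates the well-balancing idea but is not necessarily the path-conservative average derived in the paper.

      import numpy as np

      def fwave_step(q, u, lam, dx, dt):
          """One f-wave update for q_t + u q_x = -lam*q (u > 0, CFL dt <= dx/u)."""
          ql, qr = q[:-1], q[1:]
          with np.errstate(divide="ignore", invalid="ignore"):
              logmean = np.where(np.isclose(ql, qr), ql,
                                 (qr - ql) / np.log(qr / ql))
          psi = -lam * logmean                  # interface source average
          Z = u * (qr - ql) - dx * psi          # f-wave: flux difference minus source
          qnew = q.copy()
          qnew[1:] -= dt / dx * Z               # u > 0: whole fluctuation moves right
          return qnew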

  10. A critical assessment of flux and source term closures in shallow water models with porosity for urban flood simulations

    NASA Astrophysics Data System (ADS)

    Guinot, Vincent

    2017-11-01

    The validity of flux and source term formulae used in shallow water models with porosity for urban flood simulations is assessed by solving the two-dimensional shallow water equations over computational domains representing periodic building layouts. The models under assessment are the Single Porosity (SP), the Integral Porosity (IP) and the Dual Integral Porosity (DIP) models. Nine different geometries are considered; 18 two-dimensional initial value problems and 6 two-dimensional boundary value problems are defined, resulting in a set of 96 fine grid simulations. Analysing the simulation results leads to the following conclusions: (i) the DIP flux and source term models outperform those of the SP and IP models when the Riemann problem is aligned with the main street directions, (ii) all models give erroneous flux closures when the Riemann problem is not aligned with one of the main street directions or when the main street directions are not orthogonal, (iii) the solution of the Riemann problem is self-similar in space-time when the street directions are orthogonal and the Riemann problem is aligned with one of them, (iv) a momentum balance confirms the existence of the transient momentum dissipation model presented in the DIP model, (v) none of the source term models presented so far in the literature allows all flow configurations to be accounted for, and (vi) future laboratory experiments aiming at the validation of flux and source term closures should focus on the high-resolution, two-dimensional monitoring of both water depth and flow velocity fields.

  11. A New Unsteady Model for Dense Cloud Cavitation in Cryogenic Fluids

    NASA Technical Reports Server (NTRS)

    Hosangadi, Ashvin; Ahuja, Vineet

    2005-01-01

    Contents include the following: Background on thermal effects in cavitation. Physical properties of hydrogen. Multi-phase cavitation with thermal effect. Solution procedure. Cavitation model overview. Cavitation source terms. New cavitation model. Source term for bubble growth. One-equation LES model. Unsteady ogive simulations: liquid nitrogen. Unsteady incompressible flow in a pipe. Time-averaged cavity length for NACA15 flowfield.

  12. Algorithms and analytical solutions for rapidly approximating long-term dispersion from line and area sources

    NASA Astrophysics Data System (ADS)

    Barrett, Steven R. H.; Britter, Rex E.

    Predicting long-term mean pollutant concentrations in the vicinity of airports, roads and other industrial sources is frequently of concern in regulatory and public health contexts. Many emissions are represented geometrically as ground-level line or area sources. Well-developed modelling tools such as AERMOD and ADMS are able to model dispersion from finite (i.e. non-point) sources with considerable accuracy, drawing upon an up-to-date understanding of boundary layer behaviour. Due to mathematical difficulties associated with line and area sources, computationally expensive numerical integration schemes have been developed. For example, some models decompose area sources into a large number of line sources orthogonal to the mean wind direction, for which an analytical (Gaussian) solution exists. Models also employ a time-series approach, which involves computing mean pollutant concentrations for every hour over one or more years of meteorological data. This can give rise to computer runtimes of several days for assessment of a site. While this may be acceptable for assessment of a single industrial complex, airport, etc., this level of computational cost precludes national or international policy assessments at the level of detail available with dispersion modelling. In this paper, we extend previous work [S.R.H. Barrett, R.E. Britter, 2008. Development of algorithms and approximations for rapid operational air quality modelling. Atmospheric Environment 42 (2008) 8105-8111] to line and area sources. We introduce approximations which allow for the development of new analytical solutions for long-term mean dispersion from line and area sources, based on hypergeometric functions. We describe how these solutions can be parameterized from a single point source run from an existing advanced dispersion model, thereby accounting for all processes modelled in the more costly algorithms. The parameterization method combined with the analytical solutions for long-term mean dispersion is shown to produce results several orders of magnitude more efficiently, with a loss of accuracy that is small compared to the absolute accuracy of advanced dispersion models near sources. The method can be readily incorporated into existing dispersion models, and may allow additional computation time to be expended on modelling dispersion processes more accurately in future, rather than on accounting for source geometry.
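
    To give a flavour of the kind of approximation involved, the long-term ground-level mean from a ground-level point source is often written in sector-averaged form, and a line source then becomes a sum of such point sources (a textbook sketch with an assumed sigma_z power law, not the hypergeometric solutions derived in the paper):

      import numpy as np

      def sector_mean(q, f, u, r, n_sectors=16, a=0.08, b=0.9):
          """Long-term mean concentration at range r from a ground-level point source.

          q : emission rate [g/s]; f : frequency of wind into the receptor's sector;
          u : mean wind speed [m/s]; sigma_z = a * r**b is an assumed power-law fit.
          The crosswind-integrated plume sqrt(2/pi)*q/(u*sigma_z) is smeared
          uniformly over the sector arc of length 2*pi*r/n_sectors.
          """
          sigma_z = a * r**b
          return np.sqrt(2.0 / np.pi) * q * f / (u * sigma_z * 2.0 * np.pi * r / n_sectors)

      # A 1 km line source as a sum of point sources, receptor 200 m to the side:
      xs = np.linspace(0.0, 1000.0, 101)
      r = np.hypot(xs - 500.0, 200.0)
      c = np.sum(sector_mean(q=0.001, f=1.0 / 16, u=4.0, r=r))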

  13. Coarse Grid Modeling of Turbine Film Cooling Flows Using Volumetric Source Terms

    NASA Technical Reports Server (NTRS)

    Heidmann, James D.; Hunter, Scott D.

    2001-01-01

    The recent trend in numerical modeling of turbine film cooling flows has been toward higher fidelity grids and more complex geometries. This trend has been enabled by the rapid increase in computing power available to researchers. However, the turbine design community requires fast turnaround time in its design computations, rendering these comprehensive simulations ineffective in the design cycle. The present study describes a methodology for implementing a volumetric source term distribution in a coarse grid calculation that can model the small-scale and three-dimensional effects present in turbine film cooling flows. This model could be implemented in turbine design codes or in multistage turbomachinery codes such as APNASA, where the computational grid size may be larger than the film hole size. Detailed computations of a single row of 35 deg round holes on a flat plate have been obtained for blowing ratios of 0.5, 0.8, and 1.0, and density ratios of 1.0 and 2.0 using a multiblock grid system to resolve the flows on both sides of the plate as well as inside the hole itself. These detailed flow fields were spatially averaged to generate a field of volumetric source terms for each conservative flow variable. Solutions were also obtained using three coarse grids having streamwise and spanwise grid spacings of 3d, 1d, and d/3. These coarse grid solutions used the integrated hole exit mass, momentum, energy, and turbulence quantities from the detailed solutions as volumetric source terms. It is shown that a uniform source term addition over a distance from the wall on the order of the hole diameter is able to predict adiabatic film effectiveness better than a near-wall source term model, while strictly enforcing correct values of integrated boundary layer quantities.

  14. Source-term development for a contaminant plume for use by multimedia risk assessment models

    NASA Astrophysics Data System (ADS)

    Whelan, Gene; McDonald, John P.; Taira, Randal Y.; Gnanapragasam, Emmanuel K.; Yu, Charley; Lew, Christine S.; Mills, William B.

    2000-02-01

    Multimedia modelers from the US Environmental Protection Agency (EPA) and US Department of Energy (DOE) are collaborating to conduct a comprehensive and quantitative benchmarking analysis of four intermedia models: MEPAS, MMSOILS, PRESTO, and RESRAD. These models represent typical analytically based tools that are used in human-risk and endangerment assessments at installations containing radioactive and hazardous contaminants. The objective is to demonstrate an approach for developing an adequate source term by simplifying an existing, real-world, 90Sr plume at DOE's Hanford installation in Richland, WA, for use in a multimedia benchmarking exercise between MEPAS, MMSOILS, PRESTO, and RESRAD. Source characteristics and a release mechanism are developed and described; also described is a typical process and procedure that an analyst would follow in developing a source term for using this class of analytical tool in a preliminary assessment.

  15. Revisiting the radionuclide atmospheric dispersion event of the Chernobyl disaster - modelling sensitivity and data assimilation

    NASA Astrophysics Data System (ADS)

    Roustan, Yelva; Duhanyan, Nora; Bocquet, Marc; Winiarek, Victor

    2013-04-01

    A sensitivity study of the numerical model and an inverse modelling approach applied to the atmospheric dispersion issues after the Chernobyl disaster are both presented in this paper. On the one hand, the robustness of the source term reconstruction through advanced data assimilation techniques was tested. On the other hand, the classical approaches for sensitivity analysis were enhanced by the use of an optimised forcing field, which otherwise is known to be strongly uncertain. The POLYPHEMUS air quality system was used to perform the simulations of radionuclide dispersion. Activity concentrations in air and ground deposition of iodine-131, caesium-137 and caesium-134 were considered. The impacts of the implemented parameterizations of the physical processes (dry and wet deposition, vertical turbulent diffusion), of the forcing fields (meteorology and source terms) and of the numerical configuration (horizontal resolution) were investigated for the sensitivity study of the model. A four-dimensional variational scheme (4D-Var) based on the approximate adjoint of the chemistry transport model was used to invert the source term. The data assimilation is performed with measurements of activity concentrations in air extracted from the Radioactivity Environmental Monitoring (REM) database. For most of the investigated configurations (sensitivity study), the statistics comparing the model results to the field measurements for the concentrations in air are clearly improved when using a reconstructed source term. For the ground-deposited concentrations, an improvement can only be seen in the case of a satisfactorily modelled episode. Through these studies, the source term and the meteorological fields are shown to have a major impact on the activity concentrations in air. These studies also reinforce the use of a reconstructed source term instead of the usual estimated one. A more detailed parameterization of the deposition process also seems able to improve the simulation results. For deposited activities, the results are more complex, probably due to a strong sensitivity to some of the meteorological fields, which remain quite uncertain.

  16. A brief compendium of correlations and analytical formulae for the thermal field generated by a heat source embedded in porous and purely-conductive media

    NASA Astrophysics Data System (ADS)

    Conti, P.; Testi, D.; Grassi, W.

    2017-11-01

    This work reviews and compares suitable models for the thermal analysis of forced convection over a heat source in a porous medium. The set of available models refers to an infinite medium in which a fluid moves over three different heat source geometries: the moving infinite line source, the moving finite line source, and the moving infinite cylindrical source. In this perspective, the present work presents a plain and handy compendium of the above-mentioned models for forced external convection in porous media; besides, we propose a dimensionless analysis to quantify the mutual deviations among the available models, helping the selection of the most suitable one for the specific case of interest. Under specific conditions, the advection term becomes ineffective in terms of heat transfer performance, allowing the use of purely-conductive models. For that reason, available analytical and numerical solutions for purely-conductive media are also reviewed and compared, again, by dimensionless criteria. Therefore, one can choose the simplest solution, with significant benefits in terms of computational effort and interpretation of the results. The main outcomes presented in the paper are: the conditions under which the system can be considered subject to a Darcy flow, the minimal distance beyond which the finite dimension of the heat source does not affect the thermal field, and the critical fluid velocity needed to have a significant contribution of the advection term in the overall heat transfer process.
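
    As an example of the formulae collected in such a compendium, the steady-state field of the moving infinite line source (MILS) is commonly written in the Carslaw-Jaeger form delta_T = q_l / (2 pi lam) * exp(u_t x / (2 a)) * K0(u_t r / (2 a)); a direct transcription follows (parameter names are ours):

      import numpy as np
      from scipy.special import k0

      def mils_steady(x, y, q_l, lam, a, u_t):
          """Steady temperature rise of the moving infinite line source.

          q_l : heat rate per unit length [W/m]
          lam : effective thermal conductivity [W/(m K)]
          a   : effective thermal diffusivity [m^2/s]
          u_t : effective heat transport velocity [m/s]; x points downstream
          """
          r = np.hypot(x, y)
          return (q_l / (2.0 * np.pi * lam)
                  * np.exp(u_t * x / (2.0 * a))
                  * k0(u_t * r / (2.0 * a)))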

  17. Development of surrogate models for the prediction of the flow around an aircraft propeller

    NASA Astrophysics Data System (ADS)

    Salpigidou, Christina; Misirlis, Dimitris; Vlahostergios, Zinon; Yakinthos, Kyros

    2018-05-01

    In the present work, the derivation of two surrogate models (SMs) for modelling the flow around a propeller for small aircraft is presented. Both methodologies use derived functions based on computations with the detailed propeller geometry. The computations were performed using the k-ω shear stress transport model for turbulence. In the SMs, the propeller was modelled in a computational domain of disk-like geometry, where source terms were introduced in the momentum equations. In the first SM, the source terms were polynomial functions of swirl and thrust, mainly related to the propeller radius. In the second SM, regression analysis was used to correlate the source terms with the velocity distribution through the propeller. The proposed SMs achieved faster convergence relative to the detailed model, while also providing results closer to the available operational data. The regression-based model was the most accurate and required less computational time for convergence.

  18. Bayesian source term determination with unknown covariance of measurements

    NASA Astrophysics Data System (ADS)

    Belal, Alkomiet; Tichý, Ondřej; Šmídl, Václav

    2017-04-01

    Determination of a source term of release of a hazardous material into the atmosphere is a very important task for emergency response. We are concerned with the problem of estimation of the source term in the conventional linear inverse problem, y = Mx, where the relationship between the vector of observations y and the unknown source term x is described by the source-receptor-sensitivity (SRS) matrix M. Since the system is typically ill-conditioned, the problem is recast as the optimization problem min_x (y - Mx)^T R^{-1} (y - Mx) + x^T B^{-1} x. The first term minimizes the error of the measurements with covariance matrix R, and the second term is a regularization of the source term. Different types of regularization arise for different choices of the matrices R and B; for example, Tikhonov regularization assumes the covariance matrix B to be the identity matrix multiplied by a scalar parameter. In this contribution, we adopt a Bayesian approach to make inference on the unknown source term x as well as the unknown R and B. We assume the prior on x to be Gaussian with zero mean and unknown diagonal covariance matrix B. The covariance matrix of the likelihood, R, is also unknown. We consider two potential choices for the structure of the matrix R: first, a diagonal matrix, and second, a locally correlated structure using information on the topology of the measuring network. Since the inference of the model is intractable, an iterative variational Bayes algorithm is used for simultaneous estimation of all model parameters. The practical usefulness of our contribution is demonstrated by applying the resulting algorithm to real data from the European Tracer Experiment (ETEX). This research is supported by EEA/Norwegian Financial Mechanism under project MSMT-28477/2014 Source-Term Determination of Radionuclide Releases by Inverse Atmospheric Dispersion Modelling (STRADI).
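
    For fixed R and B the recast problem has the familiar closed-form minimizer, which the variational algorithm alternates with updates of the unknown covariances; a direct transcription:

      import numpy as np

      def posterior_mean(M, y, R, B):
          """Minimizer of (y - Mx)^T R^{-1} (y - Mx) + x^T B^{-1} x for fixed R, B:

              x_hat = (M^T R^{-1} M + B^{-1})^{-1} M^T R^{-1} y

          The variational Bayes algorithm alternates this step with updates
          of R (diagonal or locally correlated) and the diagonal B.
          """
          Ri = np.linalg.inv(R)
          Bi = np.linalg.inv(B)
          return np.linalg.solve(M.T @ Ri @ M + Bi, M.T @ Ri @ y)

      # Tikhonov regularization is the special case R = I, B = I/alpha, giving
      # x_hat = (M^T M + alpha*I)^{-1} M^T y.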

  19. A modification of the Regional Nutrient Management model (ReNuMa) to identify long-term changes in riverine nitrogen sources

    NASA Astrophysics Data System (ADS)

    Hu, Minpeng; Liu, Yanmei; Wang, Jiahui; Dahlgren, Randy A.; Chen, Dingjiang

    2018-06-01

    Source apportionment is critical for guiding development of efficient watershed nitrogen (N) pollution control measures. The ReNuMa (Regional Nutrient Management) model, a semi-empirical, semi-process-oriented model with modest data requirements, has been widely used for riverine N source apportionment. However, the ReNuMa model contains limitations for addressing long-term N dynamics by ignoring temporal changes in atmospheric N deposition rates and N-leaching lag effects. This work modified the ReNuMa model by revising the source code to allow yearly changes in atmospheric N deposition and incorporation of N-leaching lag effects into N transport processes. The appropriate N-leaching lag time was determined from cross-correlation analysis between annual watershed individual N source inputs and riverine N export (see the sketch below). Accuracy of the modified ReNuMa model was demonstrated through analysis of a 31-year water quality record (1980-2010) from the Yongan watershed in eastern China. The revisions considerably improved the accuracy (Nash-Sutcliffe coefficient increased by ∼0.2) of the modified ReNuMa model for predicting riverine N loads. The modified model explicitly identified annual and seasonal changes in contributions of various N sources (i.e., point vs. nonpoint source, surface runoff vs. groundwater) to riverine N loads as well as the fate of watershed anthropogenic N inputs. Model results were consistent with previously modeled or observed lag time length as well as changes in riverine chloride and nitrate concentrations during the low-flow regime and available N levels in agricultural soils of this watershed. The modified ReNuMa model is applicable for addressing long-term changes in riverine N sources, providing decision-makers with critical information for guiding watershed N pollution control strategies.
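
    The lag-selection step described above amounts to scanning cross-correlations between the annual input and export series (an illustrative implementation, not the authors' code):

      import numpy as np

      def best_lag(n_input, riverine_n, max_lag=10):
          """Return the N-leaching lag (years) maximizing the cross-correlation
          between annual watershed N inputs and riverine N export."""
          x = np.asarray(n_input, dtype=float)
          y = np.asarray(riverine_n, dtype=float)
          corrs = [np.corrcoef(x[:len(x) - k], y[k:])[0, 1]
                   for k in range(max_lag + 1)]
          return int(np.argmax(corrs)), corrs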

  20. Phenomenological Modeling of Infrared Sources: Recent Advances

    NASA Technical Reports Server (NTRS)

    Leung, Chun Ming; Kwok, Sun (Editor)

    1993-01-01

    Infrared observations from planned space facilities (e.g., ISO (Infrared Space Observatory), SIRTF (Space Infrared Telescope Facility)) will yield a large and uniform sample of high-quality data from both photometric and spectroscopic measurements. To maximize the scientific returns of these space missions, complementary theoretical studies must be undertaken to interpret these observations. A crucial step in such studies is the construction of phenomenological models in which we parameterize the observed radiation characteristics in terms of the physical source properties. In the last decade, models with increasing degree of physical realism (in terms of grain properties, physical processes, and source geometry) have been constructed for infrared sources. Here we review current capabilities available in the phenomenological modeling of infrared sources and discuss briefly directions for future research in this area.

  21. Seismic hazard assessment over time: Modelling earthquakes in Taiwan

    NASA Astrophysics Data System (ADS)

    Chan, Chung-Han; Wang, Yu; Wang, Yu-Ju; Lee, Ya-Ting

    2017-04-01

    To assess the seismic hazard with temporal change in Taiwan, we develop a new approach, combining both the Brownian Passage Time (BPT) model and the Coulomb stress change, and implement the seismogenic source parameters by the Taiwan Earthquake Model (TEM). The BPT model was adopted to describe the rupture recurrence intervals of the specific fault sources, together with the time elapsed since the last fault-rupture to derive their long-term rupture probability. We also evaluate the short-term seismicity rate change based on the static Coulomb stress interaction between seismogenic sources. By considering above time-dependent factors, our new combined model suggests an increased long-term seismic hazard in the vicinity of active faults along the western Coastal Plain and the Longitudinal Valley, where active faults have short recurrence intervals and long elapsed time since their last ruptures, and/or short-term elevated hazard levels right after the occurrence of large earthquakes due to the stress triggering effect. The stress enhanced by the February 6th, 2016, Meinong ML 6.6 earthquake also significantly increased rupture probabilities of several neighbouring seismogenic sources in Southwestern Taiwan and raised hazard level in the near future. Our approach draws on the advantage of incorporating long- and short-term models, to provide time-dependent earthquake probability constraints. Our time-dependent model considers more detailed information than any other published models. It thus offers decision-makers and public officials an adequate basis for rapid evaluations of and response to future emergency scenarios such as victim relocation and sheltering.
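
    The BPT recurrence model is the inverse Gaussian distribution, so the time-dependent rupture probability conditioned on the elapsed time can be computed directly (a generic sketch; the recurrence interval and aperiodicity values below are invented, not TEM parameters):

      from scipy.stats import invgauss

      def bpt_conditional_probability(mean_ri, alpha, t_elapsed, dt):
          """P(rupture within the next dt years | quiet for t_elapsed years)
          under a BPT model with mean recurrence interval mean_ri and
          aperiodicity alpha. scipy's invgauss(mu=alpha**2, scale=mean_ri/alpha**2)
          has mean mean_ri and standard deviation alpha * mean_ri.
          """
          dist = invgauss(mu=alpha**2, scale=mean_ri / alpha**2)
          survive = dist.sf(t_elapsed)
          if survive == 0.0:
              return 1.0
          return (dist.cdf(t_elapsed + dt) - dist.cdf(t_elapsed)) / survive

      # e.g. mean interval 300 yr, aperiodicity 0.5, 250 yr elapsed, next 50 yr:
      p = bpt_conditional_probability(300.0, 0.5, 250.0, 50.0)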

  22. Modeling of Radiotherapy Linac Source Terms Using ARCHER Monte Carlo Code: Performance Comparison for GPU and MIC Parallel Computing Devices

    NASA Astrophysics Data System (ADS)

    Lin, Hui; Liu, Tianyu; Su, Lin; Bednarz, Bryan; Caracappa, Peter; Xu, X. George

    2017-09-01

    Monte Carlo (MC) simulation is well recognized as the most accurate method for radiation dose calculations. For radiotherapy applications, accurate modelling of the source term, i.e. the clinical linear accelerator, is critical to the simulation. The purpose of this paper is to perform source modelling, examine the accuracy and performance of the models on Intel Many Integrated Core coprocessors (aka Xeon Phi) and Nvidia GPUs using ARCHER, and explore potential optimization methods. Phase-space-based source modelling has been implemented. Good agreement was found in a tomotherapy prostate patient case and a TrueBeam breast case. In terms of performance, the whole simulation for the prostate plan and the breast plan took about 173 s and 73 s, respectively, with 1% statistical error.

  23. Source Term Estimation of Radioxenon Released from the Fukushima Dai-ichi Nuclear Reactors Using Measured Air Concentrations and Atmospheric Transport Modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Eslinger, Paul W.; Biegalski, S.; Bowyer, Ted W.

    2014-01-01

    Systems designed to monitor airborne radionuclides released from underground nuclear explosions detected radioactive fallout from the Fukushima Daiichi nuclear accident in March 2011. Atmospheric transport modeling (ATM) of plumes of noble gases and particulates was performed soon after the accident to determine plausible detection locations of any radioactive releases to the atmosphere. We combine sampling data from multiple International Monitoring System (IMS) locations in a new way to estimate the magnitude and time sequence of the releases. Dilution factors from the modeled plume at five different detection locations were combined with 57 atmospheric concentration measurements of 133-Xe taken from March 18 to March 23 to estimate the source term. This approach estimates that 59% of the 1.24×10^19 Bq of 133-Xe present in the reactors at the time of the earthquake was released to the atmosphere over a three-day period. Source term estimates from combinations of detection sites have lower spread than estimates based on measurements at single detection sites. Sensitivity cases based on data from four or more detection locations bound the source term between 35% and 255% of the available xenon inventory.

  24. Boundary control of bidomain equations with state-dependent switching source functions in the ionic model

    NASA Astrophysics Data System (ADS)

    Chamakuri, Nagaiah; Engwer, Christian; Kunisch, Karl

    2014-09-01

    Optimal control for cardiac electrophysiology based on the bidomain equations in conjunction with the Fenton-Karma ionic model is considered. This generic ventricular model approximates well the restitution properties and spiral wave behavior of more complex ionic models of cardiac action potentials. However, it is challenging due to the appearance of state-dependent discontinuities in the source terms. A computational framework for the numerical realization of optimal control problems is presented. Essential ingredients are a shape calculus based treatment of the sensitivities of the discontinuous source terms and a marching cubes algorithm to track iso-surface of excitation wavefronts. Numerical results exhibit successful defibrillation by applying an optimally controlled extracellular stimulus.

  25. Do forests represent a long-term source of contaminated particulate matter in the Fukushima Prefecture?

    PubMed

    Laceby, J Patrick; Huon, Sylvain; Onda, Yuichi; Vaury, Veronique; Evrard, Olivier

    2016-12-01

    The Fukushima Daiichi Nuclear Power Plant (FDNPP) accident resulted in radiocesium fallout contaminating coastal catchments of the Fukushima Prefecture. As the decontamination effort progresses, the potential downstream migration of radiocesium-contaminated particulate matter from forests, which cover over 65% of the most contaminated region, requires investigation. Carbon and nitrogen elemental concentrations and stable isotope ratios are thus used to model the relative contributions of forest, cultivated and subsoil sources to deposited particulate matter in three contaminated coastal catchments. Samples were taken from the main identified sources: cultivated (n = 28), forest (n = 46), and subsoils (n = 25). Deposited particulate matter (n = 82) was sampled during four fieldwork campaigns from November 2012 to November 2014. A distribution modelling approach quantified relative source contributions with multiple combinations of element parameters (carbon only, nitrogen only, and four parameters) for two particle size fractions (<63 μm and <2 mm). Although there was significant particle size enrichment for the particulate matter parameters, these differences only resulted in a 6% (SD 3%) mean difference in relative source contributions. Further, the three different modelling approaches only resulted in a 4% (SD 3%) difference between relative source contributions. For each particulate matter sample, six models (i.e. <63 μm and <2 mm from the three modelling approaches) were used to incorporate a broader definition of potential uncertainty into model results. Forest sources were modelled to contribute 17% (SD 10%) of particulate matter, indicating that they present a long-term potential source of radiocesium-contaminated material in fallout-impacted catchments. Subsoils contributed 45% (SD 26%) of particulate matter and cultivated sources contributed 38% (SD 19%). The reservoir of radiocesium in forested landscapes in the Fukushima region represents a potential long-term source of contaminated particulate matter that will require diligent management for the foreseeable future.
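
    The un-mixing step behind such source apportionment can be illustrated with a small constrained least-squares stand-in (all tracer values invented; the paper uses a distribution-based mixing model rather than this point estimate):

      import numpy as np
      from scipy.optimize import lsq_linear

      # Rows: TOC [%], TN [%], d13C, d15N; columns: forest, cultivated, subsoil.
      S = np.array([[35.0, 20.0, 5.0],
                    [2.0, 1.8, 0.4],
                    [-28.0, -26.0, -24.0],
                    [2.0, 5.0, 4.0]])
      m = np.array([14.0, 1.1, -25.4, 4.1])          # measured sample signature

      res = lsq_linear(S, m, bounds=(0.0, np.inf))   # nonnegative proportions
      a = res.x / res.x.sum()                        # renormalize to sum to 1
      print(dict(zip(["forest", "cultivated", "subsoil"], a)))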

  26. Performance evaluation of WAVEWATCH III model in the Persian Gulf using different wind resources

    NASA Astrophysics Data System (ADS)

    Kazeminezhad, Mohammad Hossein; Siadatmousavi, Seyed Mostafa

    2017-07-01

    The third-generation wave model, WAVEWATCH III, was employed to simulate bulk wave parameters in the Persian Gulf using three different wind sources: ERA-Interim, CCMP, and GFS-Analysis. Different formulations for the whitecapping term and the energy transfer from wind to wave were used, namely the Tolman and Chalikov (J Phys Oceanogr 26:497-518, 1996), WAM cycle 4 (BJA and WAM4), and Ardhuin et al. (J Phys Oceanogr 40(9):1917-1941, 2010) (TEST405 and TEST451 parameterizations) source term packages. The results obtained from the numerical simulations were compared to altimeter-derived significant wave heights and measured wave parameters at two stations in the northern part of the Persian Gulf through statistical indicators and the Taylor diagram. Comparison of the bulk wave parameters with measured values showed underestimation of wave height using all wind sources. However, the performance of the model was best when GFS-Analysis wind data were used. In general, when wind veering from southeast to northwest occurred, and wind speed was high during the rotation, the model underestimation of wave height was severe. Except for the Tolman and Chalikov (J Phys Oceanogr 26:497-518, 1996) source term package, which severely underestimated the bulk wave parameters during stormy conditions, the performances of the other formulations were practically similar. However, in terms of statistics, the Ardhuin et al. (J Phys Oceanogr 40(9):1917-1941, 2010) source terms with TEST405 parameterization were the most successful formulation in the Persian Gulf when compared to in situ and altimeter-derived observations.

  27. A new traffic model with a lane-changing viscosity term

    NASA Astrophysics Data System (ADS)

    Ko, Hung-Tang; Liu, Xiao-He; Guo, Ming-Min; Wu, Zheng

    2015-09-01

    In this paper, a new continuum traffic flow model is proposed, with a lane-changing source term in the continuity equation and a lane-changing viscosity term in the acceleration equation. Based on previous literature, the source term addresses the impact of speed difference and density difference between adjacent lanes, which provides better precision for free lane-changing simulation; the viscosity term turns lane-changing behavior into a “force” that may influence speed distribution. Using a flux-splitting scheme for the model discretization, two cases are investigated numerically. The case under a homogeneous initial condition shows that the numerical results from our model agree well with the analytical ones; the case with a small initial disturbance shows that our model can simulate the evolution of perturbation, including propagation, dissipation, cluster effect and the stop-and-go phenomenon. Project supported by the National Natural Science Foundation of China (Grant Nos. 11002035 and 11372147) and Hui-Chun Chin and Tsung-Dao Lee Chinese Undergraduate Research Endowment (Grant No. CURE 14024).

  28. On the inclusion of mass source terms in a single-relaxation-time lattice Boltzmann method

    NASA Astrophysics Data System (ADS)

    Aursjø, Olav; Jettestuen, Espen; Vinningland, Jan Ludvig; Hiorth, Aksel

    2018-05-01

    We present a lattice Boltzmann algorithm for incorporating a mass source in a fluid flow system. The proposed mass source/sink term, included in the lattice Boltzmann equation, maintains the Galilean invariance and the accuracy of the overall method, while introducing a mass source/sink term in the fluid dynamical equations. The method can, for instance, be used to inject or withdraw fluid from any preferred lattice node in a system. This suggests that injection and withdrawal of fluid does not have to be introduced through cumbersome, and sometimes less accurate, boundary conditions. The method also suggests that, through a chosen equation of state relating mass density to pressure, the proposed mass source term will render it possible to set a preferred pressure at any lattice node in a system. We demonstrate how this model handles injection and withdrawal of a fluid. And we show how it can be used to incorporate pressure boundaries. The accuracy of the algorithm is identified through a Chapman-Enskog expansion of the model and supported by the numerical simulations.
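
    In BGK form, the idea is simply an extra population increment at collision, projected onto the lattice with the same velocity expansion as the equilibrium (a simplified D2Q9 sketch of the kind of term analyzed in the paper, not the authors' exact expression):

      import numpy as np

      w = np.array([4/9] + [1/9]*4 + [1/36]*4)                  # D2Q9 weights
      e = np.array([[0,0],[1,0],[0,1],[-1,0],[0,-1],
                    [1,1],[-1,1],[-1,-1],[1,-1]])               # D2Q9 velocities
      cs2 = 1.0 / 3.0

      def collide_with_mass_source(f, q, tau=0.8):
          """One BGK collision step with a mass source q [density per time step].

          f : populations, shape (nx, ny, 9); q : source field, shape (nx, ny).
          """
          rho = f.sum(axis=-1)
          u = (f[..., None] * e).sum(axis=-2) / rho[..., None]
          eu = u @ e.T                                          # e_i . u per node
          u2 = (u * u).sum(axis=-1, keepdims=True)
          feq = rho[..., None] * w * (1 + eu/cs2 + 0.5*(eu/cs2)**2 - 0.5*u2/cs2)
          src = q[..., None] * w * (1 + eu/cs2)                 # mass source term
          return f - (f - feq) / tau + src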

  29. A Comparison of Mathematical Models of Fish Mercury Concentration as a Function of Atmospheric Mercury Deposition Rate and Watershed Characteristics

    NASA Astrophysics Data System (ADS)

    Smith, R. A.; Moore, R. B.; Shanley, J. B.; Miller, E. K.; Kamman, N. C.; Nacci, D.

    2009-12-01

    Mercury (Hg) concentrations in fish and aquatic wildlife are complex functions of atmospheric Hg deposition rate, terrestrial and aquatic watershed characteristics that influence Hg methylation and export, and food chain characteristics determining Hg bioaccumulation. Because of the complexity and incomplete understanding of these processes, regional-scale models of fish tissue Hg concentration are necessarily empirical in nature, typically constructed through regression analysis of fish tissue Hg concentration data from many sampling locations on a set of potential explanatory variables. Unless the data sets are unusually long and show clear time trends, the empirical basis for model building must be based solely on spatial correlation. Predictive regional scale models are highly useful for improving understanding of the relevant biogeochemical processes, as well as for practical fish and wildlife management and human health protection. Mechanistically, the logical arrangement of explanatory variables is to multiply each of the individual Hg source terms (e.g. dry, wet, and gaseous deposition rates, and residual watershed Hg) for a given fish sampling location by source-specific terms pertaining to methylation, watershed transport, and biological uptake for that location (e.g. SO4 availability, hill slope, lake size). This mathematical form has the desirable property that predicted tissue concentration will approach zero as all individual source terms approach zero. One complication with this form, however, is that it is inconsistent with the standard linear multiple regression equation in which all terms (including those for sources and physical conditions) are additive. An important practical disadvantage of a model in which the Hg source terms are additive (rather than multiplicative) with their modifying factors is that predicted concentration is not zero when all sources are zero, making it unreliable for predicting the effects of large future reductions in Hg deposition. In this paper we compare the results of using several different linear and non-linear models in an analysis of watershed and fish Hg data for 450 New England lakes. The differences in model results pertain to both their utility in interpreting methylation and export processes as well as in fisheries management.
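
    The structural difference between the two forms is easy to state in code: the multiplicative arrangement forces the prediction to zero as all source terms go to zero, while the additive form does not (a toy illustration with invented variables, not the fitted New England model):

      import numpy as np
      from scipy.optimize import curve_fit

      def additive(X, b0, b1, b2, b3):
          d_wet, d_dry, so4 = X
          return b0 + b1*d_wet + b2*d_dry + b3*so4      # nonzero at zero sources

      def multiplicative(X, b1, b2, g1):
          d_wet, d_dry, so4 = X
          return (b1*d_wet + b2*d_dry) * so4**g1        # -> 0 as sources -> 0

      rng = np.random.default_rng(2)
      d_wet, d_dry, so4 = rng.uniform(0.5, 2.0, (3, 450))   # 450 "lakes"
      hg = multiplicative((d_wet, d_dry, so4), 0.3, 0.2, 0.8) \
           + rng.normal(0.0, 0.02, 450)

      p_mult, _ = curve_fit(multiplicative, (d_wet, d_dry, so4), hg, p0=[1, 1, 1])
      p_add, _ = curve_fit(additive, (d_wet, d_dry, so4), hg, p0=[0, 1, 1, 1])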

  30. Inverse modelling of radionuclide release rates using gamma dose rate observations

    NASA Astrophysics Data System (ADS)

    Hamburger, Thomas; Stohl, Andreas; von Haustein, Christoph; Thummerer, Severin; Wallner, Christian

    2014-05-01

    Severe accidents in nuclear power plants, such as the historical accident in Chernobyl in 1986 or the more recent disaster at the Fukushima Dai-ichi nuclear power plant in 2011, have drastic impacts on the population and environment. The hazardous consequences reach out to a national and continental scale. Environmental measurements and methods to model the transport and dispersion of the released radionuclides serve as a platform to assess the regional impact of nuclear accidents - both for research purposes and, more importantly, to determine the immediate threat to the population. However, the assessments of the regional radionuclide activity concentrations and the individual exposure to radiation dose are subject to several uncertainties, for example in the accurate model representation of wet and dry deposition. One of the most significant uncertainties, however, results from the estimation of the source term, that is, the time-dependent quantification of the released spectrum of radionuclides during the course of the nuclear accident. The source terms of severe nuclear accidents may either remain uncertain (e.g. Chernobyl; Devell et al., 1995) or rely on rather rough estimates of released key radionuclides given by the operators. Precise measurements are mostly missing due to practical limitations during the accident. Inverse modelling can be used to obtain a feasible estimate of the source term (Davoine and Bocquet, 2007). Existing point measurements of radionuclide activity concentrations are combined with atmospheric transport models, and the release rates of radionuclides at the accident site are obtained by improving the agreement between the modelled and observed concentrations (Stohl et al., 2012). The accuracy of the method, and hence of the resulting source term, depends among other things on the availability and reliability of the observations and on their resolution in time and space. Radionuclide activity concentrations are observed on a relatively sparse grid, and the temporal resolution of available data may be low, on the order of hours or a day. Gamma dose rates, on the other hand, are observed routinely on a much denser grid and with higher temporal resolution. Gamma dose rate measurements contain no explicit information on the observed spectrum of radionuclides and have to be interpreted carefully. Nevertheless, they provide valuable information for the inverse evaluation of the source term due to their availability (Saunier et al., 2013). We present a new inversion approach combining an atmospheric dispersion model and observations of radionuclide activity concentrations and gamma dose rates to obtain the source term of radionuclides. We use the Lagrangian particle dispersion model FLEXPART (Stohl et al., 1998; Stohl et al., 2005) to model the atmospheric transport of the released radionuclides. The gamma dose rates are calculated from the modelled activity concentrations. The inversion method uses a Bayesian formulation considering uncertainties for the a priori source term and the observations (Eckhardt et al., 2008). The a priori information on the source term is a first guess; the gamma dose rate observations are then used with inverse modelling to improve this first guess and to retrieve a reliable source term. The details of this method will be presented at the conference. This work is funded by the Bundesamt für Strahlenschutz BfS, Forschungsvorhaben 3612S60026.
    References: Davoine, X. and Bocquet, M., Atmos. Chem. Phys., 7, 1549-1564, 2007. Devell, L., et al., OCDE/GD(96)12, 1995. Eckhardt, S., et al., Atmos. Chem. Phys., 8, 3881-3897, 2008. Saunier, O., et al., Atmos. Chem. Phys., 13, 11403-11421, 2013. Stohl, A., et al., Atmos. Environ., 32, 4245-4264, 1998. Stohl, A., et al., Atmos. Chem. Phys., 5, 2461-2474, 2005. Stohl, A., et al., Atmos. Chem. Phys., 12, 2313-2343, 2012.

  31. Source term evaluation model for high-level radioactive waste repository with decay chain build-up.

    PubMed

    Chopra, Manish; Sunny, Faby; Oza, R B

    2016-09-18

    A source term model based on a two-component leach flux concept is developed for a high-level radioactive waste repository. The long-lived radionuclides associated with high-level waste may give rise to a build-up of activity because of radioactive decay chains. The ingrowth of progeny is incorporated in the model using Bateman decay chain build-up equations. The model is applied to different radionuclides present in the high-level radioactive waste that form part of decay chains (4n to 4n + 3 series), and the activity of the parent and daughter radionuclides leaching out of the waste matrix is estimated. Two cases are considered: one in which only the parent is initially present in the waste, and another in which daughters are also initially present in the waste matrix. The incorporation of in situ production of daughter radionuclides in the source is important for realistic estimates. It is shown that the inclusion of decay chain build-up is essential to avoid underestimation in the radiological impact assessment of the repository. The model can be a useful tool for evaluating the source term of the radionuclide transport models used for the radiological impact assessment of high-level radioactive waste repositories.
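
    The Bateman build-up referred to above has a classic closed form for a straight chain with distinct decay constants; a direct transcription for the parent-only initial condition (the chain data in the example are invented):

      import numpy as np

      def bateman_activities(n1_0, lambdas, t):
          """Activities A_i = lambda_i * N_i(t) for a decay chain that starts
          with n1_0 atoms of the parent only (distinct decay constants assumed)."""
          lam = np.asarray(lambdas, dtype=float)
          acts = np.empty(len(lam))
          for i in range(len(lam)):
              coeff = np.prod(lam[:i])              # product of preceding constants
              total = 0.0
              for j in range(i + 1):
                  denom = np.prod(np.delete(lam[:i + 1], j) - lam[j])
                  total += np.exp(-lam[j] * t) / denom
              acts[i] = lam[i] * n1_0 * coeff * total
          return acts

      # e.g. a three-member chain with half-lives of 30 y, 5 y and 1 y:
      year = 3.156e7
      lam = np.log(2.0) / (np.array([30.0, 5.0, 1.0]) * year)
      print(bateman_activities(1e20, lam, t=10.0 * year))   # activities after 10 y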

  32. Understanding the electrical behavior of the action potential in terms of elementary electrical sources.

    PubMed

    Rodriguez-Falces, Javier

    2015-03-01

    A concept of major importance in human electrophysiology studies is the process by which activation of an excitable cell results in a rapid rise and fall of the electrical membrane potential, the so-called action potential. Hodgkin and Huxley proposed a model to explain the ionic mechanisms underlying the formation of action potentials. However, this model is unsuitably complex for teaching purposes. In addition, the Hodgkin and Huxley approach describes the shape of the action potential only in terms of ionic currents, i.e., it is unable to explain the electrical significance of the action potential or describe the electrical field arising from this source using basic concepts of electromagnetic theory. The goal of the present report was to propose a new model to describe the electrical behaviour of the action potential in terms of elementary electrical sources (in particular, dipoles). The efficacy of this model was tested through a closed-book written exam. The proposed model increased the ability of students to appreciate the distributed character of the action potential and to recognize that this source spreads out along the fiber as a function of space. In addition, the new approach allowed students to realize that the amplitude and sign of the extracellular electrical potential arising from the action potential are determined by the spatial derivative of this intracellular source. The proposed model, which incorporates intuitive graphical representations, has improved students' understanding of the electrical potentials generated by bioelectrical sources and has heightened their interest in bioelectricity.
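
    The central relation of the proposed model, that the extracellular potential follows the spatial derivative of the intracellular source, can be sketched numerically by treating the fiber as a line of current dipoles with density proportional to dV/dz (geometry and constants here are illustrative, not taken from the article):

      import numpy as np

      def extracellular_potential(z_fiber, v_intra, z_obs, r_obs, k=1.0):
          """Potential at an electrode at (z_obs, r_obs) from an action potential
          v_intra sampled at positions z_fiber along the fiber, modeled as a
          distribution of axial dipoles with density dV/dz."""
          dz = z_fiber[1] - z_fiber[0]
          dipole_density = np.gradient(v_intra, dz)     # dipole strength per length
          dzv = z_obs - z_fiber
          R = np.hypot(dzv, r_obs)
          # axial dipole potential ~ cos(theta)/R^2 = (z_obs - z)/R^3
          return k * np.sum(dipole_density * dzv / R**3) * dz

      # e.g. a crude traveling waveform and an electrode 1 mm from the fiber:
      z = np.linspace(0.0, 30.0, 600)                   # mm
      v = 100.0 * np.exp(-((z - 10.0) / 1.5)**2)        # stylized AP shape [mV]
      phi = extracellular_potential(z, v, z_obs=15.0, r_obs=1.0)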

  33. Modeling Vortex Generators in the Wind-US Code

    NASA Technical Reports Server (NTRS)

    Dudek, Julianne C.

    2010-01-01

    A source term model which simulates the effects of vortex generators was implemented into the Wind-US Navier Stokes code. The source term added to the Navier-Stokes equations simulates the lift force which would result from a vane-type vortex generator in the flowfield. The implementation is user-friendly, requiring the user to specify only three quantities for each desired vortex generator: the range of grid points over which the force is to be applied and the planform area and angle of incidence of the physical vane. The model behavior was evaluated for subsonic flow in a rectangular duct with a single vane vortex generator, supersonic flow in a rectangular duct with a counterrotating vortex generator pair, and subsonic flow in an S-duct with 22 co-rotating vortex generators. The validation results indicate that the source term vortex generator model provides a useful tool for screening vortex generator configurations and gives comparable results to solutions computed using a gridded vane.

  14. Hybrid BEM/empirical approach for scattering of correlated sources in rocket noise prediction

    NASA Astrophysics Data System (ADS)

    Barbarino, Mattia; Adamo, Francesco P.; Bianco, Davide; Bartoccini, Daniele

    2017-09-01

    Empirical models such as the Eldred standard model are commonly used for rocket noise prediction. Such models define the Sound Pressure Level directly through the quadratic pressure term, summing uncorrelated sources. In this paper, an improvement of the Eldred standard model has been formulated. This new formulation contains an explicit expression for the acoustic pressure of each noise source, in terms of amplitude and phase, in order to investigate source correlation effects and to propagate them through a wave equation. In particular, the correlation effects between adjacent and non-adjacent sources have been modeled and analyzed. The noise prediction obtained with the revised Eldred-based model has then been used to formulate an empirical/BEM (Boundary Element Method) hybrid approach that allows an evaluation of scattering effects. In the framework of the European Space Agency-funded VECEP programme (VEga Consolidation and Evolution Programme), these models have been applied to the prediction of the aeroacoustic loads of the VEGA (Vettore Europeo di Generazione Avanzata - Advanced Generation European Carrier Rocket) launch vehicle at lift-off, and the results have been compared with experimental data.
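
    The difference between the two summations can be stated in a few lines (a schematic of the general principle only, not the revised model; amplitudes and phases are invented):

        import numpy as np

        p_ref = 20.0e-6                      # reference pressure, Pa
        A = np.array([2.0, 1.5, 1.0])        # source pressure amplitudes at the receiver, Pa
        phi = np.array([0.0, 0.6, 1.3])      # propagation phases at the receiver, rad

        p2_uncorrelated = np.sum(A ** 2) / 2.0                           # mean-square pressures add
        p2_correlated = np.abs(np.sum(A * np.exp(1j * phi))) ** 2 / 2.0  # complex amplitudes add

        SPL_uncorrelated = 10.0 * np.log10(p2_uncorrelated / p_ref ** 2)
        SPL_correlated = 10.0 * np.log10(p2_correlated / p_ref ** 2)

    Depending on the phases, the correlated sum can exceed or fall below the uncorrelated estimate, which is the effect the revised model propagates through the wave equation.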

  15. Numerical modeling of heat transfer in the fuel oil storage tank at thermal power plant

    NASA Astrophysics Data System (ADS)

    Kuznetsova, Svetlana A.

    2015-01-01

    This paper presents results of mathematical modeling of convection of a viscous incompressible fluid in a rectangular cavity with heat-conducting walls of finite thickness, with a local heat source at the bottom of the cavity and convective heat exchange with the environment at the outer boundaries. A mathematical model is formulated in dimensionless "stream function - vorticity - temperature" variables in a Cartesian coordinate system. Distributions of the hydrodynamic parameters and temperature obtained with different boundary conditions at the local heat source are presented.

  16. Hydrologic Source Term Processes and Models for the Clearwater and Wineskin Tests, Rainier Mesa, Nevada National Security Site

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carle, Steven F.

    2011-05-04

    This report describes the development, processes, and results of a hydrologic source term (HST) model for the CLEARWATER (U12q) and WINESKIN (U12r) tests located on Rainier Mesa, Nevada National Security Site, Nevada (Figure 1.1). Of the 61 underground tests (involving 62 unique detonations) conducted on Rainier Mesa (Area 12) between 1957 and 1992 (USDOE, 2015), the CLEARWATER and WINESKIN tests present many unique features that warrant a separate HST modeling effort from other Rainier Mesa tests.

  17. Modeling Vortex Generators in a Navier-Stokes Code

    NASA Technical Reports Server (NTRS)

    Dudek, Julianne C.

    2011-01-01

    A source-term model that simulates the effects of vortex generators was implemented into the Wind-US Navier-Stokes code. The source term added to the Navier-Stokes equations simulates the lift force that would result from a vane-type vortex generator in the flowfield. The implementation is user-friendly, requiring the user to specify only three quantities for each desired vortex generator: the range of grid points over which the force is to be applied and the planform area and angle of incidence of the physical vane. The model behavior was evaluated for subsonic flow in a rectangular duct with a single vane vortex generator, subsonic flow in an S-duct with 22 corotating vortex generators, and supersonic flow in a rectangular duct with a counter-rotating vortex-generator pair. The model was also used to successfully simulate microramps in supersonic flow by treating each microramp as a pair of vanes with opposite angles of incidence. The validation results indicate that the source-term vortex-generator model provides a useful tool for screening vortex-generator configurations and gives comparable results to solutions computed using gridded vanes.

  18. Soundscapes

    DTIC Science & Technology

    2014-09-30

    Soundscapes ...global oceanographic models to provide hindcasts, nowcasts, and forecasts of the time-evolving soundscape. In terms of the types of sound sources, we...other types of sources. APPROACH The research has two principal thrusts: 1) the modeling of the soundscape, and 2) verification using datasets that

  19. Possible Dual Earthquake-Landslide Source of the 13 November 2016 Kaikoura, New Zealand Tsunami

    NASA Astrophysics Data System (ADS)

    Heidarzadeh, Mohammad; Satake, Kenji

    2017-10-01

    An earthquake (Mw 7.8) with a complicated rupture mechanism occurred off the NE coast of South Island, New Zealand, on 13 November 2016 (UTC) in a complex tectonic setting comprising a transition strike-slip zone between two subduction zones. The earthquake generated a moderate tsunami with a zero-to-crest amplitude of 257 cm at the near-field tide gauge station of Kaikoura. Spectral analysis of the tsunami observations showed dual peaks at 3.6-5.7 and 5.7-56 min, which we attribute to the potential landslide and earthquake sources of the tsunami, respectively. Tsunami simulations showed that a source model with slip on an offshore plate-interface fault reproduces the near-field tsunami observation in terms of amplitude, but fails in terms of tsunami period. On the other hand, a source model without offshore slip fails to reproduce the first peak, but the later phases are reproduced well in terms of both amplitude and period. It can be inferred that an offshore source must be involved, but that it needs to be smaller in size than the plate-interface slip, which most likely points to a confined submarine landslide source, consistent with the dual-peak tsunami spectrum. We estimated the dimension of the potential submarine landslide at 8-10 km.
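
    A minimal version of the spectral step described above, on synthetic data of our own choosing (sampling, amplitudes, and periods are illustrative, not the Kaikoura record):

        import numpy as np
        from scipy.signal import welch, find_peaks

        dt = 60.0                                   # 1-min sampling, s
        t = np.arange(0.0, 6.0 * 3600.0, dt)
        rng = np.random.default_rng(0)
        # synthetic de-tided record: short-period (landslide-like) plus
        # long-period (earthquake-like) waves plus noise
        eta = (0.5 * np.sin(2.0 * np.pi * t / (4.5 * 60.0))
               + 1.0 * np.sin(2.0 * np.pi * t / (20.0 * 60.0))
               + 0.05 * rng.standard_normal(t.size))

        f, Pxx = welch(eta, fs=1.0 / dt, nperseg=256)
        pk, _ = find_peaks(Pxx)
        top2 = pk[np.argsort(Pxx[pk])[-2:]]
        periods_min = 1.0 / f[top2] / 60.0          # the two dominant periods, minutes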

  20. Coupling long and short term decisions in the design of urban water supply infrastructure for added reliability and flexibility

    NASA Astrophysics Data System (ADS)

    Marques, G.; Fraga, C. C. S.; Medellin-Azuara, J.

    2016-12-01

    The expansion and operation of urban water supply systems under growing demands, hydrologic uncertainty, and water scarcity requires a strategic combination of supply sources for reliability, reduced costs, and improved operational flexibility. The design and operation of such a portfolio of water supply sources involves integration of long- and short-term planning to determine what and when to expand, and how much to use of each supply source, accounting for interest rates, economies of scale, and hydrologic variability. This research presents an integrated methodology coupling dynamic programming optimization with quadratic programming to optimize the expansion (long term) and operations (short term) of multiple water supply alternatives. Lagrange multipliers produced by the short-term model provide a signal about the marginal opportunity cost of expansion to the long-term model, in an iterative procedure. A simulation model hosts the water supply infrastructure and hydrologic conditions. Results allow (a) identification of trade-offs between cost and reliability of different expansion paths and water use decisions; (b) evaluation of water transfers between urban supply systems; and (c) evaluation of potential gains by reducing water system losses as a portfolio component. The latter is critical in several developing countries where water supply system losses are high and often neglected in favor of more system expansion.
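
    To make the coupling concrete, here is a toy version of the signal the short-term model sends to the long-term model; a linear program stands in for the paper's quadratic program, and the sources, costs, and capacities are invented for illustration:

        import numpy as np
        from scipy.optimize import linprog

        op_cost = np.array([1.0, 3.0])      # unit operating cost of each supply source
        capacity = np.array([50.0, 80.0])   # current capacities
        demand = 100.0

        # short-term problem: min op_cost . q  s.t.  q1 + q2 >= demand, 0 <= q <= capacity
        res = linprog(op_cost, A_ub=-np.ones((1, 2)), b_ub=[-demand],
                      bounds=list(zip(np.zeros(2), capacity)), method="highs")

        # duals on the capacity bounds: the marginal operating-cost saving per unit
        # of added capacity, i.e. the expansion signal for the long-term model
        shadow_prices = -res.upper.marginals
        expand = shadow_prices > 1.5        # annualized expansion cost (illustrative threshold)

    Here only the cheap, capacity-limited source carries a positive shadow price, so the long-term step would consider expanding it; in the full methodology this exchange iterates with the dynamic program until expansion timing and operations are mutually consistent.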

  1. Regulatory Technology Development Plan - Sodium Fast Reactor: Mechanistic Source Term – Trial Calculation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grabaskas, David; Bucknor, Matthew; Jerden, James

    2016-10-01

    The potential release of radioactive material during a plant incident, referred to as the source term, is a vital design metric and will be a major focus of advanced reactor licensing. The U.S. Nuclear Regulatory Commission has stated an expectation for advanced reactor vendors to present a mechanistic assessment of the potential source term in their license applications. The mechanistic source term presents an opportunity for vendors to realistically assess the radiological consequences of an incident, and may allow reduced emergency planning zones and smaller plant sites. However, the development of a mechanistic source term for advanced reactors is not without challenges, as there are often numerous phenomena impacting the transportation and retention of radionuclides. This project sought to evaluate U.S. capabilities regarding the mechanistic assessment of radionuclide release from core damage incidents at metal fueled, pool-type sodium fast reactors (SFRs). The purpose of the analysis was to identify, and prioritize, any gaps regarding computational tools or data necessary for the modeling of radionuclide transport and retention phenomena. To accomplish this task, a parallel-path analysis approach was utilized. One path, led by Argonne and Sandia National Laboratories, sought to perform a mechanistic source term assessment using available codes, data, and models, with the goal to identify gaps in the current knowledge base. The second path, performed by an independent contractor, performed sensitivity analyses to determine the importance of particular radionuclides and transport phenomena with regard to offsite consequences. The results of the two pathways were combined to prioritize gaps in current capabilities.

  2. Numerical simulation of hydrothermal circulation in the Cascade Range, north-central Oregon

    USGS Publications Warehouse

    Ingebritsen, S.E.; Paulson, K.M.

    1990-01-01

    Alternate conceptual models to explain near-surface heat-flow observations in the central Oregon Cascade Range involve (1) an extensive mid-crustal magmatic heat source underlying both the Quaternary arc and adjacent older rocks or (2) a narrower deep heat source which is flanked by a relatively shallow conductive heat-flow anomaly caused by regional ground-water flow (the lateral-flow model). Relative to the mid-crustal heat source model, the lateral-flow model suggests a more limited geothermal resource base, but a better-defined exploration target. We simulated ground-water flow and heat transport through two cross sections trending west from the Cascade Range crest in order to explore the implications of the two models. The thermal input for the alternate conceptual models was simulated by varying the width and intensity of a basal heat-flow anomaly and, in some cases, by introducing shallower heat sources beneath the Quaternary arc. Near-surface observations in the Breitenbush Hot Springs area are most readily explained in terms of lateral heat transport by regional ground-water flow; however, the deep thermal structure still cannot be uniquely inferred. The sparser thermal data set from the McKenzie River area can be explained either in terms of deep regional ground-water flow or in terms of a conduction-dominated system, with ground-water flow essentially confined to Quaternary rocks and fault zones.

  3. Computational Fluid Dynamics Simulation of Flows in an Oxidation Ditch Driven by a New Surface Aerator.

    PubMed

    Huang, Weidong; Li, Kun; Wang, Gan; Wang, Yingzhe

    2013-11-01

    In this article, we present a newly designed inverse umbrella surface aerator and test its performance in driving the flow of an oxidation ditch. Results show that it performs better in driving the oxidation ditch than the original design, with a higher average velocity and a more uniform flow field. We also present a computational fluid dynamics model for predicting the flow field in an oxidation ditch driven by a surface aerator. An improved momentum source term approach to simulating the flow field of the oxidation ditch driven by an inverse umbrella surface aerator was developed and validated through experiments. Four turbulence models were investigated with the approach, including the standard k-ɛ model, RNG k-ɛ model, realizable k-ɛ model, and Reynolds stress model, and the predicted data were compared with those calculated with the multiple rotating reference frame approach (MRF) and the sliding mesh approach (SM). Results of the momentum source term approach are in good agreement with the experimental data, and its prediction accuracy is better than MRF and close to SM. It is also found that the momentum source term approach has lower computational expense, is simpler to preprocess, and is easier to use.

  4. Observation-based source terms in the third-generation wave model WAVEWATCH

    NASA Astrophysics Data System (ADS)

    Zieger, Stefan; Babanin, Alexander V.; Erick Rogers, W.; Young, Ian R.

    2015-12-01

    Measurements collected during the AUSWEX field campaign, at Lake George (Australia), resulted in new insights into the processes of wind wave interaction and whitecapping dissipation, and consequently new parameterizations of the input and dissipation source terms. The new nonlinear wind input term developed accounts for the dependence of the growth on wave steepness, airflow separation, and negative growth rates under adverse winds. The new dissipation terms feature the inherent breaking term, a cumulative dissipation term, and a term due to the production of turbulence by waves, which is particularly relevant for decaying seas and for swell. The latter is consistent with the observed decay rate of ocean swell. This paper describes these source terms implemented in WAVEWATCH III® and evaluates their performance against existing source terms in academic duration-limited tests, against buoy measurements for windsea-dominated conditions, under conditions of extreme wind forcing (Hurricane Katrina), and against altimeter data in global hindcasts. Results show agreement by means of growth curves as well as integral and spectral parameters in the simulations and hindcasts.

  5. Supersonic propulsion simulation by incorporating component models in the large perturbation inlet (LAPIN) computer code

    NASA Technical Reports Server (NTRS)

    Cole, Gary L.; Richard, Jacques C.

    1991-01-01

    An approach to simulating the internal flows of supersonic propulsion systems is presented. The approach is based on a fairly simple modification of the Large Perturbation Inlet (LAPIN) computer code. LAPIN uses a quasi-one dimensional, inviscid, unsteady formulation of the continuity, momentum, and energy equations. The equations are solved using a shock capturing, finite difference algorithm. The original code, developed for simulating supersonic inlets, includes engineering models of unstart/restart, bleed, bypass, and variable duct geometry, by means of source terms in the equations. The source terms also provide a mechanism for incorporating, with the inlet, propulsion system components such as compressor stages, combustors, and turbine stages. This requires each component to be distributed axially over a number of grid points. Because of the distributed nature of such components, this representation should be more accurate than a lumped parameter model. Components can be modeled by performance map(s), which in turn are used to compute the source terms. The general approach is described. Then, simulation of a compressor/fan stage is discussed to show the approach in detail.
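
    A toy of the distributed-component idea (our own construction; the axial weighting, map values, and scaling are invented and are not LAPIN's actual formulation):

        import numpy as np

        npts = 10                                   # grid points spanned by the compressor stage
        w = np.sin(np.linspace(0.0, np.pi, npts))   # smooth axial weighting of the stage
        w /= w.sum()

        p_in, PR, eta = 40.0e3, 1.6, 0.88           # inlet pressure (Pa), map pressure ratio, efficiency
        mdot, A = 20.0, 0.5                         # mass flow (kg/s), duct area (m^2)
        cp, T_in, gamma = 1004.5, 260.0, 1.4

        # distribute the map's total pressure rise over the stage's grid points
        S_momentum = w * (PR - 1.0) * p_in
        # energy source consistent with the work input implied by the map
        dT_total = T_in * (PR ** ((gamma - 1.0) / gamma) - 1.0) / eta
        S_energy = w * mdot * cp * dT_total / A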

  6. Circular current loops, magnetic dipoles and spherical harmonic analysis.

    USGS Publications Warehouse

    Alldredge, L.R.

    1980-01-01

    Spherical harmonic analysis (SHA) is the most used method of describing the Earth's magnetic field, even though spherical harmonic coefficients (SHC) almost completely defy interpretation in terms of real sources. Some moderately successful efforts have been made to represent the field in terms of dipoles placed in the core in an effort to have the model come closer to representing real sources. Dipole sources are only a first approximation to the real sources which are thought to be a very complicated network of electrical currents in the core of the Earth. -Author

  7. Scattering in infrared radiative transfer: A comparison between the spectrally averaging model JURASSIC and the line-by-line model KOPRA

    NASA Astrophysics Data System (ADS)

    Griessbach, Sabine; Hoffmann, Lars; Höpfner, Michael; Riese, Martin; Spang, Reinhold

    2013-09-01

    The viability of a spectrally averaging model to perform radiative transfer calculations in the infrared including scattering by atmospheric particles is examined for the application of infrared limb remote sensing measurements. Here we focus on the Michelson Interferometer for Passive Atmospheric Sounding (MIPAS) aboard the European Space Agency's Envisat. Various spectra for clear air and cloudy conditions were simulated with a spectrally averaging radiative transfer model and a line-by-line radiative transfer model for three atmospheric window regions (825-830, 946-951, 1224-1228 cm⁻¹) and compared to each other. The results are rated in terms of the MIPAS noise equivalent spectral radiance (NESR). The clear air simulations generally agree within one NESR. The cloud simulations neglecting the scattering source term agree within two NESR. The differences between the cloud simulations including the scattering source term are generally below three and always below four NESR. We conclude that the spectrally averaging approach is well suited for fast and accurate infrared radiative transfer simulations including scattering by clouds. We found that the main source for the differences between the cloud simulations of both models is the cloud edge sampling. Furthermore we reasoned that this model comparison for clouds is also valid for atmospheric aerosol in general.

  8. Inverse modeling of the Chernobyl source term using atmospheric concentration and deposition measurements

    NASA Astrophysics Data System (ADS)

    Evangeliou, Nikolaos; Hamburger, Thomas; Cozic, Anne; Balkanski, Yves; Stohl, Andreas

    2017-07-01

    This paper describes the results of an inverse modeling study for the determination of the source term of the radionuclides 134Cs, 137Cs and 131I released after the Chernobyl accident. The accident occurred on 26 April 1986 in the Former Soviet Union and released about 10^19 Bq of radioactive materials that were transported as far away as the USA and Japan. Thereafter, several attempts to assess the magnitude of the emissions were made, based on knowledge of the core inventory and the levels of the spent fuel. More recently, as modeling tools developed further, inverse modeling techniques were applied to the Chernobyl case for source term quantification. However, because radioactivity is a sensitive topic for the public and attracts a lot of attention, high-quality measurements, which are essential for inverse modeling, were not made available, except for a few sparse activity concentration measurements far from the source and far from the main direction of the radioactive fallout. For the first time, we apply Bayesian inversion of the Chernobyl source term using not only activity concentrations but also deposition measurements from the most recent public data set. These observations stem from a data rescue effort that started more than 10 years ago, with the final goal of providing the available measurements to anyone interested. Regarding our inverse modeling results, emissions of 134Cs were estimated to be 80 PBq, or 30-50 % higher than what was previously published. Of the released amount of 134Cs, about 70 PBq were deposited all over Europe. Similar to 134Cs, emissions of 137Cs were estimated as 86 PBq, on the same order as previously reported results. Finally, 131I emissions of 1365 PBq were found, which are about 10 % less than the prior total releases. The inversion pushes the injection heights of the three radionuclides to higher altitudes (up to about 3 km) than previously assumed (≈ 2.2 km) in order to better match both concentration and deposition observations over Europe. The results of the present inversion were confirmed using an independent Eulerian model, for which deposition patterns were also improved when using the estimated posterior releases. Although the independent model tends to underestimate deposition in countries that are not in the main direction of the plume, it reproduces country levels of deposition very efficiently. The results were also tested for robustness against different setups of the inversion through sensitivity runs. The source term data from this study are publicly available.
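
    The generic machinery behind such inversions fits in a few lines (a schematic with synthetic data, not this study's actual setup; the dimensions and smoothing weight are arbitrary):

        import numpy as np
        from scipy.optimize import nnls

        rng = np.random.default_rng(1)
        M = rng.random((200, 24))            # source-receptor sensitivity matrix (toy)
        x_true = np.zeros(24)
        x_true[5:9] = [3.0, 8.0, 6.0, 2.0]   # true release rates per time segment
        y = M @ x_true + 0.05 * rng.standard_normal(200)

        # regularized non-negative least squares: min ||Mx - y||^2 + lam * ||x||^2, x >= 0
        lam = 0.1
        M_aug = np.vstack([M, np.sqrt(lam) * np.eye(24)])
        y_aug = np.concatenate([y, np.zeros(24)])
        x_hat, _ = nnls(M_aug, y_aug)

    A Bayesian treatment like the one in the paper replaces the fixed weight lam with prior covariances for the emissions and the observation errors, but the structure of the problem is the same.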

  9. MODELING MINERAL NITROGEN EXPORT FROM A FOREST TERRESTRIAL ECOSYSTEM TO STREAMS

    EPA Science Inventory

    Terrestrial ecosystems are major sources of N pollution to aquatic ecosystems. Predicting N export to streams is a critical goal of non-point source modeling. This study was conducted to assess the effect of terrestrial N cycling on stream N export using long-term monitoring da...

  10. The Effect of Data Quality on Short-term Growth Model Projections

    Treesearch

    David Gartner

    2005-01-01

    This study was designed to determine the effect of FIA's data quality on short-term growth model projections. The data from Georgia's 1996 statewide survey were used for the Southern variant of the Forest Vegetation Simulator to predict Georgia's first annual panel. The effect of several data error sources on growth modeling prediction errors...

  11. Short-term emergency response planning and risk assessment via an integrated modeling system for nuclear power plants in complex terrain

    NASA Astrophysics Data System (ADS)

    Chang, Ni-Bin; Weng, Yu-Chi

    2013-03-01

    Short-term predictions of potential impacts from accidental releases of various radionuclides at nuclear power plants are acutely needed, especially after the Fukushima accident in Japan. An integrated modeling system that provides expert services to assess the consequences of accidental or intentional releases of radioactive materials to the atmosphere has received wide attention. These scenarios can be initiated either by accident, due to human, software, or mechanical failures, or by intentional acts such as sabotage and radiological dispersal devices. Stringent action might be required just minutes after the occurrence of an accidental or intentional release. Previous studies seldom consider the suitability of air pollutant dispersion models, or the connectivity between source term, dispersion, and exposure assessment models, in the holistic decision-support context needed to fulfill the basic functions of emergency preparedness and response systems. As a consequence, the Gaussian plume and puff models, which are only suitable for representing neutral air pollutants over flat terrain under limited meteorological situations, are frequently used to predict the impact of accidental releases from industrial sources. In situations with complex terrain or special meteorological conditions, the proposed emergency response actions might be questionable and even intractable for decision-makers responsible for maintaining public health and environmental quality. This study is a preliminary effort to integrate source term, dispersion, and exposure assessment models into a Spatial Decision Support System (SDSS) to tackle the complex issues of short-term emergency response planning and risk assessment at nuclear power plants. Through a series of model screening procedures, we found that the diagnostic (objective) wind field model, with the aid of sufficient on-site meteorological monitoring data, was the most applicable model to promptly capture the trend of local wind field patterns. However, most of the hazardous materials released into the environment from nuclear power plants are not neutral pollutants, so the particle and multi-segment puff models can be regarded as the most suitable models to couple with the output of the diagnostic wind field model in a modern emergency preparedness and response system. The proposed SDSS illustrates a state-of-the-art system design based on the complex terrain of southern Taiwan. This system design, with 3-dimensional animation capability and a tailored source term model in connection with ArcView® Geographical Information System map layers and remote sensing images, is useful for meeting the design goals of nuclear power plants located in complex terrain.

  12. Automated source term and wind parameter estimation for atmospheric transport and dispersion applications

    NASA Astrophysics Data System (ADS)

    Bieringer, Paul E.; Rodriguez, Luna M.; Vandenberghe, Francois; Hurst, Jonathan G.; Bieberbach, George; Sykes, Ian; Hannan, John R.; Zaragoza, Jake; Fry, Richard N.

    2015-12-01

    Accurate simulations of the atmospheric transport and dispersion (AT&D) of hazardous airborne materials rely heavily on the source term parameters necessary to characterize the initial release and on the meteorological conditions that drive the downwind dispersion. In many cases the source parameters are not known and are consequently based on rudimentary assumptions. This is particularly true of accidental releases and of the intentional releases associated with terrorist incidents. When available, meteorological observations are often not representative of the conditions at the location of the release, and the use of these non-representative meteorological conditions can result in significant errors in the hazard assessments downwind of the sensors, even when the other source parameters are accurately characterized. Here, we describe a computationally efficient methodology to characterize both the release source parameters and the low-level winds (e.g., winds near the surface) required to produce a refined downwind hazard. This methodology, known as the Variational Iterative Refinement Source Term Estimation (STE) Algorithm (VIRSA), consists of a combination of modeling systems. These systems include a back-trajectory based source inversion method, a forward Gaussian puff dispersion model, and a variational refinement algorithm that uses both a simple forward AT&D model that is a surrogate for the more complex Gaussian puff model and a formal adjoint of this surrogate model. The back-trajectory based method is used to calculate a "first guess" source estimate based on the available observations of the airborne contaminant plume and atmospheric conditions. The variational refinement algorithm is then used to iteratively refine the first-guess STE parameters and meteorological variables. The algorithm has been evaluated across a wide range of scenarios of varying complexity. It has been shown to improve the estimated source location by several hundred percent (normalized by the distance from the source to the closest sampler) and to improve mass estimates by several orders of magnitude. Furthermore, it is also able to operate in scenarios with inconsistencies between the wind and airborne contaminant sensor observations, adjusting the wind to provide a better match between the hazard prediction and the observations.
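
    Stripped to its essentials, the refinement stage fits source and flow parameters to sensor data through a cheap forward model. The sketch below substitutes a Gaussian-blob surrogate and a derivative-free optimizer for VIRSA's surrogate/adjoint pair; all names and numbers are ours:

        import numpy as np
        from scipy.optimize import minimize

        def surrogate(theta, xy):
            # stand-in forward model: theta = (x0, y0, log strength, log width)
            x0, y0, lnQ, lnS = theta
            r2 = (xy[:, 0] - x0) ** 2 + (xy[:, 1] - y0) ** 2
            return np.exp(lnQ) * np.exp(-r2 / (2.0 * np.exp(2.0 * lnS)))

        rng = np.random.default_rng(3)
        xy = rng.uniform(-5.0, 5.0, (40, 2))           # sampler locations
        truth = np.array([1.0, -2.0, np.log(50.0), np.log(1.5)])
        obs = surrogate(truth, xy) + 0.1 * rng.standard_normal(40)

        fit = minimize(lambda th: np.sum((surrogate(th, xy) - obs) ** 2),
                       x0=np.array([0.0, 0.0, np.log(10.0), 0.0]),
                       method="Nelder-Mead")
        x0_est, y0_est, Q_est = fit.x[0], fit.x[1], np.exp(fit.x[2])

    In the full algorithm the wind components join the parameter vector and the adjoint supplies gradients, so the refinement stays cheap enough for operational use.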

  13. On the numerical treatment of nonlinear source terms in reaction-convection equations

    NASA Technical Reports Server (NTRS)

    Lafon, A.; Yee, H. C.

    1992-01-01

    The objectives of this paper are to investigate how various numerical treatments of the nonlinear source term in a model reaction-convection equation can affect the stability of steady-state numerical solutions and to show under what conditions the conventional linearized analysis breaks down. The underlying goal is to provide part of the basic building blocks toward the ultimate goal of constructing suitable numerical schemes for hypersonic reacting flows, combustions and certain turbulence models in compressible Navier-Stokes computations. It can be shown that nonlinear analysis uncovers much of the nonlinear phenomena which linearized analysis is not capable of predicting in a model reaction-convection equation.

  14. Analysis of an entrainment model of the jet in a crossflow

    NASA Technical Reports Server (NTRS)

    Chang, H. S.; Werner, J. E.

    1972-01-01

    A theoretical model has been proposed for the problem of a round jet in an incompressible crossflow. The method of matched asymptotic expansions has been applied to this problem. For the solution of the flow problem in the inner region, the re-entrant wake flow model was used, with the re-entrant flow representing the fluid entrained by the jet. Higher-order corrections are obtained in terms of this basic solution. The perturbation terms in the outer region were found to be a line distribution of doublets and sources. The line distribution of sources represents the combined effect of the entrainment and the displacement.

  15. On the application of subcell resolution to conservation laws with stiff source terms

    NASA Technical Reports Server (NTRS)

    Chang, Shih-Hung

    1989-01-01

    LeVeque and Yee recently investigated a one-dimensional scalar conservation law with stiff source terms modeling the reacting flow problems and discovered that for the very stiff case most of the current finite difference methods developed for non-reacting flows would produce wrong solutions when there is a propagating discontinuity. A numerical scheme, essentially nonoscillatory/subcell resolution - characteristic direction (ENO/SRCD), is proposed for solving conservation laws with stiff source terms. This scheme is a modification of Harten's ENO scheme with subcell resolution, ENO/SR. The locations of the discontinuities and the characteristic directions are essential in the design. Strang's time-splitting method is used and time evolutions are done by advancing along the characteristics. Numerical experiment using this scheme shows excellent results on the model problem of LeVeque and Yee. Comparisons of the results of ENO, ENO/SR, and ENO/SRCD are also presented.
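
    The model problem of LeVeque and Yee is easy to reproduce; the fragment below (our own minimal discretization, not ENO/SRCD) uses first-order upwinding with Strang splitting and, for large stiffness mu, exhibits the spurious grid-locked discontinuity speed that motivated the subcell-resolution scheme:

        import numpy as np

        N, mu, a = 400, 1000.0, 1.0
        dx = 1.0 / N
        dt = 0.9 * dx / a
        x = (np.arange(N) + 0.5) * dx
        u = np.where(x < 0.3, 1.0, 0.0)                  # step initial data

        def source_half_step(u, h, substeps=20):
            # integrate u' = -mu * u * (u - 1) * (u - 0.5) with small sub-steps
            for _ in range(substeps):
                u = u - (h / substeps) * mu * u * (u - 1.0) * (u - 0.5)
            return u

        for _ in range(200):
            u = source_half_step(u, 0.5 * dt)            # Strang: half source step
            u[1:] -= a * dt / dx * (u[1:] - u[:-1])      # first-order upwind advection (a > 0)
            u = source_half_step(u, 0.5 * dt)            # half source step

        # for stiff mu the smeared front is pulled to the nearest equilibrium each
        # step, so the discontinuity propagates at a grid-dependent, wrong speed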

  16. Computational Fluid Dynamics Simulation of Flows in an Oxidation Ditch Driven by a New Surface Aerator

    PubMed Central

    Huang, Weidong; Li, Kun; Wang, Gan; Wang, Yingzhe

    2013-01-01

    In this article, we present a newly designed inverse umbrella surface aerator and test its performance in driving the flow of an oxidation ditch. Results show that it performs better in driving the oxidation ditch than the original design, with a higher average velocity and a more uniform flow field. We also present a computational fluid dynamics model for predicting the flow field in an oxidation ditch driven by a surface aerator. An improved momentum source term approach to simulating the flow field of the oxidation ditch driven by an inverse umbrella surface aerator was developed and validated through experiments. Four turbulence models were investigated with the approach, including the standard k−ɛ model, RNG k−ɛ model, realizable k−ɛ model, and Reynolds stress model, and the predicted data were compared with those calculated with the multiple rotating reference frame approach (MRF) and the sliding mesh approach (SM). Results of the momentum source term approach are in good agreement with the experimental data, and its prediction accuracy is better than MRF and close to SM. It is also found that the momentum source term approach has lower computational expense, is simpler to preprocess, and is easier to use. PMID:24302850

  17. Ancient Glass: A Literature Search and its Role in Waste Management

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Strachan, Denis M.; Pierce, Eric M.

    2010-07-01

    When developing a performance assessment model for the long-term disposal of immobilized low-activity waste (ILAW) glass, it is desirable to determine the durability of glass forms over very long periods of time. However, testing is limited to short time spans, so experiments are performed under conditions that accelerate the key geochemical processes that control weathering. Verification that the models currently being used can reliably calculate the long-term behavior of ILAW glass is a key component of the overall PA strategy. Therefore, Pacific Northwest National Laboratory was contracted by Washington River Protection Solutions, LLC to evaluate alternative strategies that can be used for PA source term model validation. One viable alternative strategy is the use of independent experimental data from archaeological studies of ancient or natural glass contained in the literature. These glasses represent potential independent experiments dating back approximately 3600 years, to 1600 before the current era (bce), in the case of ancient glass, and 10^6 years or more in the case of natural glass. The results of this literature review suggest that additional experimental data may be needed before the results from archaeological studies can be used as a tool for validating models of glass weathering and, more specifically, disposal facility performance. This is largely because none of the existing data sets contains all of the information required to conduct PA source term calculations. For example, in many cases the sediments surrounding the glass were not collected and analyzed; the data needed to compare computer simulations of concentration flux are therefore unavailable. This type of information is important to understanding the element release profile from the glass to the surrounding environment and provides a metric that can be used to calibrate source term models. Although useful, the available literature sources do not contain the information needed to simulate the long-term performance of nuclear waste glasses in near-surface or deep geologic repositories. The information that would be required includes 1) experimental measurements to quantify the model parameters, 2) detailed analyses of altered glass samples, and 3) detailed analyses of the sediment surrounding the ancient glass samples.

  18. Gravitational wave source counts at high redshift and in models with extra dimensions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    García-Bellido, Juan; Nesseris, Savvas; Trashorras, Manuel, E-mail: juan.garciabellido@uam.es, E-mail: savvas.nesseris@csic.es, E-mail: manuel.trashorras@csic.es

    2016-07-01

    Gravitational wave (GW) source counts have recently been shown to be able to test how gravitational radiation propagates with the distance from the source. Here, we extend this formalism to cosmological scales, i.e., the high-redshift regime, and we discuss the complications of applying this methodology to high-redshift sources. We also allow for models with compactified extra dimensions, as in the Kaluza-Klein model. Furthermore, we consider the case of intermediate redshifts, i.e., 0 < z ≲ 1, where we show it is possible to find an analytical approximation for the source counts dN/d(S/N). This can be done in terms of cosmological parameters, such as the matter density Ω_m,0 of the cosmological constant model or the cosmographic parameters for a general dark energy model. Our analysis is as general as possible, but it depends on two important factors: a source model for the black hole binary mergers and the GW source to galaxy bias. This methodology also allows us to obtain the higher-order corrections of the source counts in terms of the signal-to-noise S/N. We then forecast the sensitivity of future observations in constraining GW physics and the underlying cosmology by simulating sources distributed over a finite range of signal-to-noise, with the number of sources ranging from 10 to 500 as expected from future detectors. We find that with 500 events it will be possible to constrain the matter density parameter at present, Ω_m,0, on the order of a few percent, with the precision growing fast with the number of events. In the case of extra dimensions we find that, depending on the degeneracies of the model, with 500 events it may be possible to provide stringent limits on the existence of extra dimensions if the aforementioned degeneracies can be broken.

  19. An Empirical Temperature Variance Source Model in Heated Jets

    NASA Technical Reports Server (NTRS)

    Khavaran, Abbas; Bridges, James

    2012-01-01

    An acoustic analogy approach is implemented that models the sources of jet noise in heated jets. The equivalent sources of turbulent mixing noise are recognized as the differences between the fluctuating and Favre-averaged Reynolds stresses and enthalpy fluxes. While in a conventional acoustic analogy only the Reynolds stress components are scrutinized for their noise generation properties, it is now accepted that a comprehensive source model should include the additional entropy source term. Following Goldstein's generalized acoustic analogy, the set of Euler equations is divided into two sets of equations that govern a non-radiating base flow plus its residual components. When the base flow is considered as a locally parallel mean flow, the residual equations may be rearranged to form an inhomogeneous third-order wave equation. A general solution is subsequently written using a Green's function method, while all non-linear terms are treated as the equivalent sources of aerodynamic sound and are modeled accordingly. In a previous study, a specialized Reynolds-averaged Navier-Stokes (RANS) solver was implemented to compute the variance of thermal fluctuations that determines the enthalpy flux source strength. The main objective here is to present an empirical model capable of providing a reasonable estimate of the stagnation temperature variance in a jet. Such a model is parameterized as a function of the mean stagnation temperature gradient in the jet and is evaluated using commonly available RANS solvers. The ensuing thermal source distribution is compared with measurements as well as computational results from a dedicated RANS solver that employs an enthalpy variance and dissipation rate model. Turbulent mixing noise predictions are presented for a wide range of jet temperature ratios from 1.0 to 3.20.

  20. Sources of Uncertainty and the Interpretation of Short-Term Fluctuations

    NASA Astrophysics Data System (ADS)

    Lewandowsky, S.; Risbey, J.; Cowtan, K.; Rahmstorf, S.

    2016-12-01

    The alleged significant slowdown in global warming during the first decade of the 21st century, and the appearance of a discrepancy between models and observations, has attracted considerable research attention. We trace the history of this research and show how its conclusions were shaped by several sources of uncertainty and ambiguity about models and observations. We show that as those sources of uncertainty were gradually eliminated by further research, insufficient evidence remained to infer any discrepancy between models and observations or a significant slowing of warming. Specifically, we show that early research had to contend with uncertainties about coverage biases in the global temperature record and biases in the sea surface temperature observations which turned out to have exaggerated the extent of slowing. In addition, uncertainties in the observed forcings were found to have exaggerated the mismatch between models and observations. Further sources of uncertainty that were ultimately eliminated involved the use of incommensurate sea surface temperature data between models and observations and a tacit interpretation of model projections as predictions or forecasts. After all those sources of uncertainty were eliminated, the most recent research finds little evidence for an unusual slowdown or a discrepancy between models and observations. We discuss whether these different kinds of uncertainty could have been anticipated or managed differently, and how one can apply those lessons to future short-term fluctuations in warming.

  1. A large eddy simulation scheme for turbulent reacting flows

    NASA Technical Reports Server (NTRS)

    Gao, Feng

    1993-01-01

    The recent development of the dynamic subgrid-scale (SGS) model has provided a consistent method for generating localized turbulent mixing models and has opened up great possibilities for applying the large eddy simulation (LES) technique to real-world problems. Given that direct numerical simulation (DNS) cannot solve engineering flow problems in the foreseeable future (Reynolds 1989), LES is certainly an attractive alternative. It seems only natural to bring this new development in SGS modeling to bear on reacting flows. The major stumbling block to introducing LES to reacting flow problems has been the proper modeling of the reaction source terms. Various models have been proposed, but none of them has a wide range of applicability. For example, some of the models in combustion have been based on the flamelet assumption, which is only valid for relatively fast reactions. Some other models have neglected the effects of chemical reactions on the turbulent mixing time scale, which is certainly not valid for fast and non-isothermal reactions. The probability density function (PDF) method can be usefully employed to deal with the modeling of the reaction source terms. In order to fit into the framework of LES, a new PDF, the large eddy PDF (LEPDF), is introduced. This PDF provides an accurate representation of the filtered chemical source terms and can be readily calculated in the simulations. The details of this scheme are described.
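
    To illustrate the general idea of PDF-weighted source terms (a presumed beta-PDF toy of our own, not the LEPDF formulation itself; the Damkohler number and source shape are invented):

        import numpy as np
        from scipy.stats import beta

        def filtered_source(c_mean, c_var, Da=50.0, n=400):
            # presumed beta PDF for a reaction progress variable c in [0, 1]
            g = min(c_var / max(c_mean * (1.0 - c_mean), 1e-12), 0.98)
            a_par = c_mean * (1.0 / g - 1.0)
            b_par = (1.0 - c_mean) * (1.0 / g - 1.0)
            c = np.linspace(1e-4, 1.0 - 1e-4, n)
            S = Da * c * (1.0 - c)                       # toy instantaneous source term S(c)
            pdf = beta.pdf(c, a_par, b_par)
            return np.sum(S * pdf) * (c[1] - c[0])       # filtered source: integral of S(c) P(c) dc

        # subgrid fluctuations lower the filtered rate relative to S(c_mean):
        print(filtered_source(0.5, 0.15), 50.0 * 0.5 * 0.5)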

  2. Evaluation of Long-term Performance of Enhanced Anaerobic Source Zone Bioremediation using mass flux

    NASA Astrophysics Data System (ADS)

    Haluska, A.; Cho, J.; Hatzinger, P.; Annable, M. D.

    2017-12-01

    Chlorinated ethene DNAPL source zones in groundwater act as potential long-term sources of contamination as they dissolve, yielding concentrations well above MCLs and posing an ongoing public health risk. Enhanced bioremediation has been applied to treat many source zones with significant promise, but the long-term sustainability of this technology has not been thoroughly assessed. This study evaluated the long-term effectiveness of enhanced anaerobic source zone bioremediation at chloroethene-contaminated sites to determine whether the treatment prevented contaminant rebound and removed NAPL from the source zone. Long-term performance was evaluated based on achieving MCL-based contaminant mass fluxes in parent compound concentrations during different monitoring periods. Groundwater concentration versus time data were compiled for six sites, and post-remedial contaminant mass fluxes were then measured using passive flux meters at wells both within and down-gradient of the source zone. The post-remedial mass flux data were then combined with pre-remedial water quality data to estimate pre-remedial mass fluxes. This information was used to characterize a DNAPL dissolution source strength function, such as the Power Law Model or the Equilibrium Streamtube Model. The six sites characterized for this study were (1) Former Charleston Air Force Base, Charleston, SC; (2) Dover Air Force Base, Dover, DE; (3) Treasure Island Naval Station, San Francisco, CA; (4) Former Raritan Arsenal, Edison, NJ; (5) Naval Air Station, Jacksonville, FL; and (6) Former Naval Air Station, Alameda, CA. Contaminant mass fluxes decreased at all sites by the end of the post-treatment monitoring period, and rebound was limited within the source zone. Post-remedial source strength function estimates suggest that decreases in contaminant mass flux will continue to occur at these sites, but a mass flux based on MCL levels may never be exceeded. Thus, site clean-up goals should be evaluated as order-of-magnitude reductions. Additionally, sites may require monitoring for a minimum of five years in order to sufficiently evaluate remedial performance. The study shows that enhanced anaerobic source zone bioremediation contributed to a modest reduction of source zone contaminant mass discharge and appears to have mitigated rebound of chlorinated ethenes.
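
    A minimal rendering of a power-law source strength function of the kind named above (parameters are invented; Gamma = 1 gives the familiar exponential decline):

        import numpy as np

        def power_law_depletion(M0, C0, q, Gamma, t_end, dt=1.0):
            # integrate dM/dt = -q * C with C = C0 * (M / M0)**Gamma (explicit Euler)
            t = np.arange(0.0, t_end, dt)
            M = np.empty_like(t)
            C = np.empty_like(t)
            m = M0
            for i in range(t.size):
                c = C0 * (m / M0) ** Gamma
                M[i], C[i] = m, c
                m = max(m - q * c * dt, 0.0)
            return t, M, C

        # e.g. 1000 kg source, 0.05 kg/m3 (50 mg/L) initial concentration, 10 m3/day flow
        t, M, C = power_law_depletion(M0=1000.0, C0=0.05, q=10.0, Gamma=1.0, t_end=5000.0)

    Fitting Gamma to the pre- and post-remedial flux estimates allows extrapolation of how the source strength will continue to decline.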

  3. Bayesian inverse modeling and source location of an unintended 131I release in Europe in the fall of 2011

    NASA Astrophysics Data System (ADS)

    Tichý, Ondřej; Šmídl, Václav; Hofman, Radek; Šindelářová, Kateřina; Hýža, Miroslav; Stohl, Andreas

    2017-10-01

    In the fall of 2011, iodine-131 (131I) was detected at several radionuclide monitoring stations in central Europe. After investigation, the International Atomic Energy Agency (IAEA) was informed by Hungarian authorities that 131I was released from the Institute of Isotopes Ltd. in Budapest, Hungary. It was reported that a total activity of 342 GBq of 131I was emitted between 8 September and 16 November 2011. In this study, we use the ambient concentration measurements of 131I to determine the location of the release as well as its magnitude and temporal variation. As the location of the release and an estimate of the source strength became eventually known, this accident represents a realistic test case for inversion models. For our source reconstruction, we use no prior knowledge. Instead, we estimate the source location and emission variation using only the available 131I measurements. Subsequently, we use the partial information about the source term available from the Hungarian authorities for validation of our results. For the source determination, we first perform backward runs of atmospheric transport models and obtain source-receptor sensitivity (SRS) matrices for each grid cell of our study domain. We use two dispersion models, FLEXPART and Hysplit, driven with meteorological analysis data from the global forecast system (GFS) and from European Centre for Medium-range Weather Forecasts (ECMWF) weather forecast models. Second, we use a recently developed inverse method, least-squares with adaptive prior covariance (LS-APC), to determine the 131I emissions and their temporal variation from the measurements and computed SRS matrices. For each grid cell of our simulation domain, we evaluate the probability that the release was generated in that cell using Bayesian model selection. The model selection procedure also provides information about the most suitable dispersion model for the source term reconstruction. Third, we select the most probable location of the release with its associated source term and perform a forward model simulation to study the consequences of the iodine release. Results of these procedures are compared with the known release location and reported information about its time variation. We find that our algorithm could successfully locate the actual release site. The estimated release period is also in agreement with the values reported by IAEA and the reported total released activity of 342 GBq is within the 99 % confidence interval of the posterior distribution of our most likely model.

  4. A Semi-implicit Treatment of Porous Media in Steady-State CFD.

    PubMed

    Domaingo, Andreas; Langmayr, Daniel; Somogyi, Bence; Almbauer, Raimund

    There are many situations in computational fluid dynamics which require the definition of source terms in the Navier-Stokes equations. These source terms not only allow the physics of interest to be modeled but also have a strong impact on the reliability, stability, and convergence of the numerics involved. Therefore, sophisticated numerical approaches exist for the description of such source terms. In this paper, we focus on the source terms present in the Navier-Stokes or Euler equations due to porous media, in particular the Darcy-Forchheimer equation. We introduce a method for the numerical treatment of the source term which is independent of the spatial discretization and based on linearization. In this description, the source term is treated in a fully implicit way, whereas the other flow variables can be computed in an implicit or explicit manner. This leads to a more robust description in comparison with a fully explicit approach. The method is well suited to being combined with coarse-grid CFD on Cartesian grids, which makes it especially favorable for the accelerated solution of coupled 1D-3D problems. To demonstrate the applicability and robustness of the proposed method, a proof-of-concept example in 1D, as well as more complex examples in 2D and 3D, is presented.
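
    A one-unknown sketch of the linearization described above (our own reduction to a single cell; coefficients are illustrative). The sink S(u) = -(a*u + b*u*|u|) is expanded about the old iterate and its Jacobian is folded into the implicit system, so the update stays stable even for stiff resistances:

        import numpy as np

        a, b = 500.0, 50.0          # Darcy (viscous) and Forchheimer (inertial) coefficients
        rho, dt = 1.0, 1.0e-2

        def step(u_old, f_ext=0.0):
            # S(u) ~ S0 + dSdu * (u - u_old), with the Jacobian evaluated at u_old
            S0 = -(a * u_old + b * u_old * abs(u_old))
            dSdu = -(a + 2.0 * b * abs(u_old))
            # backward Euler for rho * du/dt = f_ext + S(u), source linearized:
            # (rho / dt - dSdu) * (u - u_old) = f_ext + S0
            return u_old + dt * (f_ext + S0) / (rho - dt * dSdu)

        u = 1.0
        for _ in range(100):
            u = step(u)             # decays smoothly; a fully explicit update at this dt would blow up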

  5. Long-Term Temporal Trends of Polychlorinated Biphenyls and Their Controlling Sources in China.

    PubMed

    Zhao, Shizhen; Breivik, Knut; Liu, Guorui; Zheng, Minghui; Jones, Kevin C; Sweetman, Andrew J

    2017-03-07

    Polychlorinated biphenyls (PCBs) are industrial organic contaminants identified as persistent, bioaccumulative, toxic (PBT), and subject to long-range transport (LRT) with global scale significance. This study focuses on a reconstruction and prediction for China of long-term emission trends of intentionally and unintentionally produced (UP) ∑7PCBs (UP-PCBs arise from the manufacture of steel, cement, and sinter iron) and their re-emissions from secondary sources (e.g., soils and vegetation) using a dynamic fate model (BETR-Global). Contemporary emission estimates combined with predictions from the multimedia fate model suggest that primary sources still dominate, although unintentional sources are predicted to become a main contributor from 2035 for PCB-28. Imported e-waste is predicted to play an increasing role until 2020-2030 on a national scale due to the decline of intentionally produced (IP) emissions. Hypothetical emission scenarios suggest that China could become a potential source to neighboring regions with a net output of ∼0.4 t year⁻¹ by around 2050. However, future emission scenarios and hence model results will be dictated by the efficiency of control measures.

  6. Predicting vertically-nonsequential wetting patterns with a source-responsive model

    USGS Publications Warehouse

    Nimmo, John R.; Mitchell, Lara

    2013-01-01

    Water infiltrating into soil of natural structure often causes wetting patterns that do not develop in an orderly sequence. Because traditional unsaturated flow models represent a water advance that proceeds sequentially, they fail to predict irregular development of water distribution. In the source-responsive model, a diffuse domain (D) represents flow within soil matrix material following traditional formulations, and a source-responsive domain (S), characterized in terms of the capacity for preferential flow and its degree of activation, represents preferential flow as it responds to changing water-source conditions. In this paper we assume water undergoing rapid source-responsive transport at any particular time is of negligibly small volume; it becomes sensible at the time and depth where domain transfer occurs. A first-order transfer term represents abstraction from the S to the D domain which renders the water sensible. In tests with lab and field data, for some cases the model shows good quantitative agreement, and in all cases it captures the characteristic patterns of wetting that proceed nonsequentially in the vertical direction. In these tests we determined the values of the essential characterizing functions by inverse modeling. These functions relate directly to observable soil characteristics, rendering them amenable to evaluation and improvement through hydropedologic development.
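
    One crude way to render the first-order S-to-D transfer in code (our own steady-state caricature; the published model characterizes the S domain with capacity and activation functions that are not reproduced here):

        import numpy as np

        z = np.linspace(0.0, 2.0, 201)   # depth below the water source, m
        alpha = 2.0                      # assumed first-order S-to-D transfer coefficient, 1/m
        q0 = 1.0                         # source-responsive flux entering at the surface

        q_S = q0 * np.exp(-alpha * z)    # S-domain flux abstracted with depth
        gain_D = alpha * q_S             # rate at which water becomes sensible in the D domain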

  7. Multi-Decadal Variation of Aerosols: Sources, Transport, and Climate Effects

    NASA Technical Reports Server (NTRS)

    Chin, Mian; Diehl, Thomas; Bian, Huisheng; Streets, David

    2008-01-01

    We present a global model study of multi-decadal changes of atmospheric aerosols and their climate effects using a global chemistry transport model along with near-term to long-term data records. We focus on the 27-year satellite-era period from 1980 to 2006, during which a suite of aerosol data from satellite observations, ground-based measurements, and intensive field experiments has become available. We will use the Goddard Chemistry Aerosol Radiation and Transport (GOCART) model, which involves a time-varying, comprehensive global emission dataset that we assembled in previous investigations and will improve and extend in this project. This global emission dataset includes emissions of aerosols and their precursors from fuel combustion, biomass burning, volcanic eruptions, and other sources from 1980 to the present. Using the model and satellite data, we will analyze (1) the long-term global and regional aerosol trends and their relationship to the changes of aerosol and precursor emissions from anthropogenic and natural sources, and (2) the intercontinental source-receptor relationships controlled by emissions, transport pathways, and climate variability.

  8. Annual Rates on Seismogenic Italian Sources with Models of Long-Term Predictability for the Time-Dependent Seismic Hazard Assessment In Italy

    NASA Astrophysics Data System (ADS)

    Murru, Maura; Falcone, Giuseppe; Console, Rodolfo

    2016-04-01

    The present study is carried out in the framework of the Center for Seismic Hazard (CPS) of INGV, under the agreement signed in 2015 with the Department of Civil Protection to develop a new seismic hazard model for the country that can update the current reference (MPS04-S1; zonesismiche.mi.ingv.it and esse1.mi.ingv.it) released between 2004 and 2006. In this initiative, we participate with the Long-Term Stress Transfer (LTST) Model, which provides the annual occurrence rate of a seismic event over the entire Italian territory, from a minimum magnitude of Mw 4.5, in bins of 0.1 magnitude units on geographical cells of 0.1° x 0.1°. Our methodology fuses a statistical time-dependent renewal model (Brownian Passage Time, BPT; Matthews et al., 2002) with a physical model that considers the permanent stress change that a seismogenic source undergoes as a result of the earthquakes occurring on surrounding sources. For each considered catalog (historical, instrumental, and individual seismogenic sources) we determined a distinct rate value for each 0.1° x 0.1° cell for the next 50 yr. If a cell falls within one of the sources in question, we adopted the respective rate value, which refers only to the magnitude of the characteristic event. This rate value is divided by the number of grid cells that fall on the horizontal projection of the source. If instead a cell falls outside any seismogenic source, we used the average rate obtained from the historical and instrumental catalogs, using the method of Frankel (1995). The annual occurrence rate was computed for each of the three considered distributions (Poisson, BPT, and BPT with inclusion of stress transfer).
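
    For the renewal component, the conditional rupture probability under a Brownian Passage Time model can be written compactly (a generic sketch after Matthews et al., 2002; the recurrence parameters below are invented):

        import numpy as np
        from scipy.stats import norm

        def bpt_cdf(t, mu, alpha):
            # BPT cumulative distribution; mu: mean recurrence time,
            # alpha: aperiodicity (coefficient of variation)
            u1 = (np.sqrt(t / mu) - np.sqrt(mu / t)) / alpha
            u2 = (np.sqrt(t / mu) + np.sqrt(mu / t)) / alpha
            return norm.cdf(u1) + np.exp(2.0 / alpha ** 2) * norm.cdf(-u2)

        def conditional_probability(t_elapsed, dt, mu, alpha):
            # probability of an event in (t_elapsed, t_elapsed + dt], given
            # no event since the last one at t = 0
            F = lambda t: bpt_cdf(t, mu, alpha)
            return (F(t_elapsed + dt) - F(t_elapsed)) / (1.0 - F(t_elapsed))

        p_50yr = conditional_probability(t_elapsed=300.0, dt=50.0, mu=400.0, alpha=0.5)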

  9. The Fukushima releases: an inverse modelling approach to assess the source term by using gamma dose rate observations

    NASA Astrophysics Data System (ADS)

    Saunier, Olivier; Mathieu, Anne; Didier, Damien; Tombette, Marilyne; Quélo, Denis; Winiarek, Victor; Bocquet, Marc

    2013-04-01

    The Chernobyl nuclear accident, and more recently the Fukushima accident, highlighted that the largest source of error in consequence assessment is the source term estimation, including the time evolution of the release rate and its distribution between radioisotopes. Inverse modelling methods have proved to be efficient in assessing the source term in accidental situations (Gudiksen, 1989; Krysta and Bocquet, 2007; Stohl et al., 2011; Winiarek et al., 2012). These methods combine environmental measurements and atmospheric dispersion models. They have recently been applied to the Fukushima accident. Most existing approaches are designed to use air sampling measurements (Winiarek et al., 2012), and some of them also use deposition measurements (Stohl et al., 2012; Winiarek et al., 2013). During the Fukushima accident, such measurements were far less numerous and not as well distributed within Japan as the dose rate measurements. To efficiently document the evolution of the contamination, gamma dose rate measurements were numerous, well distributed within Japan, and offered a high temporal frequency. However, dose rate data are not as easy to use as air sampling measurements, and until now they were not used in inverse modelling approaches. Indeed, dose rates result from all the gamma emitters present in the ground and in the atmosphere in the vicinity of the receptor. They do not allow one to determine the isotopic composition or to distinguish the plume contribution from wet deposition. The presented approach proposes a way to use dose rate measurements in an inverse modelling approach without the need for a priori information on emissions. The method proved to be efficient and reliable when applied to the Fukushima accident. The emissions of the 8 main isotopes Xe-133, Cs-134, Cs-136, Cs-137, Ba-137m, I-131, I-132 and Te-132 have been assessed. The Daiichi power plant events (such as ventings, explosions…) known to have caused atmospheric releases are well identified in the retrieved source term, except for the unit 3 explosion, where no measurement was available. Comparisons between simulations of atmospheric dispersion and deposition of the retrieved source term show good agreement with environmental observations. Moreover, an important outcome of this study is that the method proved to be perfectly suited to crisis management and should contribute to improving our response in case of a nuclear accident.

  10. Source term evaluation for combustion modeling

    NASA Technical Reports Server (NTRS)

    Sussman, Myles A.

    1993-01-01

    A modification is developed for application to the source terms used in combustion modeling. The modification accounts for the error of the finite difference scheme in regions where chain-branching chemical reactions produce exponential growth of species densities. The modification is first applied to a one-dimensional scalar model problem. It is then generalized to multiple chemical species, and used in quasi-one-dimensional computations of shock-induced combustion in a channel. Grid refinement studies demonstrate the improved accuracy of the method using this modification. The algorithm is applied in two spatial dimensions and used in simulations of steady and unsteady shock-induced combustion. Comparisons with ballistic range experiments give confidence in the numerical technique and the 9-species hydrogen-air chemistry model.

  11. Final safety analysis report for the Galileo Mission: Volume 2: Book 1, Accident model document

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    The Accident Model Document (AMD) is the second volume of the three-volume Final Safety Analysis Report (FSAR) for the Galileo outer planetary space science mission. This mission employs Radioisotope Thermoelectric Generators (RTGs) as the prime electrical power sources for the spacecraft. Galileo will be launched into Earth orbit using the Space Shuttle and will use the Inertial Upper Stage (IUS) booster to place the spacecraft into an Earth escape trajectory. The RTGs employ silicon-germanium thermoelectric couples to produce electricity from the heat energy that results from the decay of the radioisotope fuel, Plutonium-238, used in the RTG heat source. The heat source configuration used in the RTGs is termed General Purpose Heat Source (GPHS), and the RTGs are designated GPHS-RTGs. The use of radioactive material in these missions necessitates evaluations of the radiological risks that may be encountered by launch complex personnel as well as by the Earth's general population resulting from postulated malfunctions or failures occurring in the mission operations. The FSAR presents the results of a rigorous safety assessment, including substantial analyses and testing, of the launch and deployment of the RTGs for the Galileo mission. This AMD is a summary of the potential accident and failure sequences which might result in fuel release, the analysis and testing methods employed, and the predicted source terms. Each source term consists of a quantity of fuel released, the location of release, and the physical characteristics of the fuel released. Each source term has an associated probability of occurrence. 27 figs., 11 tabs.

  12. Emergency Preparedness technology support to the Health and Safety Executive (HSE), Nuclear Installations Inspectorate (NII) of the United Kingdom. Appendix A

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    O'Kula, K.R.

    1994-03-01

    The Nuclear Installations Inspectorate (NII) of the United Kingdom (UK) suggested the use of an accident progression logic model method developed by Westinghouse Savannah River Company (WSRC) and Science Applications International Corporation (SAIC) for K Reactor to predict the magnitude and timing of radioactivity releases (the source term) based on an advanced logic model methodology. Predicted releases are output from the personal computer-based model in a level-of-confidence format. Additional technical discussions eventually led to a request from the NII to develop a proposal for assembling a similar technology to predict source terms for the UK's advanced gas-cooled reactor (AGR) type. To respond to this request, WSRC is submitting a proposal to provide contractual assistance as specified in the Scope of Work. The work will produce, document, and transfer technology associated with a Decision-Oriented Source Term Estimator for Emergency Preparedness (DOSE-EP) for the NII to apply to AGRs in the United Kingdom. This document, Appendix A, is a part of this proposal.

  13. Fermi Large Area Telescope Second Source Catalog

    NASA Technical Reports Server (NTRS)

    Nolan, P. L.; Abdo, A. A.; Ackermann, M.; Ajello, M.; Allafort, A.; Antolini, E.; Bonnell, J.; Cannon, A.; Celik, O.; Corbet, R.; et al.

    2012-01-01

    We present the second catalog of high-energy gamma-ray sources detected by the Large Area Telescope (LAT), the primary science instrument on the Fermi Gamma-ray Space Telescope (Fermi), derived from data taken during the first 24 months of the science phase of the mission, which began on 2008 August 4. Source detection is based on the average flux over the 24-month period. The Second Fermi-LAT catalog (2FGL) includes source location regions, defined in terms of elliptical fits to the 95% confidence regions and spectral fits in terms of power-law, exponentially cutoff power-law, or log-normal forms. Also included are flux measurements in 5 energy bands and light curves on monthly intervals for each source. Twelve sources in the catalog are modeled as spatially extended. We provide a detailed comparison of the results from this catalog with those from the first Fermi-LAT catalog (1FGL). Although the diffuse Galactic and isotropic models used in the 2FGL analysis are improved compared to the 1FGL catalog, we attach caution flags to 162 of the sources to indicate possible confusion with residual imperfections in the diffuse model. The 2FGL catalog contains 1873 sources detected and characterized in the 100 MeV to 100 GeV range of which we consider 127 as being firmly identified and 1171 as being reliably associated with counterparts of known or likely gamma-ray-producing source classes.
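
    For reference, the three spectral forms named above are conventionally parameterized as follows (standard forms used for such catalogs, not quoted from the paper):

        \frac{dN}{dE} = K \left(\frac{E}{E_0}\right)^{-\Gamma}                                   % power law
        \frac{dN}{dE} = K \left(\frac{E}{E_0}\right)^{-\Gamma} \exp\!\left(-\frac{E - E_0}{E_c}\right)   % exponentially cutoff power law
        \frac{dN}{dE} = K \left(\frac{E}{E_0}\right)^{-\alpha - \beta \ln(E/E_0)}                % log-normal ("LogParabola")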

  14. ADAPTATION OF THE ADVANCED STATISTICAL TRAJECTORY REGIONAL AIR POLLUTION (ASTRAP) MODEL TO THE EPA VAX COMPUTER - MODIFICATIONS AND TESTING

    EPA Science Inventory

    The Advanced Statistical Trajectory Regional Air Pollution (ASTRAP) model simulates long-term transport and deposition of oxides of sulfur and nitrogen. It is a potential screening tool for assessing long-term effects on regional visibility from sulfur emission sources. However, a rigorou...

  15. Further development of a global pollution model for CO, CH4, and CH2O

    NASA Technical Reports Server (NTRS)

    Peters, L. K.

    1975-01-01

    Global tropospheric pollution models are developed that describe the transport and the physical and chemical processes occurring between the principal sources and sinks of CH4 and CO. Results are given of long term static chemical kinetic computer simulations and preliminary short term dynamic simulations.

  16. LES-Modeling of a Partially Premixed Flame using a Deconvolution Turbulence Closure

    NASA Astrophysics Data System (ADS)

    Wang, Qing; Wu, Hao; Ihme, Matthias

    2015-11-01

    The modeling of the turbulence/chemistry interaction in partially premixed and multi-stream combustion remains an outstanding issue. By extending a recently developed constrained minimum mean-square error deconvolution (CMMSED) method, the objective of this work is to develop a source-term closure for turbulent multi-stream combustion. In this method, the chemical source term is obtained from a three-stream flamelet model, and CMMSED is used as the closure model, thereby eliminating the need for presumed PDF-modeling. The model is applied to LES of a piloted turbulent jet flame with inhomogeneous inlets, and simulation results are compared with experiments. Comparisons with presumed PDF-methods are performed, and issues regarding resolution and conservation of the CMMSED method are examined. The author would like to acknowledge the support of funding from the Stanford Graduate Fellowship.
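
    CMMSED itself is not reproduced here; as a simpler illustration of the deconvolution idea behind such closures, the classic van Cittert iteration below approximately inverts a known low-pass filter G, so that nonlinear source terms can be evaluated on the deconvolved field (the filter and fields are toy assumptions).

        import numpy as np

        def box_filter(phi):
            # a periodic three-point box filter standing in for the LES filter G
            return 0.25 * (np.roll(phi, 1) + 2.0 * phi + np.roll(phi, -1))

        def van_cittert(phi_bar, n_iter=5):
            # approximate deconvolution: phi* ≈ sum_k (I - G)^k phi_bar
            phi = phi_bar.copy()
            for _ in range(n_iter):
                phi = phi + (phi_bar - box_filter(phi))
            return phi

        x = np.linspace(0.0, 2.0 * np.pi, 128, endpoint=False)
        phi = np.sin(x) + 0.3 * np.sin(8 * x)   # "true" field
        phi_bar = box_filter(phi)               # resolved (filtered) field
        phi_star = van_cittert(phi_bar)         # deconvolved approximation
        print(np.abs(phi - phi_star).max(), np.abs(phi - phi_bar).max())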

  17. Bayesian source term estimation of atmospheric releases in urban areas using LES approach.

    PubMed

    Xue, Fei; Kikumoto, Hideki; Li, Xiaofeng; Ooka, Ryozo

    2018-05-05

    The estimation of source information from limited measurements of a sensor network is a challenging inverse problem, which can be viewed as an assimilation process of the observed concentration data and the predicted concentration data. When dealing with releases in built-up areas, the predicted data are generally obtained by the Reynolds-averaged Navier-Stokes (RANS) equations, which yield building-resolving results; however, RANS-based models are outperformed by large-eddy simulation (LES) in the prediction of both airflow and dispersion. Therefore, it is important to explore the possibility of improving the estimation of the source parameters by using the LES approach. In this paper, a novel source term estimation method is proposed based on the LES approach using Bayesian inference. The source-receptor relationship is obtained by solving adjoint equations constructed using the time-averaged flow field simulated by the LES approach, based on the gradient diffusion hypothesis. A wind tunnel experiment with a constant point source downwind of a single building model is used to evaluate the performance of the proposed method, which is compared with that of the existing method using a RANS model. The results show that the proposed method reduces the errors in source location and release strength by 77% and 28%, respectively. Copyright © 2018 Elsevier B.V. All rights reserved.

  18. Inverse modelling of radionuclide release rates using gamma dose rate observations

    NASA Astrophysics Data System (ADS)

    Hamburger, Thomas; Evangeliou, Nikolaos; Stohl, Andreas; von Haustein, Christoph; Thummerer, Severin; Wallner, Christian

    2015-04-01

    Severe accidents in nuclear power plants, such as the historical accident in Chernobyl in 1986 or the more recent disaster at the Fukushima Dai-ichi nuclear power plant in 2011, have drastic impacts on the population and environment. Observations and dispersion modelling of the released radionuclides help to assess the regional impact of such nuclear accidents. Modelling the increase of regional radionuclide activity concentrations resulting from nuclear accidents is subject to a multiplicity of uncertainties. One of the most significant is the estimation of the source term, that is, the time-dependent quantification of the spectrum of radionuclides released during the course of the accident. The quantification of the source term may either remain uncertain (e.g. Chernobyl, Devell et al., 1995) or rely on estimates given by the operators of the nuclear power plant. Precise measurements are mostly missing due to practical limitations during the accident. The release rates of radionuclides at the accident site can be estimated using inverse modelling (Davoine and Bocquet, 2007). The accuracy of the method depends, among other factors, on the availability, reliability, and resolution in time and space of the observations used. Radionuclide activity concentrations are observed on a relatively sparse grid, and the temporal resolution of available data may be low, on the order of hours or a day. Gamma dose rates, on the other hand, are observed routinely on a much denser grid and at higher temporal resolution, and therefore provide a wider basis for inverse modelling (Saunier et al., 2013). We present a new inversion approach, which combines an atmospheric dispersion model and observations of radionuclide activity concentrations and gamma dose rates to obtain the source term of radionuclides. We use the Lagrangian particle dispersion model FLEXPART (Stohl et al., 1998; Stohl et al., 2005) to model the atmospheric transport of the released radionuclides. The inversion method uses a Bayesian formulation considering uncertainties for the a priori source term and the observations (Eckhardt et al., 2008; Stohl et al., 2012). The a priori information on the source term is a first guess; the gamma dose rate observations are used to improve the first guess and to retrieve a reliable source term. The details of this method will be presented at the conference. This work is funded by the Bundesamt für Strahlenschutz BfS, Forschungsvorhaben 3612S60026. References: Davoine, X. and Bocquet, M., Atmos. Chem. Phys., 7, 1549-1564, 2007. Devell, L., et al., OCDE/GD(96)12, 1995. Eckhardt, S., et al., Atmos. Chem. Phys., 8, 3881-3897, 2008. Saunier, O., et al., Atmos. Chem. Phys., 13, 11403-11421, 2013. Stohl, A., et al., Atmos. Environ., 32, 4245-4264, 1998. Stohl, A., et al., Atmos. Chem. Phys., 5, 2461-2474, 2005. Stohl, A., et al., Atmos. Chem. Phys., 12, 2313-2343, 2012.
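
    The Bayesian formulation referred to above is commonly written as the minimization of a cost function of the form (standard notation, not quoted from the abstract):

        J(\mathbf{x}) = (\mathbf{x} - \mathbf{x}_a)^{\mathrm{T}} \mathbf{B}^{-1} (\mathbf{x} - \mathbf{x}_a)
                      + (\mathbf{y} - \mathbf{H}\mathbf{x})^{\mathrm{T}} \mathbf{R}^{-1} (\mathbf{y} - \mathbf{H}\mathbf{x})

    where x is the vector of release rates, x_a the a priori (first-guess) source term, y the observations (here gamma dose rates), H the source-receptor relationship computed with the dispersion model, and B and R the a priori and observation error covariance matrices.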

  19. Prewhitening of Colored Noise Fields for Detection of Threshold Sources

    DTIC Science & Technology

    1993-11-07

    ... determines the noise covariance matrix; prewhitening techniques allow detection of threshold sources. The multiple signal classification (MUSIC) ... Subject terms: AR model, colored noise field, mixed spectra model, MUSIC, noise field, prewhitening, SNR, standardized test.

  20. Soundscapes

    DTIC Science & Technology

    2015-09-30

    Michael B. Porter and Laurel J. Henderson ... hindcasts, nowcasts, and forecasts of the time-evolving soundscape. In terms of the types of sound sources, we will focus initially on commercial ... modeling of the soundscape due to noise involves running an acoustic model for a grid of source positions over latitude and longitude. Typically ...

  1. Martian methane plume models for defining Mars rover methane source search strategies

    NASA Astrophysics Data System (ADS)

    Nicol, Christopher; Ellery, Alex; Lynch, Brian; Cloutis, Ed

    2018-07-01

    The detection of atmospheric methane on Mars implies an active methane source. This introduces the possibility of a biotic source with the implied need to determine whether the methane is indeed biotic in nature or geologically generated. There is a clear need for robotic algorithms which are capable of manoeuvring a rover through a methane plume on Mars to locate its source. We explore aspects of Mars methane plume modelling to reveal complex dynamics characterized by advection and diffusion. A statistical analysis of the plume model has been performed and compared to analyses of terrestrial plume models. Finally, we consider a robotic search strategy to find a methane plume source. We find that gradient-based techniques are ineffective, but that more sophisticated model-based search strategies are unlikely to be available in near-term rover missions.
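
    As a rough illustration of the advection-diffusion behaviour such plume models capture, the textbook Gaussian plume below gives the steady concentration downwind of a continuous point source (a generic terrestrial formula with made-up coefficients, not the authors' Mars model).

        import numpy as np

        def gaussian_plume(x, y, z, q=1.0, u=5.0, h=0.0, a=0.08, b=0.06):
            """C(x, y, z) for source strength q at height h, uniform wind u along x;
            sigma_y = a*x and sigma_z = b*x are toy dispersion fits."""
            sigma_y, sigma_z = a * x, b * x
            return (q / (2.0 * np.pi * u * sigma_y * sigma_z)
                    * np.exp(-0.5 * (y / sigma_y) ** 2)
                    * (np.exp(-0.5 * ((z - h) / sigma_z) ** 2)
                       + np.exp(-0.5 * ((z + h) / sigma_z) ** 2)))  # ground reflection

        print(gaussian_plume(x=500.0, y=10.0, z=1.5))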

  2. An efficient and stable hydrodynamic model with novel source term discretization schemes for overland flow and flood simulations

    NASA Astrophysics Data System (ADS)

    Xia, Xilin; Liang, Qiuhua; Ming, Xiaodong; Hou, Jingming

    2017-05-01

    Numerical models solving the full 2-D shallow water equations (SWEs) have been increasingly used to simulate overland flows and better understand the transient flow dynamics of flash floods in a catchment. However, there still exist key challenges that have not yet been resolved for the development of fully dynamic overland flow models, related to (1) the difficulty of maintaining numerical stability and accuracy in the limit of disappearing water depth and (2) inaccurate estimation of velocities and discharges on slopes as a result of the strong nonlinearity of the friction terms. This paper aims to tackle these key research challenges and presents a new numerical scheme for accurately and efficiently modeling large-scale transient overland flows over complex terrains. The proposed scheme features a novel surface reconstruction method (SRM) to correctly compute slope source terms and maintain numerical stability at small water depth, and a new implicit discretization method to handle the highly nonlinear friction terms. The resulting shallow water overland flow model is first validated against analytical and experimental test cases and then applied to simulate a hypothetical rainfall event in the 42 km² Haltwhistle Burn catchment, UK.
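
    A point-implicit ("semi-implicit") treatment of the friction source term, one common remedy for the stiffness described above, can be sketched as follows; this is a generic scheme for Manning friction, not necessarily the paper's exact discretization.

        def implicit_friction_update(u, h, accel, dt, n_manning=0.03, g=9.81, h_min=1e-6):
            """Advance velocity u by one step of du/dt = accel - g n^2 u|u| / h^(4/3),
            with the friction term linearized about |u^n| and taken implicitly,
            which keeps the update bounded as the depth h vanishes."""
            h = max(h, h_min)                                  # guard vanishing depth
            cf = g * n_manning**2 * abs(u) / h ** (4.0 / 3.0)  # friction coefficient
            return (u + dt * accel) / (1.0 + dt * cf)

        print(implicit_friction_update(u=0.5, h=0.01, accel=0.2, dt=0.1))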

  3. PFLOTRAN-RepoTREND Source Term Comparison Summary.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Frederick, Jennifer M.

    Code inter-comparison studies are useful exercises to verify and benchmark independently developed software to ensure proper function, especially when the software is used to model high-consequence systems which cannot be physically tested in a fully representative environment. This summary describes the results of the first portion of the code inter-comparison between PFLOTRAN and RepoTREND, which compares the radionuclide source term used in a typical performance assessment.

  4. Extended lattice Boltzmann scheme for droplet combustion.

    PubMed

    Ashna, Mostafa; Rahimian, Mohammad Hassan; Fakhari, Abbas

    2017-05-01

    The available lattice Boltzmann (LB) models for combustion or phase change are focused on either single-phase flow combustion or two-phase flow with evaporation assuming a constant density for both liquid and gas phases. To pave the way towards simulation of spray combustion, we propose a two-phase LB method for modeling combustion of liquid fuel droplets. We develop an LB scheme to model phase change and combustion by taking into account the density variation in the gas phase and accounting for the chemical reaction based on the Cahn-Hilliard free-energy approach. Evaporation of liquid fuel is modeled by adding a source term to the continuity equation, which arises because the divergence of the velocity field is no longer trivially zero. The low-Mach-number approximation in the governing Navier-Stokes and energy equations is used to incorporate source terms due to heat release from chemical reactions, density variation, and nonluminous radiative heat loss. Additionally, the conservation equation for chemical species is formulated by including a source term due to chemical reaction. To validate the model, we consider the combustion of n-heptane and n-butanol droplets in stagnant air using overall single-step reactions. The diameter history and flame standoff ratio obtained from the proposed LB method are found to be in good agreement with available numerical and experimental data. The present LB scheme is believed to be a promising approach for modeling spray combustion.

  5. Above and beyond short-term mating, long-term mating is uniquely tied to human personality.

    PubMed

    Holtzman, Nicholas S; Strube, Michael J

    2013-12-16

    To what extent are personality traits and sexual strategies linked? The literature does not provide a clear answer, as it is based on the Sociosexuality model, a one-dimensional model that fails to measure long-term mating (LTM). An improved two-dimensional model separately assesses long-term and short-term mating (STM; Jackson and Kirkpatrick, 2007). In this paper, we link this two-dimensional model to an array of personality traits (Big 5, Dark Triad, and Schizoid Personality). We collected data from different sources (targets and peers; Study 1), and from different nations (United States, Study 1; India, Study 2). We demonstrate for the first time that, above and beyond STM, LTM captures variation in personality.

  6. Estimation of the Cesium-137 Source Term from the Fukushima Daiichi Power Plant Using Air Concentration and Deposition Data

    NASA Astrophysics Data System (ADS)

    Winiarek, Victor; Bocquet, Marc; Duhanyan, Nora; Roustan, Yelva; Saunier, Olivier; Mathieu, Anne

    2013-04-01

    A major difficulty when inverting the source term of an atmospheric tracer dispersion problem is the estimation of the prior errors: those of the atmospheric transport model, those ascribed to the representativeness of the measurements, the instrumental errors, and those attached to the prior knowledge on the variables one seeks to retrieve. In the case of an accidental release of pollutant, and especially in a situation of sparse observability, the reconstructed source is sensitive to these assumptions. This sensitivity makes the quality of the retrieval dependent on the methods used to model and estimate the prior errors of the inverse modeling scheme. In Winiarek et al. (2012), we proposed to use an estimation method for the errors' amplitude based on the maximum likelihood principle. Under semi-Gaussian assumptions, it takes into account, without approximation, the positivity assumption on the source. We applied the method to the estimation of the Fukushima Daiichi cesium-137 and iodine-131 source terms using activity concentrations in the air. The results were compared to an L-curve estimation technique and to Desroziers's scheme. In addition to the estimates of released activities, we provided the related uncertainties (12 PBq with a std. of 15-20% for cesium-137 and 190-380 PBq with a std. of 5-10% for iodine-131). We also showed that, because of the low number of available observations (a few hundred), and even if orders of magnitude were consistent, the reconstructed activities significantly depended on the method used to estimate the prior errors. In order to use more data, we propose to extend the methods to the use of several data types, such as activity concentrations in the air and fallout measurements. The idea is to simultaneously estimate the prior errors related to each dataset, in order to fully exploit the information content of each one. Using the activity concentration measurements, but also daily fallout data from prefectures and cumulated deposition data over a region lying approximately 150 km around the nuclear power plant, we can use a few thousand data points in our inverse modeling algorithm to reconstruct the cesium-137 source term. To improve the parameterization of removal processes, rainfall fields have also been corrected using outputs from the mesoscale meteorological model WRF and ground station rainfall data. As expected, the different methods yield closer results as the number of data increases. Reference: Winiarek, V., M. Bocquet, O. Saunier, A. Mathieu (2012), Estimation of errors in the inverse modeling of accidental release of atmospheric pollutant: Application to the reconstruction of the cesium-137 and iodine-131 source terms from the Fukushima Daiichi power plant, J. Geophys. Res., 117, D05122, doi:10.1029/2011JD016932.

  7. Treating convection in sequential solvers

    NASA Technical Reports Server (NTRS)

    Shyy, Wei; Thakur, Siddharth

    1992-01-01

    The treatment of the convection terms in sequential solvers, a standard procedure found in virtually all pressure-based algorithms, is investigated for flow problems with sharp gradients and source terms. Both scalar model problems and the one-dimensional gas dynamics equations have been used to study the various issues involved. Different approaches, including the use of nonlinear filtering techniques and the adoption of TVD-type schemes, have been investigated. Special treatments of the source terms, such as pressure gradients and heat release, have also been devised, yielding insight and improved accuracy of the numerical procedure adopted.
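
    A minimal example of a TVD-type treatment of the convection term is sketched below (a generic minmod-limited upwind scheme for a scalar conservation law with a source term; not the paper's exact formulation).

        import numpy as np

        def minmod(p, q):
            return np.where(p * q > 0.0, np.sign(p) * np.minimum(np.abs(p), np.abs(q)), 0.0)

        def tvd_step(u, a, dx, dt, source):
            """One explicit step of u_t + (a u)_x = S with limited upwind fluxes (a > 0, periodic)."""
            du = minmod(u - np.roll(u, 1), np.roll(u, -1) - u)  # limited slope
            flux = a * (u + 0.5 * du)                           # flux at the right cell face
            return u - dt / dx * (flux - np.roll(flux, 1)) + dt * source(u)

        x = np.linspace(0.0, 1.0, 100, endpoint=False)
        u = np.where((x > 0.3) & (x < 0.5), 1.0, 0.0)           # sharp-gradient initial data
        u = tvd_step(u, a=1.0, dx=x[1] - x[0], dt=0.005, source=lambda v: 0.0 * v)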

  8. Optimization Of Engine Heat Transfer Mechanisms For Ground Combat Vehicle Signature Models

    NASA Astrophysics Data System (ADS)

    Gonda, T.; Rogers, P.; Gerhart, G.; Reynolds, W. R.

    1988-08-01

    A thermodynamic model for predicting the behavior of selected internal thermal sources of an M2 Bradley Infantry Fighting Vehicle is described. The modeling methodology is expressed in terms of first principle heat transfer equations along with a brief history of TACOM's experience with thermal signature modeling techniques. The dynamic operation of the internal thermal sources is presented along with limited test data and an examination of their effect on the vehicle signature.

  9. Source-term development for a contaminant plume for use by multimedia risk assessment models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Whelan, Gene; McDonald, John P.; Taira, Randal Y.

    1999-12-01

    Multimedia modelers from the U.S. Environmental Protection Agency (EPA) and the U.S. Department of Energy (DOE) are collaborating to conduct a comprehensive and quantitative benchmarking analysis of four intermedia models: DOE's Multimedia Environmental Pollutant Assessment System (MEPAS), EPA's MMSOILS, EPA's PRESTO, and DOE's RESidual RADioactivity (RESRAD). These models represent typical analytically, semi-analytically, and empirically based tools that are utilized in human risk and endangerment assessments at installations containing radioactive and/or hazardous contaminants. Although the benchmarking exercise traditionally emphasizes the application and comparison of these models, the establishment of a Conceptual Site Model (CSM) should be viewed with equal importance. This paper reviews an approach for developing a CSM of an existing, real-world, Sr-90 plume at DOE's Hanford installation in Richland, Washington, for use in a multimedia-based benchmarking exercise between MEPAS, MMSOILS, PRESTO, and RESRAD. In an unconventional move for analytically based modeling, the benchmarking exercise will begin with the plume as the source of contamination. The source and release mechanism are developed and described within the context of performing a preliminary risk assessment utilizing these analytical models. By beginning with the plume as the source term, this paper reviews a typical process and procedure an analyst would follow in developing a CSM for use in a preliminary assessment using this class of analytical tool.

  10. Modified ensemble Kalman filter for nuclear accident atmospheric dispersion: prediction improved and source estimated.

    PubMed

    Zhang, X L; Su, G F; Yuan, H Y; Chen, J G; Huang, Q Y

    2014-09-15

    Atmospheric dispersion models play an important role in nuclear power plant accident management. A reliable estimation of the radioactive material distribution at short range (about 50 km) is urgently needed for population sheltering and evacuation planning. However, the meteorological data and the source term, which greatly influence the accuracy of atmospheric dispersion models, are usually poorly known in the early phase of an emergency. In this study, a modified ensemble Kalman filter data assimilation method in conjunction with a Lagrangian puff-model is proposed to simultaneously improve the model prediction and reconstruct the source terms for short-range atmospheric dispersion using off-site environmental monitoring data. Four main uncertainty parameters are considered: source release rate, plume rise height, wind speed and wind direction. Twin experiments show that the method effectively improves the predicted concentration distribution, and the temporal profiles of source release rate and plume rise height are also successfully reconstructed. Moreover, the time lag in the response of the ensemble Kalman filter is shortened. The method proposed here can be a useful tool not only in nuclear power plant accident emergency management but also in other similar situations where hazardous material is released into the atmosphere. Copyright © 2014 Elsevier B.V. All rights reserved.
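
    The analysis step of a generic stochastic ensemble Kalman filter can be sketched as follows; the paper's modified variant is not reproduced, and the observation operator and sizes below are toy assumptions. Columns of X are ensemble members of the augmented state (e.g. concentrations plus source parameters such as release rate and plume rise height).

        import numpy as np

        def enkf_analysis(X, y, H, obs_std, rng):
            """Perturbed-observation EnKF update of ensemble X given observations y."""
            n_ens = X.shape[1]
            Xp = X - X.mean(axis=1, keepdims=True)         # ensemble anomalies
            HXp = H @ Xp
            Pxy = Xp @ HXp.T / (n_ens - 1)                 # state-observation covariance
            Pyy = HXp @ HXp.T / (n_ens - 1) + obs_std**2 * np.eye(len(y))
            K = Pxy @ np.linalg.inv(Pyy)                   # Kalman gain
            Y = y[:, None] + obs_std * rng.standard_normal((len(y), n_ens))
            return X + K @ (Y - H @ X)

        rng = np.random.default_rng(1)
        X = 2.0 + rng.standard_normal((4, 50))             # toy prior ensemble
        H = np.array([[1.0, 0.0, 0.0, 0.0]])               # observe the first component
        X_a = enkf_analysis(X, y=np.array([3.0]), H=H, obs_std=0.1, rng=rng)
        print(X_a.mean(axis=1))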

  11. Evaluation of the Hydrologic Source Term from Underground Nuclear Tests on Pahute Mesa at the Nevada Test Site: The CHESHIRE Test

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pawloski, G A; Tompson, A F B; Carle, S F

    The objectives of this report are to develop, summarize, and interpret a series of detailed unclassified simulations that forecast the nature and extent of radionuclide release and near-field migration in groundwater away from the CHESHIRE underground nuclear test at Pahute Mesa at the NTS over 1000 yrs. Collectively, these results are called the CHESHIRE Hydrologic Source Term (HST). The CHESHIRE underground nuclear test was one of 76 underground nuclear tests that were fired below or within 100 m of the water table between 1965 and 1992 in Areas 19 and 20 of the NTS. These areas now comprise the Pahute Mesa Corrective Action Unit (CAU) for which a separate subregional scale flow and transport model is being developed by the UGTA Project to forecast the larger-scale migration of radionuclides from underground tests on Pahute Mesa. The current simulations are being developed, on one hand, to more fully understand the complex coupled processes involved in radionuclide migration, with a specific focus on the CHESHIRE test. While remaining unclassified, they are as site specific as possible and involve a level of modeling detail that is commensurate with the most fundamental processes, conservative assumptions, and representative data sets available. However, the simulation results are also being developed so that they may be simplified and interpreted for use as a source term boundary condition at the CHESHIRE location in the Pahute Mesa CAU model. In addition, the processes of simplification and interpretation will provide generalized insight as to how the source term behavior at other tests may be considered or otherwise represented in the Pahute Mesa CAU model.

  12. Short-Term Energy Outlook Model Documentation: Electricity Generation and Fuel Consumption Models

    EIA Publications

    2014-01-01

    The electricity generation and fuel consumption models of the Short-Term Energy Outlook (STEO) model provide forecasts of electricity generation from various types of energy sources and forecasts of the quantities of fossil fuels consumed for power generation. The structure of the electricity industry and the behavior of power generators varies between different areas of the United States. In order to capture these differences, the STEO electricity supply and fuel consumption models are designed to provide forecasts for the four primary Census regions.

  13. Validation of a mixture-averaged thermal diffusion model for premixed lean hydrogen flames

    NASA Astrophysics Data System (ADS)

    Schlup, Jason; Blanquart, Guillaume

    2018-03-01

    The mixture-averaged thermal diffusion model originally proposed by Chapman and Cowling is validated using multiple flame configurations. Simulations using detailed hydrogen chemistry are done on one-, two-, and three-dimensional flames. The analysis spans flat and stretched, steady and unsteady, and laminar and turbulent flames. Quantitative and qualitative results using the thermal diffusion model compare very well with the more complex multicomponent diffusion model. Comparisons are made using flame speeds, surface areas, species profiles, and chemical source terms. Once validated, this model is applied to three-dimensional laminar and turbulent flames. For these cases, thermal diffusion causes an increase in the propagation speed of the flames as well as increased product chemical source terms in regions of high positive curvature. The results illustrate the necessity for including thermal diffusion, and the accuracy and computational efficiency of the mixture-averaged thermal diffusion model.
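
    The mixture-averaged thermal diffusion (Soret) effect discussed here is conventionally included as an extra species diffusion velocity of the form (standard notation, not quoted from the paper):

        V_k^{T} = -\frac{D_k \, \Theta_k}{X_k} \, \frac{\nabla T}{T}

    where D_k is the mixture-averaged diffusion coefficient of species k, \Theta_k its thermal diffusion ratio, and X_k its mole fraction; for light species such as H and H2 this term drives diffusion toward hotter regions.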

  14. NASA thesaurus. Volume 3: Definitions

    NASA Technical Reports Server (NTRS)

    1988-01-01

    Publication of NASA Thesaurus definitions began with Supplement 1 to the 1985 NASA Thesaurus. The definitions given here represent the complete file of over 3,200 definitions, complemented by nearly 1,000 use references. Definitions of more common or general scientific terms are given a NASA slant if one exists. Certain terms are not defined as a matter of policy: common names, chemical elements, specific models of computers, and nontechnical terms. The NASA Thesaurus predates by a number of years the systematic effort to define terms; therefore, not all Thesaurus terms have been defined. Nevertheless, definitions of older terms are continually being added. The following data are provided for each entry: term in uppercase/lowercase form, definition, source, and year the term (not the definition) was added to the NASA Thesaurus. The NASA History Office is the authority for capitalization in satellite and spacecraft names. Definitions with no source given were constructed by lexicographers at the NASA Scientific and Technical Information (STI) Facility, who rely on the following sources for their information: experts in the field, literature searches from the NASA STI database, and specialized references.

  15. Fermi large area telescope second source catalog

    DOE PAGES

    Nolan, P. L.; Abdo, A. A.; Ackermann, M.; ...

    2012-03-28

    Here, we present the second catalog of high-energy γ-ray sources detected by the Large Area Telescope (LAT), the primary science instrument on the Fermi Gamma-ray Space Telescope (Fermi), derived from data taken during the first 24 months of the science phase of the mission, which began on 2008 August 4. Source detection is based on the average flux over the 24 month period. The second Fermi-LAT catalog (2FGL) includes source location regions, defined in terms of elliptical fits to the 95% confidence regions and spectral fits in terms of power-law, exponentially cutoff power-law, or log-normal forms. Also included are flux measurements in five energy bands and light curves on monthly intervals for each source. Twelve sources in the catalog are modeled as spatially extended. Furthermore, we provide a detailed comparison of the results from this catalog with those from the first Fermi-LAT catalog (1FGL). Although the diffuse Galactic and isotropic models used in the 2FGL analysis are improved compared to the 1FGL catalog, we attach caution flags to 162 of the sources to indicate possible confusion with residual imperfections in the diffuse model. Finally, the 2FGL catalog contains 1873 sources detected and characterized in the 100 MeV to 100 GeV range of which we consider 127 as being firmly identified and 1171 as being reliably associated with counterparts of known or likely γ-ray-producing source classes.

  16. FERMI LARGE AREA TELESCOPE SECOND SOURCE CATALOG

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nolan, P. L.; Ajello, M.; Allafort, A.

    We present the second catalog of high-energy γ-ray sources detected by the Large Area Telescope (LAT), the primary science instrument on the Fermi Gamma-ray Space Telescope (Fermi), derived from data taken during the first 24 months of the science phase of the mission, which began on 2008 August 4. Source detection is based on the average flux over the 24 month period. The second Fermi-LAT catalog (2FGL) includes source location regions, defined in terms of elliptical fits to the 95% confidence regions and spectral fits in terms of power-law, exponentially cutoff power-law, or log-normal forms. Also included are flux measurements in five energy bands and light curves on monthly intervals for each source. Twelve sources in the catalog are modeled as spatially extended. We provide a detailed comparison of the results from this catalog with those from the first Fermi-LAT catalog (1FGL). Although the diffuse Galactic and isotropic models used in the 2FGL analysis are improved compared to the 1FGL catalog, we attach caution flags to 162 of the sources to indicate possible confusion with residual imperfections in the diffuse model. The 2FGL catalog contains 1873 sources detected and characterized in the 100 MeV to 100 GeV range of which we consider 127 as being firmly identified and 1171 as being reliably associated with counterparts of known or likely γ-ray-producing source classes.

  17. Forcing scheme analysis for the axisymmetric lattice Boltzmann method under incompressible limit.

    PubMed

    Zhang, Liangqi; Yang, Shiliang; Zeng, Zhong; Chen, Jie; Yin, Linmao; Chew, Jia Wei

    2017-04-01

    Because the standard lattice Boltzmann (LB) method is proposed for Cartesian Navier-Stokes (NS) equations, additional source terms are necessary in the axisymmetric LB method for representing the axisymmetric effects. Therefore, the accuracy and applicability of the axisymmetric LB models depend on the forcing schemes adopted for discretization of the source terms. In this study, three forcing schemes, namely, the trapezium rule based scheme, the direct forcing scheme, and the semi-implicit centered scheme, are analyzed theoretically by investigating their derived macroscopic equations in the diffusive scale. Particularly, the finite difference interpretation of the standard LB method is extended to the LB equations with source terms, and then the accuracy of different forcing schemes is evaluated for the axisymmetric LB method. Theoretical analysis indicates that the discrete lattice effects arising from the direct forcing scheme are part of the truncation error terms and thus would not affect the overall accuracy of the standard LB method with general force term (i.e., only the source terms in the momentum equation are considered), but lead to incorrect macroscopic equations for the axisymmetric LB models. On the other hand, the trapezium rule based scheme and the semi-implicit centered scheme both have the advantage of avoiding the discrete lattice effects and recovering the correct macroscopic equations. Numerical tests applied for validating the theoretical analysis show that both the numerical stability and the accuracy of the axisymmetric LB simulations are affected by the direct forcing scheme, which indicate that forcing schemes free of the discrete lattice effects are necessary for the axisymmetric LB method.

  18. A study of numerical methods for hyperbolic conservation laws with stiff source terms

    NASA Technical Reports Server (NTRS)

    Leveque, R. J.; Yee, H. C.

    1988-01-01

    The proper modeling of nonequilibrium gas dynamics is required in certain regimes of hypersonic flow. For inviscid flow this gives a system of conservation laws coupled with source terms representing the chemistry. Often a wide range of time scales is present in the problem, leading to numerical difficulties as in stiff systems of ordinary differential equations. Stability can be achieved by using implicit methods, but other numerical difficulties are observed. The behavior of typical numerical methods on a simple advection equation with a parameter-dependent source term was studied. Two approaches to incorporate the source term were utilized: MacCormack type predictor-corrector methods with flux limiters, and splitting methods in which the fluid dynamics and chemistry are handled in separate steps. Various comparisons over a wide range of parameter values were made. In the stiff case where the solution contains discontinuities, incorrect numerical propagation speeds are observed with all of the methods considered. This phenomenon is studied and explained.
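
    A fractional-step (splitting) discretization of the kind studied can be sketched on the scalar model problem u_t + u_x = -μ u(u-1)(u-1/2), a form of parameter-dependent source term used in this line of work; the grid and parameter values below are illustrative.

        import numpy as np

        def advect_upwind(u, dt, dx):
            # first-order upwind step for u_t + u_x = 0 (periodic)
            return u - dt / dx * (u - np.roll(u, 1))

        def react(u, dt, mu, nsub=50):
            # sub-cycled explicit solve of the stiff ODE u' = -mu u (u-1)(u-1/2)
            for _ in range(nsub):
                u = u + (dt / nsub) * (-mu * u * (u - 1.0) * (u - 0.5))
            return u

        nx, mu = 200, 1000.0                      # large mu => stiff source term
        x = np.linspace(0.0, 1.0, nx, endpoint=False)
        u = np.where(x < 0.3, 1.0, 0.0)           # discontinuous initial data
        dx = x[1] - x[0]
        dt = 0.5 * dx
        for _ in range(100):
            u = react(advect_upwind(u, dt, dx), dt, mu)
        # In the stiff limit the smeared discontinuity is pushed to equilibrium
        # each step and can propagate at a spurious, grid-dependent speed; this
        # is the phenomenon the paper analyzes.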

  19. Atmospheric Tracer Inverse Modeling Using Markov Chain Monte Carlo (MCMC)

    NASA Astrophysics Data System (ADS)

    Kasibhatla, P.

    2004-12-01

    In recent years, there has been an increasing emphasis on the use of Bayesian statistical estimation techniques to characterize the temporal and spatial variability of atmospheric trace gas sources and sinks. The applications have been varied in terms of the particular species of interest, as well as in terms of the spatial and temporal resolution of the estimated fluxes. However, one common characteristic has been the use of relatively simple statistical models for describing the measurement and chemical transport model error statistics and prior source statistics. For example, multivariate normal probability distribution functions (pdfs) are commonly used to model these quantities, and inverse source estimates are derived for fixed values of pdf parameters. While the advantage of this approach is that closed-form analytical solutions for the a posteriori pdfs of interest are available, it is worth exploring Bayesian analysis approaches which allow for a more general treatment of error and prior source statistics. Here, we present an application of the Markov Chain Monte Carlo (MCMC) methodology to an atmospheric tracer inversion problem to demonstrate how more general statistical models for errors can be incorporated into the analysis in a relatively straightforward manner. The MCMC approach to Bayesian analysis, which has found wide application in a variety of fields, is a statistical simulation approach that involves computing moments of interest of the a posteriori pdf by efficiently sampling this pdf. The specific inverse problem that we focus on is the annual mean CO2 source/sink estimation problem considered by the TransCom3 project. TransCom3 was a collaborative effort involving various modeling groups that followed a common modeling and analysis protocol. As such, this problem provides a convenient case study to demonstrate the applicability of the MCMC methodology to atmospheric tracer source/sink estimation problems.
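
    A minimal random-walk Metropolis sketch for a toy one-parameter source estimation problem is given below (illustrative only; the TransCom3 setup is far larger). It samples the posterior of a scalar source s given observations y = h s + noise and a Gaussian prior.

        import numpy as np

        rng = np.random.default_rng(2)
        h, s_true, sigma = 2.0, 1.5, 0.3
        y = h * s_true + sigma * rng.standard_normal(100)       # synthetic data

        def log_post(s, prior_mean=0.0, prior_std=5.0):
            ll = -0.5 * np.sum((y - h * s) ** 2) / sigma**2     # Gaussian log-likelihood
            lp = -0.5 * ((s - prior_mean) / prior_std) ** 2     # Gaussian log-prior
            return ll + lp

        samples, s = [], 0.0
        for _ in range(20000):
            s_prop = s + 0.05 * rng.standard_normal()           # random-walk proposal
            if np.log(rng.random()) < log_post(s_prop) - log_post(s):
                s = s_prop                                      # accept
            samples.append(s)
        print(np.mean(samples[5000:]), np.std(samples[5000:]))  # posterior mean / std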

  20. Cyclic Evolution of Coronal Fields from a Coupled Dynamo Potential-Field Source-Surface Model.

    PubMed

    Dikpati, Mausumi; Suresh, Akshaya; Burkepile, Joan

    The structure of the Sun's corona varies with the solar-cycle phase, from a near spherical symmetry at solar maximum to an axial dipole at solar minimum. It is widely accepted that the large-scale coronal structure is governed by magnetic fields that are most likely generated by dynamo action in the solar interior. In order to understand the variation in coronal structure, we couple a potential-field source-surface model with a cyclic dynamo model. In this coupled model, the magnetic field inside the convection zone is governed by the dynamo equation; these dynamo-generated fields are extended from the photosphere to the corona using a potential-field source-surface model. Assuming axisymmetry, we take linear combinations of associated Legendre polynomials that match the more complex coronal structures. Choosing images of the global corona from the Mauna Loa Solar Observatory at each Carrington rotation over half a cycle (1986 - 1991), we compute the coefficients of the associated Legendre polynomials up to degree eight and compare with observations. We show that at minimum the dipole term dominates, but it fades as the cycle progresses; higher-order multipolar terms begin to dominate. The amplitudes of these terms are not exactly the same for the two limbs, indicating that there is a longitude dependence. While both the 1986 and the 1996 minimum coronas were dipolar, the minimum in 2008 was unusual, since there was a substantial departure from a dipole. We investigate the physical cause of this departure by including a North-South asymmetry in the surface source of the magnetic fields in our flux-transport dynamo model, and find that this asymmetry could be one of the reasons for departure from the dipole in the 2008 minimum.
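
    In the axisymmetric potential-field source-surface setting described here, the coronal field derives from a scalar potential satisfying Laplace's equation, whose general axisymmetric solution is (standard form, not quoted from the paper):

        \Phi(r, \theta) = \sum_{l} \left[ a_l \, r^{l} + b_l \, r^{-(l+1)} \right] P_l(\cos\theta), \qquad \mathbf{B} = -\nabla \Phi,

    with the coefficients a_l, b_l fixed by the photospheric field and by the condition that the field be purely radial at the source surface.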

  1. Sensitivity Analysis Tailored to Constrain 21st Century Terrestrial Carbon-Uptake

    NASA Astrophysics Data System (ADS)

    Muller, S. J.; Gerber, S.

    2013-12-01

    The long-term fate of terrestrial carbon (C) in response to climate change remains a dominant source of uncertainty in Earth-system model projections. Increasing atmospheric CO2 could be mitigated by long-term net uptake of C, through processes such as increased plant productivity due to "CO2-fertilization". Conversely, atmospheric conditions could be exacerbated by long-term net release of C, through processes such as increased decomposition due to higher temperatures. This balance is an important area of study, and a major source of uncertainty in long-term (>year 2050) projections of planetary response to climate change. We present results from an innovative application of sensitivity analysis to LM3V, a dynamic global vegetation model (DGVM), intended to identify observed/observable variables that are useful for constraining long-term projections of C-uptake. We analyzed the sensitivity of cumulative C-uptake by 2100, as modeled by LM3V in response to IPCC AR4 scenario climate data (1860-2100), to perturbations in over 50 model parameters. We concurrently analyzed the sensitivity of over 100 observable model variables, during the extant record period (1970-2010), to the same parameter changes. By correlating the sensitivities of observable variables with the sensitivity of long-term C-uptake, we identified model calibration variables that would also constrain long-term C-uptake projections. LM3V employs a coupled carbon-nitrogen cycle to account for N-limitation, and we find that N-related variables have an important role to play in constraining long-term C-uptake. This work has implications for prioritizing field campaigns to collect global data that can help reduce uncertainties in the long-term land-atmosphere C-balance. Though the results of this study are specific to LM3V, the processes that characterize this model are not completely divorced from other DGVMs (or reality), and our approach provides valuable insights into how data can be leveraged to better constrain projections of the land carbon sink.

  2. Estimation of errors in the inverse modeling of accidental release of atmospheric pollutant: Application to the reconstruction of the cesium-137 and iodine-131 source terms from the Fukushima Daiichi power plant

    NASA Astrophysics Data System (ADS)

    Winiarek, Victor; Bocquet, Marc; Saunier, Olivier; Mathieu, Anne

    2012-03-01

    A major difficulty when inverting the source term of an atmospheric tracer dispersion problem is the estimation of the prior errors: those of the atmospheric transport model, those ascribed to the representativeness of the measurements, those that are instrumental, and those attached to the prior knowledge on the variables one seeks to retrieve. In the case of an accidental release of pollutant, the reconstructed source is sensitive to these assumptions. This sensitivity makes the quality of the retrieval dependent on the methods used to model and estimate the prior errors of the inverse modeling scheme. We propose to use an estimation method for the errors' amplitude based on the maximum likelihood principle. Under semi-Gaussian assumptions, it takes into account, without approximation, the positivity assumption on the source. We apply the method to the estimation of the Fukushima Daiichi source term using activity concentrations in the air. The results are compared to an L-curve estimation technique and to Desroziers's scheme. The total reconstructed activities significantly depend on the chosen method. Because of the poor observability of the Fukushima Daiichi emissions, these methods provide lower bounds for the cesium-137 and iodine-131 reconstructed activities. These lower-bound estimates, 1.2 × 10^16 Bq for cesium-137, with an estimated standard deviation range of 15-20%, and 1.9-3.8 × 10^17 Bq for iodine-131, with an estimated standard deviation range of 5-10%, are of the same order of magnitude as those provided by the Japanese Nuclear and Industrial Safety Agency and about 5 to 10 times less than the Chernobyl atmospheric releases.

  3. Incorporation of an Energy Equation into a Pulsed Inductive Thruster Performance Model

    NASA Technical Reports Server (NTRS)

    Polzin, Kurt A.; Reneau, Jarred P.; Sankaran, Kameshwaran

    2011-01-01

    A model for pulsed inductive plasma acceleration containing an energy equation to account for the various sources and sinks in such devices is presented. The model consists of a set of circuit equations coupled to an equation of motion and energy equation for the plasma. The latter two equations are obtained for the plasma current sheet by treating it as a one-element finite volume, integrating the equations over that volume, and then matching known terms or quantities already calculated in the model to the resulting current sheet-averaged terms in the equations. Calculations showing the time-evolution of the various sources and sinks in the system are presented to demonstrate the efficacy of the model, with two separate resistivity models employed to show an example of how the plasma transport properties can affect the calculation. While neither resistivity model is fully accurate, the demonstration shows that it is possible within this modeling framework to time-accurately update various plasma parameters.

  4. Estimation of the caesium-137 source term from the Fukushima Daiichi nuclear power plant using a consistent joint assimilation of air concentration and deposition observations

    NASA Astrophysics Data System (ADS)

    Winiarek, Victor; Bocquet, Marc; Duhanyan, Nora; Roustan, Yelva; Saunier, Olivier; Mathieu, Anne

    2014-01-01

    Inverse modelling techniques can be used to estimate the amount of radionuclides and the temporal profile of the source term released in the atmosphere during the accident of the Fukushima Daiichi nuclear power plant in March 2011. In Winiarek et al. (2012b), the lower bounds of the caesium-137 and iodine-131 source terms were estimated with such techniques, using activity concentration measurements. The importance of an objective assessment of prior errors (the observation errors and the background errors) was emphasised for a reliable inversion. In such critical context where the meteorological conditions can make the source term partly unobservable and where only a few observations are available, such prior estimation techniques are mandatory, the retrieved source term being very sensitive to this estimation. We propose to extend the use of these techniques to the estimation of prior errors when assimilating observations from several data sets. The aim is to compute an estimate of the caesium-137 source term jointly using all available data about this radionuclide, such as activity concentrations in the air, but also daily fallout measurements and total cumulated fallout measurements. It is crucial to properly and simultaneously estimate the background errors and the prior errors relative to each data set. A proper estimation of prior errors is also a necessary condition to reliably estimate the a posteriori uncertainty of the estimated source term. Using such techniques, we retrieve a total released quantity of caesium-137 in the interval 11.6-19.3 PBq with an estimated standard deviation range of 15-20% depending on the method and the data sets. The “blind” time intervals of the source term have also been strongly mitigated compared to the first estimations with only activity concentration data.

  5. Next Generation Emission Measurements for Fugitive, Area Source, and Fence Line Applications?

    EPA Science Inventory

    Next generation emissions measurements (NGEM) is an EPA term for the rapidly advancing field of air pollutant sensor technologies, data integration concepts, and associated geospatial modeling strategies for source emissions measurements. Ranging from low cost sensors to satelli...

  6. Effects of Host-rock Fracturing on Elastic-deformation Source Models of Volcano Deflation.

    PubMed

    Holohan, Eoghan P; Sudhaus, Henriette; Walter, Thomas R; Schöpfer, Martin P J; Walsh, John J

    2017-09-08

    Volcanoes commonly inflate or deflate during episodes of unrest or eruption. Continuum mechanics models that assume linear elastic deformation of the Earth's crust are routinely used to invert the observed ground motions. The source(s) of deformation in such models are generally interpreted in terms of magma bodies or pathways, and thus form a basis for hazard assessment and mitigation. Using discontinuum mechanics models, we show how host-rock fracturing (i.e. non-elastic deformation) during drainage of a magma body can progressively change the shape and depth of an elastic-deformation source. We argue that this effect explains the marked spatio-temporal changes in source model attributes inferred for the March-April 2007 eruption of Piton de la Fournaise volcano, La Reunion. We find that pronounced deflation-related host-rock fracturing can: (1) yield inclined source model geometries for a horizontal magma body; (2) cause significant upward migration of an elastic-deformation source, leading to underestimation of the true magma body depth and potentially to a misinterpretation of ascending magma; and (3) at least partly explain underestimation by elastic-deformation sources of changes in sub-surface magma volume.

  7. Simulating Cyclic Evolution of Coronal Magnetic Fields using a Potential Field Source Surface Model Coupled with a Dynamo Model

    NASA Astrophysics Data System (ADS)

    Suresh, A.; Dikpati, M.; Burkepile, J.; de Toma, G.

    2013-12-01

    The structure of the Sun's corona varies with the solar cycle, from a near spherical symmetry at solar maximum to an axial dipole at solar minimum. Why does this pattern occur? It is widely accepted that large-scale coronal structure is governed by magnetic fields, which are most likely generated by dynamo action in the solar interior. In order to understand the variation in coronal structure, we couple a potential field source surface model with a cyclic dynamo model. In this coupled model, the magnetic field inside the convection zone is governed by the dynamo equation, and above the photosphere these dynamo-generated fields are extended from the photosphere to the corona by using a potential field source surface model. Under the assumption of axisymmetry, the large-scale poloidal fields can be written in terms of the curl of a vector potential. Since from the photosphere and above the magnetic diffusivity is essentially infinite, the evolution of the vector potential is given by Laplace's equation, the solution of which is obtained in the form of first-order associated Legendre polynomials. By taking linear combinations of these polynomial terms, we find solutions that match more complex coronal structures. Choosing images of the global corona from the Mauna Loa Solar Observatory at each Carrington rotation over half a cycle (1986-1991), we compute the coefficients of the associated Legendre polynomials up to degree eight and compare with observation. We reproduce some previous results: at minimum the dipole term dominates, but this term fades as the cycle progresses and higher-order multipole terms begin to dominate. We find that the amplitudes of these terms are not exactly the same in the two limbs, indicating that there is some phi dependence. Furthermore, by comparing the solar minimum corona during the past three minima (1986, 1996, and 2008), we find that, while both the 1986 and 1996 minima were dipolar, the minimum in 2008 was unusual, as there was a departure from a dipole. In order to investigate the physical cause of this departure from a dipole, we implement north-south asymmetry in the surface source of the magnetic fields in our model, and find that such north-south asymmetry in the solar cycle could be one of the reasons for this departure. This work is partially supported by NASA's LWS grant with award number NNX08AQ34G. NCAR is sponsored by the NSF.

  8. Do Firms Underinvest in Long-Term Research? Evidence from Cancer Clinical Trials.

    PubMed

    Budish, Eric; Roin, Benjamin N; Williams, Heidi

    2015-07-01

    We investigate whether private research investments are distorted away from long-term projects. Our theoretical model highlights two potential sources of this distortion: short-termism and the fixed patent term. Our empirical context is cancer research, where clinical trials--and hence, project durations--are shorter for late-stage cancer treatments relative to early-stage treatments or cancer prevention. Using newly constructed data, we document several sources of evidence that together show private research investments are distorted away from long-term projects. The value of life-years at stake appears large. We analyze three potential policy responses: surrogate (non-mortality) clinical-trial endpoints, targeted R&D subsidies, and patent design.

  9. Do firms underinvest in long-term research? Evidence from cancer clinical trials

    PubMed Central

    Budish, Eric; Roin, Benjamin N.

    2015-01-01

    We investigate whether private research investments are distorted away from long-term projects. Our theoretical model highlights two potential sources of this distortion: short-termism and the fixed patent term. Our empirical context is cancer research, where clinical trials – and hence, project durations – are shorter for late-stage cancer treatments relative to early-stage treatments or cancer prevention. Using newly constructed data, we document several sources of evidence that together show private research investments are distorted away from long-term projects. The value of life-years at stake appears large. We analyze three potential policy responses: surrogate (non-mortality) clinical-trial endpoints, targeted R&D subsidies, and patent design. PMID:26345455

  10. Do firms underinvest in long-term research? Evidence from cancer clinical trials.

    PubMed

    Budish, Eric; Roin, Benjamin N; Williams, Heidi

    2015-07-01

    We investigate whether private research investments are distorted away from long-term projects. Our theoretical model highlights two potential sources of this distortion: short-termism and the fixed patent term. Our empirical context is cancer research, where clinical trials - and hence, project durations - are shorter for late-stage cancer treatments relative to early-stage treatments or cancer prevention. Using newly constructed data, we document several sources of evidence that together show private research investments are distorted away from long-term projects. The value of life-years at stake appears large. We analyze three potential policy responses: surrogate (non-mortality) clinical-trial endpoints, targeted R&D subsidies, and patent design.

  11. Analysis and Modeling of Parallel Photovoltaic Systems under Partial Shading Conditions

    NASA Astrophysics Data System (ADS)

    Buddala, Santhoshi Snigdha

    Since the industrial revolution, fossil fuels such as petroleum, coal, oil, and natural gas have been used as the primary energy source. The consumption of fossil fuels releases various harmful byproduct gases into the atmosphere that deplete the protective atmospheric layers and upset the overall environmental balance. Fossil fuels are also finite resources, and their rapid depletion has prompted the investigation of alternative, renewable sources of energy. One such promising source of renewable energy is solar/photovoltaic energy. This work focuses on investigating a new solar array architecture with solar cells connected in a parallel configuration. Retaining the structural simplicity of the parallel architecture, a theoretical small-signal model of the solar cell is proposed and used to analyze the variations in the module parameters when subjected to partial shading conditions. Simulations were run in SPICE to validate the model implemented in Matlab. The voltage limitations of the proposed architecture are addressed by adopting a simple dc-dc boost converter, and the performance of the architecture is evaluated in terms of efficiency by comparing it with traditional architectures. SPICE simulations are used to compare the architectures and identify the best one in terms of power conversion efficiency under partial shading conditions.

  12. Plasma particle sources due to interactions with neutrals in a turbulent scrape-off layer of a toroidally confined plasma

    NASA Astrophysics Data System (ADS)

    Thrysøe, A. S.; Løiten, M.; Madsen, J.; Naulin, V.; Nielsen, A. H.; Rasmussen, J. Juul

    2018-03-01

    The conditions in the edge and scrape-off layer (SOL) of magnetically confined plasmas determine the overall performance of the device, and it is of great importance to study and understand the mechanisms that drive transport in those regions. If a significant amount of neutral molecules and atoms is present in the edge and SOL regions, they will influence the plasma parameters and thus the plasma confinement. In this paper, it is shown how neutrals, described by a fluid model, introduce source terms in a plasma drift-fluid model through inelastic collisions. The resulting source terms are included in a four-field drift-fluid model, and it is shown how an increasing neutral particle density in the edge and SOL regions influences the plasma particle transport across the last closed flux surface. It is found that an appropriate gas puffing rate allows the edge density in the simulation to be self-consistently maintained by ionization of neutrals in the confined region.

  13. Parameterizing unresolved obstacles with source terms in wave modeling: A real-world application

    NASA Astrophysics Data System (ADS)

    Mentaschi, Lorenzo; Kakoulaki, Georgia; Vousdoukas, Michalis; Voukouvalas, Evangelos; Feyen, Luc; Besio, Giovanni

    2018-06-01

    Parameterizing the dissipative effects of small, unresolved coastal features is fundamental to improving the skill of wave models. The established technique for dealing with this problem consists of reducing the amount of energy advected within the propagation scheme, and it is currently available only for regular grids. To find a more general approach, Mentaschi et al. (2015b) formulated a technique based on source terms and validated it on synthetic case studies. This technique separates the parameterization of the unresolved features from the energy advection, and can therefore be applied to any numerical scheme and to any type of mesh. Here we developed an open-source library for estimating the transparency coefficients needed by this approach, from bathymetric data and for any type of mesh. The spectral wave model WAVEWATCH III was used to show that in a real-world domain, such as the Caribbean Sea, the proposed approach has skill comparable to, and sometimes better than, the established propagation-based technique.

  14. A semi-empirical analysis of strong-motion peaks in terms of seismic source, propagation path, and local site conditions

    NASA Astrophysics Data System (ADS)

    Kamiyama, M.; Orourke, M. J.; Flores-Berrones, R.

    1992-09-01

    A new type of semi-empirical expression for scaling strong-motion peaks in terms of seismic source, propagation path, and local site conditions is derived. Peak acceleration, peak velocity, and peak displacement are analyzed in a similar fashion because they are interrelated. However, emphasis is placed on peak velocity, which is a key ground-motion parameter for lifeline earthquake engineering studies. With the help of seismic source theories, the semi-empirical model is derived using strong motions recorded in Japan. In the derivation, statistical considerations guide the selection of the model itself and the model parameters. Earthquake magnitude M and hypocentral distance r are selected as independent variables, and dummy variables are introduced to identify the amplification factor due to individual local site conditions. The resulting semi-empirical expressions for peak acceleration, velocity, and displacement are then compared with strong-motion data observed during three earthquakes in the U.S. and Mexico.
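
    A regression of this general shape, with synthetic data and illustrative coefficients standing in for the paper's fitted values, can be sketched as:

    ```python
    # Sketch of a semi-empirical peak-velocity scaling of the general form
    #   log10(PGV) = a + b*M + c*log10(r) + site dummies.
    # Synthetic data; form and coefficients are illustrative, not the paper's.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 200
    M = rng.uniform(5.0, 8.0, n)              # magnitude
    r = rng.uniform(10.0, 200.0, n)           # hypocentral distance, km
    site = rng.integers(0, 3, n)              # three hypothetical site classes

    log_pgv = 0.5 * M - 1.2 * np.log10(r) + np.array([0.0, 0.2, 0.4])[site] \
              + rng.normal(0, 0.1, n)         # synthetic "observations"

    # Design matrix: intercept, M, log10(r), dummies for site classes 1 and 2.
    X = np.column_stack([np.ones(n), M, np.log10(r),
                         (site == 1).astype(float), (site == 2).astype(float)])
    beta, *_ = np.linalg.lstsq(X, log_pgv, rcond=None)
    print("a, b, c, d1, d2 =", np.round(beta, 3))
    ```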

  15. Seasonally-Dynamic SPARROW Modeling of Nitrogen Flux Using Earth Observation Data

    NASA Astrophysics Data System (ADS)

    Smith, R. A.; Schwarz, G. E.; Brakebill, J. W.; Hoos, A. B.; Moore, R. B.; Shih, J.; Nolin, A. W.; Macauley, M.; Alexander, R. B.

    2013-12-01

    SPARROW models are widely used to identify and quantify the sources of contaminants in watersheds and to predict their flux and concentration at specified locations downstream. Conventional SPARROW models describe the average relationship between sources and stream conditions based on long-term water quality monitoring data and spatially referenced explanatory information. But many watershed management issues stem from intra- and inter-annual changes in contaminant sources, hydrologic forcing, or other environmental conditions which cause a temporary imbalance between inputs and stream water quality. Dynamic behavior of the system relating to changes in watershed storage and processing then becomes important. In this study, we describe dynamically calibrated SPARROW models of total nitrogen flux in three sub-regional watersheds: the Potomac River Basin, Long Island Sound drainage, and coastal South Carolina drainage. The models are based on seasonal water quality and watershed input data for a total of 170 monitoring stations for the period 2001 to 2008. Frequently reported, spatially detailed input data on the phenology of agricultural production, terrestrial vegetation growth, and snowmelt are a challenging requirement of seasonal modeling of reactive nitrogen. In this NASA-funded research, we use Enhanced Vegetation Index (EVI), gross primary production, and snow/ice cover data from MODIS to parameterize seasonal uptake and release of nitrogen from vegetation and snowpack. The spatial reference frames of the models are 1:100,000-scale stream networks, and the computational time steps are 0.25-year seasons. Precipitation and temperature data are from PRISM. The model formulation accounts for storage of nitrogen from nonpoint sources including fertilized cropland, pasture, urban land, and atmospheric deposition. Model calibration is by non-linear regression. Once calibrated, model source terms based on previous-season export allow for recursive dynamic simulation of stream flux: gradual increases or decreases in export occur as source supply rates and hydrologic forcing change. Based on the assumption that removal of nitrogen from watershed storage to stream channels and to 'permanent' sinks (e.g., the atmosphere and deep groundwater) occurs as parallel first-order processes, the models can be used to estimate the approximate residence times of nonpoint-source nitrogen in the watersheds.
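
    The parallel first-order removal assumption lends itself to a toy recursion; all rates and inputs below are hypothetical, not the calibrated model's:

    ```python
    # Sketch: recursive seasonal mass balance with parallel first-order removal
    # of stored nitrogen to streams and to permanent sinks (assumed rates).
    k_stream, k_sink = 0.30, 0.10      # per season; hypothetical rate constants
    storage, season_input = 100.0, 20.0

    for season in range(8):            # two years of quarterly steps
        storage += season_input
        to_stream = k_stream * storage          # export to the stream network
        to_sink = k_sink * storage              # loss to permanent sinks
        storage -= to_stream + to_sink
        print(f"season {season}: stream flux {to_stream:.1f}, storage {storage:.1f}")

    # Residence time against both parallel removals:
    print("mean residence time ≈", 1.0 / (k_stream + k_sink), "seasons")
    ```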

  16. The MV model of the color glass condensate for a finite number of sources including Coulomb interactions

    DOE PAGES

    McLerran, Larry; Skokov, Vladimir V.

    2016-09-19

    We modify the McLerran–Venugopalan model to include only a finite number of sources of color charge. In the effective action for such a system of a finite number of sources, there is a point-like interaction and a Coulombic interaction. The point interaction generates the standard fluctuation term in the McLerran–Venugopalan model. The Coulomb interaction generates the charge screening originating from the well-known evolution in x. Such a model may be useful for computing angular harmonics of flow measured in high-energy hadron collisions for small systems. In this study we provide a basic formulation of the problem on a lattice.

  17. Multi-decadal Dynamics of Mercury in a Complex Ecosystem

    NASA Astrophysics Data System (ADS)

    Levin, L.

    2016-12-01

    A suite of air quality and watershed models was applied to track the ecosystem contributions of mercury (Hg), as well as arsenic (As) and selenium (Se), from local and global sources to the San Juan River basin in the Four Corners region of the American Southwest. Long-term changes in surface water and fish tissue mercury concentrations were also simulated, out to the year 2074. Atmospheric mercury was modeled using a nested, spatial-scale modeling system comprising the GEOS-Chem (global scale) and CMAQ-APT (national and regional) models. Four emission scenarios were modeled, including two growth scenarios for Asian mercury emissions. Results showed that the average mercury deposition over the San Juan basin was 21 µg/m² per year. Source contributions to mercury deposition range from 2% to 9% of total deposition prior to post-2016 U.S. controls for air toxics regulatory compliance. Most of the contributions to mercury deposition in the basin are from non-U.S. sources. Watershed simulations showed power plant contributions to fish tissue mercury never exceeded 0.035% during the 85-year model simulation period, even with the long-term growth in fish tissue mercury over that period. Local coal-fired power plants contributed relatively small fractions to mercury deposition (less than 4%) in the basin; background and non-U.S. anthropogenic sources dominated. Fish tissue mercury levels are projected to increase through 2074 due to growth projections for non-U.S. emission sources. The most important contributor to methylmercury in the lower reaches of the watershed was advection of MeHg produced in situ at upstream headwater locations.

  18. A discontinuous Galerkin approach for conservative modeling of fully nonlinear and weakly dispersive wave transformations

    NASA Astrophysics Data System (ADS)

    Sharifian, Mohammad Kazem; Kesserwani, Georges; Hassanzadeh, Yousef

    2018-05-01

    This work extends a robust second-order Runge-Kutta Discontinuous Galerkin (RKDG2) method to solve fully nonlinear and weakly dispersive flows, with the aim of simultaneously addressing accuracy, conservation properties, cost-efficiency, and practical needs. The mathematical model governing such flows is based on a variant form of the Green-Naghdi (GN) equations decomposed as a hyperbolic shallow water system with an elliptic source term. Practical features of relevance (i.e. conservative modeling over irregular terrain with wetting and drying and local slope limiting) have been restored from an RKDG2 solver for the Nonlinear Shallow Water (NSW) equations, alongside new considerations to integrate elliptic source terms (i.e. via a fourth-order local discretization of the topography) and to enable local capturing of breaking waves (i.e. via adding a detector for switching off the dispersive terms). Numerical results are presented, demonstrating the overall capability of the proposed approach in achieving realistic prediction of nearshore wave processes involving both nonlinearity and dispersion effects within a single model.

  19. Modeling Interactions Among Turbulence, Gas-Phase Chemistry, Soot and Radiation Using Transported PDF Methods

    NASA Astrophysics Data System (ADS)

    Haworth, Daniel

    2013-11-01

    The importance of explicitly accounting for the effects of unresolved turbulent fluctuations in Reynolds-averaged and large-eddy simulations of chemically reacting turbulent flows is increasingly recognized. Transported probability density function (PDF) methods have emerged as one of the most promising modeling approaches for this purpose. In particular, PDF methods provide an elegant and effective resolution to the closure problems that arise from averaging or filtering terms that correspond to nonlinear point processes, including chemical reaction source terms and radiative emission. PDF methods traditionally have been associated with studies of turbulence-chemistry interactions in laboratory-scale, atmospheric-pressure, nonluminous, statistically stationary nonpremixed turbulent flames; and Lagrangian particle-based Monte Carlo numerical algorithms have been the predominant method for solving modeled PDF transport equations. Recent advances and trends in PDF methods are reviewed and discussed. These include advances in particle-based algorithms, alternatives to particle-based algorithms (e.g., Eulerian field methods), treatment of combustion regimes beyond low-to-moderate-Damköhler-number nonpremixed systems (e.g., premixed flamelets), extensions to include radiation heat transfer and multiphase systems (e.g., soot and fuel sprays), and the use of PDF methods as the basis for subfilter-scale modeling in large-eddy simulation. Examples are provided that illustrate the utility and effectiveness of PDF methods for physics discovery and for applications to practical combustion systems. These include comparisons of results obtained using the PDF method with those from models that neglect unresolved turbulent fluctuations in composition and temperature in the averaged or filtered chemical source terms and/or the radiation heat transfer source terms. In this way, the effects of turbulence-chemistry-radiation interactions can be isolated and quantified.

  20. EXCITATION OF A BURIED MAGMATIC PIPE: A SEISMIC SOURCE MODEL FOR VOLCANIC TREMOR.

    USGS Publications Warehouse

    Chouet, Bernard

    1985-01-01

    A model of volcanic tremor is presented in which the modes of vibration of a volcanic pipe are excited by the motion of the fluid within the pipe in response to a short-term perturbation in pressure. The model shows the relative importance of the various parts constituting this composite source in the radiated elastic field at near and intermediate distances. The paper starts with the presentation of the elastic field radiated by the source, and proceeds with an analysis of the energy balance between hydraulic and elastic motions. Next, the hydraulic excitation of the source is addressed and, finally, the ground response to this excitation is analyzed in the simple case of a pipe buried in a homogeneous half space.

  1. Modeling individual differences in working memory performance: a source activation account

    PubMed Central

    Daily, Larry Z.; Lovett, Marsha C.; Reder, Lynne M.

    2008-01-01

    Working memory resources are needed for processing and maintenance of information during cognitive tasks. Many models have been developed to capture the effects of limited working memory resources on performance. However, most of these models do not account for the finding that different individuals show different sensitivities to working memory demands, and none of the models predicts individual subjects' patterns of performance. We propose a computational model that accounts for differences in working memory capacity in terms of a quantity called source activation, which is used to maintain goal-relevant information in an available state. We apply this model to capture the working memory effects of individual subjects at a fine level of detail across two experiments. This, we argue, strengthens the interpretation of source activation as working memory capacity. PMID:19079561

  2. Spent fuel radionuclide source-term model for assessing spent fuel performance in geological disposal. Part I: Assessment of the instant release fraction

    NASA Astrophysics Data System (ADS)

    Johnson, Lawrence; Ferry, Cécile; Poinssot, Christophe; Lovera, Patrick

    2005-11-01

    A source-term model for the short-term release of radionuclides from spent nuclear fuel (SNF) has been developed. It provides quantitative estimates of the fraction of various radionuclides that are expected to be released rapidly (the instant release fraction, or IRF) when water contacts the UO2 or MOX fuel after container breaching in a geological repository. The estimates are based on correlation of leaching data for radionuclides with fuel burnup and fission gas release. Extrapolation of the data to higher fuel burnup values is based on examination of data on fuel restructuring, such as rim development, and on fission gas release data, which permits bounding IRF values to be estimated assuming that radionuclide releases will be less than fission gas release. The consideration of long-term solid-state changes influencing the IRF prior to canister breaching is addressed by evaluating alpha self-irradiation enhanced diffusion, which may gradually increase the accumulation of fission products at grain boundaries.

  3. EVALUATION OF ALTERNATIVE GAUSSIAN PLUME DISPERSION MODELING TECHNIQUES IN ESTIMATING SHORT-TERM SULFUR DIOXIDE CONCENTRATIONS

    EPA Science Inventory

    A routinely applied atmospheric dispersion model was modified to evaluate alternative modeling techniques which allowed for more detailed source data, onsite meteorological data, and several dispersion methodologies. These were evaluated with hourly SO2 concentrations measured at...

  4. Estimates of long-term mean-annual nutrient loads considered for use in SPARROW models of the Midcontinental region of Canada and the United States, 2002 base year

    USGS Publications Warehouse

    Saad, David A.; Benoy, Glenn A.; Robertson, Dale M.

    2018-05-11

    Streamflow and nutrient concentration data needed to compute nitrogen and phosphorus loads were compiled from Federal, State, Provincial, and local agency databases and also from selected university databases. The nitrogen and phosphorus loads are necessary inputs to Spatially Referenced Regressions on Watershed Attributes (SPARROW) models. SPARROW models are a way to estimate the distribution, sources, and transport of nutrients in streams throughout the Midcontinental region of Canada and the United States. After screening the data, approximately 1,500 sites sampled by 34 agencies were identified as having suitable data for calculating the long-term mean-annual nutrient loads required for SPARROW model calibration. These final sites represent a wide range in watershed sizes, types of nutrient sources, and land-use and watershed characteristics in the Midcontinental region of Canada and the United States.

  5. On recontamination and directional-bias problems in Monte Carlo simulation of PDF turbulence models

    NASA Technical Reports Server (NTRS)

    Hsu, Andrew T.

    1991-01-01

    Turbulent combustion cannot be simulated adequately by conventional moment-closure turbulence models. The difficulty lies in the fact that the reaction rate is in general an exponential function of the temperature, and the higher-order correlations in the conventional moment closure models of the chemical source term cannot be neglected, making the application of such models impractical. The probability density function (pdf) method offers an attractive alternative: in a pdf model, the chemical source terms are closed and do not require additional models. A grid-dependent Monte Carlo scheme was studied, since it is a logical alternative wherein the number of computer operations increases only linearly with the number of independent variables, compared with the exponential increase in a conventional finite difference scheme. A new algorithm was devised that satisfies the conservation restriction in the case of pure diffusion or uniform flow problems. Although absolute conservation seems impossible for nonuniform flows, the present scheme reduces the error considerably.
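
    For the pure-diffusion case, a grid-based Monte Carlo scheme reduces to an unbiased random walk whose ensemble conserves total probability. A toy sketch (not the paper's algorithm):

    ```python
    # Sketch: grid-based Monte Carlo for a pure-diffusion PDF transport problem.
    # Particles random-walk on a 1-D grid; the ensemble approximates the
    # diffusion equation. Illustrative only.
    import numpy as np

    rng = np.random.default_rng(1)
    nx, npart, steps = 50, 20000, 200
    x = np.full(npart, nx // 2)          # all particles start at the center cell

    for _ in range(steps):
        x += rng.choice([-1, 1], size=npart)  # unbiased jump: no directional bias
        x = np.clip(x, 0, nx - 1)             # reflecting boundaries

    hist = np.bincount(x, minlength=nx) / npart
    print("total probability (conservation check):", hist.sum())
    ```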

  6. Post-disaster supply chain interdependent critical infrastructure system restoration: A review of data necessary and available for modeling

    USGS Publications Warehouse

    Ramachandran, Varun; Long, Suzanna K.; Shoberg, Thomas G.; Corns, Steven; Carlo, Hector J.

    2016-01-01

    The majority of restoration strategies in the wake of large-scale disasters have focused on short-term emergency response solutions. Few consider medium- to long-term restoration strategies to reconnect urban areas to national supply chain interdependent critical infrastructure systems (SCICI). These SCICI promote the effective flow of goods, services, and information vital to the economic vitality of an urban environment. To re-establish the connectivity that has been broken during a disaster between the different SCICI, relationships between these systems must be identified, formulated, and added to a common framework to form a system-level restoration plan. To accomplish this goal, a considerable collection of SCICI data is necessary. The aim of this paper is to review what data are required for model construction, the accessibility of these data, and their integration with each other. While a review of publicly available data reveals a dearth of real-time data to assist modeling long-term recovery following an extreme event, a significant amount of static data does exist, and these data can be used to model the complex interdependencies needed. For the sake of illustration, a particular SCICI (transportation) is used to highlight the challenges of determining the interdependencies and creating models capable of describing the complexity of an urban environment with the publicly available data. Integration of data derived from public-domain sources is readily achieved in a geospatial environment, since geospatial infrastructure data are the most abundant data source. However, although significant quantities of data can be acquired through public sources, a significant effort is still required to gather, develop, and integrate these data from multiple sources to build a complete model. Therefore, while continued availability of high-quality public information is essential for modeling efforts in academic as well as government communities, a more streamlined approach to real-time acquisition and integration of these data is also essential.

  7. ISS Ambient Air Quality: Updated Inventory of Known Aerosol Sources

    NASA Technical Reports Server (NTRS)

    Meyer, Marit

    2014-01-01

    Spacecraft cabin air quality is of fundamental importance to crew health, with concerns encompassing both gaseous contaminants and particulate matter. Little opportunity exists for direct measurement of aerosol concentrations on the International Space Station (ISS); however, an aerosol source model was developed for the purpose of filtration and ventilation systems design. This model has been applied successfully, but since the initial effort, an increase in the number of crewmembers from 3 to 6 and new processes on board the ISS necessitate an updated aerosol inventory to accurately reflect the current ambient aerosol conditions. Results from recent analyses of dust samples from the ISS, combined with a literature review, provide new predicted aerosol emission rates in terms of size-segregated mass and number concentration. Some new aerosol sources have been considered and added to the existing array of materials. The goal of this work is to provide updated filtration model inputs which can verify that the current ISS filtration system is adequate and that filter lifetime targets are met. This inventory of aerosol sources is applicable to other spacecraft, and becomes more important as NASA considers future long-term exploration missions, which will preclude the opportunity for resupply of filtration products.

  8. Seismic hazard assessment of the Province of Murcia (SE Spain): analysis of source contribution to hazard

    NASA Astrophysics Data System (ADS)

    García-Mayordomo, J.; Gaspar-Escribano, J. M.; Benito, B.

    2007-10-01

    A probabilistic seismic hazard assessment of the Province of Murcia in terms of peak ground acceleration (PGA) and spectral accelerations [SA(T)] is presented in this paper. In contrast to most previous studies in the region, which were performed for PGA making use of intensity-to-PGA relationships, hazard is here calculated in terms of magnitude and using European spectral ground-motion models. Moreover, we have considered the most important faults in the region as specific seismic sources, and have also comprehensively reviewed the earthquake catalogue. Hazard calculations are performed following the Probabilistic Seismic Hazard Assessment (PSHA) methodology using a logic tree, which accounts for three different seismic source zonings and three different ground-motion models. Hazard maps in terms of PGA and SA(0.1, 0.2, 0.5, 1.0 and 2.0 s) and coefficient of variation (COV) for the 475-year return period are shown. Subsequent analysis is focused on three sites of the province, namely the cities of Murcia, Lorca and Cartagena, which are important industrial and tourism centres. Results at these sites have been analysed to evaluate the influence of the different input options. The most important factor affecting the results is the choice of the attenuation relationship, whereas the influence of the selected seismic source zonings appears strongly site-dependent. Finally, we have performed an analysis of source contribution to hazard at each of these cities to provide preliminary guidance in devising specific risk scenarios. We have found that local source zones control the hazard for PGA and SA(T ≤ 1.0 s), although the contribution from specific fault sources and long-distance north Algerian sources becomes significant from SA(0.5 s) onwards.
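
    The logic-tree combination amounts to a weighted average of branch hazard curves, from which a COV across branches can be formed. A minimal sketch with invented exceedance curves and uniform weights:

    ```python
    # Sketch: combining hazard curves across a PSHA logic tree. Each branch
    # pairs one of three source zonings with one of three ground-motion models;
    # curves and weights here are purely illustrative.
    import numpy as np

    pga = np.array([0.05, 0.1, 0.2, 0.4])           # g
    rng = np.random.default_rng(4)
    # Hypothetical annual exceedance curves for the 9 branches:
    branches = [np.exp(-pga / s) * 1e-2 for s in rng.uniform(0.05, 0.15, 9)]
    weights = np.full(9, 1.0 / 9.0)

    mean_curve = sum(w * c for w, c in zip(weights, branches))
    cov = np.std(branches, axis=0) / mean_curve     # COV across branches
    print("mean annual exceedance:", np.round(mean_curve, 5))
    print("COV:", np.round(cov, 3))
    ```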

  9. A general circulation model study of atmospheric carbon monoxide

    NASA Technical Reports Server (NTRS)

    Pinto, J. P.; Rind, D.; Russell, G. L.; Lerner, J. A.; Hansen, J. E.; Yung, Y. L.; Hameed, S.

    1983-01-01

    The carbon monoxide cycle is studied by incorporating the known and hypothetical sources and sinks in a tracer model that uses the winds generated by a general circulation model. Photochemical production and loss terms, which depend on OH radical concentrations, are calculated in an interactive fashion. The computed global distribution and seasonal variations of CO are compared with observations to obtain constraints on the distribution and magnitude of the sources and sinks of CO, and on the tropospheric abundance of OH. The simplest model that accounts for available observations requires a low-latitude plant source of about 1.3 × 10^15 g/yr, in addition to sources from incomplete combustion of fossil fuels and oxidation of methane. The globally averaged OH concentration calculated in the model is 7.5 × 10^5 molecules/cm^3. Models that calculate globally averaged OH concentrations much lower than this nominal value are not consistent with the observed variability of CO. Such models are also inconsistent with measurements of CO isotopic abundances, which imply the existence of plant sources.
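
    The quoted OH level implies a CO lifetime on the order of 100 days, which can be checked with a back-of-the-envelope calculation; the rate constant below is an approximate literature value, not taken from the paper:

    ```python
    # Back-of-the-envelope CO lifetime against OH oxidation, tau = 1/(k_OH*[OH]).
    # k_oh is an approximate literature rate constant; oh is the nominal
    # global-mean value quoted above. Illustrative only.
    k_oh = 1.5e-13            # cm^3 molecule^-1 s^-1 (approximate CO + OH rate)
    oh = 7.5e5                # molecule cm^-3
    tau = 1.0 / (k_oh * oh)   # seconds
    print(f"CO lifetime ≈ {tau / 86400:.0f} days")   # on the order of ~100 days
    ```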

  10. The Scaling of Broadband Shock-Associated Noise with Increasing Temperature

    NASA Technical Reports Server (NTRS)

    Miller, Steven A.

    2012-01-01

    A physical explanation for the saturation of broadband shock-associated noise (BBSAN) intensity with increasing jet stagnation temperature has eluded investigators. An explanation is proposed for this phenomenon with the use of an acoustic analogy. For this purpose the acoustic analogy of Morris and Miller is examined. To isolate the relevant physics, the scaling of BBSAN at the peak intensity level at the sideline (psi = 90 degrees) observer location is examined. Scaling terms are isolated from the acoustic analogy, and the result is compared using a convergent nozzle with the experiments of Bridges and Brown, and using a convergent-divergent nozzle with the experiments of Kuo, McLaughlin, and Morris at four nozzle pressure ratios in increments of total temperature ratios from one to four. The equivalent source within the framework of the acoustic analogy for BBSAN is based on local field quantities at shock wave shear layer interactions. The equivalent source, combined with accurate calculations of the propagation of sound through the jet shear layer using an adjoint vector Green's function solver of the linearized Euler equations, allows for predictions that retain the scaling with respect to stagnation pressure and capture the saturation of BBSAN with increasing stagnation temperature. This is a minor change to the source model relative to previously developed models. The full development of the scaling term is shown. The sources and vector Green's function solver are informed by steady Reynolds-Averaged Navier-Stokes solutions. These solutions are examined as a function of stagnation temperature at the first shock wave shear layer interaction. It is discovered that saturation of BBSAN with increasing jet stagnation temperature occurs due to a balance between the amplification of the sound propagation through the shear layer and the source term scaling.

  11. A review of methods for predicting air pollution dispersion

    NASA Technical Reports Server (NTRS)

    Mathis, J. J., Jr.; Grose, W. L.

    1973-01-01

    Air pollution modeling and problem areas in air pollution dispersion modeling were surveyed. Emission source inventories, meteorological data, and turbulent diffusion are discussed in terms of developing a dispersion model. Existing mathematical models of urban air pollution, and highway and airport models, are discussed along with their limitations. Recommendations for improving modeling capabilities are included.

  12. DESIGN OF AQUIFER REMEDIATION SYSTEMS: (2) Estimating site-specific performance and benefits of partial source removal

    EPA Science Inventory

    A Lagrangian stochastic model is proposed as a tool that can be utilized in forecasting remedial performance and estimating the benefits (in terms of flux and mass reduction) derived from a source zone remedial effort. The stochastic functional relationships that describe the hyd...

  13. Examining Long-Term Trends in Mobile Source Related Pollutants through Analysis of Emissions, Observations and Model Simulations

    EPA Science Inventory

    Anthropogenic emissions from a variety of sectors including mobile sources have decreased substantially over the past decades despite continued growth in population and economic activity. In this study, we analyze 1990-2010 trends in emission inventories, ambient observations and...

  14. Three-Dimensional Model Synthesis of the Global Methane Cycle

    NASA Technical Reports Server (NTRS)

    Fung, I.; Prather, M.; John, J.; Lerner, J.; Matthews, E.

    1991-01-01

    A synthesis of the global methane cycle is presented in an attempt to generate an accurate global methane budget. Methane-flux measurements, energy data, and agricultural statistics are merged with databases of land-surface characteristics and anthropogenic activities. The sources and sinks of methane are estimated based on atmospheric methane composition and variations, and a global 3D transport model simulates the corresponding atmospheric responses. The geographic and seasonal variations of candidate budgets are compared with observational data, and the available observations are used to constrain the plausible methane budgets. The preferred budget includes annual destruction rates and annual emissions for various sources. The lack of direct flux measurements in the regions of many of these fluxes makes a unique determination of each term impossible. OH oxidation is found to be the largest single term, although more measurements of this and other terms are recommended.

  15. Source term identification in atmospheric modelling via sparse optimization

    NASA Astrophysics Data System (ADS)

    Adam, Lukas; Branda, Martin; Hamburger, Thomas

    2015-04-01

    Inverse modelling plays an important role in identifying the amount of harmful substances released into the atmosphere during major incidents such as power plant accidents or volcano eruptions. Another possible application of inverse modelling lies in monitoring CO2 emission limits, where only observations at certain places are available and the task is to estimate the total releases at given locations. This gives rise to minimizing the discrepancy between the observations and the model predictions. There are two standard ways of solving such problems. In the first, the discrepancy is regularized by adding additional terms, which may include Tikhonov regularization, a distance from a priori information, or a smoothing term. The resulting, usually quadratic, problem is then solved via standard optimization solvers. The second approach assumes that the error term has a (normal) distribution and makes use of Bayesian modelling to identify the source term. Instead of following the above-mentioned approaches, we utilize techniques from the field of compressive sensing. Such techniques look for the sparsest solution (the solution with the smallest number of nonzeros) of a linear system, to which a maximal allowed error term may be added. Even though this is a well-developed field with many possible solution techniques, most of them do not consider even the simplest constraints that are naturally present in atmospheric modelling, such as the nonnegativity of release amounts. We believe that the concept of a sparse solution is natural both in identifying the source location and in identifying the time profile of the source release. In the first case, it is usually assumed that there are only a few release points and the task is to find them. In the second case, the time window is usually much longer than the duration of the actual release. In both cases, the optimal solution should contain a large number of zeros, giving rise to the concept of sparsity. In the paper, we summarize several optimization techniques which are used for finding sparse solutions and propose modifications to handle selected constraints such as nonnegativity constraints and simple linear constraints, for example on the minimal or maximal amount of total release. These techniques range from successive convex approximations to the solution of a single nonconvex problem. On simple examples, we explain these techniques and compare them in terms of implementation simplicity, approximation capability, and convergence properties. Finally, these methods are applied to the European Tracer Experiment (ETEX) data and the results are compared with current state-of-the-art techniques such as regularized least squares or the Bayesian approach. The obtained results show that these techniques perform surprisingly well. This research is supported by the EEA/Norwegian Financial Mechanism under project 7F14287 STRADI.
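
    The nonnegativity-constrained sparse inversion described here can be illustrated with a projected ISTA iteration on a synthetic source-receptor system; the matrix, data, and penalty weight below are all invented:

    ```python
    # Sketch: projected ISTA for a nonnegative sparse source-term inversion,
    #   minimize 0.5*||A x - b||^2 + lam * sum(x)   subject to  x >= 0,
    # where A is a (synthetic) source-receptor matrix and x the release profile.
    import numpy as np

    rng = np.random.default_rng(2)
    A = rng.standard_normal((40, 120))     # 40 observations, 120 release intervals
    x_true = np.zeros(120)
    x_true[[10, 55]] = [3.0, 1.5]          # sparse true release (two intervals)
    b = A @ x_true + 0.01 * rng.standard_normal(40)

    lam = 0.1
    step = 1.0 / np.linalg.norm(A, 2) ** 2  # 1/L with L the gradient Lipschitz constant
    x = np.zeros(120)
    for _ in range(500):
        grad = A.T @ (A @ x - b)
        # For x >= 0 the L1 prox reduces to a shift-and-project step:
        x = np.maximum(x - step * (grad + lam), 0.0)

    print("recovered nonzeros:", np.flatnonzero(x > 0.05))
    ```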

  16. Toward uniform implementation of parametric map Digital Imaging and Communication in Medicine standard in multisite quantitative diffusion imaging studies.

    PubMed

    Malyarenko, Dariya; Fedorov, Andriy; Bell, Laura; Prah, Melissa; Hectors, Stefanie; Arlinghaus, Lori; Muzi, Mark; Solaiyappan, Meiyappan; Jacobs, Michael; Fung, Maggie; Shukla-Dave, Amita; McManus, Kevin; Boss, Michael; Taouli, Bachir; Yankeelov, Thomas E; Quarles, Christopher Chad; Schmainda, Kathleen; Chenevert, Thomas L; Newitt, David C

    2018-01-01

    This paper reports on results of a multisite collaborative project launched by the MRI subgroup of Quantitative Imaging Network to assess current capability and provide future guidelines for generating a standard parametric diffusion map Digital Imaging and Communication in Medicine (DICOM) in clinical trials that utilize quantitative diffusion-weighted imaging (DWI). Participating sites used a multivendor DWI DICOM dataset of a single phantom to generate parametric maps (PMs) of the apparent diffusion coefficient (ADC) based on two models. The results were evaluated for numerical consistency among models and true phantom ADC values, as well as for consistency of metadata with attributes required by the DICOM standards. This analysis identified missing metadata descriptive of the sources for detected numerical discrepancies among ADC models. Instead of the DICOM PM object, all sites stored ADC maps as DICOM MR objects, generally lacking designated attributes and coded terms for quantitative DWI modeling. Source-image reference, model parameters, ADC units and scale, deemed important for numerical consistency, were either missing or stored using nonstandard conventions. Guided by the identified limitations, the DICOM PM standard has been amended to include coded terms for the relevant diffusion models. Open-source software has been developed to support conversion of site-specific formats into the standard representation.
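
    For reference, the mono-exponential ADC model underlying such parametric maps reduces to ADC = ln(S0/Sb)/b for two b-values; a minimal sketch on synthetic pixel values (not the consortium's pipeline):

    ```python
    # Sketch: mono-exponential ADC computation from two DWI b-values,
    #   S(b) = S0 * exp(-b * ADC)  =>  ADC = ln(S0 / Sb) / b.
    # Synthetic pixel values; units mm^2/s.
    import numpy as np

    b = 1000.0                                         # s/mm^2
    s0 = np.array([[900.0, 850.0], [880.0, 910.0]])    # b = 0 image
    sb = s0 * np.exp(-b * 1.1e-3)                      # simulated b = 1000 image

    adc = np.log(s0 / sb) / b
    print(adc)     # ≈ 1.1e-3 mm^2/s everywhere, as constructed
    ```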

  17. Advanced relativistic VLBI model for geodesy

    NASA Astrophysics Data System (ADS)

    Soffel, Michael; Kopeikin, Sergei; Han, Wen-Biao

    2017-07-01

    Our present relativistic part of the geodetic VLBI model for Earthbound antennas is a consensus model which is considered as a standard for processing high-precision VLBI observations. It was created as a compromise between a variety of relativistic VLBI models proposed by different authors as documented in the IERS Conventions 2010. The accuracy of the consensus model is in the picosecond range for the group delay but this is not sufficient for current geodetic purposes. This paper provides a fully documented derivation of a new relativistic model having an accuracy substantially higher than one picosecond and based upon a well accepted formalism of relativistic celestial mechanics, astrometry and geodesy. Our new model fully confirms the consensus model at the picosecond level and in several respects goes to a great extent beyond it. More specifically, terms related to the acceleration of the geocenter are considered and kept in the model, the gravitational time-delay due to a massive body (planet, Sun, etc.) with arbitrary mass and spin-multipole moments is derived taking into account the motion of the body, and a new formalism for the time-delay problem of radio sources located at finite distance from VLBI stations is presented. Thus, the paper presents a substantially elaborated theoretical justification of the consensus model and its significant extension that allows researchers to make concrete estimates of the magnitude of residual terms of this model for any conceivable configuration of the source of light, massive bodies, and VLBI stations. The largest terms in the relativistic time delay which can affect the current VLBI observations are from the quadrupole and the angular momentum of the gravitating bodies that are known from the literature. These terms should be included in the new geodetic VLBI model for improving its consistency.

  18. Numerical study of supersonic combustion using a finite rate chemistry model

    NASA Technical Reports Server (NTRS)

    Chitsomboon, T.; Tiwari, S. N.; Kumar, A.; Drummond, J. P.

    1986-01-01

    The governing equations of two-dimensional chemically reacting flows are presented together with a global two-step chemistry model for H2-air combustion. The explicit unsplit MacCormack finite difference algorithm is used to advance the discrete system of the governing equations in time until convergence is attained. The source terms in the species equations are evaluated implicitly to alleviate stiffness associated with fast reactions. With implicit source terms, the species equations give rise to a block-diagonal system which can be solved very efficiently on vector-processing computers. A supersonic reacting flow in an inlet-combustor configuration is calculated for the case where H2 is injected into the flow from the side walls and the strut. Results of the calculation are compared against the results obtained by using a complete reaction model.
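
    The point-implicit idea can be shown on a single-cell toy problem: linearizing the stiff source about the current state keeps the update stable at time steps far beyond the explicit limit. A sketch (one species, invented rate; not the paper's two-step H2-air model):

    ```python
    # Sketch: point-implicit treatment of a stiff source term, dY/dt = S(Y),
    # linearized as S ≈ S^n + (dS/dY) dY so each step solves
    #   (1 - dt * dS/dY) dY = dt * S(Y^n).
    def source(y):            # toy consumption term, stiff because k is large
        return -1.0e4 * y

    def dsource_dy(y):
        return -1.0e4

    y, dt = 1.0, 1.0e-3       # explicit Euler would need dt < 2e-4 here
    for _ in range(5):
        dy = dt * source(y) / (1.0 - dt * dsource_dy(y))
        y += dy               # stable (if only first-order accurate) update
        print(f"Y = {y:.6f}")
    ```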

  19. Comparison of the landslide susceptibility models in Taipei Water Source Domain, Taiwan

    NASA Astrophysics Data System (ADS)

    WU, C. Y.; Yeh, Y. C.; Chou, T. H.

    2017-12-01

    The Taipei Water Source Domain, located southeast of the Taipei metropolis, is the main source of water in this region. Recently, downstream turbidity has often soared significantly during typhoon periods because of upstream landslides. Landslide susceptibility should be analysed to assess the zones of influence of different rainfall events and to ensure the ability of this domain to supply sufficient, high-quality water. Generally, landslide susceptibility models can be established based on either a long-term landslide inventory or a specified landslide event. Because some areas lack a long-term landslide inventory, event-based landslide susceptibility models are widely established. However, inventory-based and event-based landslide susceptibility models may result in dissimilar susceptibility maps for the same area. The purposes of this study were therefore to compare the landslide susceptibility maps derived from inventory-based and event-based models, and to determine how to select a representative event for inclusion in the susceptibility model. The landslide inventory from Typhoon Tim (July 1994) and Typhoon Soudelor (August 2015) was collected and used to establish the inventory-based landslide susceptibility model. The landslides caused by Typhoon Nari, together with rainfall data, were used to establish the event-based model. The results indicated that the high-susceptibility slope units were located in the middle and upstream Nan-Shih Stream basin.

  20. Geochemistry of amphibolites from the Kolar Schist Belt

    NASA Technical Reports Server (NTRS)

    Balakrishnan, S.; Hanson, G. N.; Rajamani, V.

    1988-01-01

    Nd isotope data suggesting that the amphibolites from the schist belt were derived from long-term depleted mantle sources at about 2.7 Ga are described. Trace element and Pb isotope data also suggest that the amphibolites on the western and eastern sides of the narrow schist belt were derived from different sources. The Pb data from one outcrop of the central tholeiitic amphibolites lie on a 2.7 Ga isochron with a low-μ model. The other amphibolites (W komatiitic, E komatiitic, and E tholeiitic) do not define isochrons, but suggest that they were derived from sources with distinct U/Pb histories. There is some suggestion that the E komatiitic amphibolites may have been contaminated by fluids carrying Pb from a long-term, high-U/Pb source, such as the old granitic crust on the west side of the schist belt. This is consistent with published galena Pb isotope data from the ore lodes within the belt, which also show a history of long-term U/Pb enrichment.

  1. Enhancements to the MCNP6 background source

    DOE PAGES

    McMath, Garrett E.; McKinney, Gregg W.

    2015-10-19

    The particle transport code MCNP has been used to produce a background radiation data file on a worldwide grid that can easily be sampled as a source in the code. Location-dependent cosmic showers were modeled by Monte Carlo methods to produce the resulting neutron and photon background flux at 2054 locations around Earth. An improved galactic-cosmic-ray feature was used to model the source term, as well as data from multiple sources to model the transport environment through atmosphere, soil, and seawater. A new elevation scaling feature was also added to the code to increase the accuracy of the cosmic neutron background for user locations with off-grid elevations. Furthermore, benchmarking has shown the neutron integral flux values to be within experimental error.

  2. Equivalent charge source model based iterative maximum neighbor weight for sparse EEG source localization.

    PubMed

    Xu, Peng; Tian, Yin; Lei, Xu; Hu, Xiao; Yao, Dezhong

    2008-12-01

    How to localize neural electric activity within the brain effectively and precisely from scalp electroencephalogram (EEG) recordings is a critical issue in clinical neurology and cognitive neuroscience. In this paper, based on the charge source model and the iterative re-weighted strategy, a new maximum-neighbor-weight-based iterative sparse source imaging method is proposed, termed CMOSS (Charge source model based Maximum neighbOr weight Sparse Solution). Unlike the weight used in the focal underdetermined system solver (FOCUSS), where the weight for each point in the discrete solution space is updated independently across iterations, the newly designed weight for each point in each iteration is determined by the previous iteration's source solution at both the point and its neighbors. Using such a weight, the next iteration has a better chance of rectifying the local source location bias present in the previous iteration's solution. Simulation studies with comparison to FOCUSS and LORETA for various source configurations were conducted on a realistic three-shell head model, and the results confirmed the validity of CMOSS for sparse EEG source localization. Finally, CMOSS was applied to localize sources elicited in a visual stimulus experiment, and the result was consistent with the source areas involved in visual processing reported in previous studies.
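
    The iterative re-weighting strategy that CMOSS builds on can be sketched in its plain FOCUSS form, where each iteration solves a weighted minimum-norm problem with weights taken from the previous solution; the neighbor-based weighting of CMOSS is omitted here, and all data are synthetic:

    ```python
    # Sketch of FOCUSS-style iterative re-weighting for sparse source imaging.
    # Each pass solves a weighted minimum-norm problem; the weights are rebuilt
    # from the previous solution, concentrating energy onto few sources.
    import numpy as np

    rng = np.random.default_rng(3)
    L = rng.standard_normal((32, 200))   # toy lead-field: 32 sensors, 200 sources
    s_true = np.zeros(200)
    s_true[[40, 120]] = [1.0, -0.8]      # two active sources
    m = L @ s_true                       # noiseless measurements

    s = np.ones(200)                     # start from a flat estimate
    for _ in range(20):
        W = np.diag(np.abs(s))           # weights from the previous solution
        s = W @ np.linalg.pinv(L @ W) @ m   # weighted minimum-norm solution

    print("recovered support:", np.flatnonzero(np.abs(s) > 0.1))
    ```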

  3. Integrating nursing diagnostic concepts into the medical entities dictionary using the ISO Reference Terminology Model for Nursing Diagnosis.

    PubMed

    Hwang, Jee-In; Cimino, James J; Bakken, Suzanne

    2003-01-01

    The purposes of the study were (1) to evaluate the usefulness of the International Standards Organization (ISO) Reference Terminology Model for Nursing Diagnoses as a terminology model for defining nursing diagnostic concepts in the Medical Entities Dictionary (MED) and (2) to create the additional hierarchical structures required for integration of nursing diagnostic concepts into the MED. The authors dissected nursing diagnostic terms from two source terminologies (Home Health Care Classification and the Omaha System) into the semantic categories of the ISO model. Consistent with the ISO model, they selected Focus and Judgment as required semantic categories for creating intensional definitions of nursing diagnostic concepts in the MED. Because the MED does not include Focus and Judgment hierarchies, the authors developed them to define the nursing diagnostic concepts. The ISO model was sufficient for dissecting the source terminologies into atomic terms. The authors identified 162 unique focus concepts from the 266 nursing diagnosis terms for inclusion in the Focus hierarchy. For the Judgment hierarchy, the authors precoordinated Judgment and Potentiality instead of using Potentiality as a qualifier of Judgment as in the ISO model. Impairment and Alteration were the most frequently occurring judgments. Nursing care represents a large proportion of health care activities; thus, it is vital that terms used by nurses are integrated into concept-oriented terminologies that provide broad coverage for the domain of health care. This study supports the utility of the ISO Reference Terminology Model for Nursing Diagnoses as a facilitator for the integration process.

  4. Integrating Nursing Diagnostic Concepts into the Medical Entities Dictionary Using the ISO Reference Terminology Model for Nursing Diagnosis

    PubMed Central

    Hwang, Jee-In; Cimino, James J.; Bakken, Suzanne

    2003-01-01

    Objective: The purposes of the study were (1) to evaluate the usefulness of the International Standards Organization (ISO) Reference Terminology Model for Nursing Diagnoses as a terminology model for defining nursing diagnostic concepts in the Medical Entities Dictionary (MED) and (2) to create the additional hierarchical structures required for integration of nursing diagnostic concepts into the MED. Design and Measurements: The authors dissected nursing diagnostic terms from two source terminologies (Home Health Care Classification and the Omaha System) into the semantic categories of the ISO model. Consistent with the ISO model, they selected Focus and Judgment as required semantic categories for creating intensional definitions of nursing diagnostic concepts in the MED. Because the MED does not include Focus and Judgment hierarchies, the authors developed them to define the nursing diagnostic concepts. Results: The ISO model was sufficient for dissecting the source terminologies into atomic terms. The authors identified 162 unique focus concepts from the 266 nursing diagnosis terms for inclusion in the Focus hierarchy. For the Judgment hierarchy, the authors precoordinated Judgment and Potentiality instead of using Potentiality as a qualifier of Judgment as in the ISO model. Impairment and Alteration were the most frequently occurring judgments. Conclusions: Nursing care represents a large proportion of health care activities; thus, it is vital that terms used by nurses are integrated into concept-oriented terminologies that provide broad coverage for the domain of health care. This study supports the utility of the ISO Reference Terminology Model for Nursing Diagnoses as a facilitator for the integration process. PMID:12668692

  5. DEVELOPMENT AND VALIDATION OF AN AIR-TO-BEEF FOOD CHAIN MODEL FOR DIOXIN-LIKE COMPOUNDS

    EPA Science Inventory

    A model for predicting concentrations of dioxin-like compounds in beef is developed and tested. The key premise of the model is that concentrations of these compounds in air are the source term, or starting point, for estimating beef concentrations. Vapor-phase concentrations t...

  6. Insights Gained from Forensic Analysis with MELCOR of the Fukushima-Daiichi Accidents.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Andrews, Nathan C.; Gauntt, Randall O.

    Since the accidents at Fukushima-Daiichi, Sandia National Laboratories has been modeling these accident scenarios using the severe accident analysis code MELCOR. MELCOR is a widely used computer code developed at Sandia National Laboratories since ~1982 for the U.S. Nuclear Regulatory Commission. Insights from the modeling of these accidents are being used to better inform future code development and potentially improve accident management. To date, the need to better capture in-vessel thermal-hydraulics and ex-vessel melt coolability and concrete interactions has led to the implementation of new models. The most recent analyses, presented in this paper, have been in support of the Organization for Economic Cooperation and Development Nuclear Energy Agency's (OECD/NEA) Benchmark Study of the Accident at the Fukushima Daiichi Nuclear Power Station (BSAF) Project. The goal of this project is to accurately capture the source term from all three releases and then model the atmospheric dispersion. To do this, a forensic approach is used in which available plant data and release timings inform the modeled MELCOR accident scenario. For example, containment failures, core slumping events, and lower head failure timings are all enforced parameters in these analyses. This approach is fundamentally different from the blind code assessment analysis often used in standard problem exercises. The timings of these events are informed by representative spikes or decreases in plant data. The combination of improvements to the MELCOR source code resulting from previous accident analyses and this forensic approach has allowed Sandia to generate representative and plausible source terms for all three accidents at Fukushima Daiichi out to three weeks after the accident, capturing both early and late releases. In particular, using the source terms developed by MELCOR as input to the MACCS code, which models atmospheric dispersion and deposition, we are able to reasonably capture the deposition of radionuclides to the northwest of the reactor site.

  7. Boosting probabilistic graphical model inference by incorporating prior knowledge from multiple sources.

    PubMed

    Praveen, Paurush; Fröhlich, Holger

    2013-01-01

    Inferring regulatory networks from experimental data via probabilistic graphical models is a popular framework to gain insights into biological systems. However, the inherent noise in experimental data coupled with a limited sample size reduces the performance of network reverse engineering. Prior knowledge from existing sources of biological information can address this low signal-to-noise problem by biasing the network inference towards biologically plausible network structures. Although integrating various sources of information is desirable, their heterogeneous nature makes this task challenging. We propose two computational methods to incorporate various information sources into a probabilistic consensus structure prior to be used in graphical model inference. Our first model, called the Latent Factor Model (LFM), assumes a high degree of correlation among external information sources and reconstructs a hidden variable as a common source in a Bayesian manner. The second model, a Noisy-OR, picks up the strongest support for an interaction among information sources in a probabilistic fashion. Our extensive computational studies on KEGG signaling pathways as well as on gene expression data from breast cancer and yeast heat shock response reveal that both approaches can significantly enhance the reconstruction accuracy of Bayesian Networks compared to other competing methods as well as to the situation without any prior. Our framework allows for the use of diverse information sources, such as pathway databases, GO terms, and protein domain data, and is flexible enough to integrate new sources, if available.
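
    The Noisy-OR combination, in particular, has a one-line closed form: P(edge) = 1 - ∏(1 - q_k), so a single strong source dominates the consensus. A minimal sketch with hypothetical confidences:

    ```python
    # Sketch: Noisy-OR combination of edge confidences from several prior
    # knowledge sources into one consensus prior probability.
    def noisy_or(confidences):
        p = 1.0
        for q in confidences:
            p *= (1.0 - q)     # probability that no source "fires"
        return 1.0 - p

    # Three hypothetical sources scoring the same interaction:
    print(noisy_or([0.1, 0.2, 0.9]))   # ≈ 0.93: the strongest support dominates
    ```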

  8. Oxygenate Supply/Demand Balances in the Short-Term Integrated Forecasting Model (Short-Term Energy Outlook Supplement March 1998)

    EIA Publications

    1998-01-01

    The blending of oxygenates, such as fuel ethanol and methyl tertiary butyl ether (MTBE), into motor gasoline has increased dramatically in the last few years because of the oxygenated and reformulated gasoline programs. Because of the significant role oxygenates now have in petroleum product markets, the Short-Term Integrated Forecasting System (STIFS) was revised to include supply and demand balances for fuel ethanol and MTBE. The STIFS model is used for producing forecasts in the Short-Term Energy Outlook. A review of the historical data sources and forecasting methodology for oxygenate production, imports, inventories, and demand is presented in this report.

  9. Relaxation approximations to second-order traffic flow models by high-resolution schemes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nikolos, I.K.; Delis, A.I.; Papageorgiou, M.

    2015-03-10

    A relaxation-type approximation of second-order non-equilibrium traffic models, written in conservation or balance law form, is considered. Using the relaxation approximation, the nonlinear equations are transformed to a semi-linear diagonalizable problem with linear characteristic variables and stiff source terms, with the attractive feature that neither Riemann solvers nor characteristic decompositions are needed. In particular, it is only necessary to provide the flux and source term functions and an estimate of the characteristic speeds. To discretize the resulting relaxation system, high-resolution reconstructions in space are considered. Emphasis is given to a fifth-order WENO scheme and its performance. The computations reported demonstrate the simplicity and versatility of relaxation schemes as numerical solvers.

  10. Simultaneous event-specific estimates of transport, loss, and source rates for relativistic outer radiation belt electrons: Event-Specific 1-D Modeling

    DOE PAGES

    Schiller, Q.; Tu, W.; Ali, A. F.; ...

    2017-03-11

    The most significant unknown regarding relativistic electrons in Earth's outer Van Allen radiation belt is the relative contribution of loss, transport, and acceleration processes within the inner magnetosphere. Disentangling each individual process is critical to improving the understanding of radiation belt dynamics, but determining a single component is challenging due to sparse measurements in diverse spatial and temporal regimes. However, there are currently an unprecedented number of spacecraft taking measurements that sample different regions of the inner magnetosphere. With the increasing number of varied observational platforms, system dynamics can begin to be unraveled. In this work, we employ in-situ measurements during the 13-14 January 2013 enhancement event to isolate transport, loss, and source dynamics in a one-dimensional radial diffusion model. We then validate the results by comparing them to Van Allen Probes and THEMIS observations, indicating that the three terms have been accurately and individually quantified for the event. Finally, a direct comparison is performed between the model containing event-specific terms and various models containing terms parameterized by geomagnetic index. Models using a simple 3/Kp loss timescale show deviation from the event-specific model of nearly two orders of magnitude within 72 hours of the enhancement event. However, models using alternative loss timescales closely resemble the event-specific model.
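
    For intuition on the model class described above, the sketch below evolves electron phase space density f(L, t) in a 1-D radial diffusion equation with the simple Kp-parameterized loss timescale tau = 3/Kp days. The Brautigam and Albert (2000)-style D_LL scaling and all grid and Kp values are illustrative assumptions, not the event-specific terms derived in the paper.

    ```python
    import numpy as np

    def radial_diffusion_step(f, L, dt, kp):
        """One explicit Euler step of
            df/dt = L^2 d/dL( D_LL / L^2 * df/dL ) - f / tau,
        with tau = 3/Kp (days) and a Kp-scaled D_LL (per day)."""
        dL = L[1] - L[0]
        d_ll = 10.0 ** (0.506 * kp - 9.325) * L ** 10   # assumed D_LL scaling
        tau = 3.0 / kp                                  # simple loss timescale
        L_face = 0.5 * (L[:-1] + L[1:])
        D_face = 0.5 * (d_ll[:-1] + d_ll[1:]) / L_face ** 2
        div = np.diff(D_face * np.diff(f) / dL) / dL    # interior divergence
        fn = f.copy()
        fn[1:-1] += dt * (L[1:-1] ** 2 * div - f[1:-1] / tau)
        return fn

    L = np.linspace(3.0, 6.0, 31)
    f = np.exp(-((L - 4.5) ** 2) / 0.5)                 # arbitrary initial PSD
    for _ in range(1000):                               # one day at dt = 1e-3 day
        f = radial_diffusion_step(f, L, dt=1e-3, kp=4.0)
    ```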

  11. Simultaneous event-specific estimates of transport, loss, and source rates for relativistic outer radiation belt electrons: Event-Specific 1-D Modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schiller, Q.; Tu, W.; Ali, A. F.

    The most significant unknown regarding relativistic electrons in Earth's outer Van Allen radiation belt is the relative contribution of loss, transport, and acceleration processes within the inner magnetosphere. Disentangling each individual process is critical to improving the understanding of radiation belt dynamics, but determining a single component is challenging due to sparse measurements in diverse spatial and temporal regimes. However, there are currently an unprecedented number of spacecraft taking measurements that sample different regions of the inner magnetosphere. With the increasing number of varied observational platforms, system dynamics can begin to be unraveled. In this work, we employ in-situ measurements during the 13-14 January 2013 enhancement event to isolate transport, loss, and source dynamics in a one-dimensional radial diffusion model. We then validate the results by comparing them to Van Allen Probes and THEMIS observations, indicating that the three terms have been accurately and individually quantified for the event. Finally, a direct comparison is performed between the model containing event-specific terms and various models containing terms parameterized by geomagnetic index. Models using a simple 3/Kp loss timescale show deviation from the event-specific model of nearly two orders of magnitude within 72 hours of the enhancement event. However, models using alternative loss timescales closely resemble the event-specific model.

  12. Wave Riemann description of friction terms in unsteady shallow flows: Application to water and mud/debris floods

    NASA Astrophysics Data System (ADS)

    Murillo, J.; García-Navarro, P.

    2012-02-01

    In this work, the source term discretization in hyperbolic conservation laws with source terms is considered using an approximate augmented Riemann solver. The technique is applied to the shallow water equations with bed slope and friction terms, with the focus on the friction discretization. The augmented Roe approximate Riemann solver provides a family of weak solutions for the shallow water equations that are the basis of the upwind treatment of the source term. This has proved successful in explaining and avoiding the appearance of instabilities and negative values of the thickness of the water layer in cases of variable bottom topography. Here, this strategy is extended to capture the peculiarities that may arise when defining more ambitious scenarios that may include relevant stresses in cases of mud/debris flow. The conclusions of this analysis lead to the definition of an accurate and robust first-order finite volume scheme, able to handle correctly transient problems considering frictional stresses in both clean water and debris flow, including in the latter case a correct modelling of stopping conditions.
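
    As a point of contrast with the upwind source-term treatment analysed above, the sketch below shows the simpler pointwise semi-implicit handling of a Manning friction term that many shallow water codes use; solving the friction implicitly guarantees it only damps the momentum and never reverses the flow. This is a hedged stand-in, not the paper's Riemann-based discretization, and the Manning coefficient is an arbitrary example value.

    ```python
    import numpy as np

    def friction_implicit_update(h, hu, dt, n_manning=0.03, g=9.81):
        """Semi-implicit update of d(hu)/dt = -g n^2 hu |u| / h^(4/3)
        (Manning friction), applied after the convective step of a
        finite volume shallow water solver."""
        hu_new = np.zeros_like(hu)
        wet = h > 1e-8                       # skip dry cells entirely
        u = hu[wet] / h[wet]
        denom = 1.0 + dt * g * n_manning ** 2 * np.abs(u) / h[wet] ** (4.0 / 3.0)
        hu_new[wet] = hu[wet] / denom        # friction can only reduce |hu|
        return hu_new

    h = np.array([1.0, 0.5, 1e-10])          # depths, last cell dry
    hu = np.array([2.0, -1.0, 0.0])          # discharges
    print(friction_implicit_update(h, hu, dt=0.1))
    ```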

  13. Fundamental Rotorcraft Acoustic Modeling From Experiments (FRAME)

    NASA Technical Reports Server (NTRS)

    Greenwood, Eric

    2011-01-01

    A new methodology is developed for the construction of helicopter source noise models for use in mission planning tools from experimental measurements of helicopter external noise radiation. The models are constructed by applying a parameter identification method to an assumed analytical model of the rotor harmonic noise sources. This new method allows for the identification of individual rotor harmonic noise sources and allows them to be characterized in terms of their individual non-dimensional governing parameters. The method is applied to both wind tunnel measurements and ground noise measurements of two-bladed rotors. The method is shown to match the parametric trends of main rotor harmonic noise, allowing accurate estimates of the dominant rotorcraft noise sources to be made for operating conditions based on a small number of measurements taken at different operating conditions. The ability of this method to estimate changes in noise radiation due to changes in ambient conditions is also demonstrated.

  14. Evaluation of stormwater micropollutant source control and end-of-pipe control strategies using an uncertainty-calibrated integrated dynamic simulation model.

    PubMed

    Vezzaro, L; Sharma, A K; Ledin, A; Mikkelsen, P S

    2015-03-15

    The estimation of micropollutant (MP) fluxes in stormwater systems is a fundamental prerequisite when preparing strategies to reduce stormwater MP discharges to natural waters. Dynamic integrated models can be important tools in this step, as they can be used to integrate the limited data provided by monitoring campaigns and to evaluate the performance of different strategies based on model simulation results. This study presents an example where six different control strategies, including both source-control and end-of-pipe treatment, were compared. The comparison focused on fluxes of heavy metals (copper, zinc) and organic compounds (fluoranthene). MP fluxes were estimated by using an integrated dynamic model, in combination with stormwater quality measurements. MP sources were identified by using GIS land usage data, runoff quality was simulated by using a conceptual accumulation/washoff model, and a stormwater retention pond was simulated by using a dynamic treatment model based on MP inherent properties. Uncertainty in the results was estimated with a pseudo-Bayesian method. Despite the great uncertainty in the MP fluxes estimated by the runoff quality model, it was possible to compare the six scenarios in terms of discharged MP fluxes, compliance with water quality criteria, and sediment accumulation. Source-control strategies obtained better results in terms of reduction of MP emissions, but all the simulated strategies failed to fulfil the criteria based on emission limit values. The results presented in this study show how the efficiency of MP pollution control strategies can be quantified by combining advanced modeling tools (integrated stormwater quality model, uncertainty calibration). Copyright © 2014 Elsevier Ltd. All rights reserved.
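
    To make the conceptual accumulation/washoff idea concrete, here is a minimal sketch of the usual exponential buildup and rain-driven washoff equations; all rate constants are hypothetical placeholders, not the calibrated values of the study's runoff quality model.

    ```python
    def buildup_washoff(rain, dt, k_acc=1.0, k_dis=0.08, k_w=0.19, m0=0.0):
        """Surface pollutant mass M (e.g. kg/ha) under
            dry weather:  dM/dt = k_acc - k_dis * M     (buildup)
            during rain:  dM/dt = -k_w * i * M          (washoff)
        where i is the rain intensity (mm/h). Returns the remaining
        mass and the total mass washed off."""
        mass, washed = m0, 0.0
        for i in rain:                      # rain intensity time series
            if i <= 0.0:
                mass += dt * (k_acc - k_dis * mass)
            else:
                flux = k_w * i * mass       # mass mobilised per unit time
                mass -= dt * flux
                washed += dt * flux
        return mass, washed

    # three dry days, then a 6 h storm, hourly steps (dt in days)
    series = [0.0] * 72 + [8.0] * 6
    print(buildup_washoff(series, dt=1.0 / 24.0))
    ```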

  15. A Study of Regional Waveform Calibration in the Eastern Mediterranean Region.

    NASA Astrophysics Data System (ADS)

    di Luccio, F.; Pino, A.; Thio, H.

    2002-12-01

    We modeled Pnl phases from several moderate-magnitude events in the eastern Mediterranean to test methods and to develop path calibrations for source determination. The study region, spanning from the eastern part of the Hellenic arc to the eastern Anatolian fault, is mostly affected by moderate earthquakes that can produce significant damage. The selected area comprises several tectonic environments, which increases the difficulty of waveform modeling. The results of this study are useful for the analysis of regional seismicity and for seismic hazard as well, in particular because very few broadband seismic stations are available in the selected area. The obtained velocity model gives a 30 km crustal thickness and low upper-mantle velocities. The inversion procedure applied to determine the source mechanism has been successful, also in terms of depth discrimination, for the entire range of selected paths. We conclude that, using the true calibration of the seismic structure and high-quality broadband data, it is possible to determine the seismic source in terms of mechanism, even with a single station.

  16. Maintaining activity engagement: individual differences in the process of self-regulating motivation.

    PubMed

    Sansone, Carol; Thoman, Dustin B

    2006-12-01

    Typically, models of self-regulation include motivation in terms of goals. Motivation is proposed to differ among individuals as a consequence of the goals they hold as well as how much they value those goals and expect to attain them. We suggest that goal-defined motivation is only one source of motivation critical for sustained engagement. A second source is the motivation that arises from the degree of interest experienced in the process of goal pursuit. Our model integrates both sources of motivation within the goal-striving process and suggests that individuals may actively monitor and regulate them. Conceptualizing motivation in terms of a self-regulatory process provides an organizing framework for understanding how individuals might differ in whether they experience interest while working toward goals, whether they persist without interest, and whether and how they try to create interest. We first present the self-regulation of motivation model and then review research illustrating how the consideration of individual differences at different points in the process allows a better understanding of variability in people's choices, efforts, and persistence over time.

  17. Inverse modelling-based reconstruction of the Chernobyl source term available for long-range transport

    NASA Astrophysics Data System (ADS)

    Davoine, X.; Bocquet, M.

    2007-03-01

    The reconstruction of the Chernobyl accident source term has been previously carried out using core inventories, but also back and forth confrontations between model simulations and activity concentration or deposited activity measurements. The approach presented in this paper is based on inverse modelling techniques. It relies both on the activity concentration measurements and on the adjoint of a chemistry-transport model. The location of the release is assumed to be known, and one is looking for a source term available for long-range transport that depends both on time and altitude. The method relies on the maximum entropy on the mean principle and exploits source positivity. The inversion results are mainly sensitive to two tuning parameters, a mass scale and the scale of the prior errors in the inversion. To overcome this hardship, we resort to the statistical L-curve method to estimate balanced values for these two parameters. Once this is done, many of the retrieved features of the source are robust within a reasonable range of parameter values. Our results favour the acknowledged three-step scenario, with a strong initial release (26 to 27 April), followed by a weak emission period of four days (28 April-1 May) and again a release, longer but less intense than the initial one (2 May-6 May). The retrieved quantities of iodine-131, caesium-134 and caesium-137 that have been released are in good agreement with the latest reported estimations. Yet, a stronger apportionment of the total released activity is ascribed to the first period and less to the third one. Finer chronological details are obtained, such as a sequence of eruptive episodes in the first two days, likely related to the modulation of the boundary layer diurnal cycle. In addition, the first two-day release surges are found to have effectively reached an altitude up to the top of the domain (5000 m).

  18. Characterizing SRAM Single Event Upset in Terms of Single and Double Node Charge Collection

    NASA Technical Reports Server (NTRS)

    Black, J. D.; Ball, D. R., II; Robinson, W. H.; Fleetwood, D. M.; Schrimpf, R. D.; Reed, R. A.; Black, D. A.; Warren, K. M.; Tipton, A. D.; Dodd, P. E.

    2008-01-01

    A well-collapse source-injection mode for SRAM SEU is demonstrated through TCAD modeling. The recovery of the SRAM's state is shown to be based upon the resistive path from the p+ sources in the SRAM to the well. Multiple cell upset patterns for direct charge collection and the well-collapse source-injection mechanisms are then predicted and compared to recent SRAM test data.

  19. Modeling long-term trends of chlorinated ethene contamination at a public supply well

    USGS Publications Warehouse

    Chapelle, Francis H.; Kauffman, Leon J.; Widdowson, Mark A.

    2015-01-01

    A mass-balance solute-transport modeling approach was used to investigate the effects of dense nonaqueous phase liquid (DNAPL) volume, composition, and generation of daughter products on simulated and measured long-term trends of chlorinated ethene (CE) concentrations at a public supply well. The model was built by telescoping a calibrated regional three-dimensional MODFLOW model to the capture zone of a public supply well that has a history of CE contamination. The local model was then used to simulate the interactions between naturally occurring organic carbon that acts as an electron donor, and dissolved oxygen (DO), CEs, ferric iron, and sulfate that act as electron acceptors using the Sequential Electron Acceptor Model in three dimensions (SEAM3D) code. The modeling results indicate that asymmetry between rapidly rising and more gradual falling concentration trends over time suggests a DNAPL rather than a dissolved source of CEs. Peak concentrations of CEs are proportional to the volume and composition of the DNAPL source. The persistence of contamination, which can vary from a few years to centuries, is proportional to DNAPL volume, but is unaffected by DNAPL composition. These results show that monitoring CE concentrations in raw water produced by impacted public supply wells over time can provide useful information concerning the nature of contaminant sources and the likely future persistence of contamination.

  20. Aerosols in the Atmosphere: Sources, Transport, and Multi-decadal Trends

    NASA Technical Reports Server (NTRS)

    Chin, M.; Diehl, T.; Bian, H.; Kucsera, T.

    2016-01-01

    We present our recent studies with global modeling and analysis of atmospheric aerosols. We have used the Goddard Chemistry Aerosol Radiation and Transport (GOCART) model and satellite and in situ data to investigate (1) long-term variations of aerosols over polluted and dust source regions and downwind ocean areas in the past three decades and the cause of the changes and (2) anthropogenic and volcanic contributions to the sulfate aerosol in the upper troposphere/lower stratosphere.

  1. Testing the Nanoparticle-Allostatic Cross Adaptation-Sensitization Model for Homeopathic Remedy Effects

    PubMed Central

    Bell, Iris R.; Koithan, Mary; Brooks, Audrey J.

    2012-01-01

    Key concepts of the Nanoparticle-Allostatic Cross-Adaptation-Sensitization (NPCAS) Model for the action of homeopathic remedies in living systems include source nanoparticles as low level environmental stressors, heterotypic hormesis, cross-adaptation, allostasis (stress response network), time-dependent sensitization with endogenous amplification and bidirectional change, and self-organizing complex adaptive systems. The model accommodates the requirement for measurable physical agents in the remedy (source nanoparticles and/or source adsorbed to silica nanoparticles); hormetic adaptive responses in the organism, triggered by nanoparticles; bipolar, metaplastic change, dependent on the history of the organism; clinical matching of the patient's symptom picture, including modalities, to the symptom pattern that the source material can cause (cross-adaptation and cross-sensitization); and evidence for nanoparticle-related quantum macro-entanglement in homeopathic pathogenetic trials. This paper examines research implications of the model, discussing the following hypotheses: variability in nanoparticle size, morphology, and aggregation affects remedy properties and reproducibility of findings; homeopathic remedies modulate adaptive allostatic responses, with multiple dynamic short- and long-term effects; and simillimum remedy nanoparticles, as novel mild stressors corresponding to the organism's dysfunction, initiate time-dependent cross-sensitization, reversing the direction of dysfunctional reactivity to environmental stressors. The NPCAS model suggests a way forward for systematic research on homeopathy. The central proposition is that homeopathic treatment is a form of nanomedicine acting by modulation of endogenous adaptation and metaplastic amplification processes in the organism to enhance long-term systemic resilience and health. PMID:23290882

  2. Coral proxy record of decadal-scale reduction in base flow from Moloka'i, Hawaii

    USGS Publications Warehouse

    Prouty, Nancy G.; Jupiter, Stacy D.; Field, Michael E.; McCulloch, Malcolm T.

    2009-01-01

    Groundwater is a major resource in Hawaii and is the principal source of water for municipal, agricultural, and industrial use. With a growing population, a long-term downward trend in rainfall, and the need for proper groundwater management, a better understanding of the hydroclimatological system is essential. Proxy records from corals can supplement long-term observational networks, offering an accessible source of hydrologic and climate information. To develop a qualitative proxy for historic groundwater discharge to coastal waters, a suite of rare earth elements and yttrium (REYs) were analyzed from coral cores collected along the south shore of Moloka'i, Hawaii. The coral REY to calcium (Ca) ratios were evaluated against hydrological parameters, yielding the strongest relationship to base flow. Dissolution of REYs from labradorite and olivine in the basaltic rock aquifers is likely the primary source of coastal ocean REYs. There was a statistically significant downward trend (−40%) in subannually resolved REY/Ca ratios over the last century. This is consistent with long-term records of stream discharge from Moloka'i, which imply a downward trend in base flow since 1913. A decrease in base flow is observed statewide, consistent with the long-term downward trend in annual rainfall over much of the state. With greater demands on freshwater resources, it is appropriate for withdrawal scenarios to consider long-term trends and short-term climate variability. It is possible that coral paleohydrological records can be used to conduct model-data comparisons in groundwater flow models used to simulate changes in groundwater level and coastal discharge.

  3. Processing the Army’s Wartime Replacements: The Preferred CONUS Replacement Center Concept.

    DTIC Science & Technology

    1987-12-01

    Replacement System. The first model, the macro model, was a network flow model which was used to analyze the flow of replacements from their source through...individual CRCs. Through the analysis of the macro model, recommendations were made on how the CRC system should be configured in terms of size, location

  4. Audio visual speech source separation via improved context dependent association model

    NASA Astrophysics Data System (ADS)

    Kazemi, Alireza; Boostani, Reza; Sobhanmanesh, Fariborz

    2014-12-01

    In this paper, we exploit the non-linear relation between a speech source and its associated lip video as a source of extra information to propose an improved audio-visual speech source separation (AVSS) algorithm. The audio-visual association is modeled using a neural associator which estimates the visual lip parameters from a temporal context of acoustic observation frames. We define an objective function based on a mean square error (MSE) measure between estimated and target visual parameters. This function is minimized to estimate the de-mixing vector/filters that separate the relevant source from linear instantaneous or time-domain convolutive mixtures. We have also proposed a hybrid criterion which uses AV coherency together with kurtosis as a non-Gaussianity measure. Experimental results are presented and compared in terms of visually relevant speech detection accuracy and output signal-to-interference ratio (SIR) of source separation. The suggested audio-visual model significantly improves relevant speech classification accuracy compared to the existing GMM-based model, and the proposed AVSS algorithm improves the speech separation quality compared to reference ICA- and AVSS-based methods.

  5. Short-term Wind Forecasting at Wind Farms using WRF-LES and Actuator Disk Model

    NASA Astrophysics Data System (ADS)

    Kirkil, Gokhan

    2017-04-01

    Short-term wind forecasts are obtained for a wind farm on mountainous terrain using WRF-LES. Multi-scale simulations are also performed using different PBL parameterizations. Turbines are parameterized using an Actuator Disc Model. The LES models improved the forecasts. Statistical error analysis is performed and ramp events are analyzed. The complex topography of the study area affects model performance; in particular, the accuracy of the wind forecasts was poor for cross valley-mountain flows. By means of LES, we gain new knowledge about the sources of spatial and temporal variability of wind fluctuations, such as the configuration of the wind turbines.

  6. Updating source term and atmospheric dispersion simulations for the dose reconstruction in Fukushima Daiichi Nuclear Power Station Accident

    NASA Astrophysics Data System (ADS)

    Nagai, Haruyasu; Terada, Hiroaki; Tsuduki, Katsunori; Katata, Genki; Ota, Masakazu; Furuno, Akiko; Akari, Shusaku

    2017-09-01

    In order to assess the radiological dose to the public resulting from the Fukushima Daiichi Nuclear Power Station (FDNPS) accident in Japan, especially for the early phase of the accident when no measured data are available for that purpose, the spatial and temporal distribution of radioactive materials in the environment is reconstructed by computer simulations. In this study, by refining the source term of radioactive materials discharged into the atmosphere and modifying the atmospheric transport, dispersion and deposition model (ATDM), the atmospheric dispersion simulation of radioactive materials is improved. Then, a database of the spatiotemporal distribution of radioactive materials in the air and on the ground surface is developed from the output of the simulation. This database is used in other studies for dose assessment by coupling it with the behavioral patterns of evacuees from the FDNPS accident. By improving the ATDM simulation to use a new meteorological model and a sophisticated deposition scheme, the ATDM simulations reproduced the 137Cs and 131I deposition patterns well. To further improve the reproducibility of dispersion processes, the source term was refined by optimizing it against the improved ATDM simulation using new monitoring data.

  7. Comparing the contributions of ionospheric outflow and high-altitude production to O+ loss at Mars

    NASA Astrophysics Data System (ADS)

    Liemohn, Michael; Curry, Shannon; Fang, Xiaohua; Johnson, Blake; Fraenz, Markus; Ma, Yingjuan

    2013-04-01

    The Mars total O+ escape rate is highly dependent on both the ionospheric and high-altitude source terms. Because of their different source locations, they appear in velocity space distributions as distinct populations. The Mars Test Particle (MTP) model is used (with background parameters from the BATS-R-US magnetohydrodynamic code) to simulate the transport of ions in the near-Mars space environment. Because it is a collisionless model, the MTP's inner boundary is placed at 300 km altitude for this study. The MHD values at this altitude are used to define an ionospheric outflow source of ions for the MTP. The resulting loss distributions (in both real and velocity space) from this ionospheric source term are compared against those from high-altitude ionization mechanisms, in particular photoionization, charge exchange, and electron impact ionization, each of which has its own (albeit overlapping) source region. In subsequent simulations, the MHD values defining the ionospheric outflow are systematically varied to parametrically explore possible ionospheric outflow scenarios. For the nominal MHD ionospheric outflow settings, this source contributes only 10% to the total O+ loss rate, nearly all via the central tail region. There is very little dependence of this percentage on the initial temperature, but a change in the initial density or bulk velocity directly alters this loss through the central tail. However, a density or bulk velocity increase of a factor of 10 makes the ionospheric outflow loss comparable in magnitude to the loss from the combined high-altitude sources. The spatial and velocity space distributions of escaping O+ are examined and compared for the various source terms, identifying features specific to each ion source mechanism. These results are applied to a specific Mars Express orbit and used to interpret high-altitude observations from the ion mass analyzer onboard MEX.

  8. Binary Source Microlensing Event OGLE-2016-BLG-0733: Interpretation of a Long-Term Asymmetric Perturbation

    NASA Technical Reports Server (NTRS)

    Jung, Y. K.; Udalski, A.; Yee, J. C.; Sumi, T.; Gould, A.; Han, C.; Albrow, M. D.; Lee, C.-U.; Bennett, D. P.; Suzuki, D.

    2017-01-01

    In the process of analyzing an observed light curve, one often confronts various scenarios that can mimic planetary signals, causing difficulties in the accurate interpretation of the lens system. In this paper, we present the analysis of the microlensing event OGLE-2016-BLG-0733. The light curve of the event shows a long-term asymmetric perturbation that would appear to be due to a planet. From the detailed modeling of the lensing light curve, however, we find that the perturbation originates from the binarity of the source rather than the lens. This result demonstrates that binary sources with roughly equal-luminosity components can mimic long-term perturbations induced by planets with projected separations near the Einstein ring. The result also underscores the importance of considering various interpretations of planet-like perturbations, and of high-cadence observations for ensuring the unambiguous detection of planets.

  9. Stochastic Short-term High-resolution Prediction of Solar Irradiance and Photovoltaic Power Output

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Melin, Alexander M.; Olama, Mohammed M.; Dong, Jin

    The increased penetration of solar photovoltaic (PV) energy sources into electric grids has increased the need for accurate modeling and prediction of solar irradiance and power production. Existing modeling and prediction techniques focus on long-term low-resolution prediction over minutes to years. This paper examines the stochastic modeling and short-term high-resolution prediction of solar irradiance and PV power output. We propose a stochastic state-space model to characterize the behaviors of solar irradiance and PV power output. This prediction model is suitable for the development of optimal power controllers for PV sources. A filter-based expectation-maximization and Kalman filtering mechanism is employed to estimate the parameters and states in the state-space model. The mechanism results in a finite dimensional filter which only uses the first and second order statistics. The structure of the scheme contributes to a direct prediction of the solar irradiance and PV power output without any linearization process or simplifying assumptions of the signal's model. This enables the system to accurately predict small as well as large fluctuations of the solar signals. The mechanism is recursive, allowing the solar irradiance and PV power to be predicted online from measurements. The mechanism is tested using solar irradiance and PV power measurement data collected locally in our lab.
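
    As a minimal sketch of the state-space idea described above, the snippet below tracks irradiance with a scalar random-walk model and a Kalman filter, producing one-step-ahead predictions directly from measurements. The noise variances here are invented for illustration; in the paper they would come from the filter-based EM estimation.

    ```python
    import numpy as np

    def kalman_one_step_predictions(y, q=4.0, r=25.0, x0=0.0, p0=1e3):
        """Scalar Kalman filter for the random-walk state-space model
            x_k = x_{k-1} + w_k,   y_k = x_k + v_k,
        with var(w) = q and var(v) = r. Returns the one-step-ahead
        prediction of each measurement."""
        x, p, preds = x0, p0, []
        for yk in y:
            p = p + q                  # time update (prediction stays at x)
            preds.append(x)
            k = p / (p + r)            # Kalman gain
            x = x + k * (yk - x)       # measurement update
            p = (1.0 - k) * p
        return np.array(preds)

    # toy high-resolution irradiance series (W/m^2)
    rng = np.random.default_rng(0)
    truth = 600 + np.cumsum(rng.normal(0, 2, 500))
    y = truth + rng.normal(0, 5, 500)
    pred = kalman_one_step_predictions(y)
    print(np.sqrt(np.mean((pred[50:] - truth[50:]) ** 2)))  # prediction RMSE
    ```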

  10. Parameterized source term in the diffusion approximation for enhanced near-field modeling of collimated light

    NASA Astrophysics Data System (ADS)

    Jia, Mengyu; Wang, Shuang; Chen, Xueying; Gao, Feng; Zhao, Huijuan

    2016-03-01

    Most analytical methods for describing light propagation in turbid media exhibit low effectiveness in the near field of a collimated source. Motivated by the Charge Simulation Method in electromagnetic theory as well as the established discrete-source-based modeling, we have reported on an improved explicit model, referred to as the "Virtual Source" (VS) diffusion approximation (DA), that inherits the mathematical simplicity of the DA while considerably extending its validity in modeling near-field photon migration in low-albedo media. In this model, the collimated light in the standard DA is analogously approximated as multiple isotropic point sources (VS) distributed along the incident direction. For performance enhancement, a fitting procedure between the calculated and realistic reflectances is adopted in the near field to optimize the VS parameters (intensities and locations). To be practically applicable, an explicit 2VS-DA model is established based on closed-form derivations of the VS parameters for the typical ranges of the optical parameters. The proposed VS-DA model is validated by comparison with Monte Carlo simulations, and further introduced in the image reconstruction of a Laminar Optical Tomography system.
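
    The essence of the VS construction can be sketched as a superposition of isotropic point-source diffusion Green's functions placed along the incident beam. For brevity this toy version uses the infinite-medium Green's function and made-up source depths and weights, omitting the boundary (image-source) corrections and the fitted parameters of the actual 2VS-DA model.

    ```python
    import numpy as np

    def fluence_vs_da(r, z, sources, mua=0.1, musp=10.0):
        """Fluence at cylindrical position (r, z) from virtual isotropic
        point sources (z_i, w_i) on the beam axis, each contributing the
        infinite-medium diffusion Green's function
            phi_i = w_i * exp(-mueff * rho) / (4 pi D rho)."""
        D = 1.0 / (3.0 * (mua + musp))      # diffusion coefficient (cm)
        mueff = np.sqrt(mua / D)            # effective attenuation (1/cm)
        phi = 0.0
        for z_i, w_i in sources:            # hypothetical VS depths/weights
            rho = np.hypot(r, z - z_i)
            phi += w_i * np.exp(-mueff * rho) / (4.0 * np.pi * D * rho)
        return phi

    mfp = 1.0 / (0.1 + 10.0)                # transport mean free path (cm)
    print(fluence_vs_da(0.2, 0.1, [(0.5 * mfp, 0.6), (2.0 * mfp, 0.4)]))
    ```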

  11. Passage relevance models for genomics search.

    PubMed

    Urbain, Jay; Frieder, Ophir; Goharian, Nazli

    2009-03-19

    We present a passage relevance model for integrating syntactic and semantic evidence of biomedical concepts and topics using a probabilistic graphical model. Component models of topics, concepts, terms, and document are represented as potential functions within a Markov Random Field. The probability of a passage being relevant to a biologist's information need is represented as the joint distribution across all potential functions. Relevance model feedback of top ranked passages is used to improve distributional estimates of query concepts and topics in context, and a dimensional indexing strategy is used for efficient aggregation of concept and term statistics. By integrating multiple sources of evidence including dependencies between topics, concepts, and terms, we seek to improve genomics literature passage retrieval precision. Using this model, we are able to demonstrate statistically significant improvements in retrieval precision using a large genomics literature corpus.

  12. Part 2 of a Computational Study of a Drop-Laden Mixing Layer

    NASA Technical Reports Server (NTRS)

    Okongo, Nora; Bellan, Josette

    2004-01-01

    This second of three reports on a computational study of a mixing layer laden with evaporating liquid drops presents the evaluation of Large Eddy Simulation (LES) models. The LES models were evaluated on an existing database that had been generated using Direct Numerical Simulation (DNS). The DNS method and the database are described in the first report of this series, Part 1 of a Computational Study of a Drop-Laden Mixing Layer (NPO-30719), NASA Tech Briefs, Vol. 28, No. 7 (July 2004), page 59. The LES equations, which are derived by applying a spatial filter to the DNS set, govern the evolution of the larger scales of the flow and can therefore be solved on a coarser grid. Consistent with the reduction in grid points, the DNS drops would be represented by fewer drops, called computational drops in the LES context. The LES equations contain terms that cannot be directly computed on the coarser grid and that must instead be modeled. Two types of models are necessary: (1) those for the filtered source terms representing the effects of drops on the filtered flow field and (2) those for the sub-grid scale (SGS) fluxes arising from filtering the convective terms in the DNS equations. All of the filtered-source-term models that were developed were found to overestimate the filtered source terms. For modeling the SGS fluxes, constant-coefficient Smagorinsky, gradient, and scale-similarity models were assessed and calibrated on the DNS database. The Smagorinsky model correlated poorly with the SGS fluxes, whereas the gradient and scale-similarity models were well correlated with the SGS quantities that they represented.

  13. The Application of Function Points to Predict Source Lines of Code for Software Development

    DTIC Science & Technology

    1992-09-01

    there are some disadvantages. Software estimating tools are expensive. A single tool may cost more than $15,000 due to the high market value of the...term and Lang variables simultaneously only added marginal improvements over models with these terms included singularly. Using all the available

  14. Computations of steady-state and transient premixed turbulent flames using pdf methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hulek, T.; Lindstedt, R.P.

    1996-03-01

    Premixed propagating turbulent flames are modeled using a one-point, single time, joint velocity-composition probability density function (pdf) closure. The pdf evolution equation is solved using a Monte Carlo method. The unclosed terms in the pdf equation are modeled using a modified version of the binomial Langevin model for scalar mixing of Valino and Dopazo, and the Haworth and Pope (HP) and Lagrangian Speziale-Sarkar-Gatski (LSSG) models for the viscous dissipation of velocity and the fluctuating pressure gradient. The source terms for the presumed one-step chemical reaction are extracted from the rate of fuel consumption in laminar premixed hydrocarbon flames, computed using a detailed chemical kinetic mechanism. Steady-state and transient solutions are obtained for planar turbulent methane-air and propane-air flames. The transient solution method features a coupling with a Finite Volume (FV) code to obtain the mean pressure field. The results are compared with the burning velocity measurements of Abdel-Gayed et al. and with velocity measurements obtained in freely propagating propane-air flames by Videto and Santavicca. The effects of different upstream turbulence fields, chemical source terms (different fuels and strained/unstrained laminar flames) and the influence of the velocity statistics models (HP and LSSG) are assessed.

  15. A Chandra X-Ray Study of NGC 1068. II. The Luminous X-Ray Source Population

    NASA Technical Reports Server (NTRS)

    Smith, David A.; Wilson, Andrew S.

    2003-01-01

    We present an analysis of the compact X-ray source population in the Seyfert 2 galaxy NGC 1068, imaged with an approx. 50 ks Chandra observation. We find a total of 84 compact sources on the S3 chip, of which 66 are located within the 25.0 B-mag/arcsec^2 isophote of the galactic disk of NGC 1068. Spectra have been obtained for the 21 sources with at least 50 counts and modeled with both multicolor disk blackbody and power-law models. The power-law model provides the better description of the spectrum for 18 of these sources. For fainter sources, the spectral index has been estimated from the hardness ratio. Five sources have 0.4-8 keV intrinsic luminosities greater than 10^39 ergs/s, assuming that their emission is isotropic and that they are associated with NGC 1068. We refer to these sources as intermediate-luminosity X-ray objects (IXOs). If these five sources are X-ray binaries accreting with luminosities that are both sub-Eddington and isotropic, then the implied source masses are approximately greater than 7 solar masses, and so they are inferred to be black holes. Most of the spectrally modeled sources have spectral shapes similar to Galactic black hole candidates. However, the brightest compact source in NGC 1068 has a spectrum that is much harder than that found in Galactic black hole candidates and other IXOs. The brightest source also shows large amplitude variability on both short-term and long-term timescales, with the count rate possibly decreasing by a factor of 2 in approx. 2 ks during our Chandra observation, and the source flux decreasing by a factor of 5 between our observation and the grating observations taken just over 9 months later. The ratio of the number of sources with luminosities greater than 2.1 x 10^38 ergs/s in the 0.4-8 keV band to the rate of massive (greater than 5 solar mass) star formation is the same, to within a factor of 2, for NGC 1068, the Antennae, NGC 5194 (the main galaxy in M51), and the Circinus galaxy. This suggests that the rate of production of X-ray binaries per massive star is approximately the same for galaxies with currently active star formation, including "starbursts."

  16. TESTING U.S. EPA'S ISCST -VERSION 3 MODEL ON DIOXINS: A COMPARISON OF PREDICTED AND OBSERVED AIR AND SOIL CONCENTRATIONS

    EPA Science Inventory

    The central purpose of our study was to examine the performance of the United States Environmental Protection Agency's (EPA) nonreactive Gaussian air quality dispersion model, the Industrial Source Complex Short Term Model (ISCST3) Version 98226, in predicting polychlorinated dib...

  17. Laser induced heat source distribution in bio-tissues

    NASA Astrophysics Data System (ADS)

    Li, Xiaoxia; Fan, Shifu; Zhao, Youquan

    2006-09-01

    During numerical simulation of laser-tissue thermal interaction, the light fluence rate distribution must be formulated, as it constitutes the source term in the heat transfer equation. Usually the solution of the radiative transport equation is given for extreme conditions such as full absorption (Lambert-Beer law), full scattering (Kubelka-Munk theory), or dominant scattering (diffusion approximation). In other conditions, these solutions introduce different errors. The commonly used Monte Carlo simulation (MCS) is more universal and exact but has difficulty dealing with dynamic parameters and fast simulation, and its area-partition pattern is limiting when applying FEM (finite element method) to solve the bio-heat transfer partial differential equation. Laser heat source plots from the above methods differ markedly from MCS. To solve this problem, by analyzing the different optical actions (reflection, scattering, and absorption) on laser-induced heat generation in bio-tissue, a new approach was developed that combines a modified beam-broadening model with the diffusion approximation model. First, the scattering coefficient was replaced by the reduced scattering coefficient in the beam-broadening model, which is more reasonable when scattering is treated as anisotropic. Second, the attenuation coefficient was replaced by the effective attenuation coefficient in scattering-dominated turbid bio-tissue. The computational results of the modified method were compared with Monte Carlo simulation, and showed that the model provides more reasonable predictions of the heat source term distribution than past methods. Such research is useful for explaining the physical characteristics of the heat source in the heat transfer equation, establishing an effective photo-thermal model, and providing a theoretical reference for related laser medicine experiments.

  18. Source term estimation of radioxenon released from the Fukushima Dai-ichi nuclear reactors using measured air concentrations and atmospheric transport modeling.

    PubMed

    Eslinger, P W; Biegalski, S R; Bowyer, T W; Cooper, M W; Haas, D A; Hayes, J C; Hoffman, I; Korpach, E; Yi, J; Miley, H S; Rishel, J P; Ungar, K; White, B; Woods, V T

    2014-01-01

    Systems designed to monitor airborne radionuclides released from underground nuclear explosions detected radioactive fallout across the northern hemisphere resulting from the Fukushima Dai-ichi Nuclear Power Plant accident in March 2011. Sampling data from multiple International Monitoring System locations are combined with atmospheric transport modeling to estimate the magnitude and time sequence of releases of (133)Xe. Modeled dilution factors at five different detection locations were combined with 57 atmospheric concentration measurements of (133)Xe taken from March 18 to March 23 to estimate the source term. This analysis suggests that 92% of the 1.24 x 10^19 Bq of (133)Xe present in the three operating reactors at the time of the earthquake was released to the atmosphere over a 3 d period. An uncertainty analysis bounds the release estimates to 54-129% of the available (133)Xe inventory. Copyright © 2013 Elsevier Ltd. All rights reserved.
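
    A toy version of the inversion sketched above: if an atmospheric transport model supplies dilution factors linking release rates in each time period to each concentration sample, the release history follows from a non-negative least-squares fit. The matrix, noise level, and "true" release below are synthetic placeholders, not the study's transport-model output or station data.

    ```python
    import numpy as np
    from scipy.optimize import nnls

    # d[i, j]: modeled dilution factor (s/m^3) mapping a unit release rate
    # in period j to concentration sample i (synthetic numbers).
    rng = np.random.default_rng(1)
    d = rng.uniform(0.0, 1e-12, size=(57, 6))
    true_release = np.array([4e18, 5e18, 2e18, 5e17, 1e17, 0.0])   # Bq
    measured = d @ true_release * rng.normal(1.0, 0.1, size=57)    # noisy

    release_est, _ = nnls(d, measured)      # enforce a non-negative source
    print(np.round(release_est / 1e18, 2))  # recovered release, 10^18 Bq
    ```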

  19. A consistent modelling methodology for secondary settling tanks: a reliable numerical method.

    PubMed

    Bürger, Raimund; Diehl, Stefan; Farås, Sebastian; Nopens, Ingmar; Torfs, Elena

    2013-01-01

    The consistent modelling methodology for secondary settling tanks (SSTs) leads to a partial differential equation (PDE) of nonlinear convection-diffusion type as a one-dimensional model for the solids concentration as a function of depth and time. This PDE includes a flux that depends discontinuously on spatial position, modelling hindered settling and bulk flows; a singular source term describing the feed mechanism; a degenerating term accounting for sediment compressibility; and a dispersion term for turbulence. In addition, the solution itself is discontinuous. A consistent, reliable and robust numerical method that properly handles these difficulties is presented. Many constitutive relations for hindered settling, compression and dispersion can be used within the model, allowing the user to switch on and off effects of interest depending on the modelling goal, as well as investigate the suitability of certain constitutive expressions. Simulations show the effect of the dispersion term on effluent suspended solids and total sludge mass in the SST. The focus is on correct implementation, whereas calibration and validation are not pursued.
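
    To ground the type of PDE described above, here is a stripped-down explicit update for the batch-settling special case (no feed source, no dispersion), with a Vesilind hindered-settling flux and a compression term that switches on above a critical concentration. The crude upwind flux below is a placeholder: for the non-monotone settling flux, the paper's consistent method uses a proper Godunov-type numerical flux. All parameters are illustrative.

    ```python
    import numpy as np

    def settling_step(c, dz, dt, v0=5.0, q=0.4, c_crit=6.0, alpha=1e-4):
        """Explicit step of dc/dt + d f(c)/dz = d/dz( D(c) dc/dz ), with
        f(c) = v0 * c * exp(-q c) (Vesilind) and degenerate diffusion
        D(c) active only where c > c_crit (sediment compressibility)."""
        n = len(c)
        f = v0 * c * np.exp(-q * c)
        flux = np.zeros(n + 1)
        flux[1:-1] = f[:-1]                 # crude upwind: settling downward
        d_eff = np.where(c > c_crit, alpha * c, 0.0)
        dflux = np.zeros(n + 1)
        dflux[1:-1] = 0.5 * (d_eff[:-1] + d_eff[1:]) * np.diff(c) / dz
        return c - dt / dz * np.diff(flux - dflux)   # closed top and bottom

    z = np.linspace(0.0, 4.0, 81)           # tank depth (m), z downward
    c = np.full_like(z, 3.0)                # uniform initial solids (kg/m^3)
    for _ in range(5000):                   # sludge blanket builds at bottom
        c = settling_step(c, dz=z[1] - z[0], dt=1e-3)
    ```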

  20. Refinement of Regional Distance Seismic Moment Tensor and Uncertainty Analysis for Source-Type Identification

    DTIC Science & Technology

    2011-09-01

    an NSS that lies in this negative explosion positive CLVD quadrant due to the large degree of tectonic release in this event that reversed the phase...Mellman (1986) in their analysis of fundamental-mode Love and Rayleigh wave amplitude and phase for nuclear and tectonic release source terms, and...1986). Estimating explosion and tectonic release source parameters of underground nuclear explosions from Rayleigh and Love wave observations, Air

  1. Surfzone alongshore advective accelerations: observations and modeling

    NASA Astrophysics Data System (ADS)

    Hansen, J.; Raubenheimer, B.; Elgar, S.

    2014-12-01

    The sources, magnitudes, and impacts of non-linear advective accelerations on alongshore surfzone currents are investigated with observations and a numerical model. Previous numerical modeling results have indicated that advective accelerations are an important contribution to the alongshore force balance, and are required to understand spatial variations in alongshore currents (which may result in spatially variable morphological change). However, most prior observational studies have neglected advective accelerations in the alongshore force balance. Using a numerical model (Delft3D) to predict optimal sensor locations, a dense array of 26 colocated current meters and pressure sensors was deployed between the shoreline and 3-m water depth over a 200 by 115 m region near Duck, NC in fall 2013. The array included 7 cross- and 3 alongshore transects. Here, observational and numerical estimates of the dominant forcing terms in the alongshore balance (pressure and radiation-stress gradients) and the advective acceleration terms will be compared with each other. In addition, the numerical model will be used to examine the force balance, including sources of velocity gradients, at a higher spatial resolution than possible with the instrument array. Preliminary numerical results indicate that at O(10-100 m) alongshore scales, bathymetric variations and the ensuing alongshore variations in the wave field and subsequent forcing are the dominant sources of the modeled velocity gradients and advective accelerations. Additional simulations and analysis of the observations will be presented. Funded by NSF and ASDR&E.

  2. Research in atmospheric chemistry and transport

    NASA Technical Reports Server (NTRS)

    Yung, Y. L.

    1982-01-01

    The carbon monoxide cycle was studied by incorporating the known CO sources and sinks in a tracer model which used the winds generated by a general circulation model. The photochemical production and loss terms, which depended on OH radical concentrations, were calculated in an interactive fashion. Comparison of the computed global distribution and seasonal variations of CO with observations was used to yield constraints on the distribution and magnitude of the sources and sinks of CO, and the abundance of OH radicals in the troposphere.

  3. An Investigation of the Influence of Waves on Sediment Processes in Skagit Bay

    DTIC Science & Technology

    2011-09-30

    source term parameterizations common to most surface wave models, including wave generation by wind, energy dissipation from whitecapping, and...I. Total energy and peak frequency. Coastal Engineering (29), 47-78. Zijlema, M. Computation of wind-wave spectra in coastal waters with SWAN on unstructured grids. Coastal Engineering, 2010, 57, 267-277...supply and wind on tidal flat sediment transport. It will be used to evaluate the capabilities of state-of-the-art open source sediment models and to

  4. Modified two-sources quantum statistical model and multiplicity fluctuation in the finite rapidity region

    NASA Astrophysics Data System (ADS)

    Ghosh, Dipak; Sarkar, Sharmila; Sen, Sanjib; Roy, Jaya

    1995-06-01

    In this paper the behavior of factorial moments with rapidity window size, which is usually explained in terms of "intermittency," has been interpreted via simple quantum statistical properties of the emitting system, using the concept of the "modified two-source model" as recently proposed by Ghosh and Sarkar [Phys. Lett. B 278, 465 (1992)]. The analysis has been performed using our own data of 16Ag/Br and 24Ag/Br interactions in the few tens of GeV energy regime.
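
    For reference, the sketch below computes a horizontally averaged scaled factorial moment F_q over rapidity bins of shrinking size, the quantity whose window-size behavior the paper reinterprets. The toy event sample is synthetic, and binning conventions vary across analyses.

    ```python
    import numpy as np

    def scaled_factorial_moment(y, q, n_bins, y_range=(-2.0, 2.0)):
        """Horizontally averaged F_q = <n(n-1)...(n-q+1)>_bins / <n>_bins^q
        for one sample of particle rapidities y."""
        counts, _ = np.histogram(y, bins=n_bins, range=y_range)
        num = np.ones(n_bins)
        for k in range(q):
            num *= np.clip(counts - k, 0, None)   # falling factorial
        return num.mean() / counts.mean() ** q

    rng = np.random.default_rng(0)
    y = rng.normal(0.0, 1.0, size=400)            # toy single-event sample
    for m in (2, 4, 8, 16, 32):                   # shrinking window size
        print(m, round(scaled_factorial_moment(y, q=2, n_bins=m), 3))
    ```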

  5. Boosting Probabilistic Graphical Model Inference by Incorporating Prior Knowledge from Multiple Sources

    PubMed Central

    Praveen, Paurush; Fröhlich, Holger

    2013-01-01

    Inferring regulatory networks from experimental data via probabilistic graphical models is a popular framework to gain insights into biological systems. However, the inherent noise in experimental data coupled with a limited sample size reduces the performance of network reverse engineering. Prior knowledge from existing sources of biological information can address this low signal to noise problem by biasing the network inference towards biologically plausible network structures. Although integrating various sources of information is desirable, their heterogeneous nature makes this task challenging. We propose two computational methods to incorporate various information sources into a probabilistic consensus structure prior to be used in graphical model inference. Our first model, called Latent Factor Model (LFM), assumes a high degree of correlation among external information sources and reconstructs a hidden variable as a common source in a Bayesian manner. The second model, a Noisy-OR, picks up the strongest support for an interaction among information sources in a probabilistic fashion. Our extensive computational studies on KEGG signaling pathways as well as on gene expression data from breast cancer and yeast heat shock response reveal that both approaches can significantly enhance the reconstruction accuracy of Bayesian Networks compared to other competing methods as well as to the situation without any prior. Our framework allows for using diverse information sources, like pathway databases, GO terms and protein domain data, etc. and is flexible enough to integrate new sources, if available. PMID:23826291

  6. Variational Iterative Refinement Source Term Estimation Algorithm Assessment for Rural and Urban Environments

    NASA Astrophysics Data System (ADS)

    Delle Monache, L.; Rodriguez, L. M.; Meech, S.; Hahn, D.; Betancourt, T.; Steinhoff, D.

    2016-12-01

    It is necessary to accurately estimate the initial source characteristics in the event of an accidental or intentional release of a Chemical, Biological, Radiological, or Nuclear (CBRN) agent into the atmosphere. Accurate estimation of the source characteristics is important because they are often unknown, and Atmospheric Transport and Dispersion (AT&D) models rely heavily on estimates of them to create hazard assessments. To correctly assess the source characteristics in an operational environment where time is critical, the National Center for Atmospheric Research (NCAR) has developed a Source Term Estimation (STE) method known as the Variational Iterative Refinement STE Algorithm (VIRSA). VIRSA consists of a combination of modeling systems. These systems include an AT&D model, its corresponding STE model, a Hybrid Lagrangian-Eulerian Plume Model (H-LEPM), and its mathematical adjoint model. In an operational scenario where we have information regarding the infrastructure of a city, the AT&D model used is the Urban Dispersion Model (UDM), and when using this model in VIRSA we refer to the system as uVIRSA. In all other scenarios, where the city infrastructure information is not readily available, the AT&D model used is the Second-order Closure Integrated PUFF model (SCIPUFF), and the system is referred to as sVIRSA. VIRSA was originally developed using SCIPUFF 2.4 for the Defense Threat Reduction Agency and integrated into the Hazard Prediction and Assessment Capability and Joint Program for Information Systems Joint Effects Model. The results discussed here are the verification and validation of the upgraded system with SCIPUFF 3.0 and the newly implemented UDM capability. To verify uVIRSA and sVIRSA, synthetic concentration observation scenarios were created in urban and rural environments, and the results of this verification are shown. Finally, we validate the STE performance of uVIRSA using scenarios from the Joint Urban 2003 (JU03) experiment held in Oklahoma City, and validate the performance of sVIRSA using scenarios from the FUsing Sensor Integrated Observing Network (FUSION) Field Trial 2007 (FFT07) held at Dugway Proving Ground in rural Utah.

  7. Numerical modeling of materials processing applications of a pulsed cold cathode electron gun

    NASA Astrophysics Data System (ADS)

    Etcheverry, J. I.; Martínez, O. E.; Mingolo, N.

    1998-04-01

    A numerical study of the application of a pulsed cold cathode electron gun to materials processing is performed. A simple semiempirical model of the discharge is used, together with backscattering and energy deposition profiles obtained by a Monte Carlo technique, in order to evaluate the energy source term inside the material. The numerical computation of the heat equation with the calculated source term is performed in order to obtain useful information on melting and vaporization thresholds, melted radius and depth, and on the dependence of these variables on processing parameters such as operating pressure, initial voltage of the discharge and cathode-sample distance. Numerical results for stainless steel are presented, which demonstrate the need for several modifications of the experimental design in order to achieve a better efficiency.
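
    A compressed sketch of the computation described above: explicit 1-D heat conduction with a volumetric source term standing in for the Monte Carlo energy deposition profile. The Gaussian deposition profile and the stainless steel constants are rough illustrative values only.

    ```python
    import numpy as np

    def heat_step(T, dz, dt, S, rho=7900.0, cp=500.0, k=15.0):
        """Explicit step of rho*cp*dT/dt = k*d2T/dz2 + S(z), where S is
        the beam's volumetric energy deposition rate (W/m^3). Material
        constants approximate stainless steel."""
        Tn = T.copy()
        Tn[1:-1] += dt * (k * (T[2:] - 2.0 * T[1:-1] + T[:-2]) / dz ** 2
                          + S[1:-1]) / (rho * cp)
        return Tn                            # fixed-T boundaries at both ends

    z = np.linspace(0.0, 50e-6, 101)         # 50 micron slab
    S = 1e15 * np.exp(-(((z - 10e-6) / 5e-6) ** 2))  # toy deposition profile
    T = np.full_like(z, 300.0)
    for _ in range(2000):                    # 2 microseconds of heating
        T = heat_step(T, dz=z[1] - z[0], dt=1e-9, S=S)
    print(T.max())                           # compare against melting point
    ```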

  8. What are the Starting Points? Evaluating Base-Year Assumptions in the Asian Modeling Exercise

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chaturvedi, Vaibhav; Waldhoff, Stephanie; Clarke, Leon E.

    2012-12-01

    A common feature of model inter-comparison efforts is that the base year numbers for important parameters such as population and GDP can differ substantially across models. This paper explores the sources and implications of this variation in Asian countries across the models participating in the Asian Modeling Exercise (AME). Because the models do not all have a common base year, each team was required to provide data for 2005 for comparison purposes. This paper compares the year 2005 information for different models, noting the degree of variation in important parameters, including population, GDP, primary energy, electricity, and CO2 emissions. It then explores the difference in these key parameters across different sources of base-year information. The analysis confirms that the sources provide different values for many key parameters. This variation across data sources and additional reasons why models might provide different base-year numbers, including differences in regional definitions, differences in model base year, and differences in GDP transformation methodologies, are then discussed in the context of the AME scenarios. Finally, the paper explores the implications of base-year variation on long-term model results.

  9. The Linearized Bregman Method for Frugal Full-waveform Inversion with Compressive Sensing and Sparsity-promoting

    NASA Astrophysics Data System (ADS)

    Chai, Xintao; Tang, Genyang; Peng, Ronghua; Liu, Shaoyong

    2018-03-01

    Full-waveform inversion (FWI) reconstructs the subsurface properties from acquired seismic data via minimization of the misfit between observed and simulated data. However, FWI suffers from considerable computational costs resulting from the numerical solution of the wave equation for each source at each iteration. To reduce the computational burden, constructing supershots by combining several sources (aka source encoding) allows mitigation of the number of simulations at each iteration, but it gives rise to crosstalk artifacts because of interference between the individual sources of the supershot. A modified Gauss-Newton FWI (MGNFWI) approach showed that as long as the difference between the initial and true models permits a sparse representation, the ℓ1-norm constrained model updates suppress subsampling-related artifacts. However, the spectral-projected gradient ℓ1 (SPGℓ1) algorithm employed by MGNFWI is rather complicated, which makes its implementation difficult. To facilitate realistic applications, we adapt a linearized Bregman (LB) method to sparsity-promoting FWI (SPFWI) because of the efficiency and simplicity of LB in the framework of ℓ1-norm constrained optimization problems and compressive sensing. Numerical experiments performed with the BP Salt model, the Marmousi model and the BG Compass model verify the following points. The FWI result with LB solving the ℓ1-norm sparsity-promoting problem for the model update outperforms that generated by solving the ℓ2-norm problem in terms of crosstalk elimination and high-fidelity results. The simpler LB method performs comparably and even superiorly to the complicated SPGℓ1 method in terms of computational efficiency and model quality, making the LB method a viable alternative for realistic implementations of SPFWI.
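
    The two-line simplicity of the linearized Bregman iteration that the paper leverages can be sketched on a generic sparse recovery problem min ||x||_1 s.t. Ax = b; the matrix, step size, and shrinkage threshold below are toy choices, not the FWI operators of the paper.

    ```python
    import numpy as np

    def shrink(v, mu):
        """Soft-thresholding (shrinkage) operator."""
        return np.sign(v) * np.maximum(np.abs(v) - mu, 0.0)

    def linearized_bregman(A, b, mu, delta, n_iter):
        """Linearized Bregman iteration for  min ||x||_1  s.t.  Ax = b:
            v <- v + A^T (b - A x),    x <- delta * shrink(v, mu).
        The step size delta should satisfy roughly delta * ||A||^2 < 2."""
        v = np.zeros(A.shape[1])
        x = np.zeros(A.shape[1])
        for _ in range(n_iter):
            v += A.T @ (b - A @ x)
            x = delta * shrink(v, mu)
        return x

    # toy compressive-sensing recovery of a sparse vector
    rng = np.random.default_rng(0)
    A = rng.normal(size=(60, 200)) / np.sqrt(200)
    x_true = np.zeros(200)
    x_true[rng.choice(200, 8, replace=False)] = rng.normal(size=8)
    x_rec = linearized_bregman(A, A @ x_true, mu=1.0, delta=0.5, n_iter=3000)
    print(np.linalg.norm(x_rec - x_true))   # small if recovery succeeds
    ```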

  10. A mesostate-space model for EEG and MEG.

    PubMed

    Daunizeau, Jean; Friston, Karl J

    2007-10-15

We present a multi-scale generative model for EEG that entails a minimum number of assumptions about evoked brain responses, namely: (1) bioelectric activity is generated by a set of distributed sources, (2) the dynamics of these sources can be modelled as random fluctuations about a small number of mesostates, (3) mesostates evolve in a temporally structured way and are functionally connected (i.e. influence each other), and (4) the number of mesostates engaged by a cognitive task is small (e.g. between one and a few). A Variational Bayesian learning scheme is described that furnishes the posterior density on the model's parameters and its evidence. Since the number of meso-sources specifies the model, the model evidence can be used to compare models and find the optimum number of meso-sources. In addition to estimating the dynamics at each cortical dipole, the mesostate-space model and its inversion provide a description of brain activity at the level of the mesostates (i.e. in terms of the dynamics of meso-sources that are distributed over dipoles). The inclusion of a mesostate level allows one to compute posterior probability maps of each dipole being active (i.e. belonging to an active mesostate). Critically, this model accommodates constraints on the number of meso-sources, while retaining the flexibility of distributed source models in explaining data. In short, it bridges the gap between standard distributed and equivalent current dipole models. Furthermore, because it is explicitly spatiotemporal, the model can embed any stochastic dynamical causal model (e.g. a neural mass model) as a Markov process prior on the mesostate dynamics. The approach is evaluated and compared to standard inverse EEG techniques, using synthetic and real data. The results demonstrate the added value of the mesostate-space model and its variational inversion.

  11. 10 CFR 110.50 - Terms.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

... exporting facility; (D) Radionuclides and activity level in TBq, both for single and aggregate shipments; (E) Make, model and serial number, radionuclide, and activity level for any Category 1 and 2 sealed sources...

  12. Evaluating Uncertainty in Integrated Environmental Models: A Review of Concepts and Tools

    EPA Science Inventory

    This paper reviews concepts for evaluating integrated environmental models and discusses a list of relevant software-based tools. A simplified taxonomy for sources of uncertainty and a glossary of key terms with standard definitions are provided in the context of integrated appro...

  13. Instantaneous and time-averaged dispersion and measurement models for estimation theory applications with elevated point source plumes

    NASA Technical Reports Server (NTRS)

    Diamante, J. M.; Englar, T. S., Jr.; Jazwinski, A. H.

    1977-01-01

    Estimation theory, which originated in guidance and control research, is applied to the analysis of air quality measurements and atmospheric dispersion models to provide reliable area-wide air quality estimates. A method for low dimensional modeling (in terms of the estimation state vector) of the instantaneous and time-average pollutant distributions is discussed. In particular, the fluctuating plume model of Gifford (1959) is extended to provide an expression for the instantaneous concentration due to an elevated point source. Individual models are also developed for all parameters in the instantaneous and the time-average plume equations, including the stochastic properties of the instantaneous fluctuating plume.
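
    For orientation, the time-averaged concentration downwind of an elevated point source is classically written as a Gaussian plume with an image term enforcing ground reflection. The sketch below is that textbook formula, not Gifford's fluctuating-plume extension (whose plume centroid meanders stochastically); all parameter values are illustrative.

    ```python
    import numpy as np

    def gaussian_plume(Q, u, H, y, z, sigma_y, sigma_z):
        # Time-averaged concentration downwind of an elevated point source.
        # Q: emission rate (g/s), u: mean wind speed (m/s), H: effective
        # release height (m); the second vertical term is the image source
        # that reflects the plume at the ground.
        lateral = np.exp(-y**2 / (2 * sigma_y**2))
        vertical = (np.exp(-(z - H)**2 / (2 * sigma_z**2))
                    + np.exp(-(z + H)**2 / (2 * sigma_z**2)))
        return Q / (2 * np.pi * u * sigma_y * sigma_z) * lateral * vertical

    # Ground-level centerline concentration 1 km downwind; sigma_y and
    # sigma_z would normally come from stability-class dispersion curves.
    print(gaussian_plume(Q=100.0, u=5.0, H=50.0, y=0.0, z=0.0,
                         sigma_y=80.0, sigma_z=40.0))
    ```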

  14. Overview of major hazards. Part 2: Source term; dispersion; combustion; blast, missiles, venting; fire; radiation; runaway reactions; toxic substances; dust explosions

    NASA Astrophysics Data System (ADS)

    Vilain, J.

Approaches to major hazard assessment and prediction are reviewed: source term (phenomenology/modeling of release, influence on early stages of dispersion); dispersion (atmospheric advection, diffusion and deposition, with emphasis on dense/cold gases); combustion (flammable clouds and mists, covering flash fires, deflagration and transition to detonation, in mostly unconfined/partly confined situations); blast formation, propagation and interaction with structures; catastrophic fires (pool fires, torches and fireballs; highly reactive substances); runaway reactions; features of more general interest; toxic substances, excluding toxicology; and dust explosions (phenomenology and protective measures).

  15. Medium term hurricane catastrophe models: a validation experiment

    NASA Astrophysics Data System (ADS)

    Bonazzi, Alessandro; Turner, Jessica; Dobbin, Alison; Wilson, Paul; Mitas, Christos; Bellone, Enrica

    2013-04-01

Climate variability is a major source of uncertainty for the insurance industry underwriting hurricane risk. Catastrophe models provide their users with a stochastic set of events that expands the scope of the historical catalogue by including synthetic events that are likely to happen in a defined time-frame. The use of these catastrophe models is widespread in the insurance industry, but it is only in recent years that climate variability has been explicitly accounted for. In insurance parlance, "medium term catastrophe model" refers to products that provide an adjusted view of risk that is meant to represent hurricane activity on a 1 to 5 year horizon, as opposed to long term models that integrate across the climate variability of the longest available time series of observations. In this presentation we discuss how a simple reinsurance program can be used to assess the value of medium term catastrophe models. We elaborate on similar concepts as discussed in "Potential Economic Value of Seasonal Hurricane Forecasts" by Emanuel et al. (2012, WCAS) and provide an example based on 24 years of historical data of the Chicago Mercantile Hurricane Index (CHI), an insured loss proxy. The profit and loss volatility of a hypothetical primary insurer is used to score medium term models against their long term counterpart. Results show that medium term catastrophe models could help a hypothetical primary insurer improve their financial resiliency to varying climate conditions.

  16. NuSTAR observations of M31: globular cluster candidates found to be Z sources

    NASA Astrophysics Data System (ADS)

    Maccarone, Thomas J.; Yukita, Mihoko; Hornschemeier, Ann E.; Lehmer, Bret; Antoniou, Vallia; Ptak, Andrew; Wik, Daniel R.; Zezas, Andreas; Boyd, Patricia T.; Kennea, Jamie A.; Page, Kim; Eracleous, Michael; Williams, Benjamin F.; NuSTAR mission Team

    2016-01-01

We present the results of Swift + NuSTAR observations of 4 bright globular cluster sources in M31. Three of these had previously been suggested to be black holes on the basis of their spectra. We show that all are well fit by models indicating that the sources are Z sources. We also discuss some reasons why the long term light curves of these objects indicate that they are more likely to be neutron stars, and discuss the discrepancy between the empirical understanding of persistent sources and theoretical predictions.

  17. Polarization and long-term variability of Sgr A* X-ray echo

    NASA Astrophysics Data System (ADS)

    Churazov, E.; Khabibullin, I.; Ponti, G.; Sunyaev, R.

    2017-06-01

    We use a model of the molecular gas distribution within ˜100 pc from the centre of the Milky Way (Kruijssen, Dale & Longmore) to simulate time evolution and polarization properties of the reflected X-ray emission, associated with the past outbursts from Sgr A*. While this model is too simple to describe the complexity of the true gas distribution, it illustrates the importance and power of long-term observations of the reflected emission. We show that the variable part of X-ray emission observed by Chandra and XMM-Newton from prominent molecular clouds is well described by a pure reflection model, providing strong support of the reflection scenario. While the identification of Sgr A* as a primary source for this reflected emission is already a very appealing hypothesis, a decisive test of this model can be provided by future X-ray polarimetric observations, which will allow placing constraints on the location of the primary source. In addition, X-ray polarimeters (like, e.g. XIPE) have sufficient sensitivity to constrain the line-of-sight positions of molecular complexes, removing major uncertainty in the model.

  18. Towards the theory of pollinator-mediated gene flow.

    PubMed Central

    Cresswell, James E

    2003-01-01

    I present a new exposition of a model of gene flow by animal-mediated pollination between a source population and a sink population. The model's parameters describe two elements: (i) the expected portion of the source's paternity that extends to the sink population; and (ii) the dilution of this portion by within-sink pollinations. The model is termed the portion-dilution model (PDM). The PDM is a parametric restatement of the conventional view of animal-mediated pollination. In principle, it can be applied to plant species in general. I formulate a theoretical value of the portion parameter that maximizes gene flow and prescribe this as a benchmark against which to judge the performance of real systems. Existing foraging theory can be used in solving part of the PDM, but a theory for source-to-sink transitions by pollinators is currently elusive. PMID:12831465

  19. A model for jet-noise analysis using pressure-gradient correlations on an imaginary cone

    NASA Technical Reports Server (NTRS)

    Norum, T. D.

    1974-01-01

    The technique for determining the near and far acoustic field of a jet through measurements of pressure-gradient correlations on an imaginary conical surface surrounding the jet is discussed. The necessary analytical developments are presented, and their feasibility is checked by using a point source as the sound generator. The distribution of the apparent sources on the cone, equivalent to the point source, is determined in terms of the pressure-gradient correlations.

  20. Assessment of infrasound signals recorded on seismic stations and infrasound arrays in the western United States using ground truth sources

    NASA Astrophysics Data System (ADS)

    Park, Junghyun; Hayward, Chris; Stump, Brian W.

    2018-06-01

    Ground truth sources in Utah during 2003-2013 are used to assess the contribution of temporal atmospheric conditions to infrasound detection and the predictive capabilities of atmospheric models. Ground truth sources consist of 28 long duration static rocket motor burn tests and 28 impulsive rocket body demolitions. Automated infrasound detections from a hybrid of regional seismometers and infrasound arrays use a combination of short-term time average/long-term time average ratios and spectral analyses. These detections are grouped into station triads using a Delaunay triangulation network and then associated to estimate phase velocity and azimuth to filter signals associated with a particular source location. The resulting range and azimuth distribution from sources to detecting stations varies seasonally and is consistent with predictions based on seasonal atmospheric models. Impulsive signals from rocket body detonations are observed at greater distances (>700 km) than the extended duration signals generated by the rocket burn test (up to 600 km). Infrasound energy attenuation associated with the two source types is quantified as a function of range and azimuth from infrasound amplitude measurements. Ray-tracing results using Ground-to-Space atmospheric specifications are compared to these observations and illustrate the degree to which the time variations in characteristics of the observations can be predicted over a multiple year time period.
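
    The detection statistic mentioned here, a short-term over long-term average ratio, is simple to state in code. The following is a minimal NumPy sketch of a classic STA/LTA energy detector applied to synthetic data; window lengths and the trigger threshold are illustrative assumptions, and the study's detector additionally applies spectral analysis.

    ```python
    import numpy as np

    def sta_lta(signal, fs, sta_win=2.0, lta_win=30.0):
        # Classic short-term/long-term average ratio. A detection is
        # declared where the ratio exceeds a chosen threshold (e.g. 3-5).
        n_sta = int(sta_win * fs)
        n_lta = int(lta_win * fs)
        energy = signal.astype(float) ** 2
        sta = np.convolve(energy, np.ones(n_sta) / n_sta, mode="same")
        lta = np.convolve(energy, np.ones(n_lta) / n_lta, mode="same")
        return sta / (lta + 1e-12)   # guard against division by zero

    # Synthetic test: background noise with a transient starting at 60 s.
    fs = 20.0
    t = np.arange(0.0, 120.0, 1.0 / fs)
    x = np.random.default_rng(1).standard_normal(t.size)
    x[int(60 * fs):int(65 * fs)] += 5.0
    print("max STA/LTA:", sta_lta(x, fs).max())
    ```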

  1. Impact of earthquake source complexity and land elevation data resolution on tsunami hazard assessment and fatality estimation

    NASA Astrophysics Data System (ADS)

    Muhammad, Ario; Goda, Katsuichiro

    2018-03-01

    This study investigates the impact of model complexity in source characterization and digital elevation model (DEM) resolution on the accuracy of tsunami hazard assessment and fatality estimation through a case study in Padang, Indonesia. Two types of earthquake source models, i.e. complex and uniform slip models, are adopted by considering three resolutions of DEMs, i.e. 150 m, 50 m, and 10 m. For each of the three grid resolutions, 300 complex source models are generated using new statistical prediction models of earthquake source parameters developed from extensive finite-fault models of past subduction earthquakes, whilst 100 uniform slip models are constructed with variable fault geometry without slip heterogeneity. The results highlight that significant changes to tsunami hazard and fatality estimates are observed with regard to earthquake source complexity and grid resolution. Coarse resolution (i.e. 150 m) leads to inaccurate tsunami hazard prediction and fatality estimation, whilst 50-m and 10-m resolutions produce similar results. However, velocity and momentum flux are sensitive to the grid resolution and hence, at least 10-m grid resolution needs to be implemented when considering flow-based parameters for tsunami hazard and risk assessments. In addition, the results indicate that the tsunami hazard parameters and fatality number are more sensitive to the complexity of earthquake source characterization than the grid resolution. Thus, the uniform models are not recommended for probabilistic tsunami hazard and risk assessments. Finally, the findings confirm that uncertainties of tsunami hazard level and fatality in terms of depth, velocity and momentum flux can be captured and visualized through the complex source modeling approach. From tsunami risk management perspectives, this indeed creates big data, which are useful for making effective and robust decisions.

  2. Evaluation of Chemistry-Climate Model Results using Long-Term Satellite and Ground-Based Data

    NASA Technical Reports Server (NTRS)

    Stolarski, Richard S.

    2005-01-01

Chemistry-climate models attempt to bring together our best knowledge of the key processes that govern the composition of the atmosphere and its response to changes in forcing. We test these models on a process-by-process basis by comparing model results to data from many sources. A more difficult task is testing the model response to changes. One way to do this is to use the natural and anthropogenic experiments that have been done on the atmosphere and are continuing to be done. These include the volcanic eruptions of El Chichon and Pinatubo, the solar cycle, and the injection of chlorine and bromine from CFCs and methyl bromide. The test of the models' response to these experiments is their ability to reproduce the long-term variations in ozone and the trace gases that affect ozone. We now have more than 25 years of satellite ozone data. We have more than 15 years of satellite and ground-based data of HCl, HNO3, and many other gases. I will discuss the testing of models using long-term satellite data sets, long-term measurements from the Network for Detection of Stratospheric Change (NDSC), and long-term ground-based measurements of ozone.

  3. A two-dimensional transient analytical solution for a ponded ditch drainage system under the influence of source/sink

    NASA Astrophysics Data System (ADS)

    Sarmah, Ratan; Tiwari, Shubham

    2018-03-01

An analytical solution is developed for predicting two-dimensional transient seepage into a ditch drainage network receiving water from a non-uniform steady ponding field at the soil surface under the influence of a source/sink in the flow domain. The flow domain is assumed to be saturated, homogeneous and anisotropic in nature and to have finite extents in the horizontal and vertical directions. The drains are assumed to be vertical and to penetrate to the impervious layer. The water levels in the drains are unequal and invariant with time. The flow field is also assumed to be under the continuous influence of a time- and space-dependent arbitrary source/sink term. The correctness of the proposed model is checked by developing a numerical code and also with an existing analytical solution for a simplified case. The study highlights the significance of the source/sink influence on subsurface flow. With the imposition of the source and sink terms in the flow domain, the pathlines and travel times of water particles deviate from their original positions; beyond that, the side and top discharges to the drains are also strongly influenced by the source/sink terms. The travel times and pathlines of water particles also depend on the height of water in the ditches and on the location of the source/sink activation area.
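
    As a rough illustration of the kind of numerical cross-check mentioned above, the sketch below solves a simplified steady-state analogue of the problem, Kx·h_xx + Kz·h_zz + q = 0, with unequal fixed heads at the two drains, no-flow top and bottom boundaries, and a point source. Conductivities, geometry and source strength are invented for the example.

    ```python
    import numpy as np

    Kx, Kz = 2.0, 1.0             # anisotropic hydraulic conductivities
    nx, nz, d = 41, 21, 0.5       # grid points and spacing (m)
    h = np.zeros((nz, nx))
    q = np.zeros((nz, nx))
    q[10, 20] = 0.05              # hypothetical point source strength

    for _ in range(3000):         # Gauss-Seidel sweeps to convergence
        h[0, :], h[-1, :] = h[1, :], h[-2, :]   # no-flow (impervious) rows
        h[:, 0], h[:, -1] = 2.0, 1.0            # unequal heads in the ditches
        for i in range(1, nz - 1):
            for j in range(1, nx - 1):
                h[i, j] = (Kx * (h[i, j-1] + h[i, j+1])
                           + Kz * (h[i-1, j] + h[i+1, j])
                           + q[i, j] * d**2) / (2.0 * (Kx + Kz))
    ```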

  4. The evolution of methods for noise prediction of high speed rotors and propellers in the time domain

    NASA Technical Reports Server (NTRS)

    Farassat, F.

    1986-01-01

    Linear wave equation models which have been used over the years at NASA Langley for describing noise emissions from high speed rotating blades are summarized. The noise sources are assumed to lie on a moving surface, and analysis of the situation has been based on the Ffowcs Williams-Hawkings (FW-H) equation. Although the equation accounts for two surface and one volume source, the NASA analyses have considered only the surface terms. Several variations on the FW-H model are delineated for various types of applications, noting the computational benefits of removing the frequency dependence of the calculations. Formulations are also provided for compact and noncompact sources, and features of Long's subsonic integral equation and Farassat's high speed integral equation are discussed. The selection of subsonic or high speed models is dependent on the Mach number of the blade surface where the source is located.

  5. Application of a three-dimensional hydrodynamic model to the Himmerfjärden, Baltic Sea

    NASA Astrophysics Data System (ADS)

    Sokolov, Alexander

    2014-05-01

Himmerfjärden is a coastal fjord-like bay situated in the north-western part of the Baltic Sea. The fjord has a mean depth of 17 m and a maximum depth of 52 m. The water is brackish (6 psu) with small salinity fluctuations (±2 psu). A sewage treatment plant, which serves about 300 000 people, discharges into the inner part of Himmerfjärden. This area is the subject of a long-term monitoring program. We are planning to develop a publicly available modelling system for this area, which will produce short-term forecasts of pertinent parameters (e.g., water levels, currents, salinity, temperature) and disseminate them to users. A key component of the system is a three-dimensional hydrodynamic model. The open source Delft3D Flow system (http://www.deltaressystems.com/hydro) has been applied to model the Himmerfjärden area. Two different curvilinear grids were used to approximate the modelling domain (25 km × 50 km × 60 m). One grid has low horizontal resolution (cell size varies from 250 to 450 m) to perform long-term numerical experiments (modelling periods of several months), while the other grid has higher resolution (cell size varies from 120 to 250 m) to model short-term situations. In the vertical direction, both z-level (50 layers) and sigma-coordinate (20 layers) discretisations were used. Modelling results obtained with different horizontal resolutions and vertical discretisations will be presented. This model will be part of an operational system which provides automated integration of data streams from several information sources: meteorological forecasts based on the HIRLAM model from the Finnish Meteorological Institute (https://en.ilmatieteenlaitos.fi/open-data), oceanographic forecasts based on the HIROMB-BOOS Model developed within the Baltic community and provided by the MyOcean Project (http://www.myocean.eu), and riverine discharge from the HYPE model provided by the Swedish Meteorological and Hydrological Institute (http://vattenwebb.smhi.se/modelarea/).

  6. Aerosol Source Attributions and Source-Receptor Relationships Across the Northern Hemisphere

    NASA Technical Reports Server (NTRS)

    Bian, Huisheng; Chin, Mian; Kucsera, Tom; Pan, Xiaohua; Darmenov, Anton; Colarco, Peter; Torres, Omar; Shults, Michael

    2014-01-01

    Emissions and long-range transport of air pollution pose major concerns on air quality and climate change. To better assess the impact of intercontinental transport of air pollution on regional and global air quality, ecosystems, and near-term climate change, the UN Task Force on Hemispheric Transport of Air Pollution (HTAP) is organizing a phase II activity (HTAP2) that includes global and regional model experiments and data analysis, focusing on ozone and aerosols. This study presents the initial results of HTAP2 global aerosol modeling experiments. We will (a) evaluate the model results with surface and aircraft measurements, (b) examine the relative contributions of regional emission and extra-regional source on surface PM concentrations and column aerosol optical depth (AOD) over several NH pollution and dust source regions and the Arctic, and (c) quantify the source-receptor relationships in the pollution regions that reflect the sensitivity of regional aerosol amount to the regional and extra-regional emission reductions.

  7. Validation of Operational Multiscale Environment Model With Grid Adaptivity (OMEGA).

    DTIC Science & Technology

    1995-12-01

    Center for the period of the Chernobyl Nuclear Accident. The physics of the model is tested using National Weather Service Medium Range Forecast data by...Climatology Center for the first three days following the release at the Chernobyl Nuclear Plant. A user-defined source term was developed to simulate

  8. Explanation of temporal clustering of tsunami sources using the epidemic-type aftershock sequence model

    USGS Publications Warehouse

    Geist, Eric L.

    2014-01-01

    Temporal clustering of tsunami sources is examined in terms of a branching process model. It previously was observed that there are more short interevent times between consecutive tsunami sources than expected from a stationary Poisson process. The epidemic‐type aftershock sequence (ETAS) branching process model is fitted to tsunami catalog events, using the earthquake magnitude of the causative event from the Centennial and Global Centroid Moment Tensor (CMT) catalogs and tsunami sizes above a completeness level as a mark to indicate that a tsunami was generated. The ETAS parameters are estimated using the maximum‐likelihood method. The interevent distribution associated with the ETAS model provides a better fit to the data than the Poisson model or other temporal clustering models. When tsunamigenic conditions (magnitude threshold, submarine location, dip‐slip mechanism) are applied to the Global CMT catalog, ETAS parameters are obtained that are consistent with those estimated from the tsunami catalog. In particular, the dip‐slip condition appears to result in a near zero magnitude effect for triggered tsunami sources. The overall consistency between results from the tsunami catalog and that from the earthquake catalog under tsunamigenic conditions indicates that ETAS models based on seismicity can provide the structure for understanding patterns of tsunami source occurrence. The fractional rate of triggered tsunami sources on a global basis is approximately 14%.
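
    The ETAS conditional intensity used in this kind of fit has a compact closed form: a background rate plus a sum of magnitude-scaled Omori kernels over past events. A minimal sketch follows; the parameter values are illustrative, not the fitted tsunami-catalog estimates.

    ```python
    import numpy as np

    def etas_intensity(t, event_times, event_mags, mu, K, alpha, c, p, m0):
        # Conditional intensity of the ETAS model at time t:
        #   lambda(t) = mu + sum_{t_i < t} K * exp(alpha*(m_i - m0))
        #                                  / (t - t_i + c)**p
        past = event_times < t
        trig = (K * np.exp(alpha * (event_mags[past] - m0))
                / (t - event_times[past] + c) ** p)
        return mu + trig.sum()

    # Illustrative catalog: three past events (times in years).
    times = np.array([0.0, 10.0, 11.0])
    mags = np.array([7.5, 8.0, 7.2])
    print(etas_intensity(12.0, times, mags,
                         mu=0.5, K=0.1, alpha=1.0, c=0.01, p=1.1, m0=7.0))
    ```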

  9. Term amniotic fluid: an unexploited reserve of mesenchymal stromal cells for reprogramming and potential cell therapy applications.

    PubMed

    Moraghebi, Roksana; Kirkeby, Agnete; Chaves, Patricia; Rönn, Roger E; Sitnicka, Ewa; Parmar, Malin; Larsson, Marcus; Herbst, Andreas; Woods, Niels-Bjarne

    2017-08-25

    Mesenchymal stromal cells (MSCs) are currently being evaluated in numerous pre-clinical and clinical cell-based therapy studies. Furthermore, there is an increasing interest in exploring alternative uses of these cells in disease modelling, pharmaceutical screening, and regenerative medicine by applying reprogramming technologies. However, the limited availability of MSCs from various sources restricts their use. Term amniotic fluid has been proposed as an alternative source of MSCs. Previously, only low volumes of term fluid and its cellular constituents have been collected, and current knowledge of the MSCs derived from this fluid is limited. In this study, we collected amniotic fluid at term using a novel collection system and evaluated amniotic fluid MSC content and their characteristics, including their feasibility to undergo cellular reprogramming. Amniotic fluid was collected at term caesarean section deliveries using a closed catheter-based system. Following fluid processing, amniotic fluid was assessed for cellularity, MSC frequency, in-vitro proliferation, surface phenotype, differentiation, and gene expression characteristics. Cells were also reprogrammed to the pluripotent stem cell state and differentiated towards neural and haematopoietic lineages. The average volume of term amniotic fluid collected was approximately 0.4 litres per donor, containing an average of 7 million viable mononuclear cells per litre, and a CFU-F content of 15 per 100,000 MNCs. Expanded CFU-F cultures showed similar surface phenotype, differentiation potential, and gene expression characteristics to MSCs isolated from traditional sources, and showed extensive expansion potential and rapid doubling times. Given the high proliferation rates of these neonatal source cells, we assessed them in a reprogramming application, where the derived induced pluripotent stem cells showed multigerm layer lineage differentiation potential. The potentially large donor base from caesarean section deliveries, the high yield of term amniotic fluid MSCs obtainable, the properties of the MSCs identified, and the suitability of the cells to be reprogrammed into the pluripotent state demonstrated these cells to be a promising and plentiful resource for further evaluation in bio-banking, cell therapy, disease modelling, and regenerative medicine applications.

  10. Solute source depletion control of forward and back diffusion through low-permeability zones

    NASA Astrophysics Data System (ADS)

    Yang, Minjune; Annable, Michael D.; Jawitz, James W.

    2016-10-01

    Solute diffusive exchange between low-permeability aquitards and high-permeability aquifers acts as a significant mediator of long-term contaminant fate. Aquifer contaminants diffuse into aquitards, but as contaminant sources are depleted, aquifer concentrations decline, triggering back diffusion from aquitards. The dynamics of the contaminant source depletion, or the source strength function, controls the timing of the transition of aquitards from sinks to sources. Here, we experimentally evaluate three archetypical transient source depletion models (step-change, linear, and exponential), and we use novel analytical solutions to accurately account for dynamic aquitard-aquifer diffusive transfer. Laboratory diffusion experiments were conducted using a well-controlled flow chamber to assess solute exchange between sand aquifer and kaolinite aquitard layers. Solute concentration profiles in the aquitard were measured in situ using electrical conductivity. Back diffusion was shown to begin earlier and produce larger mass flux for rapidly depleting sources. The analytical models showed very good correspondence with measured aquifer breakthrough curves and aquitard concentration profiles. The modeling approach links source dissolution and back diffusion, enabling assessment of human exposure risk and calculation of the back diffusion initiation time, as well as the resulting plume persistence.
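
    The sink-to-source transition described here can be reproduced qualitatively with a few lines of explicit finite differences: drive a 1D aquitard column with an exponentially depleting aquifer boundary concentration and watch the interface flux change sign. The paper itself uses analytical solutions; everything below (diffusivity, depletion rate, discretization) is an illustrative assumption.

    ```python
    import numpy as np

    D = 1e-9 * 86400.0        # effective diffusion coefficient (m^2/day)
    dz, nz = 0.005, 80        # grid spacing (m) and points: a 0.4 m aquitard
    dt = 0.4 * dz**2 / D      # satisfies the stability limit dt <= dz^2/(2D)
    c0, k = 1.0, 5e-4         # source concentration and depletion rate (1/day)
    c = np.zeros(nz)

    t = 0.0
    for _ in range(100_000):  # roughly 32 years of simulated time
        t += dt
        c[0] = c0 * np.exp(-k * t)      # exponentially depleting aquifer boundary
        c[1:-1] += D * dt / dz**2 * (c[2:] - 2.0 * c[1:-1] + c[:-2])
        c[-1] = c[-2]                   # no-flux far boundary

    # Interface flux into the aquitard; a negative value means the aquitard
    # has switched from sink to source (back diffusion into the aquifer).
    flux = -D * (c[1] - c[0]) / dz
    print(f"t = {t:.0f} days, interface flux = {flux:.3e}")
    ```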

  11. Solute source depletion control of forward and back diffusion through low-permeability zones.

    PubMed

    Yang, Minjune; Annable, Michael D; Jawitz, James W

    2016-10-01

Solute diffusive exchange between low-permeability aquitards and high-permeability aquifers acts as a significant mediator of long-term contaminant fate. Aquifer contaminants diffuse into aquitards, but as contaminant sources are depleted, aquifer concentrations decline, triggering back diffusion from aquitards. The dynamics of the contaminant source depletion, or the source strength function, controls the timing of the transition of aquitards from sinks to sources. Here, we experimentally evaluate three archetypical transient source depletion models (step-change, linear, and exponential), and we use novel analytical solutions to accurately account for dynamic aquitard-aquifer diffusive transfer. Laboratory diffusion experiments were conducted using a well-controlled flow chamber to assess solute exchange between sand aquifer and kaolinite aquitard layers. Solute concentration profiles in the aquitard were measured in situ using electrical conductivity. Back diffusion was shown to begin earlier and produce larger mass flux for rapidly depleting sources. The analytical models showed very good correspondence with measured aquifer breakthrough curves and aquitard concentration profiles. The modeling approach links source dissolution and back diffusion, enabling assessment of human exposure risk and calculation of the back diffusion initiation time, as well as the resulting plume persistence.

  12. Energy Spectra of Abundant Cosmic-ray Nuclei in Sources, According to the ATIC Experiment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Panov, A. D.; Sokolskaya, N. V.; Zatsepin, V. I., E-mail: panov@dec1.sinp.msu.ru

One of the main results of the ATIC (Advanced Thin Ionization Calorimeter) experiment is a collection of energy spectra of abundant cosmic-ray nuclei: protons, He, C, O, Ne, Mg, Si, and Fe measured in terms of energy per particle in the energy range from 50 GeV to tens of teraelectronvolts. In this paper, the ATIC energy spectra of abundant primary nuclei are back-propagated to the spectra in sources in terms of magnetic rigidity using a leaky-box approximation of three different GALPROP-based diffusion models of propagation that fit the latest B/C data of the AMS-02 experiment. It is shown that the results of a comparison of the slopes of the spectra in sources are weakly model dependent; therefore the differences of spectral indices are reliable data. A regular growth of the steepness of spectra in sources in the range of magnetic rigidity of 50–1350 GV is found for a charge range from helium to iron. This conclusion is statistically reliable with significance better than 3.2 standard deviations. The results are discussed and compared to the data of other modern experiments.

  13. Analysis and Synthesis of Tonal Aircraft Noise Sources

    NASA Technical Reports Server (NTRS)

    Allen, Matthew P.; Rizzi, Stephen A.; Burdisso, Ricardo; Okcu, Selen

    2012-01-01

    Fixed and rotary wing aircraft operations can have a significant impact on communities in proximity to airports. Simulation of predicted aircraft flyover noise, paired with listening tests, is useful to noise reduction efforts since it allows direct annoyance evaluation of aircraft or operations currently in the design phase. This paper describes efforts to improve the realism of synthesized source noise by including short term fluctuations, specifically for inlet-radiated tones resulting from the fan stage of turbomachinery. It details analysis performed on an existing set of recorded turbofan data to isolate inlet-radiated tonal fan noise, then extract and model short term tonal fluctuations using the analytic signal. Methodologies for synthesizing time-variant tonal and broadband turbofan noise sources using measured fluctuations are also described. Finally, subjective listening test results are discussed which indicate that time-variant synthesized source noise is perceived to be very similar to recordings.

  14. Bioaerosol releases from compost facilities: Evaluating passive and active source terms at a green waste facility for improved risk assessments

    NASA Astrophysics Data System (ADS)

    Taha, M. P. M.; Drew, G. H.; Longhurst, P. J.; Smith, R.; Pollard, S. J. T.

The passive and active release of bioaerosols during green waste composting, measured at source is reported for a commercial composting facility in South East (SE) England as part of a research programme focused on improving risk assessments at composting facilities. Aspergillus fumigatus and actinomycetes concentrations of 9.8–36.8×10⁶ and 18.9–36.0×10⁶ cfu m⁻³, respectively, measured during the active turning of green waste compost, were typically 3-log higher than previously reported concentrations from static compost windrows. Source depletion curves constructed for A. fumigatus during compost turning and modelled using SCREEN3 suggest that bioaerosol concentrations could reduce to background concentrations of 10³ cfu m⁻³ within 100 m of this site. Authentic source term data produced from this study will help to refine the risk assessment methodologies that support improved permitting of compost facilities.
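
    A source depletion curve of the sort described can be summarized by fitting a power-law decay to downwind concentrations and solving for the distance at which background is reached. The sketch below uses invented concentrations chosen to be of the same order as those reported; it is not SCREEN3, which models the dispersion physics rather than fitting observations.

    ```python
    import numpy as np

    # Hypothetical downwind A. fumigatus concentrations (cfu/m^3) vs distance (m).
    x = np.array([5.0, 10.0, 20.0, 40.0, 80.0])
    c = np.array([1.5e7, 1.6e6, 1.7e5, 2.1e4, 2.0e3])

    # Fit c(x) = c1 * x**(-a) by linear regression in log-log space.
    slope, intercept = np.polyfit(np.log(x), np.log(c), 1)
    a, c1 = -slope, np.exp(intercept)

    # Distance at which the fitted curve reaches a 1e3 cfu/m^3 background.
    x_bg = (c1 / 1.0e3) ** (1.0 / a)
    print(f"decay exponent a = {a:.2f}; background reached near {x_bg:.0f} m")
    ```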

  15. The XSD-Builder Specification Language—Toward a Semantic View of XML Schema Definition

    NASA Astrophysics Data System (ADS)

    Fong, Joseph; Cheung, San Kuen

In the present database market, the XML database model is a main structure for forthcoming database systems in the Internet environment. As a conceptual schema of an XML database, the XML model has limitations in presenting its data semantics. System analysts have no toolset for modeling and analyzing XML systems. We apply the XML Tree Model (shown in Figure 2) as a conceptual schema of an XML database to model and analyze the structure of an XML database. It is important not only for visualizing, specifying, and documenting structural models, but also for constructing executable systems. The tree model represents the inter-relationships among elements inside different logical schemas such as XML Schema Definition (XSD), DTD, Schematron, XDR, SOX, and DSD (shown in Figure 1; an explanation of the terms in the figure is given in Table 1). The XSD-Builder consists of the XML Tree Model, a source language, a translator, and XSD. The source language, called XSD-Source, mainly provides a user-friendly environment for writing an XSD. The source language is then translated by the XSD-Translator. The output of the XSD-Translator is an XSD, which is our target and is called the object language.

  16. Detecting the permafrost carbon feedback: talik formation and increased cold-season respiration as precursors to sink-to-source transitions

    NASA Astrophysics Data System (ADS)

    Parazoo, Nicholas C.; Koven, Charles D.; Lawrence, David M.; Romanovsky, Vladimir; Miller, Charles E.

    2018-01-01

Thaw and release of permafrost carbon (C) due to climate change is likely to offset increased vegetation C uptake in northern high-latitude (NHL) terrestrial ecosystems. Models project that this permafrost C feedback may act as a slow leak, in which case detection and attribution of the feedback may be difficult. The formation of talik, a subsurface layer of perennially thawed soil, can accelerate permafrost degradation and soil respiration, ultimately shifting the C balance of permafrost-affected ecosystems from long-term C sinks to long-term C sources. It is imperative to understand and characterize mechanistic links between talik, permafrost thaw, and respiration of deep soil C to detect and quantify the permafrost C feedback. Here, we use the Community Land Model (CLM) version 4.5, a permafrost and biogeochemistry model, in comparison to long-term deep borehole data along North American and Siberian transects, to investigate thaw-driven C sources in NHL (>55° N) from 2000 to 2300. Widespread talik at depth is projected across most of the NHL permafrost region (14 million km²) by 2300, 6.2 million km² of which is projected to become a long-term C source, emitting 10 Pg C by 2100, 50 Pg C by 2200, and 120 Pg C by 2300, with few signs of slowing. Roughly half of the projected C source region is in predominantly warm sub-Arctic permafrost following talik onset. This region emits only 20 Pg C by 2300, but the CLM4.5 estimate may be biased low by not accounting for deep C in yedoma. Accelerated decomposition of deep soil C following talik onset shifts the ecosystem C balance away from surface dominant processes (photosynthesis and litter respiration), but sink-to-source transition dates are delayed by 20-200 years by high ecosystem productivity, such that talik peaks early (~2050s, although borehole data suggest sooner) and C source transition peaks late (~2150-2200). The remaining C source region in cold northern Arctic permafrost, which shifts to a net source early (late 21st century), emits 5 times more C (95 Pg C) by 2300, and prior to talik formation due to the high decomposition rates of shallow, young C in organic-rich soils coupled with low productivity. Our results provide important clues signaling imminent talik onset and C source transition, including (1) late cold-season (January-February) soil warming at depth (~2 m), (2) increasing cold-season emissions (November-April), and (3) enhanced respiration of deep, old C in warm permafrost and young, shallow C in organic-rich cold permafrost soils. Our results suggest a mosaic of processes that govern carbon source-to-sink transitions at high latitudes and emphasize the urgency of monitoring soil thermal profiles, organic C age and content, cold-season CO₂ emissions, and atmospheric ¹⁴CO₂ as key indicators of the permafrost C feedback.

  17. Modeling the contribution of point sources and non-point sources to Thachin River water pollution.

    PubMed

    Schaffner, Monika; Bader, Hans-Peter; Scheidegger, Ruth

    2009-08-15

Major rivers in developing and emerging countries increasingly suffer from severe degradation of water quality. The current study uses a mathematical Material Flow Analysis (MMFA) as a complementary approach to address the degradation of river water quality due to nutrient pollution in the Thachin River Basin in Central Thailand. This paper gives an overview of the origins and flow paths of the various point and non-point pollution sources in the Thachin River Basin (in terms of nitrogen and phosphorus) and quantifies their relative importance within the system. The key parameters influencing the main nutrient flows are determined and possible mitigation measures discussed. The results show that aquaculture (as a point source) and rice farming (as a non-point source) are the key nutrient sources in the Thachin River Basin. Other point sources such as pig farms, households and industries, which were previously cited as the most relevant pollution sources in terms of organic pollution, play less significant roles in comparison. This order of importance shifts when considering the model results at the provincial level. Crosschecks with secondary data and field studies confirm the plausibility of our simulations. Specific nutrient loads for the pollution sources are derived; these can be used for a first broad quantification of nutrient pollution in comparable river basins. Based on an identification of the sensitive model parameters, possible mitigation scenarios are determined and their potential to reduce the nutrient load evaluated. A comparison of simulated nutrient loads with measured nutrient concentrations shows that nutrient retention in the river system may be significant. Sedimentation in the slow-flowing surface water network as well as nitrogen emission to the air from the warm, oxygen-deficient waters are certainly partly responsible, but wetlands along the river banks could also play an important role as nutrient sinks.
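
    At its core, a mathematical material flow analysis of this kind reduces to mass balances with transfer coefficients linking each source to the river. A toy sketch with hypothetical loads and coefficients, not the study's calibrated Thachin values:

    ```python
    # Hypothetical annual nitrogen loads entering the system (t/yr) by source.
    inputs = {"aquaculture": 1200.0, "rice": 900.0, "pigs": 400.0,
              "households": 300.0, "industry": 150.0}

    # Hypothetical transfer coefficients: fraction of each source's load that
    # reaches the river (the remainder is retained in soil, sediment or air).
    to_river = {"aquaculture": 0.8, "rice": 0.35, "pigs": 0.5,
                "households": 0.6, "industry": 0.7}

    river_load = {s: inputs[s] * to_river[s] for s in inputs}
    total = sum(river_load.values())
    for s, load in sorted(river_load.items(), key=lambda kv: -kv[1]):
        print(f"{s:12s} {load:7.1f} t/yr  ({100 * load / total:4.1f} %)")
    ```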

  18. Size distribution, directional source contributions and pollution status of PM from Chengdu, China during a long-term sampling campaign.

    PubMed

    Shi, Guo-Liang; Tian, Ying-Ze; Ma, Tong; Song, Dan-Lin; Zhou, Lai-Dong; Han, Bo; Feng, Yin-Chang; Russell, Armistead G

    2017-06-01

Long-term and synchronous monitoring of PM10 and PM2.5 was conducted in Chengdu in China from 2007 to 2013. The levels, variations, compositions and size distributions were investigated. The sources were quantified by two-way and three-way receptor models (PMF2, ME2-2way and ME2-3way). Consistent results were found: the primary source categories contributed 63.4% (PMF2), 64.8% (ME2-2way) and 66.8% (ME2-3way) to PM10, and contributed 60.9% (PMF2), 65.5% (ME2-2way) and 61.0% (ME2-3way) to PM2.5. Secondary sources contributed 31.8% (PMF2), 32.9% (ME2-2way) and 31.7% (ME2-3way) to PM10, and 35.0% (PMF2), 33.8% (ME2-2way) and 36.0% (ME2-3way) to PM2.5. The size distribution of source categories was estimated better by the ME2-3way method. The three-way model can simultaneously consider chemical species, temporal variability and PM sizes, while a two-way model independently computes datasets of different sizes. A method called source directional apportionment (SDA) was employed to quantify the contributions from various directions for each source category. Crustal dust from east-north-east (ENE) contributed the most to both PM10 (12.7%) and PM2.5 (9.7%) in Chengdu, followed by crustal dust from south-east (SE) for PM10 (9.8%) and secondary nitrate & secondary organic carbon from ENE for PM2.5 (9.6%). Source contributions from different directions are associated with meteorological conditions, source locations and emission patterns during the sampling period. These findings and methods provide useful tools to better understand PM pollution status and to develop effective pollution control strategies.
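
    Receptor models of the PMF family factorize a samples-by-species concentration matrix into non-negative source contributions and source profiles. As a rough stand-in, scikit-learn's NMF illustrates the decomposition on synthetic data; true PMF/ME-2 additionally weight residuals by per-observation uncertainties, which this sketch omits.

    ```python
    import numpy as np
    from sklearn.decomposition import NMF

    # Synthetic species-by-sample data built from 4 hidden sources: X ≈ G F.
    rng = np.random.default_rng(0)
    n_samples, n_species, n_sources = 200, 15, 4
    G_true = rng.gamma(2.0, 1.0, (n_samples, n_sources))
    F_true = rng.gamma(2.0, 1.0, (n_sources, n_species))
    X = G_true @ F_true + rng.normal(0, 0.05, (n_samples, n_species)).clip(0)

    model = NMF(n_components=n_sources, init="nndsvda",
                max_iter=500, random_state=0)
    G = model.fit_transform(X)    # source contributions per sample
    F = model.components_         # source chemical profiles

    # Average percentage contribution of each resolved source to total PM.
    share = (G * F.sum(axis=1)).sum(axis=0)
    print(100 * share / share.sum())
    ```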

  19. Fermi Large Area Telescope Second Source Catalog

    NASA Astrophysics Data System (ADS)

    Nolan, P. L.; Abdo, A. A.; Ackermann, M.; Ajello, M.; Allafort, A.; Antolini, E.; Atwood, W. B.; Axelsson, M.; Baldini, L.; Ballet, J.; Barbiellini, G.; Bastieri, D.; Bechtol, K.; Belfiore, A.; Bellazzini, R.; Berenji, B.; Bignami, G. F.; Blandford, R. D.; Bloom, E. D.; Bonamente, E.; Bonnell, J.; Borgland, A. W.; Bottacini, E.; Bouvier, A.; Brandt, T. J.; Bregeon, J.; Brigida, M.; Bruel, P.; Buehler, R.; Burnett, T. H.; Buson, S.; Caliandro, G. A.; Cameron, R. A.; Campana, R.; Cañadas, B.; Cannon, A.; Caraveo, P. A.; Casandjian, J. M.; Cavazzuti, E.; Ceccanti, M.; Cecchi, C.; Çelik, Ö.; Charles, E.; Chekhtman, A.; Cheung, C. C.; Chiang, J.; Chipaux, R.; Ciprini, S.; Claus, R.; Cohen-Tanugi, J.; Cominsky, L. R.; Conrad, J.; Corbet, R.; Cutini, S.; D'Ammando, F.; Davis, D. S.; de Angelis, A.; DeCesar, M. E.; DeKlotz, M.; De Luca, A.; den Hartog, P. R.; de Palma, F.; Dermer, C. D.; Digel, S. W.; Silva, E. do Couto e.; Drell, P. S.; Drlica-Wagner, A.; Dubois, R.; Dumora, D.; Enoto, T.; Escande, L.; Fabiani, D.; Falletti, L.; Favuzzi, C.; Fegan, S. J.; Ferrara, E. C.; Focke, W. B.; Fortin, P.; Frailis, M.; Fukazawa, Y.; Funk, S.; Fusco, P.; Gargano, F.; Gasparrini, D.; Gehrels, N.; Germani, S.; Giebels, B.; Giglietto, N.; Giommi, P.; Giordano, F.; Giroletti, M.; Glanzman, T.; Godfrey, G.; Grenier, I. A.; Grondin, M.-H.; Grove, J. E.; Guillemot, L.; Guiriec, S.; Gustafsson, M.; Hadasch, D.; Hanabata, Y.; Harding, A. K.; Hayashida, M.; Hays, E.; Hill, A. B.; Horan, D.; Hou, X.; Hughes, R. E.; Iafrate, G.; Itoh, R.; Jóhannesson, G.; Johnson, R. P.; Johnson, T. E.; Johnson, A. S.; Johnson, T. J.; Kamae, T.; Katagiri, H.; Kataoka, J.; Katsuta, J.; Kawai, N.; Kerr, M.; Knödlseder, J.; Kocevski, D.; Kuss, M.; Lande, J.; Landriu, D.; Latronico, L.; Lemoine-Goumard, M.; Lionetto, A. M.; Llena Garde, M.; Longo, F.; Loparco, F.; Lott, B.; Lovellette, M. N.; Lubrano, P.; Madejski, G. M.; Marelli, M.; Massaro, E.; Mazziotta, M. N.; McConville, W.; McEnery, J. E.; Mehault, J.; Michelson, P. F.; Minuti, M.; Mitthumsiri, W.; Mizuno, T.; Moiseev, A. A.; Mongelli, M.; Monte, C.; Monzani, M. E.; Morselli, A.; Moskalenko, I. V.; Murgia, S.; Nakamori, T.; Naumann-Godo, M.; Norris, J. P.; Nuss, E.; Nymark, T.; Ohno, M.; Ohsugi, T.; Okumura, A.; Omodei, N.; Orlando, E.; Ormes, J. F.; Ozaki, M.; Paneque, D.; Panetta, J. H.; Parent, D.; Perkins, J. S.; Pesce-Rollins, M.; Pierbattista, M.; Pinchera, M.; Piron, F.; Pivato, G.; Porter, T. A.; Racusin, J. L.; Rainò, S.; Rando, R.; Razzano, M.; Razzaque, S.; Reimer, A.; Reimer, O.; Reposeur, T.; Ritz, S.; Rochester, L. S.; Romani, R. W.; Roth, M.; Rousseau, R.; Ryde, F.; Sadrozinski, H. F.-W.; Salvetti, D.; Sanchez, D. A.; Saz Parkinson, P. M.; Sbarra, C.; Scargle, J. D.; Schalk, T. L.; Sgrò, C.; Shaw, M. S.; Shrader, C.; Siskind, E. J.; Smith, D. A.; Spandre, G.; Spinelli, P.; Stephens, T. E.; Strickman, M. S.; Suson, D. J.; Tajima, H.; Takahashi, H.; Takahashi, T.; Tanaka, T.; Thayer, J. G.; Thayer, J. B.; Thompson, D. J.; Tibaldo, L.; Tibolla, O.; Tinebra, F.; Tinivella, M.; Torres, D. F.; Tosti, G.; Troja, E.; Uchiyama, Y.; Vandenbroucke, J.; Van Etten, A.; Van Klaveren, B.; Vasileiou, V.; Vianello, G.; Vitale, V.; Waite, A. P.; Wallace, E.; Wang, P.; Werner, M.; Winer, B. L.; Wood, D. L.; Wood, K. S.; Wood, M.; Yang, Z.; Zimmer, S.

    2012-04-01

    We present the second catalog of high-energy γ-ray sources detected by the Large Area Telescope (LAT), the primary science instrument on the Fermi Gamma-ray Space Telescope (Fermi), derived from data taken during the first 24 months of the science phase of the mission, which began on 2008 August 4. Source detection is based on the average flux over the 24 month period. The second Fermi-LAT catalog (2FGL) includes source location regions, defined in terms of elliptical fits to the 95% confidence regions and spectral fits in terms of power-law, exponentially cutoff power-law, or log-normal forms. Also included are flux measurements in five energy bands and light curves on monthly intervals for each source. Twelve sources in the catalog are modeled as spatially extended. We provide a detailed comparison of the results from this catalog with those from the first Fermi-LAT catalog (1FGL). Although the diffuse Galactic and isotropic models used in the 2FGL analysis are improved compared to the 1FGL catalog, we attach caution flags to 162 of the sources to indicate possible confusion with residual imperfections in the diffuse model. The 2FGL catalog contains 1873 sources detected and characterized in the 100 MeV to 100 GeV range of which we consider 127 as being firmly identified and 1171 as being reliably associated with counterparts of known or likely γ-ray-producing source classes. We dedicate this paper to the memory of our colleague Patrick Nolan, who died on 2011 November 6. His career spanned much of the history of high-energy astronomy from space and his work on the Large Area Telescope (LAT) began nearly 20 years ago when it was just a concept. Pat was a central member in the operation of the LAT collaboration and he is greatly missed.
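
    The three spectral forms named here are short one-liners. The sketch below writes them out in a common convention; normalizations, pivot energies and indices are illustrative, and the exact parameterizations in the catalog pipeline may differ.

    ```python
    import numpy as np

    def power_law(E, K, E0, gamma):
        # dN/dE = K * (E/E0)**(-gamma)
        return K * (E / E0) ** (-gamma)

    def cutoff_power_law(E, K, E0, gamma, Ec):
        # Power law with an exponential cutoff at energy Ec.
        return K * (E / E0) ** (-gamma) * np.exp(-(E - E0) / Ec)

    def log_parabola(E, K, E0, alpha, beta):
        # Log-normal (LogParabola) form used for curved spectra.
        return K * (E / E0) ** (-alpha - beta * np.log(E / E0))

    # Photon flux of an illustrative power-law source in the 1-100 GeV band,
    # by trapezoidal integration over energy in MeV.
    E = np.logspace(3, 5, 400)
    dnde = power_law(E, K=1e-11, E0=1e3, gamma=2.2)
    flux = np.sum(0.5 * (dnde[1:] + dnde[:-1]) * np.diff(E))
    print(f"integrated flux ~ {flux:.2e} photons cm^-2 s^-1")
    ```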

  20. Improving bioaerosol exposure assessments of composting facilities — Comparative modelling of emissions from different compost ages and processing activities

    NASA Astrophysics Data System (ADS)

    Taha, M. P. M.; Drew, G. H.; Tamer, A.; Hewings, G.; Jordinson, G. M.; Longhurst, P. J.; Pollard, S. J. T.

We present bioaerosol source term concentrations from passive and active composting sources and compare emissions from green waste compost aged 1, 2, 4, 6, 8, 12 and 16 weeks. Results reveal that the age of compost has little effect on the bioaerosol concentrations emitted from passive windrow sources. However, emissions from turning compost during the early stages may be higher than during the later stages of the composting process. The bioaerosol emissions from passive sources were in the range of 10³–10⁴ cfu m⁻³, with releases from active sources typically 1-log higher. We propose improvements to current risk assessment methodologies by examining emission rates and the differences between two air dispersion models for the prediction of downwind bioaerosol concentrations at off-site points of exposure. The SCREEN3 model provides a more precautionary estimate of the source depletion curves of bioaerosol emissions in comparison to ADMS 3.3. The results from both models predict that bioaerosol concentrations decrease to below typical background concentrations before 250 m, the distance at which the regulator in England and Wales may require a risk assessment to be completed.

  1. Systematically biological prioritizing remediation sites based on datasets of biological investigations and heavy metals in soil

    NASA Astrophysics Data System (ADS)

    Lin, Wei-Chih; Lin, Yu-Pin; Anthony, Johnathen

    2015-04-01

Heavy metal pollution has adverse effects not only on the focal invertebrate species of this study, such as reduction in pupa weight and increased larval mortality, but also on the higher trophic level organisms which feed on them, either directly or indirectly, through the process of biomagnification. Despite this, few studies regarding remediation prioritization take species distribution or biological conservation priorities into consideration. This study develops a novel approach for delineating sites which are both contaminated by any of 5 readily bioaccumulated heavy metal soil contaminants and are of high ecological importance for the highly mobile, low trophic level focal species. The conservation priority of each site was based on the projected distributions of 6 moth species simulated via the presence-only maximum entropy species distribution model, followed by the subsequent application of a systematic conservation tool. In order to increase the number of available samples, we also integrated crowd-sourced data with professionally-collected data via a novel optimization procedure based on a simulated annealing algorithm. This integration procedure was important since, while crowd-sourced data can drastically increase the number of data samples available to ecologists, the quality or reliability of crowd-sourced data can be called into question, adding yet another source of uncertainty to projecting species distributions. The optimization method screens crowd-sourced data in terms of the environmental variables which correspond to professionally-collected data. The sample distribution data were derived from two different sources: the EnjoyMoths project in Taiwan (crowd-sourced data) and the Global Biodiversity Information Facility (GBIF) field data (professional data). The distributions of heavy metal concentrations were generated via 1000 iterations of a geostatistical co-simulation approach. The uncertainties in distributions of the heavy metals were then quantified based on the overall consistency between realizations. Finally, Information-Gap Decision Theory (IGDT) was applied to rank the remediation priorities of contaminated sites in terms of both the spatial consensus of multiple heavy metal realizations and the priority of specific conservation areas. Our results show that the crowd-sourced optimization algorithm developed in this study is effective at selecting suitable data from crowd-sourced data. By using this technique the available sample data increased to totals of 96, 162, 72, 62, 69 and 62, that is, 2.6, 1.6, 2.5, 1.6, 1.2 and 1.8 times the numbers originally available through the GBIF professionally-assembled database. Additionally, for all species considered, models based on the combination of both data sources outperformed, in terms of test-AUC values, models based on a single data source. Furthermore, the additional optimization-selected data lowered the overall variability, and therefore uncertainty, of model outputs. Based on the projected species distributions, our results revealed that around 30% of high species hotspot areas were also identified as contaminated. The decision-making tool, IGDT, successfully yielded remediation plans in terms of specific ecological value requirements, false positive tolerance rates of contaminated areas, and expected decision robustness.
The proposed approach can be applied both to identify high conservation priority sites contaminated by heavy metals, based on the combination of screened crowd-sourced and professionally-collected data, and to make robust remediation decisions.
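
    A minimal sketch of the simulated-annealing screening idea: select a subset of crowd-sourced records whose environmental covariates best match those of the professional records, here scored simply by the distance between covariate means. Data, energy function and cooling schedule are all invented for illustration; the study's actual objective function is not reproduced here.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    crowd = rng.normal(0.5, 1.0, (500, 4))   # candidate records x covariates
    prof = rng.normal(0.0, 1.0, (60, 4))     # professional records x covariates
    target = prof.mean(axis=0)

    def energy(mask):
        # Mismatch between selected-subset and professional covariate means.
        return np.linalg.norm(crowd[mask].mean(axis=0) - target)

    mask = rng.random(500) < 0.2             # random initial subset
    e, T = energy(mask), 1.0
    for _ in range(20_000):
        T *= 0.9997                          # geometric cooling schedule
        trial = mask.copy()
        trial[rng.integers(500)] ^= True     # flip one record in or out
        if trial.sum() == 0:
            continue
        e_new = energy(trial)
        # Accept improvements always; accept worse moves with Boltzmann prob.
        if e_new < e or rng.random() < np.exp(-(e_new - e) / T):
            mask, e = trial, e_new
    print(f"selected {mask.sum()} records, covariate mismatch {e:.4f}")
    ```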

  2. Global biodiversity monitoring: from data sources to essential biodiversity variables

    USGS Publications Warehouse

    Proenca, Vania; Martin, Laura J.; Pereira, Henrique M.; Fernandez, Miguel; McRae, Louise; Belnap, Jayne; Böhm, Monika; Brummitt, Neil; Garcia-Moreno, Jaime; Gregory, Richard D.; Honrado, Joao P; Jürgens, Norbert; Opige, Michael; Schmeller, Dirk S.; Tiago, Patricia; van Sway, Chris A

    2016-01-01

    Essential Biodiversity Variables (EBVs) consolidate information from varied biodiversity observation sources. Here we demonstrate the links between data sources, EBVs and indicators and discuss how different sources of biodiversity observations can be harnessed to inform EBVs. We classify sources of primary observations into four types: extensive and intensive monitoring schemes, ecological field studies and satellite remote sensing. We characterize their geographic, taxonomic and temporal coverage. Ecological field studies and intensive monitoring schemes inform a wide range of EBVs, but the former tend to deliver short-term data, while the geographic coverage of the latter is limited. In contrast, extensive monitoring schemes mostly inform the population abundance EBV, but deliver long-term data across an extensive network of sites. Satellite remote sensing is particularly suited to providing information on ecosystem function and structure EBVs. Biases behind data sources may affect the representativeness of global biodiversity datasets. To improve them, researchers must assess data sources and then develop strategies to compensate for identified gaps. We draw on the population abundance dataset informing the Living Planet Index (LPI) to illustrate the effects of data sources on EBV representativeness. We find that long-term monitoring schemes informing the LPI are still scarce outside of Europe and North America and that ecological field studies play a key role in covering that gap. Achieving representative EBV datasets will depend both on the ability to integrate available data, through data harmonization and modeling efforts, and on the establishment of new monitoring programs to address critical data gaps.

  3. Filtered Mass Density Function for Design Simulation of High Speed Airbreathing Propulsion Systems

    NASA Technical Reports Server (NTRS)

    Drozda, T. G.; Sheikhi, R. M.; Givi, Peyman

    2001-01-01

The objective of this research is to develop and implement new methodology for large eddy simulation (LES) of high-speed reacting turbulent flows. We have just completed two (2) years of Phase I of this research. This annual report provides a brief and up-to-date summary of our activities during the period September 1, 2000 through August 31, 2001. In the work within the past year, a methodology termed "velocity-scalar filtered density function" (VSFDF) is developed and implemented for LES of turbulent flows. In this methodology the effects of the unresolved subgrid scales (SGS) are taken into account by considering the joint probability density function (PDF) of all of the components of the velocity and scalar vectors. An exact transport equation is derived for the VSFDF in which the effects of the unresolved SGS convection, SGS velocity-scalar source, and SGS scalar-scalar source terms appear in closed form. The remaining unclosed terms in this equation are modeled. A system of stochastic differential equations (SDEs) which yields statistically equivalent results to the modeled VSFDF transport equation is constructed. These SDEs are solved numerically by a Lagrangian Monte Carlo procedure. The consistency of the proposed SDEs and the convergence of the Monte Carlo solution are assessed by comparison with results obtained by an Eulerian LES procedure in which the corresponding transport equations for the first two SGS moments are solved. The unclosed SGS convection, SGS velocity-scalar source, and SGS scalar-scalar source terms in the Eulerian LES are replaced by the corresponding terms from the VSFDF equation. The consistency of the results is then analyzed for the case of a two-dimensional mixing layer.
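
    The Lagrangian Monte Carlo solution of such SDE systems amounts to integrating stochastic particle equations with Euler-Maruyama and estimating SGS moments from the ensemble. Below is a generic sketch for a single Langevin-type velocity equation with invented coefficients, not the paper's closed VSFDF system.

    ```python
    import numpy as np

    # Euler-Maruyama integration of a Langevin SDE of the kind solved
    # particle-wise in PDF/FDF methods:
    #   dU = -(U - <U>)/tau dt + sqrt(c0_eps) dW
    rng = np.random.default_rng(0)
    n_particles, n_steps, dt = 10_000, 2_000, 1e-3
    tau, c0_eps = 0.1, 1.0                   # relaxation time, diffusion coeff
    U = rng.standard_normal(n_particles)     # initial particle velocities

    for _ in range(n_steps):
        dW = np.sqrt(dt) * rng.standard_normal(n_particles)  # Wiener increments
        U += -(U - U.mean()) / tau * dt + np.sqrt(c0_eps) * dW

    # SGS statistics are recovered as ensemble moments of the particles;
    # the stationary variance should approach c0_eps * tau / 2 = 0.05 here.
    print("mean:", U.mean(), "variance:", U.var())
    ```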

  4. Evaluating the behavior of polychlorinated biphenyl compounds in Lake Superior using a dynamic multimedia model

    NASA Astrophysics Data System (ADS)

    Khan, T.; Perlinger, J. A.; Urban, N. R.

    2017-12-01

    Certain toxic, persistent, bioaccumulative, and semivolatile compounds known as atmosphere-surface exchangeable pollutants, or ASEPs, are emitted into the environment by primary sources, are transported, are deposited to water surfaces, and can later be re-emitted, causing the water to act as a secondary source. Polychlorinated biphenyl (PCB) compounds, a class of ASEPs, are of major concern in the Laurentian Great Lakes because of their historical use, primarily as additives to oils and industrial fluids, and their discharge from industrial sources. Following the ban on production in the U.S. in 1979, atmospheric concentrations of PCBs in the Lake Superior region decreased rapidly. Subsequently, PCB concentrations in the lake surface water also reached near equilibrium as the atmospheric levels of PCBs declined. However, previous studies on long-term PCB levels and trends in lake trout and walleye suggested that the initial rate of decline of PCB concentrations in fish has leveled off in Lake Superior. In this study, a dynamic multimedia flux model was developed with the objective of investigating the observed leveling off of PCB concentrations in Lake Superior fish. The model structure consists of two water layers (the epilimnion and the hypolimnion) and the surface mixed sediment layer, while atmospheric deposition is the primary external pathway of PCB inputs to the lake. The model was applied for different PCB congeners spanning a range of hydrophobicity and volatility. Using this model, we compare the long-term trends in predicted PCB concentrations in different environmental media with relevant available measurements for Lake Superior. We examine the seasonal depositional and exchange patterns and the relative importance of different process terms, and identify the most probable source of the currently observed PCB levels in Lake Superior fish. In addition, we evaluate the role of current atmospheric PCB levels in sustaining the observed fish concentrations and appraise the need for continuous atmospheric PCB monitoring by the Great Lakes Integrated Atmospheric Deposition Network. By combining the modeled lake and biota response times resulting from atmospheric PCB inputs, we predict the time scale for safe fish consumption in Lake Superior.

  5. Coarse Grid CFD for underresolved simulation

    NASA Astrophysics Data System (ADS)

    Class, Andreas G.; Viellieber, Mathias O.; Himmel, Steffen R.

    2010-11-01

    CFD simulation of the complete reactor core of a nuclear power plant requires exceedingly large computational resources, so this brute-force approach has not yet been pursued. The traditional approach is 1D subchannel analysis employing calibrated transport models. Coarse grid CFD is an attractive alternative technique based on strongly under-resolved CFD and the inviscid Euler equations. Using inviscid equations and coarse grids does not resolve all of the physics, so additional volumetric source terms are required to model viscosity and other sub-grid effects. The source terms are implemented via correlations derived from fully resolved representative simulations, which can be tabulated or computed on the fly. The technique is demonstrated for a Carnot diffusor and a wire-wrap fuel assembly [1]. [1] Himmel, S.R., PhD thesis, Stuttgart University, Germany, 2009, http://bibliothek.fzk.de/zb/berichte/FZKA7468.pdf
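
    How such a tabulated volumetric source term might enter a coarse-grid update can be sketched as follows in Python; the table and flow variables here are hypothetical stand-ins, not the calibrated correlations of the cited work.

        import numpy as np

        # Hypothetical coarse-grid correction: a volumetric source
        # interpolated from a precomputed table, standing in for
        # correlations derived from fully resolved reference simulations.
        table_velocity = np.linspace(0.0, 10.0, 50)   # tabulated input
        table_source = -0.02 * table_velocity**2      # tabulated source term

        def subgrid_source(u):
            # Look up the sub-grid source for the local coarse-grid state.
            return np.interp(u, table_velocity, table_source)

        nx, dt = 100, 1.0e-2
        u = np.full(nx, 5.0)                # coarse-grid velocity field
        for _ in range(1000):
            flux_residual = np.zeros(nx)    # inviscid flux update omitted
            u += dt * (flux_residual + subgrid_source(u))
        print(u[:3])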

  6. Comparative assessment of the global fate and transport pathways of long-chain perfluorocarboxylic acids (PFCAs) and perfluorocarboxylates (PFCs) emitted from direct sources.

    PubMed

    Armitage, James M; Macleod, Matthew; Cousins, Ian T

    2009-08-01

    A global-scale multispecies mass balance model was used to simulate the long-term fate and transport of perfluorocarboxylic acids (PFCAs) with eight to thirteen carbons (C8-C13) and their conjugate bases, the perfluorocarboxylates (PFCs). The main purpose of this study was to assess the relative long-range transport (LRT) potential of each conjugate pair, collectively termed PFC(A)s, considering emissions from direct sources (i.e., manufacturing and use) only. Overall LRT potential (atmospheric + oceanic) varied as a function of chain length and depended on assumptions regarding pKa and mode of entry. Atmospheric transport makes a relatively higher contribution to overall LRT potential for PFC(A)s with longer chain length, which reflects the increasing trend in the air-water partition coefficient (K(AW)) of the neutral PFCA species with chain length. Model scenarios using estimated direct emissions of the C8, C9, and C11 PFC(A)s indicate that the mass fluxes to the Arctic marine environment associated with oceanic transport are in excess of mass fluxes from indirect sources (i.e., atmospheric transport of precursor substances such as fluorotelomer alcohols and subsequent degradation to PFCAs). Modeled concentrations of C8 and C9 in the abiotic environment are broadly consistent with available monitoring data in surface ocean waters. Furthermore, the modeled concentration ratios of C8 to C9 are reconcilable with the homologue pattern frequently observed in biota, assuming a positive correlation between bioaccumulation potential and chain length. Modeled concentration ratios of C11 to C10 are more difficult to reconcile with monitoring data in both source and remote regions. Our model results for C11 and C10 therefore imply that either (i) indirect sources are dominant or (ii) estimates of direct emission are not accurate for these homologues.
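
    The speciation assumption at the heart of these scenarios follows the standard Henderson-Hasselbalch relation: the neutral (volatile) fraction of a PFCA at a given pH is

        \[ \phi_{\mathrm{neutral}} = \frac{1}{1 + 10^{\,\mathrm{pH} - \mathrm{p}K_a}} \]

    so at seawater pH the assumed pKa shifts the airborne neutral fraction, and hence the atmospheric contribution to overall LRT potential, by roughly an order of magnitude per pKa unit.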

  7. Classic flea-borne transmission does not drive plague epizootics in prairie dogs.

    PubMed

    Webb, Colleen T; Brooks, Christopher P; Gage, Kenneth L; Antolin, Michael F

    2006-04-18

    We lack a clear understanding of the enzootic maintenance of the bacterium Yersinia pestis, which causes plague, and of the sporadic epizootics that occur in its natural rodent hosts. A key to elucidating these epidemiological dynamics is determining the dominant transmission routes of plague. Plague can be acquired from the bites of infectious fleas (generally considered to occur via a blocked flea vector), inhalation of infectious respiratory droplets, or contact with a short-term infectious reservoir. We present results from a plague modeling approach that includes transmission from all three sources of infection simultaneously and uses sensitivity analysis to determine their relative importance. Our model is completely parameterized by using data from the literature and our own field studies of plague in the black-tailed prairie dog (Cynomys ludovicianus). Results of the model are qualitatively and quantitatively consistent with independent data from our field sites. Although infectious fleas might be an important source of infection, and transmission via blocked fleas is a dominant paradigm in the literature, our model clearly predicts that this form of transmission cannot drive epizootics in prairie dogs. Rather, a short-term reservoir is required for epizootic dynamics. Several short-term reservoirs have the potential to affect the prairie dog system. Our model predictions of the residence time of the short-term reservoir suggest that other small mammals, infectious prairie dog carcasses, fleas that transmit plague without blockage of the digestive tract, or some combination of these three are the most likely of the candidate infectious reservoirs.

  8. 3D Hydrodynamics Simulation of Amazonian Seasonally Flooded Wetlands

    NASA Astrophysics Data System (ADS)

    Pinel, S. S.; Bonnet, M. P.; Da Silva, J. S.; Cavalcanti, R., Sr.; Calmant, S.

    2016-12-01

    In the lower Amazon basin, interactions between floodplains and river channels are important in terms of exchanges of water, sediments, and nutrients. These wetlands are considered hotspots of biodiversity and are among the most productive in the world. However, they are threatened by climatic changes and anthropogenic activities. Hence, considering the implications for predicting the inundation status of floodplain habitats, and the strong interactions between water circulation, energy fluxes, and biogeochemical and ecological processes, detailed analyses of flooding dynamics are useful and needed. Numerical inundation models offer a means to study the interactions among different water sources. Modeling flood events in this area is challenging because flows respond to dynamic hydraulic controls from several water sources, complex geomorphology, and vegetation. In addition, because of the difficulty of access, there is a lack of existing hydrological data. In this context, the use of remote-sensing monitoring systems is a good option. In this study, we simulated the filling and drainage processes of an Amazon floodplain (Janauacá Lake, AM, Brazil) over a six-year period (2006-2012). Common approaches to flow modeling in the Amazon region couple a 1D simulation of the main-channel flood wave to a 2D simulation of the inundation of the floodplain. Here, our approach differs in that the floodplain is fully simulated. The model used is IPH-ECO, a 3D model consisting of a three-dimensional hydrodynamic module coupled with an ecosystem module. The IPH-ECO hydrodynamic module solves the Reynolds-averaged Navier-Stokes equations using a semi-implicit discretization. After calibrating the simulation through the roughness coefficients, we validated the model in terms of vertical accuracy against water levels (daily in situ and altimetry data), in terms of flood extent against inundation maps derived from available remote-sensing imagery (ALOS-1/PALSAR), and in terms of velocity. We analyzed the inter-annual variability in hydrological fluxes and inundation dynamics of the floodplain unit. Dominant sources of inflow varied seasonally: direct rain and local runoff (November to April), the Amazon River (May to August), and seepage (September to October).

  9. Using Satellite Observations to Evaluate the AeroCOM Volcanic Emissions Inventory and the Dispersal of Volcanic SO2 Clouds in MERRA

    NASA Technical Reports Server (NTRS)

    Hughes, Eric J.; Krotkov, Nickolay; da Silva, Arlindo; Colarco, Peter

    2015-01-01

    Simulation of volcanic emissions in climate models requires information that describes the eruption of the emissions into the atmosphere. While the total amount of gases and aerosols released from a volcanic eruption can be readily estimated from satellite observations, information about the source parameters, like injection altitude, eruption time, and duration, is often not directly known. The AeroCOM volcanic emissions inventory provides estimates of eruption source parameters and has been used to initialize volcanic emissions in reanalysis projects like MERRA. The AeroCOM volcanic emission inventory provides an eruption's daily SO2 flux and plume top altitude, yet an eruption can be very short lived, lasting only a few hours, and emit clouds at multiple altitudes. Case studies comparing the satellite-observed dispersal of volcanic SO2 clouds to simulations in MERRA have shown mixed results. Some cases, such as Okmok (2008), show good agreement with observations, while for other eruptions, such as Sierra Negra (2005), the observed initial SO2 mass is half of that in the simulations. In still other cases, such as Soufriere Hills (2006), the initial SO2 amount agrees with the observations but shows very different dispersal rates. In the aviation hazards community, deriving accurate source terms is crucial for monitoring and short-term (24-h) forecasting of volcanic clouds. Back-trajectory methods have been developed which use satellite observations and transport models to estimate the injection altitude, eruption time, and eruption duration of observed volcanic clouds. These methods can provide eruption timing estimates at a 2-hour temporal resolution and estimate the altitude and depth of a volcanic cloud. To better understand the differences between MERRA simulations and volcanic SO2 observations, back-trajectory methods are used to estimate the source term parameters for a few volcanic eruptions, and the results are compared to the corresponding entries in the AeroCOM volcanic emission inventory. The nature of the mixed results is discussed with respect to the source term estimates.

  10. Spatial and temporal variations in DOM composition in ecosystems: The importance of long-term monitoring of optical properties

    Treesearch

    R. Jaffe; D. McKnight; N. Maie; R. Cory; W. H. McDowell; J.L. Campbell

    2008-01-01

    Source, transformation, and preservation mechanisms of dissolved organic matter (DOM) remain elemental questions in contemporary marine and aquatic sciences and represent a missing link in models of global elemental cycles. Although the chemical character of DOM is central to its fate in the global carbon cycle, DOM characterizations in long-term ecological research...

  11. Modified two-sources quantum statistical model and multiplicity fluctuation in the finite rapidity region

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ghosh, D.; Sarkar, S.; Sen, S.

    1995-06-01

    In this paper the behavior of factorial moments with rapidity window size, which is usually explained in terms of "intermittency," has been interpreted through simple quantum statistical properties of the emitting system using the concept of the "modified two-source model" recently proposed by Ghosh and Sarkar [Phys. Lett. B 278, 465 (1992)]. The analysis has been performed using our own data on ¹⁶O-Ag/Br and ²⁴Mg-Ag/Br interactions in the few tens of GeV energy regime.
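
    For reference, the scaled factorial moments whose dependence on the rapidity window \delta y is at issue are defined in the standard way:

        \[ F_q(\delta y) = \frac{\langle n(n-1)\cdots(n-q+1) \rangle}{\langle n \rangle^{q}} \]

    where n is the multiplicity in the window; intermittency refers to power-law growth, F_q \propto (\delta y)^{-\varphi_q}, as the window shrinks.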

  12. Incorporating the eruptive history in a stochastic model for volcanic eruptions

    NASA Astrophysics Data System (ADS)

    Bebbington, Mark

    2008-08-01

    We show how a stochastic version of a general load-and-discharge model for volcanic eruptions can be implemented. The model tracks the history of the volcano through a quantity proportional to stored magma volume. Thus large eruptions can influence the activity rate for a considerable time afterward, rather than only during the next repose as in the time-predictable model. The model can be fitted to data using point-process methods. Applied to flank eruptions of Mount Etna, it exhibits possible long-term quasi-cyclic behavior; applied to Mauna Loa, it shows a long-term decrease in activity. An extension to multiple interacting sources is outlined, where the sources may be different eruption styles or locations, or different volcanoes. This can be used to identify an 'average interaction' between the sources. We find significant evidence that summit eruptions of Mount Etna are dependent on preceding flank eruptions, with flank and summit eruptions each being triggered by the other type. Fitted to Mauna Loa and Kilauea, the model showed a marginally significant relationship between eruptions of Mauna Loa and Kilauea, consistent with the invasion of the latter's plumbing system by magma from the former.
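
    Point-process fitting of this kind works through the conditional intensity function: given the eruption history H_t up to time t,

        \[ \lambda(t \mid H_t) = \lim_{\Delta t \to 0} \frac{\Pr\{\text{eruption in } [t, t + \Delta t) \mid H_t\}}{\Delta t} \]

    and the model parameters are chosen to maximize the resulting point-process log-likelihood. In a load-and-discharge model, \lambda would plausibly increase with the stored-volume proxy and drop after a large discharge; this framing is the standard one, not a quotation of the paper's exact formulation.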

  13. High order finite volume WENO schemes for the Euler equations under gravitational fields

    NASA Astrophysics Data System (ADS)

    Li, Gang; Xing, Yulong

    2016-07-01

    Euler equations with gravitational source terms are used to model many astrophysical and atmospheric phenomena. This system admits hydrostatic balance, in which the flux produced by the pressure is exactly canceled by the gravitational source term; two commonly seen equilibria are the isothermal and polytropic hydrostatic solutions. Exact preservation of these equilibria is desirable, as many practical problems are small perturbations of such balance. High order finite difference weighted essentially non-oscillatory (WENO) schemes have been proposed in [22], but only for the isothermal equilibrium state. In this paper, we design high order well-balanced finite volume WENO schemes which preserve not only the isothermal equilibrium but also the polytropic hydrostatic balance state exactly, and maintain genuine high order accuracy for general solutions. The well-balanced property is obtained by a novel source term reformulation and discretization, combined with well-balanced numerical fluxes. Extensive one- and two-dimensional simulations are performed to verify the well-balanced property, high order accuracy, and good resolution for smooth and discontinuous solutions.
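
    The balance being preserved is, in one dimension,

        \[ \frac{\partial p}{\partial x} = -\rho\, \frac{\partial \phi}{\partial x}, \]

    where \phi is the gravitational potential. For an isothermal gas (p = \rho R T_0) this gives \rho = \rho_0 \exp(-\phi / (R T_0)), while the polytropic case assumes p = K \rho^{\gamma}. A well-balanced scheme discretizes the pressure flux and the source term so that they cancel exactly for these states, leaving only genuine perturbations to be resolved.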

  14. Detailed source term estimation of the atmospheric release for the Fukushima Daiichi Nuclear Power Station accident by coupling simulations of atmospheric dispersion model with improved deposition scheme and oceanic dispersion model

    NASA Astrophysics Data System (ADS)

    Katata, G.; Chino, M.; Kobayashi, T.; Terada, H.; Ota, M.; Nagai, H.; Kajino, M.; Draxler, R.; Hort, M. C.; Malo, A.; Torii, T.; Sanada, Y.

    2014-06-01

    Temporal variations in the amount of radionuclides released into the atmosphere during the Fukushima Dai-ichi Nuclear Power Station (FNPS1) accident and their atmospheric and marine dispersion are essential to evaluate the environmental impacts and resultant radiological doses to the public. In this paper, we estimate a detailed time trend of atmospheric releases during the accident by combining environmental monitoring data with atmospheric model simulations from WSPEEDI-II (Worldwide version of System for Prediction of Environmental Emergency Dose Information) and simulations from the oceanic dispersion model SEA-GEARN-FDM, both developed by the authors. A sophisticated deposition scheme, which deals with dry and fogwater depositions, cloud condensation nuclei (CCN) activation, and subsequent wet scavenging due to mixed-phase cloud microphysics (in-cloud scavenging) for radioactive iodine gas (I2 and CH3I) and other particles (CsI, Cs, and Te), was incorporated into WSPEEDI-II to improve the surface deposition calculations. The fallout to the ocean surface calculated by WSPEEDI-II was used as input data for the SEA-GEARN-FDM calculations. Reverse and inverse source-term estimation methods based on coupling the simulations from both models were adopted, using air dose rates and concentrations, and sea surface concentrations. The results revealed that the major releases of radionuclides due to the FNPS1 accident occurred in the following periods during March 2011: the afternoon of 12 March, due to the wet venting and hydrogen explosion at Unit 1; the morning of 13 March, after the venting event at Unit 3; midnight of 14 March, when the SRV (Safety Relief Valve) at Unit 2 was opened three times; the morning and night of 15 March; and the morning of 16 March. According to the simulation results, the highest radioactive contamination areas around FNPS1 were created from 15 to 16 March by complicated interactions among rainfall, plume movements, and the temporal variation of release rates associated with reactor pressure changes in Units 2 and 3. The modified WSPEEDI-II simulation using the new source term reproduced the local and regional patterns of cumulative surface deposition of total 131I and 137Cs and the air dose rates obtained by airborne surveys. The new source term was also tested using three atmospheric dispersion models (MLDP0, HYSPLIT, and NAME) for regional and global calculations, which showed good agreement between calculated and observed air concentrations and surface deposition of 137Cs in East Japan. Moreover, the HYSPLIT model using the new source term also reproduced the plume arrivals at several countries abroad, showing a good correlation with measured air concentration data. A large part of the deposition pattern of total 131I and 137Cs in East Japan was explained by in-cloud particulate scavenging. However, for the regional-scale contaminated areas, there were large uncertainties due to the overestimation of rainfall amounts and the underestimation of fogwater and drizzle depositions. The computations showed that approximately 27% of the 137Cs discharged from FNPS1 was deposited on land in East Japan, mostly in forest areas.

  15. Effects of topography and crustal heterogeneities on the source estimation of LP event at Kilauea volcano

    USGS Publications Warehouse

    Cesca, S.; Battaglia, J.; Dahm, T.; Tessmer, E.; Heimann, S.; Okubo, P.

    2008-01-01

    The main goal of this study is to improve the modelling of the source mechanism associated with the generation of long period (LP) signals in volcanic areas. Our intent is to evaluate the effects that detailed structural features of the volcanic models play in the generation of the LP signal and the consequent retrieval of LP source characteristics. In particular, effects associated with the presence of topography and crustal heterogeneities are studied here in detail. We focus our study on an LP event observed at Kilauea volcano, Hawaii, in May 2001. A detailed analysis of this event and its source modelling is accompanied by a set of synthetic tests, which aim to evaluate the effects of topography and the presence of low velocity shallow layers in the source region. The forward problem of Green's function generation is solved numerically following a pseudo-spectral approach, assuming different 3-D models. The inversion is done in the frequency domain and the resulting source mechanism is represented by the sum of two time-dependent terms: a full moment tensor and a single force. Synthetic tests show how characteristic velocity structures, associated with shallow sources, may be partially responsible for the generation of the observed long-lasting ringing waveforms. When applying the inversion technique to the Kilauea LP data set, inversions carried out for different crustal models led to very similar source geometries, indicating a subhorizontal crack. On the other hand, the source time function and its duration are significantly different for different models. These results support the indication of a strong influence of crustal layering on the generation of the LP signal, while the assumption of a homogeneous velocity model may lead to misleading results.

  16. IR Image upconversion using band-limited ASE illumination fiber sources.

    PubMed

    Maestre, H; Torregrosa, A J; Capmany, J

    2016-04-18

    We study the field-of-view (FOV) of an upconversion imaging system that employs an Amplified Spontaneous Emission (ASE) fiber source to illuminate a transmission target. As an intermediate case between narrowband laser and thermal illumination, an ASE fiber source allows for higher spectral intensity than thermal illumination while still keeping a broad wavelength spectrum, taking advantage of an increased non-collinear phase-matching angle acceptance that enlarges the FOV of the upconversion system when compared to narrowband laser illumination. A model is presented to predict the angular acceptance of the upconverter in terms of focusing and ASE spectral width and allocation. The model is experimentally checked in the case of 1550-630 nm upconversion.

  17. Neutron crosstalk between liquid scintillators

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Verbeke, J. M.; Prasad, M. K.; Snyderman, N. J.

    2015-05-01

    We propose a method to quantify the fractions of neutrons scattering between liquid scintillators. Using a spontaneous fission source, this method can be utilized to quickly characterize an array of liquid scintillators in terms of crosstalk. The point model theory due to Feynman is corrected to account for these multiple scatterings. Using spectral information measured by the liquid scintillators, fractions of multiple scattering can be estimated, and the mass reconstruction of fissile materials under investigation can be improved. Monte Carlo simulations of mono-energetic neutron sources were performed to estimate neutron crosstalk. A californium source in an array of liquid scintillators was modeled to illustrate the improvement of the mass reconstruction.
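
    The point-model statistic being corrected is the Feynman variance-to-mean excess. For counts n recorded in a gate of width T,

        \[ Y(T) = \frac{\langle n^2 \rangle - \langle n \rangle^2}{\langle n \rangle} - 1, \]

    which vanishes for a Poisson source and grows with correlated fission-chain multiplicity; uncounted detector-to-detector scattering inflates the apparent correlations and thus biases the reconstructed mass unless the crosstalk fractions are estimated and removed.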

  18. Source effects on the simulation of the strong ground motion of the 2011 Lorca earthquake

    NASA Astrophysics Data System (ADS)

    Saraò, Angela; Moratto, Luca; Vuan, Alessandro; Mucciarelli, Marco; Jimenez, Maria Jose; Garcia Fernandez, Mariano

    2016-04-01

    On May 11, 2011, a moderate seismic event (Mw = 5.2) struck the city of Lorca (southeast Spain), causing nine fatalities, a large number of injuries, and damage to buildings. The largest PGA value ever recorded in Spain (360 cm/s2) was observed at the accelerometric station located in Lorca (LOR), and it was explained as due to source directivity rather than to local site effects. In recent years, different source models, retrieved from inversions of geodetic or seismological data, or a combination of the two, have been published. To investigate the variability that equivalent source models of the same earthquake can introduce in the computation of strong motion, we calculated seismograms (up to 1 Hz) using an approach based on wavenumber integration and, as input, four different source models taken from the literature. The source models differ mainly in the slip distribution on the fault. Our results show that, as an effect of the different sources, the ground motion variability, in terms of pseudo-spectral velocity at 1 s, can reach one order of magnitude for near-source receivers or for sites influenced by the forward-directivity effect. Finally, we compute the strong motion at frequencies higher than 1 Hz using Empirical Green Functions and the source model parameters that best reproduce the recorded shaking up to 1 Hz: the computed seismograms fit the signals recorded at the LOR station, as well as at the other stations close to the source, satisfactorily.

  19. Seasonal Phosphorus Sources and Loads to Upper Klamath Lake, Oregon, as Determined by a Dynamic SPARROW Model

    NASA Astrophysics Data System (ADS)

    Saleh, D.; Domagalski, J. L.; Smith, R. A.

    2016-12-01

    The SPARROW (SPAtially-Referenced Regression On Watershed Attributes) model, developed by the U.S. Geological Survey, has been used to identify and quantify the sources of nitrogen and phosphorus in watersheds and to predict their fluxes and concentration at specified locations downstream. Existing SPARROW models use a hybrid statistical approach to describe an annual average ("steady-state") relationship between sources and stream conditions based on long-term water quality monitoring data and spatially-referenced explanatory information. Although these annual models are useful for some management purposes, many water quality issues stem from intra- and inter-annual changes in constituent sources, hydrologic forcing, or other environmental conditions, which cause a lag between watershed inputs and stream water quality. We are developing a seasonal dynamic SPARROW model of sources, fluxes, and yields of phosphorus for the watershed (approximately 9,700 square kilometers) draining to Upper Klamath Lake, Oregon. The lake is hyper-eutrophic and various options are being considered for water quality improvement. The model was calibrated with 11 years of water quality data (2000 to 2010) and simulates seasonal loads and yields for a total of 44 seasons. Phosphorus sources to the watershed include animal manure, farm fertilizer, discharges of treated wastewater, and natural sources (soil and streambed sediment). The model predicts that phosphorus delivery to the lake is strongly affected by intra- and inter-annual changes in precipitation and by temporary seasonal storage of phosphorus in the watershed. The model can be used to predict how different management actions for mitigating phosphorus sources might affect phosphorus loading to the lake as well as the time required for any changes in loading to occur following implementation of the action.
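
    Schematically, SPARROW-type models express the mean flux leaving each stream reach as a sum of source inputs attenuated by land-to-water delivery and first-order in-stream loss; in a simplified notation (ours, not the exact USGS formulation),

        \[ F_i = \Big( \sum_n \beta_n\, S_{n,i}\, D_{n,i} \Big)\, e^{-k \tau_i}, \]

    where S_{n,i} is the input of source n in catchment i, \beta_n a fitted source coefficient, D_{n,i} a delivery factor, and \tau_i the in-stream travel time. The dynamic seasonal version adds time-varying inputs and temporary watershed storage between seasons.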

  20. Augmented classical least squares multivariate spectral analysis

    DOEpatents

    Haaland, David M.; Melgaard, David K.

    2004-02-03

    A method of multivariate spectral analysis, termed augmented classical least squares (ACLS), provides an improved CLS calibration model when unmodeled sources of spectral variation are contained in a calibration sample set. The ACLS methods use information derived from component or spectral residuals during the CLS calibration to provide an improved calibration-augmented CLS model. The ACLS methods are based on CLS so that they retain the qualitative benefits of CLS, yet they have the flexibility of PLS and other hybrid techniques in that they can define a prediction model even with unmodeled sources of spectral variation that are not explicitly included in the calibration model. The unmodeled sources of spectral variation may be unknown constituents, constituents with unknown concentrations, nonlinear responses, non-uniform and correlated errors, or other sources of spectral variation that are present in the calibration sample spectra. Also, since the various ACLS methods are based on CLS, they can incorporate the new prediction-augmented CLS (PACLS) method of updating the prediction model for new sources of spectral variation contained in the prediction sample set without having to return to the calibration process. The ACLS methods can also be applied to alternating least squares models. The ACLS methods can be applied to all types of multivariate data.
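
    The CLS calibration and residual-based augmentation can be sketched in a few lines of linear algebra (Python). The single-principal-component augmentation below is a generic illustration of the idea, not the patented procedure in full:

        import numpy as np

        # CLS calibration: spectra A (samples x channels) modeled as A = C @ K,
        # with C (samples x components) the known reference concentrations.
        rng = np.random.default_rng(1)
        C = rng.uniform(0.0, 1.0, size=(30, 3))             # calibration concentrations
        K_true = rng.normal(size=(3, 200))                  # "pure component" spectra
        A = C @ K_true + 0.01 * rng.normal(size=(30, 200))  # measured spectra

        K_hat = np.linalg.pinv(C) @ A       # classical least squares fit
        R = A - C @ K_hat                   # spectral residuals

        # Augment the model with the leading residual shape, standing in
        # for an unmodeled source of spectral variation.
        _, _, vt = np.linalg.svd(R, full_matrices=False)
        K_aug = np.vstack([K_hat, vt[0]])

        # Prediction: least-squares projection of a new spectrum onto K_aug;
        # the first three coefficients estimate component concentrations.
        a_new = np.array([0.2, 0.5, 0.1]) @ K_true
        c_est = a_new @ np.linalg.pinv(K_aug)
        print(c_est[:3])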

  3. Experimental study of the thermal-acoustic efficiency in a long turbulent diffusion-flame burner

    NASA Technical Reports Server (NTRS)

    Mahan, J. R.

    1983-01-01

    A two-year study of noise production in a long tubular burner is described. The research was motivated by an interest in understanding, and eventually reducing, core noise in gas turbine engines. The general approach is to employ an acoustic source/propagation model to interpret the sound pressure spectrum in the acoustic far field of the burner in terms of the source spectrum that must have produced it. In the model the sources are assumed to be due solely to the unsteady component of combustion heat release; thus only direct combustion noise is considered. The source spectrum is then the variation with frequency of the thermal-acoustic efficiency, defined as the fraction of combustion heat release which is converted into acoustic energy at a given frequency. The thrust of the research was to study the variation of the source spectrum with the design and operating parameters of the burner.
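
    In symbols (notation ours), the source spectrum is the frequency-resolved thermal-acoustic efficiency

        \[ \eta_{ta}(f) = \frac{W_{ac}(f)}{\dot{Q}_{comb}}, \]

    the acoustic power radiated in the frequency band at f divided by the combustion heat release rate, so integrating \eta_{ta}(f) over frequency gives the overall fraction of heat release converted to sound.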

  4. pyJac: Analytical Jacobian generator for chemical kinetics

    NASA Astrophysics Data System (ADS)

    Niemeyer, Kyle E.; Curtis, Nicholas J.; Sung, Chih-Jen

    2017-06-01

    Accurate simulations of combustion phenomena require the use of detailed chemical kinetics in order to capture limit phenomena such as ignition and extinction as well as predict pollutant formation. However, the chemical kinetic models for hydrocarbon fuels of practical interest typically have large numbers of species and reactions and exhibit high levels of mathematical stiffness in the governing differential equations, particularly for larger fuel molecules. In order to integrate the stiff equations governing chemical kinetics, reactive-flow simulations generally rely on implicit algorithms that require frequent Jacobian matrix evaluations. Some in situ and a posteriori computational diagnostics methods also require accurate Jacobian matrices, including computational singular perturbation and chemical explosive mode analysis. Typically, these are approximated numerically with finite differences, but for larger chemical kinetic models this poses significant computational demands, since the number of chemical source term evaluations scales with the square of the species count. Furthermore, existing analytical Jacobian tools do not optimize evaluations or support emerging SIMD processors such as GPUs. Here we introduce pyJac, a Python-based open-source program that generates analytical Jacobian matrices for use in chemical kinetics modeling and analysis. In addition to producing the necessary customized source code for evaluating reaction rates (including all modern reaction rate formulations), the chemical source terms, and the Jacobian matrix, pyJac uses an optimized evaluation order to minimize computational and memory operations. As a demonstration, we first establish the correctness of the Jacobian matrices for kinetic models of hydrogen, methane, ethylene, and isopentanol oxidation (with species counts ranging from 13 to 360) by showing agreement within 0.001% of matrices obtained via automatic differentiation. We then demonstrate the performance achievable on CPUs and GPUs using pyJac via matrix evaluation timing comparisons; the routines produced by pyJac outperformed first-order finite differences by 3-7.5 times and the existing analytical Jacobian software TChem by 1.1-2.2 times on a single-threaded basis. It is noted that TChem is not thread-safe, while pyJac is easily parallelized, and hence can greatly outperform TChem on multicore CPUs. The Jacobian matrix generator we describe here will be useful for reducing the cost of integrating chemical source terms with implicit algorithms in particular, and algorithms that require an accurate Jacobian matrix in general. Furthermore, the open-source release of the program and its Python-based implementation will enable wide adoption.
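
    The quadratic cost that motivates an analytical Jacobian is easy to see in a first-order finite-difference approximation, which needs one extra source-term evaluation per species. The toy source term below is a placeholder, not a real kinetic mechanism:

        import numpy as np

        def source(y):
            # Placeholder chemical source term f(y); a real mechanism would
            # evaluate reaction rates for every species here.
            return -y + 0.1 * y[::-1]

        def fd_jacobian(f, y, eps=1.0e-8):
            # First-order finite differences: N+1 evaluations of f for N
            # species, each of cost O(N), so the Jacobian costs O(N^2)
            # relative to a single right-hand-side call.
            n = y.size
            f0 = f(y)
            jac = np.empty((n, n))
            for j in range(n):
                yp = y.copy()
                yp[j] += eps
                jac[:, j] = (f(yp) - f0) / eps
            return jac

        y = np.linspace(0.1, 1.0, 50)   # notional species state vector
        J = fd_jacobian(source, y)
        print(J.shape)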

  5. Soundscapes

    DTIC Science & Technology

    2013-09-30

    STATEMENT A. Approved for public release; distribution is unlimited. Soundscapes. Michael B... models to provide hindcasts, nowcasts, and forecasts of the time-evolving soundscape. In terms of the types of sound sources, we will focus initially on... APPROACH: The research has two principal thrusts: 1) the modeling of the soundscape, and 2) verification using datasets that have been collected

  6. Soundscapes

    DTIC Science & Technology

    2012-09-30

    STATEMENT A. Approved for public release; distribution is unlimited. Soundscapes. Michael B... models to provide hindcasts, nowcasts, and forecasts of the time-evolving soundscape. In terms of the types of sound sources, we will focus initially on... APPROACH: The research has two principal thrusts: 1) the modeling of the soundscape, and 2) verification using datasets that have been collected

  7. Depolarization on Earth-space paths

    NASA Technical Reports Server (NTRS)

    1981-01-01

    Sources of depolarization effects on the propagation paths of orthogonally-polarized information channels are considered. The main sources of depolarization at millimeter wave frequencies are hydrometeor absorption and scattering in the troposphere. Terms are defined. Mathematical formulations for the effects of the propagation medium characteristics and antenna performance on signals in dual polarization Earth-space links are presented. Techniques for modeling rain and ice depolarization are discussed.

  8. Steady-state solution of the semi-empirical diffusion equation for area sources. [air pollution studies

    NASA Technical Reports Server (NTRS)

    Lebedeff, S. A.; Hameed, S.

    1975-01-01

    The problem investigated can be solved exactly in a simple manner if the equations are written in terms of a similarity variable. The exact solution is used to explore two questions of interest in the modelling of urban air pollution, taking into account the distribution of surface concentration downwind of an area source and the distribution of concentration with height.

  9. A two-dimensional analytical model of vapor intrusion involving vertical heterogeneity.

    PubMed

    Yao, Yijun; Verginelli, Iason; Suuberg, Eric M

    2017-05-01

    In this work, we present an analytical chlorinated vapor intrusion (CVI) model that can estimate source-to-indoor-air concentration attenuation by simulating the two-dimensional (2-D) vapor concentration profile in vertically heterogeneous soils overlying a homogenous vapor source. The analytical solution describing the 2-D soil gas transport was obtained by applying a modified Schwarz-Christoffel mapping method. A partial field validation showed that the developed model provides results (especially in terms of indoor emission rates) in line with the measured data from a case involving a building overlying a layered soil. In further testing, it was found that the new analytical model can very closely replicate the results of three-dimensional (3-D) numerical models at steady state in scenarios involving layered soils overlying homogenous groundwater sources. By contrast, by adopting a two-layer approach (capillary fringe and vadose zone) as employed in the EPA implementation of the Johnson and Ettinger model, the spatially and temporally averaged indoor concentrations in the case of groundwater sources can be up to two orders of magnitude higher than those estimated by the numerical model. In short, the model proposed in this work represents an easy-to-use tool that can simulate the subsurface soil gas concentration in layered soils overlying a homogenous vapor source while keeping the simplicity of an analytical approach that requires much less computational effort.
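
    The quantity being estimated is the vapor-intrusion attenuation factor, conventionally defined as

        \[ \alpha = \frac{C_{\mathrm{indoor}}}{C_{\mathrm{source}}}, \]

    the ratio of the indoor-air vapor concentration to the source vapor concentration, so a smaller \alpha means stronger attenuation along the subsurface-to-indoor pathway.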

  10. The Study of High-Speed Surface Dynamics Using a Pulsed Proton Beam

    NASA Astrophysics Data System (ADS)

    Buttler, William; Stone, Benjamin; Oro, David; Dimonte, Guy; Preston, Dean; Cherne, Frank; Germann, Timothy; Terrones, Guillermo; Tupa, Dale

    2011-06-01

    Los Alamos National Laboratory is presently engaged in the development and implementation of ejecta source term and transport models for integration into LANL hydrodynamic computer codes. Experimental support for the effort spans a broad array of activities, including ejecta source term measurements from machine-roughened Sn surfaces shocked by HE or flyer plates. Because the underlying postulate for ejecta formation is that ejecta are characterized by Richtmyer-Meshkov instability (RMI) phenomena, a key element of the theory and modeling effort centers on validation and verification RMI experiments at the LANSCE Proton Radiography Facility (pRad) to compare with modeled ejecta measurements. Here we present experimental results used to define and validate a physics-based ejecta model, together with remarkable, unexpected results on Sn instability growth in vacuum and gasses, and Sn and Cu RM growth that reveals the sensitivity of the RM instability to the yield strength of the material (Cu). The motivation of this last subject, RM growth linked to material strength, is to probe the shock pressure regions over which ejecta begin to form.

  11. Verification and Improvement of Flamelet Approach for Non-Premixed Flames

    NASA Technical Reports Server (NTRS)

    Zaitsev, S.; Buriko, Yu.; Guskov, O.; Kopchenov, V.; Lubimov, D.; Tshepin, S.; Volkov, D.

    1997-01-01

    Studies in the mathematical modeling of high-speed turbulent combustion have received renewed attention in recent years. A review of the fundamentals and approaches, with an extensive bibliography, was presented by Bray, Libby, and Williams. In order to obtain accurate predictions for turbulent combustible flows, the effects of turbulent fluctuations on the chemical source terms should be taken into account. Averaging the chemical source terms requires a probability density function (PDF) model. Two main approaches currently dominate high-speed combustion modeling. In the first approach, the PDF form is assumed, based on the intuition of the modelers (see, for example, Spiegler et al.; Girimaji; Baurle et al.). The second way is much more elaborate and is based on the solution of an evolution equation for the PDF. This approach was proposed by S. Pope for incompressible flames. Recently, it was modified for the modeling of compressible flames in the studies of Farschi; Hsu; Hsu, Raji, and Norris; and Eifer and Kollman. But its realization in CFD is extremely expensive computationally due to the high dimensionality of the PDF evolution equation (Baurle, Hsu, Hassan).
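
    The averaging that makes a PDF model necessary enters through the chemical source term: for a source \dot{\omega} depending on a composition vector \psi, the mean source is

        \[ \overline{\dot{\omega}} = \int \dot{\omega}(\psi)\, P(\psi)\, d\psi, \]

    which cannot be evaluated from the mean composition alone because \dot{\omega} is strongly nonlinear; the presumed-PDF and transported-PDF approaches differ only in how P(\psi) is obtained.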

  12. Groundwater Pollution Source Identification using Linked ANN-Optimization Model

    NASA Astrophysics Data System (ADS)

    Ayaz, Md; Srivastava, Rajesh; Jain, Ashu

    2014-05-01

    Groundwater is the principal source of drinking water in several parts of the world. Contamination of groundwater has become a serious health and environmental problem today. Human activities, including industrial and agricultural activities, are generally responsible for this contamination. Identification of the groundwater pollution source is a major step in groundwater pollution remediation. Complete knowledge of a pollution source in terms of its source characteristics is essential to adopt an effective remediation strategy. A groundwater pollution source is said to be identified completely when the source characteristics - location, strength, and release period - are known. Identification of an unknown groundwater pollution source is an ill-posed inverse problem. It becomes more difficult under real field conditions, when the lag time between the first reading at the observation well and the time at which the source becomes active is not known. We developed a linked ANN-Optimization model for complete identification of an unknown groundwater pollution source. The model comprises two parts: an optimization model and an ANN model. The decision variables of the linked ANN-Optimization model are the source location and the release period of the pollution source. An objective function is formulated using the spatial and temporal data of observed and simulated concentrations, and then minimized to identify the pollution source parameters. The formulation of the objective function requires the lag time, which is not known. An ANN model with one hidden layer is trained using the Levenberg-Marquardt algorithm to find the lag time. Different combinations of source locations and release periods are used as inputs, and the lag time is obtained as the output. Performance of the proposed model is evaluated for two- and three-dimensional cases with error-free and erroneous data. Erroneous data were generated by adding uniformly distributed random error (error level 0-10%) to the analytically computed concentration values. The main advantage of the proposed model is that it requires only the upper half of the breakthrough curve and is capable of predicting source parameters when the lag time is not known. Linking the ANN model with the proposed optimization model reduces the dimensionality of the decision variables of the optimization model by one, and hence the complexity of the optimization model is reduced. The results show that the proposed linked ANN-Optimization model is able to predict the source parameters accurately for error-free data. The proposed model was run several times to obtain the mean, standard deviation, and interval estimate of the predicted parameters for observations with random measurement errors. It was observed that the mean values predicted by the model were quite close to the exact values. An increasing trend was observed in the standard deviation of the predicted values with increasing level of measurement error. The model appears to be robust and may be efficiently utilized to solve the inverse pollution source identification problem.
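
    A typical least-squares objective for this inverse problem (notation ours; the paper's exact weighting may differ) is

        \[ \min_{\mathbf{x}_s,\, q,\, t_r} \; \sum_{i=1}^{N_{\mathrm{obs}}} \sum_{k=1}^{N_t} \big[ C^{\mathrm{obs}}_{i}(t_k) - C^{\mathrm{sim}}_{i}(t_k;\, \mathbf{x}_s, q, t_r) \big]^2, \]

    minimized over the source location \mathbf{x}_s, strength q, and release period t_r, with the ANN supplying the lag time needed to align the simulated and observed breakthrough curves.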

  13. Toward a Mechanistic Source Term in Advanced Reactors: Characterization of Radionuclide Transport and Retention in a Sodium Cooled Fast Reactor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brunett, Acacia J.; Bucknor, Matthew; Grabaskas, David

    A vital component of the U.S. reactor licensing process is an integrated safety analysis in which a source term representing the release of radionuclides during normal operation and accident sequences is analyzed. Historically, source term analyses have utilized bounding, deterministic assumptions regarding radionuclide release. However, advancements in technical capabilities and the knowledge state have enabled the development of more realistic and best-estimate retention and release models such that a mechanistic source term assessment can be expected to be a required component of future licensing of advanced reactors. Recently, as part of a Regulatory Technology Development Plan effort for sodium cooled fast reactors (SFRs), Argonne National Laboratory has investigated the current state of knowledge of potential source terms in an SFR via an extensive review of previous domestic experiments, accidents, and operation. As part of this work, the significant sources and transport processes of radionuclides in an SFR have been identified and characterized. This effort examines all stages of release and source term evolution, beginning with release from the fuel pin and ending with retention in containment. Radionuclide sources considered in this effort include releases originating both in-vessel (e.g., in-core fuel, primary sodium, cover gas cleanup system, etc.) and ex-vessel (e.g., spent fuel storage, handling, and movement). Releases resulting from a primary sodium fire are also considered as a potential source. For each release group, dominant transport phenomena are identified and qualitatively discussed. The key product of this effort was the development of concise, inclusive diagrams that illustrate the release and retention mechanisms at a high level, where unique schematics have been developed for in-vessel, ex-vessel, and sodium fire releases. This review effort has also found that despite the substantial range of phenomena affecting radionuclide release, the current state of knowledge is extensive, and in most areas may be sufficient. Several knowledge gaps were identified, such as uncertainty in release from molten fuel and availability of thermodynamic data for lanthanides and actinides in liquid sodium. However, the overall findings suggest that high retention rates can be expected within the fuel and primary sodium for all radionuclides other than noble gases.

  14. An initial SPARROW model of land use and in-stream controls on total organic carbon in streams of the conterminous United States

    USGS Publications Warehouse

    Shih, Jhih-Shyang; Alexander, Richard B.; Smith, Richard A.; Boyer, Elizabeth W.; Shwarz, Grogory E.; Chung, Susie

    2010-01-01

    Watersheds play many important roles in the carbon cycle: (1) they are a site for both terrestrial and aquatic carbon dioxide (CO2) removal through photosynthesis; (2) they transport living and decomposing organic carbon in streams and groundwater; and (3) they store organic carbon for widely varying lengths of time as a function of many biogeochemical factors. Using the U.S. Geological Survey (USGS) Spatially Referenced Regression on Watershed Attributes (SPARROW) model, along with long-term monitoring data on total organic carbon (TOC), this research quantitatively estimates the sources, transport, and fate of the long-term mean annual load of TOC in streams of the conterminous United States. The model simulations use surrogate measures of the major terrestrial and aquatic sources of organic carbon to estimate the long-term mean annual load of TOC in streams. The estimated carbon sources in the model are associated with four land uses (urban, cultivated, forest, and wetlands) and autochthonous fixation of carbon (stream photosynthesis). Stream photosynthesis is determined by reach-level application of an empirical model of stream chlorophyll based on total phosphorus concentration, and a mechanistic model of photosynthetic rate based on chlorophyll, average daily solar irradiance, water column light attenuation, and reach dimensions. It was found that the estimate of in-stream photosynthesis is a major contributor to the mean annual TOC load per unit of drainage area (that is, yield) in large streams, with a median share of about 60 percent of the total mean annual carbon load in streams with mean flows above 500 cubic feet per second. The interquartile range of the model predictions of TOC from in-stream photosynthesis is from 0.1 to 0.4 g C m-2 day-1 for the approximately 62,000 stream reaches in the continental United States, which compares favorably with the reported literature range for net carbon fixation by phytoplankton in lakes and streams. The largest contributors per unit of drainage area to the mean annual stream TOC load among the terrestrial sources are, in descending order: wetlands, urban lands, mixed forests, agricultural lands, evergreen forests, and deciduous forests. It was found that the SPARROW model estimates of TOC contributions to streams associated with these land uses are also consistent with literature estimates. SPARROW model calibration results are used to simulate the delivery of TOC loads to the coastal areas of seven major regional drainages. It was found that stream photosynthesis is the largest source of the TOC yields (about 50 percent) delivered to the coastal waters in two of the seven regional drainages (the Pacific Northwest and Mississippi-Atchafalaya-Red River basins), whereas terrestrial sources are dominant (greater than 60 percent) in all other regions (North Atlantic, South Atlantic-Gulf, California, Texas-Gulf, and Great Lakes).

  15. Evaluation of an unsteady flamelet progress variable model for autoignition and flame development in compositionally stratified mixtures

    NASA Astrophysics Data System (ADS)

    Mukhopadhyay, Saumyadip; Abraham, John

    2012-07-01

    The unsteady flamelet progress variable (UFPV) model has been proposed by Pitsch and Ihme ["An unsteady/flamelet progress variable method for LES of nonpremixed turbulent combustion," AIAA Paper No. 2005-557, 2005] for modeling the averaged/filtered chemistry source terms in Reynolds averaged simulations and large eddy simulations of reacting non-premixed combustion. In the UFPV model, a look-up table of source terms is generated as a function of mixture fraction Z, scalar dissipation rate χ, and progress variable C by solving the unsteady flamelet equations. The assumption is that the unsteady flamelet represents the evolution of the reacting mixing layer in the non-premixed flame. We assess the accuracy of the model in predicting autoignition and flame development in compositionally stratified n-heptane/air mixtures using direct numerical simulations (DNS). The focus in this work is primarily on assessing the accuracy of the probability density functions (PDFs) employed for obtaining averaged source terms. The performance of commonly employed presumed functions, such as the Dirac delta distribution function, the β distribution function, and the statistically most likely distribution (SMLD) approach, in approximating the shapes of the PDFs of the reactive and the conserved scalars is evaluated. For unimodal distributions, it is observed that functions that need two-moment information, e.g., the β distribution function and the SMLD approach with two-moment closure, are able to reasonably approximate the actual PDF. As the distribution becomes multimodal, higher moment information is required. Differences are observed between the ignition trends obtained from DNS and those predicted by the look-up table, especially for smaller gradients where the flamelet assumption becomes less applicable. The formulation assumes that the shape of the χ(Z) profile can be modeled by an error function which remains unchanged in the presence of heat release. We show that this assumption is not accurate.
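
    Of the presumed forms evaluated, the β distribution is fixed by the first two moments of the mixture fraction; with mean \tilde{Z} and variance \widetilde{Z''^2},

        \[ \tilde{P}(Z) = \frac{Z^{a-1} (1-Z)^{b-1}}{B(a,b)}, \qquad a = \tilde{Z}\gamma, \quad b = (1-\tilde{Z})\gamma, \quad \gamma = \frac{\tilde{Z}(1-\tilde{Z})}{\widetilde{Z''^2}} - 1, \]

    which is why two-moment closures can capture unimodal statistics but require higher-moment information once the true PDF becomes multimodal.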

  16. Short-term energy outlook. Volume 2. Methodology

    NASA Astrophysics Data System (ADS)

    1983-05-01

    Recent changes in forecasting methodology for nonutility distillate fuel oil demand and for the near-term petroleum forecasts are discussed. The accuracy of previous short-term forecasts of most of the major energy sources published in the last 13 issues of the Outlook is evaluated. Macroeconomic and weather assumptions are included in this evaluation. Energy forecasts for 1983 are compared. Structural change in US petroleum consumption, the use of appropriate weather data in energy demand modeling, and petroleum inventories, imports, and refinery runs are discussed.

  17. Comparisons of thermospheric density data sets and models

    NASA Astrophysics Data System (ADS)

    Doornbos, Eelco; van Helleputte, Tom; Emmert, John; Drob, Douglas; Bowman, Bruce R.; Pilinski, Marcin

    During the past decade, continuous long-term data sets of thermospheric density have become available to researchers. These data sets have been derived from accelerometer measurements made by the CHAMP and GRACE satellites and from Space Surveillance Network (SSN) tracking data and related Two-Line Element (TLE) sets. These data have already resulted in a large number of publications on physical interpretation and improvement of empirical density modelling. This study compares four different density data sets and two empirical density models, for the period 2002-2009. These data sources are the CHAMP (1) and GRACE (2) accelerometer measurements, the long-term database of densities derived from TLE data (3), the High Accuracy Satellite Drag Model (4) run by Air Force Space Command, calibrated using SSN data, and the NRLMSISE-00 (5) and Jacchia-Bowman 2008 (6) empirical models. In describing these data sets and models, specific attention is given to differences in the geometrical and aerodynamic satellite modelling, applied in the conversion from drag to density measurements, which are main sources of density biases. The differences in temporal and spatial resolution of the density data sources are also described and taken into account. With these aspects in mind, statistics of density comparisons have been computed, both as a function of solar and geomagnetic activity levels, and as a function of latitude and local solar time. These statistics give a detailed view of the relative accuracy of the different data sets and of the biases between them. The differences are analysed with the aim of providing rough error bars on the data and models and pinpointing issues which could receive attention in future iterations of data processing algorithms and in future model development.

  18. Learning Discriminative Sparse Models for Source Separation and Mapping of Hyperspectral Imagery

    DTIC Science & Technology

    2010-10-01

    allowing spectroscopic analysis. The data acquired by these spectrometers play significant roles in biomedical, environmental, land-survey, and... noisy in nature, so there are differences between the true and the observed signals. In addition, there are distortions associated with atmosphere... handwriting classification, showing advantages of using both terms instead of only using the reconstruction term as in previous approaches. C. Dictionary

  19. Deterministic Impulsive Vacuum Foundations for Quantum-Mechanical Wavefunctions

    NASA Astrophysics Data System (ADS)

    Valentine, John S.

    2013-09-01

    By assuming that a fermion de-constitutes immediately at source, that its constituents, as bosons, propagate uniformly as scalar vacuum terms with phase (radial) symmetry, and that fermions are unique solutions for specific phase conditions, we find a model that self-quantizes matter from continuous waves, unifying boson and fermion ontologies in a single basis, in a constitution-invariant process. Vacuum energy has a wavefunction context, as a mass-energy term that enables wave collapse and increases its amplitude, with the gravitational field as the gradient of the flux density. Gravitational and charge-based force effects emerge as statistics without special treatment. Confinement, entanglement, vacuum statistics, forces, and wavefunction terms all emerge from the model's deterministic foundations.

  20. Comparative evaluation of statistical and mechanistic models of Escherichia coli at beaches in southern Lake Michigan

    USGS Publications Warehouse

    Safaie, Ammar; Wendzel, Aaron; Ge, Zhongfu; Nevers, Meredith; Whitman, Richard L.; Corsi, Steven R.; Phanikumar, Mantha S.

    2016-01-01

    Statistical and mechanistic models are popular tools for predicting the levels of indicator bacteria at recreational beaches. Researchers tend to use one class of model or the other, and it is difficult to generalize statements about their relative performance due to differences in how the models are developed, tested, and used. We describe a cooperative modeling approach for freshwater beaches impacted by point sources in which insights derived from mechanistic modeling were used to further improve the statistical models and vice versa. The statistical models provided a basis for assessing the mechanistic models, which were further improved using probability distributions to generate high-resolution time series data at the source, long-term “tracer” transport modeling based on observed electrical conductivity, better assimilation of meteorological data, and the use of unstructured grids to better resolve nearshore features. This approach resulted in improved models of comparable performance for both classes, including a parsimonious statistical model suitable for real-time predictions based on an easily measurable environmental variable (turbidity). The modeling approach outlined here can be used at other sites impacted by point sources and has the potential to improve water quality predictions, resulting in more accurate estimates of beach closures.
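
    A parsimonious turbidity-based statistical model of the kind mentioned above can be sketched as a log-log regression. This is a hedged illustration, not the study's fitted model: the coefficients come from whatever data are supplied, and the 235 CFU/100 mL beach-action threshold is shown only as a plausible default.

      # Sketch: regress log10 E. coli on log10 turbidity, then flag exceedances.
      import numpy as np

      def fit_loglinear(turbidity, ecoli):
          """Least-squares fit of log10(ecoli) = b0 + b1*log10(turbidity)."""
          b1, b0 = np.polyfit(np.log10(turbidity), np.log10(ecoli), 1)
          return b0, b1

      def predict_exceedance(turbidity, b0, b1, threshold=235.0):
          """Flag samples whose predicted E. coli exceeds a beach-action value."""
          pred = 10 ** (b0 + b1 * np.log10(turbidity))
          return pred > threshold

      # Synthetic demonstration data only.
      rng = np.random.default_rng(2)
      turb = 10 ** rng.uniform(0, 2, 200)  # NTU
      ecoli = 10 ** (1.0 + 0.9 * np.log10(turb) + 0.3 * rng.standard_normal(200))
      b0, b1 = fit_loglinear(turb, ecoli)
      print(b0, b1, predict_exceedance(np.array([50.0, 300.0]), b0, b1))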

  1. Effects of Voice Coding and Speech Rate on a Synthetic Speech Display in a Telephone Information System

    DTIC Science & Technology

    1988-05-01

    ...Figure 2. Original limited-capacity channel model (From Broadbent, 1958)... Figure 3. Experimental... unlimited variety of human voices for digital recording sources. Synthesis by Analysis: analysis-synthesis methods electronically model the human voice

  2. Assessment of source-specific health effects associated with an unknown number of major sources of multiple air pollutants: a unified Bayesian approach.

    PubMed

    Park, Eun Sug; Hopke, Philip K; Oh, Man-Suk; Symanski, Elaine; Han, Daikwon; Spiegelman, Clifford H

    2014-07-01

    There has been increasing interest in assessing health effects associated with multiple air pollutants emitted by specific sources. A major difficulty with achieving this goal is that the pollution source profiles are unknown and source-specific exposures cannot be measured directly; rather, they need to be estimated by decomposing ambient measurements of multiple air pollutants. This estimation process, called multivariate receptor modeling, is challenging because of the unknown number of sources and unknown identifiability conditions (model uncertainty). The uncertainty in source-specific exposures (source contributions), as well as uncertainty in the number of major pollution sources and identifiability conditions, has been largely ignored in previous studies. This paper presents a multipollutant approach that can deal with model uncertainty in multivariate receptor models while simultaneously accounting for parameter uncertainty in estimated source-specific exposures in the assessment of source-specific health effects. The methods are applied to daily ambient air measurements of the chemical composition of fine particulate matter (PM2.5), weather data, and counts of cardiovascular deaths from 1995 to 1997 for Phoenix, AZ, USA. Our approach for evaluating source-specific health effects yields not only estimates of source contributions along with their uncertainties and associated health effects estimates, but also estimates of model uncertainty (posterior model probabilities) that have been ignored in previous studies. The results from our methods agreed in general with those from previously conducted workshops/studies on the source apportionment of PM health effects in terms of the number of major contributing sources, estimated source profiles, and contributions. However, some of the adverse source-specific health effects identified in the previous studies were not statistically significant in our analysis, probably because we incorporated into the estimation of health effects parameters the parameter uncertainty in estimated source contributions that had been ignored in the previous studies.
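
    As a point of contrast with the full Bayesian treatment, the core of multivariate receptor modeling can be illustrated with a simple non-negative matrix factorization, which fixes the number of sources and ignores the identifiability and parameter uncertainty that the paper addresses. The sketch uses standard Lee-Seung multiplicative updates; all names and data are illustrative.

      # Sketch: factor an ambient-concentration matrix X (days x species) into
      # source contributions G and source profiles F, with X ~ G @ F.
      import numpy as np

      def nmf_receptor(X, n_sources, n_iter=2000, eps=1e-9, seed=0):
          rng = np.random.default_rng(seed)
          n_days, n_species = X.shape
          G = rng.random((n_days, n_sources))
          F = rng.random((n_sources, n_species))
          for _ in range(n_iter):  # Lee-Seung multiplicative updates
              F *= (G.T @ X) / (G.T @ G @ F + eps)
              G *= (X @ F.T) / (G @ F @ F.T + eps)
          return G, F

      X = np.random.default_rng(3).random((100, 8))  # synthetic data only
      G, F = nmf_receptor(X, n_sources=3)
      print(np.linalg.norm(X - G @ F) / np.linalg.norm(X))  # relative residual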

  3. A time reversal algorithm in acoustic media with Dirac measure approximations

    NASA Astrophysics Data System (ADS)

    Bretin, Élie; Lucas, Carine; Privat, Yannick

    2018-04-01

    This article is devoted to the study of a photoacoustic tomography model, where one is led to consider the solution of the acoustic wave equation with a source term written as a separated-variables function of time and space, whose temporal component is, in some sense, close to the derivative of the Dirac distribution at t = 0. This models a continuous-wave laser illumination performed during a short interval of time. We introduce an algorithm for reconstructing the space component of the source term from measurements of the solution recorded by sensors over a time T along the boundary of a connected bounded domain. It is based on the introduction of an auxiliary equivalent Cauchy problem, which allows an explicit reconstruction formula to be derived, followed by a deconvolution procedure. Numerical simulations illustrate our approach. Finally, this algorithm is also extended to elasticity wave systems.

  4. Volume Averaging Study of the Capacitive Deionization Process in Homogeneous Porous Media

    DOE PAGES

    Gabitto, Jorge; Tsouris, Costas

    2015-05-05

    Ion storage in porous electrodes is important in applications such as energy storage by supercapacitors, water purification by capacitive deionization, extraction of energy from a salinity difference, and heavy ion purification. In this paper, a model is presented to simulate the charge process in homogeneous porous media containing large pores. It is based on a theory for capacitive charging by ideally polarizable porous electrodes without faradaic reactions or specific adsorption of ions. A volume averaging technique is used to derive the averaged transport equations in the limit of thin electrical double layers. Transport between the electrolyte solution and the charged wall is described using the Gouy–Chapman–Stern model. The effective transport parameters for isotropic porous media are calculated by solving the corresponding closure problems. Finally, the source terms that appear in the averaged equations are calculated using numerical computations, and an alternative way to deal with the source terms is proposed.

  5. Evaluation of sensor, environment and operational factors impacting the use of multiple sensor constellations for long term resource monitoring

    NASA Astrophysics Data System (ADS)

    Rengarajan, Rajagopalan

    Moderate resolution remote sensing data offers the potential to monitor long- and short-term trends in the condition of the Earth's resources at finer spatial scales and over longer time periods. While improved calibration (radiometric and geometric), free access (Landsat, Sentinel, CBERS), and higher-level products in reflectance units have made it easier for the science community to derive biophysical parameters from these remotely sensed data, a number of issues still affect the analysis of multi-temporal datasets. These are primarily due to sources that are inherent in the process of imaging from single or multiple sensors. Some of these undesired or uncompensated sources of variation include variation in the view angles, illumination angles, atmospheric effects, and sensor effects such as Relative Spectral Response (RSR) variation between different sensors. The complex interaction of these sources of variation would make their study extremely difficult, if not impossible, with real data; therefore, a simulated analysis approach is used in this study. A synthetic forest canopy is produced using the Digital Imaging and Remote Sensing Image Generation (DIRSIG) model and its measured BRDFs are modeled using the RossLi canopy BRDF model. The simulated BRDF matches the real data to within 2% of the reflectance in the red and the NIR spectral bands studied. The BRDF modeling process is extended to model and characterize the defoliation of a forest, which is used in factor sensitivity studies to estimate the effect of each factor for varying environment and sensor conditions. Finally, a factorial experiment is designed to assess the significance of the sources of variation, and regression-based analyses are performed to understand the relative importance of the factors. The designed experiment and the sensitivity analysis conclude that atmospheric attenuation and variations due to the illumination angles are the dominant sources impacting the at-sensor radiance.

  6. A multi-source data assimilation framework for flood forecasting: Accounting for runoff routing lags

    NASA Astrophysics Data System (ADS)

    Meng, S.; Xie, X.

    2015-12-01

    In flood forecasting practice, model performance is usually degraded by various sources of uncertainty, including uncertainties from input data, model parameters, model structures, and output observations. Data assimilation is a useful methodology to reduce uncertainties in flood forecasting. For short-term flood forecasting, an accurate estimate of the initial soil moisture condition will improve forecasting performance. The time delay of runoff routing is another important influence on forecasting performance. Moreover, observations of hydrological variables (including ground observations and satellite observations) are becoming easily available, so the reliability of short-term flood forecasting could be improved by assimilating multi-source data. The objective of this study is to develop a multi-source data assimilation framework for real-time flood forecasting. In this framework, the first step assimilates up-layer soil moisture observations to update the model state and generated runoff using the ensemble Kalman filter (EnKF), and the second step assimilates discharge observations to update the model state and runoff within a fixed time window using the ensemble Kalman smoother (EnKS). The smoothing technique is adopted to account for the runoff routing lag. Assimilating soil moisture and discharge observations in this way is expected to improve flood forecasting. To isolate the effectiveness of this dual-step assimilation framework, we designed a dual-EnKF algorithm in which the observed soil moisture and discharge are assimilated separately without accounting for the runoff routing lag. The results show that the multi-source data assimilation framework can effectively improve flood forecasting, especially when the runoff routing has a distinct time lag. Thus, this new data assimilation framework holds great potential in operational flood forecasting by merging observations from ground measurements and remote sensing retrievals.
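
    The first assimilation step rests on the standard EnKF analysis equation. The following is a generic sketch of a stochastic EnKF update with perturbed observations, assuming a linear observation operator H; it is not the authors' code, and the dimensions and error levels are placeholders.

      # Sketch: one stochastic EnKF analysis step.
      import numpy as np

      def enkf_update(ensemble, obs, obs_err_std, H):
          """ensemble: (n_state, n_members); obs: (n_obs,); H: (n_obs, n_state)."""
          n_obs, n_mem = len(obs), ensemble.shape[1]
          A = ensemble - ensemble.mean(axis=1, keepdims=True)   # anomalies
          HA = H @ A
          P_hh = HA @ HA.T / (n_mem - 1) + np.diag(np.full(n_obs, obs_err_std**2))
          K = (A @ HA.T / (n_mem - 1)) @ np.linalg.inv(P_hh)    # Kalman gain
          rng = np.random.default_rng(1)
          perturbed = obs[:, None] + obs_err_std * rng.standard_normal((n_obs, n_mem))
          return ensemble + K @ (perturbed - H @ ensemble)

      # Toy example: 10 soil moisture layers, 50 members, top layer observed.
      rng = np.random.default_rng(4)
      ens = 0.3 + 0.05 * rng.standard_normal((10, 50))
      H = np.zeros((1, 10)); H[0, 0] = 1.0
      print(enkf_update(ens, np.array([0.35]), 0.02, H).mean(axis=1)[:3])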

  7. Reply by the Authors to C. K. W. Tam

    NASA Technical Reports Server (NTRS)

    Morris, Philip J.; Farassat, F.

    2002-01-01

    The prediction of noise generation and radiation by turbulence has been the subject of continuous research for over fifty years. The essential problem is how to model the noise sources when one's knowledge of the detailed space-time properties of the turbulence is limited. We attempted to provide a comparison of models based on acoustic analogies and recent alternative models. Our goal was to demonstrate that the predictive capabilities of any model are based on the choice of the turbulence property that is modeled as a source of noise. Our general definition of an acoustic analogy is a rearrangement of the equations of motion into the form L(u) = Q, where L is a linear operator that reduces to an acoustic propagation operator outside a region upsilon; u is a variable that reduces to acoustic pressure (or a related linear acoustic variable) outside upsilon; and Q is a source term that can be meaningfully estimated without knowing u and tends to zero outside upsilon.

  8. Understanding the dust cycle at high latitudes: integrating models and observations

    NASA Astrophysics Data System (ADS)

    Albani, S.; Mahowald, N. M.; Maggi, V.; Delmonte, B.; Winckler, G.; Potenza, M. A. C.; Baccolo, G.; Balkanski, Y.

    2017-12-01

    Changing climate conditions affect dust emissions and the global dust cycle, which in turn affects climate and biogeochemistry. Paleodust archives from land, ocean, and ice sheets preserve the history of dust deposition for a range of spatial scales, from close to the major hemispheric sources to remote sinks such as the polar ice sheets. In each hemisphere, common features on the glacial-interglacial time scale mark the baseline evolution of the dust cycle, and inspired the hypothesis that increased dust deposition to the ocean stimulated the glacial biological pump, contributing to the reduction of atmospheric carbon dioxide levels. On the other hand, finer geographical- and temporal-scale features are superimposed on these glacial-interglacial trends, offering the chance of a more sophisticated understanding of the dust cycle, for instance allowing distinctions in terms of source availability or transport patterns as recorded by different records. As such, paleodust archives can prove invaluable sources of information, especially when characterized by a quantitative estimation of the mass accumulation rates, and interpreted in connection with climate models. We review our past work and present ongoing research showing how climate models can help in the interpretation of paleodust records, as well as the potential of the same observations for constraining the representation of the global dust cycle embedded in Earth System Models, both in terms of magnitude and of physical parameters related to particle sizes and optical properties. Finally, we show the impacts on climate, based on this kind of observationally constrained model simulations.

  9. Two-micron Laser Atmospheric Wind Sounder (LAWS) pointing/tracking study

    NASA Technical Reports Server (NTRS)

    Manlief, Scott

    1995-01-01

    The objective of the study was to identify and model major sources of short-term pointing jitter for a free-flying, full-performance 2 micron LAWS system and evaluate the impact of the short-term jitter on wind-measurement performance. A fast steering mirror control system was designed for short-term jitter compensation. The performance analysis showed that the short-term jitter of the control system over the 5.2 msec round-trip time for a realistic spacecraft environment was approximately 0.3 μrad rms, within the specified value of less than 0.5 μrad rms derived in a 2 micron LAWS System Study. Disturbance modes were defined for: (1) the Bearing and Power Transfer Assembly (BAPTA) scan bearing, (2) the spacecraft reaction wheel torques, and (3) the solar array drive torques. The scan bearing disturbance was found to be the largest contributing noise source in the jitter performance. Disturbances from the fast steering mirror reaction torques and a boom-mounted cross-link antenna clocking were also considered but were judged to be small compared to the three principal disturbance sources above and were not included in the final controls analysis.

  10. Analytical method for optimal source reduction with monitored natural attenuation in contaminated aquifers

    USGS Publications Warehouse

    Widdowson, M.A.; Chapelle, F.H.; Brauner, J.S.; ,

    2003-01-01

    A method is developed for optimizing monitored natural attenuation (MNA) and the reduction in the aqueous source zone concentration (ΔC) required to meet a site-specific regulatory target concentration. The mathematical model consists of two one-dimensional equations of mass balance for the aqueous phase contaminant, to coincide with up to two distinct zones of transformation, and appropriate boundary and intermediate conditions. The solution is written in terms of zone-dependent Peclet and Damköhler numbers. The model is illustrated at a chlorinated solvent site where MNA was implemented following source treatment using in-situ chemical oxidation. The results demonstrate that by not taking into account a variable natural attenuation capacity (NAC), a lower target ΔC is predicted, resulting in unnecessary source concentration reduction and cost with little benefit to achieving site-specific remediation goals.
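
    Although the paper's two-zone solution is not reproduced here, the single-zone building block can be written compactly in terms of the Peclet and Damköhler numbers it uses. The sketch below assumes steady one-dimensional advection-dispersion with first-order decay; the function names and example values are illustrative.

      # Sketch: C(L)/C0 for steady 1-D transport with first-order attenuation,
      # in terms of Pe = vL/D and Da = kL/v.
      import numpy as np

      def attenuated_fraction(Pe, Da):
          """C(L)/C0 from the decaying root of D*C'' - v*C' - k*C = 0."""
          return np.exp(0.5 * Pe * (1.0 - np.sqrt(1.0 + 4.0 * Da / Pe)))

      def required_source_reduction(C0, C_target, Pe, Da):
          """Aqueous source reduction needed to meet a target at x = L."""
          C_L = C0 * attenuated_fraction(Pe, Da)
          return max(0.0, C_L - C_target) / attenuated_fraction(Pe, Da)

      print(attenuated_fraction(Pe=10.0, Da=2.0))  # ~0.18 of source reaches x = L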

  11. Evaluation of a Consistent LES/PDF Method Using a Series of Experimental Spray Flames

    NASA Astrophysics Data System (ADS)

    Heye, Colin; Raman, Venkat

    2012-11-01

    A consistent method for the evolution of the joint-scalar probability density function (PDF) transport equation is proposed for application to large eddy simulation (LES) of turbulent reacting flows containing evaporating spray droplets. PDF transport equations provide the benefit of including the chemical source term in closed form; however, additional terms describing LES subfilter mixing must be modeled. The recent availability of detailed experimental measurements provides model validation data for a wide range of evaporation rates and combustion regimes, as are known to occur in spray flames. In this work, the experimental data will be used to investigate the impact of droplet mass loading and evaporation rates on the subfilter scalar PDF shape in comparison with conventional flamelet models. In addition, existing model term closures in the PDF transport equations are evaluated with a focus on their validity in the presence of regime changes.

  12. Theory of step on leading edge of negative corona current pulse

    NASA Astrophysics Data System (ADS)

    Gupta, Deepak K.; Mahajan, Sangeeta; John, P. I.

    2000-03-01

    Theoretical models taking into account different feedback source terms (e.g., ion-impact electron emission, photo-electron emission, field emission, etc.) have been proposed for the existence and explanation of the shape of the negative corona current pulse, including the step on the leading edge. In the present work, a negative corona current pulse with a step on the leading edge is obtained in the presence of the ion-impact electron emission feedback source alone. The step on the leading edge is explained in terms of the plasma formation process and enhancement of the feedback source. An ionization wave-like movement toward the cathode is observed after the step. The conditions for the existence of the current pulse, with and without the step on the leading edge, are also described. A qualitative comparison with earlier theoretical and experimental work is included.

  13. Detecting the permafrost carbon feedback: talik formation and increased cold-season respiration as precursors to sink-to-source transitions

    DOE PAGES

    Parazoo, Nicholas C.; Koven, Charles D.; Lawrence, David M.; ...

    2018-01-12

    Thaw and release of permafrost carbon (C) due to climate change is likely to offset increased vegetation C uptake in northern high-latitude (NHL) terrestrial ecosystems. Models project that this permafrost C feedback may act as a slow leak, in which case detection and attribution of the feedback may be difficult. The formation of talik, a subsurface layer of perennially thawed soil, can accelerate permafrost degradation and soil respiration, ultimately shifting the C balance of permafrost-affected ecosystems from long-term C sinks to long-term C sources. It is imperative to understand and characterize mechanistic links between talik, permafrost thaw, and respiration of deep soil C to detect and quantify the permafrost C feedback. Here, we use the Community Land Model (CLM) version 4.5, a permafrost and biogeochemistry model, in comparison to long-term deep borehole data along North American and Siberian transects, to investigate thaw-driven C sources in NHL (>55°N) from 2000 to 2300. Widespread talik at depth is projected across most of the NHL permafrost region (14 million km2) by 2300, 6.2 million km2 of which is projected to become a long-term C source, emitting 10 Pg C by 2100, 50 Pg C by 2200, and 120 Pg C by 2300, with few signs of slowing. Roughly half of the projected C source region is in predominantly warm sub-Arctic permafrost following talik onset. This region emits only 20 Pg C by 2300, but the CLM4.5 estimate may be biased low by not accounting for deep C in yedoma. Accelerated decomposition of deep soil C following talik onset shifts the ecosystem C balance away from surface-dominant processes (photosynthesis and litter respiration), but sink-to-source transition dates are delayed by 20–200 years by high ecosystem productivity, such that talik peaks early (~2050s, although borehole data suggest sooner) and the C source transition peaks late (~2150–2200). The remaining C source region in cold northern Arctic permafrost, which shifts to a net source early (late 21st century), emits 5 times more C (95 Pg C) by 2300, prior to talik formation, due to the high decomposition rates of shallow, young C in organic-rich soils coupled with low productivity. Our results provide important clues signaling imminent talik onset and C source transition, including (1) late cold-season (January–February) soil warming at depth (~2 m), (2) increasing cold-season emissions (November–April), and (3) enhanced respiration of deep, old C in warm permafrost and young, shallow C in organic-rich cold permafrost soils. Our results suggest a mosaic of processes that govern carbon source-to-sink transitions at high latitudes and emphasize the urgency of monitoring soil thermal profiles, organic C age and content, cold-season CO2 emissions, and atmospheric 14CO2 as key indicators of the permafrost C feedback.

  15. Derivation of the linear-logistic model and Cox's proportional hazard model from a canonical system description.

    PubMed

    Voit, E O; Knapp, R G

    1997-08-15

    The linear-logistic regression model and Cox's proportional hazard model are widely used in epidemiology. Their successful application leaves no doubt that they are accurate reflections of observed disease processes and their associated risks or incidence rates. In spite of their prominence, it is not a priori evident why these models work. This article presents a derivation of the two models from the framework of canonical modeling. It begins with a general description of the dynamics between risk sources and disease development, formulates this description in the canonical representation of an S-system, and shows how the linear-logistic model and Cox's proportional hazard model follow naturally from this representation. The article interprets the model parameters in terms of epidemiological concepts as well as in terms of general systems theory and explains the assumptions and limitations generally accepted in the application of these epidemiological models.
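
    For reference, the two model forms under discussion can be written down directly. The snippet is a minimal sketch with placeholder coefficients; it implements the textbook forms, not the article's canonical derivation.

      # Sketch: the linear-logistic risk model and the Cox proportional hazards model.
      import numpy as np

      def logistic_risk(x, b0, b):
          """Linear-logistic model: P(disease | x) = 1 / (1 + exp(-(b0 + b.x)))."""
          return 1.0 / (1.0 + np.exp(-(b0 + np.dot(b, x))))

      def cox_hazard(t, x, beta, baseline_hazard):
          """Cox proportional hazards: h(t | x) = h0(t) * exp(beta.x)."""
          return baseline_hazard(t) * np.exp(np.dot(beta, x))

      # Example with two risk factors and hypothetical coefficients.
      print(logistic_risk(np.array([1.0, 0.5]), b0=-2.0, b=np.array([0.8, 0.3])))
      print(cox_hazard(5.0, np.array([1.0, 0.5]), np.array([0.4, 0.1]),
                       baseline_hazard=lambda t: 0.01))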

  16. Reduced mercury deposition in New Hampshire from 1996 to 2002 due to changes in local sources.

    PubMed

    Han, Young-Ji; Holsen, Thomas M; Evers, David C; Driscoll, Charles T

    2008-12-01

    Changes in deposition of gaseous divalent mercury (Hg(II)) and particulate mercury (Hg(p)) in New Hampshire due to changes in local sources from 1996 to 2002 were assessed using the Industrial Source Complex Short Term (ISCST3) model (regional and global sources and Hg atmospheric reactions were not considered). Mercury (Hg) emissions in New Hampshire and adjacent areas decreased significantly (from 1540 to 880 kg yr⁻¹) during this period, and the average annual modeled deposition of total Hg also declined from 17 to 7.0 µg m⁻² yr⁻¹ over the same period. In 2002, the maximum amount of Hg deposition was modeled to be in southern New Hampshire, while for 1996 the maximum deposition occurred farther north and east. ISCST3 was also used to evaluate two future scenarios. The average percent difference in deposition across all cells was 5% for the 50% reduction scenario and 9% for the 90% reduction scenario.

  17. Risk assessment of water pollution sources based on an integrated k-means clustering and set pair analysis method in the region of Shiyan, China.

    PubMed

    Li, Chunhui; Sun, Lian; Jia, Junxiang; Cai, Yanpeng; Wang, Xuan

    2016-07-01

    Source water areas face many potential water pollution risks, and risk assessment is an effective method to evaluate such risks. In this paper, an integrated model based on k-means clustering analysis and set pair analysis was established for evaluating the risks associated with water pollution in source water areas, with the weights of indicators determined through the entropy weight method. The proposed model was applied to assess water pollution risks in the region of Shiyan, where the Danjiangkou Reservoir, the water source of the middle route of China's South-to-North Water Diversion Project, is located. The results identified eleven sources with relatively high risk values. At the regional scale, Shiyan City and Danjiangkou City had high risk values in terms of industrial discharge; comparatively, Danjiangkou City and Yunxian County had high risk values in terms of agricultural pollution. Overall, the risk values of the northern areas close to the main stream and reservoir of the region of Shiyan were higher than those in the south. The risk levels indicated that five sources were at lower risk (level II), two at moderate risk (level III), one at higher risk (level IV), and three at the highest risk (level V); risks from industrial discharge were higher than those from the agricultural sector. It is thus essential to manage the pillar industry of the region of Shiyan and certain agricultural companies in the vicinity of the reservoir to reduce the water pollution risks of source water areas.
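
    The entropy weight method mentioned above has a compact closed form: indicator columns with more dispersion (lower entropy) receive larger weights. The sketch assumes a positive indicator matrix and uses hypothetical values, not the authors' data.

      # Sketch: entropy weights for an indicator matrix X (n_sources x n_indicators).
      import numpy as np

      def entropy_weights(X, eps=1e-12):
          n = X.shape[0]
          P = X / (X.sum(axis=0, keepdims=True) + eps)        # column proportions
          e = -(P * np.log(P + eps)).sum(axis=0) / np.log(n)  # entropy per indicator
          d = 1.0 - e                                         # divergence degree
          return d / d.sum()                                  # normalized weights

      X = np.array([[0.8, 120.0, 3.0],
                    [0.2,  40.0, 9.0],
                    [0.5,  60.0, 6.0]])  # hypothetical values, three sources
      print(entropy_weights(X))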

  18. A modeling study of coarse particulate matter pollution in Beijing: regional source contributions and control implications for the 2008 summer Olympics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Litao Wang; Jiming Hao; Kebin He

    In the last 10 yr, Beijing has made a great effort to improve its air quality. However, it is still suffering from regional coarse particulate matter (PM10) pollution that could be a challenge to the promise of clean air during the 2008 Olympics. To provide scientific guidance on regional air pollution control, the Mesoscale Modeling System Generation 5 (MM5) and the Models-3/Community Multiscale Air Quality Model (CMAQ) air quality modeling system was used to investigate the contributions of emission sources outside the Beijing area to pollution levels in Beijing. The contributions to the PM10 concentrations in Beijing were assessed for the following sources: power plants, industry, domestic sources, transportation, agriculture, and biomass open burning. In January, it is estimated that on average 22% of the PM10 concentrations can be attributed to outside sources, of which domestic and industrial sources contributed 37 and 31%, respectively. In August, as much as 40% of the PM10 concentrations came from regional sources, of which approximately 41% came from industry and 31% from power plants. However, the synchronous analysis of the hourly concentrations, regional contributions, and wind vectors indicates that in the heaviest pollution periods the local emission sources play a more important role. The implications are that long-term control strategies should be based on regional-scale collaborations, and that emission abatement of local sources may be more effective in lowering the PM10 concentration levels on heavy pollution days. Better air quality can be attained during the Olympics by placing effective emission controls on the local sources in Beijing and by controlling emissions from industry and power plants in the surrounding regions. 44 refs., 6 figs., 3 tabs.

  19. Improvements and limitations on understanding of atmospheric processes of Fukushima Daiichi NPS radioactivity

    NASA Astrophysics Data System (ADS)

    Yamazawa, Hiromi; Terasaka, Yuta; Mizutani, Kenta; Sugiura, Hiroki; Hirao, Shigekazu

    2017-04-01

    Understanding of the release of radioactivity into the atmosphere from the damaged units of the Fukushima Daiichi Nuclear Power Station has improved owing to recent analyses of atmospheric concentrations of radionuclides. Our analysis of gamma-ray spectra from monitoring posts located about 100 km to the south of the site revealed temporal changes in the atmospheric concentrations of several key nuclides, including the noble gas Xe-133 in addition to radio-iodine and radio-cesium nuclides such as I-131 and Cs-137, at a 10 minute interval. Using these atmospheric concentration data, in combination with inverse atmospheric transport modelling and a Bayesian statistical method, a modification was proposed to the widely used source term of Katata et al., and a source term for Xe-133 was also proposed. Although the atmospheric concentration data and the source terms help us understand the atmospheric transport processes of radionuclides, they still carry significant uncertainty due to the limited availability of concentration data. Limitations also remain in the atmospheric transport modeling, where the largest uncertainty lies in the deposition processes. It had been pointed out that, within the 100 km range from the accident site, there were locations at which the ambient dose rate increased significantly a few hours before precipitation detectors recorded the start of rain. According to our analysis, the dose rate increase was not directly caused by airborne radioactivity but by deposition. This phenomenon can be attributed to a deposition process in which evaporating precipitation enhances the efficiency of deposition even in cases where no precipitation is observed at ground level.

  20. Field measurements and modeling to resolve m2 to km2 CH4 emissions for a complex urban source: An Indiana landfill study

    USDA-ARS?s Scientific Manuscript database

    Large uncertainties for landfill CH4 emissions due to spatial and temporal variabilities remain unresolved by short-term field campaigns and historic GHG inventory models. Using four field methods (aircraft-based mass balance, tracer correlation, vertical radial plume mapping, and static chambers) ...

  1. A study of the variable impedance surface concept as a means for reducing noise from jet interaction with deployed lift-augmenting flaps

    NASA Technical Reports Server (NTRS)

    Hayden, R. E.; Kadman, Y.; Chanaud, R. C.

    1972-01-01

    The feasibility of quieting the externally-blown-flap (EBF) noise sources that are due to interaction of jet exhaust flow with deployed flaps was demonstrated on a 1/15-scale 3-flap EBF model. Sound field characteristics were measured and noise reduction fundamentals were reviewed in terms of source models. Tests of the 1/15-scale model showed broadband noise reductions of up to 20 dB resulting from the combination of variable impedance flap treatment and mesh grids placed in the jet flow upstream of the flaps. Steady-state lift, drag, and pitching moment were measured with and without the noise reduction treatment.

  2. Ragweed (Ambrosia) pollen source inventory for Austria.

    PubMed

    Karrer, G; Skjøth, C A; Šikoparija, B; Smith, M; Berger, U; Essl, F

    2015-08-01

    This study improves the spatial coverage of top-down Ambrosia pollen source inventories for Europe by expanding the methodology to Austria, a country that is challenging in terms of topography and the distribution of ragweed plants. The inventory combines annual ragweed pollen counts from 19 pollen-monitoring stations in Austria (2004-2013), 657 geographical observations of Ambrosia plants, a Digital Elevation Model (DEM), local knowledge of ragweed ecology, and CORINE land cover information from the source area. The highest mean annual ragweed pollen concentrations were generally recorded in the east of Austria, where the highest densities of possible growth habitats for Ambrosia are situated. Approximately 99% of all observations of Ambrosia populations were below 745 m. The European infection level varies from 0.1% at Freistadt in northern Austria to 12.8% at Rosalia in eastern Austria. More top-down Ambrosia pollen source inventories are required for other parts of Europe.

  3. Temporal X-ray astronomy with a pinhole camera. [cygnus and scorpius constellation

    NASA Technical Reports Server (NTRS)

    Holt, S. S.

    1975-01-01

    Preliminary results from the Ariel-5 all-sky X-ray monitor are presented, along with sufficient experiment details to define the experiment sensitivity. Periodic modulation of the X-ray emission was investigated from three sources with which specific periods were associated, with the results that the 4.8 hour variation from Cyg X-3 was confirmed, a long-term average 5.6 day variation from Cyg X-1 was discovered, and no detectable 0.787 day modulation of Sco X-1 was observed. Consistency of the long-term Sco X-1 emission with a shot-noise model is discussed, wherein the source behavior is shown to be interpretable as approximately 100 flares per day, each with a duration of several hours. A sudden increase in the Cyg X-1 intensity by almost a factor of three on 22 April 1975 is reported, after 5 months of relative source constancy. The light curve of a bright nova-like transient source in Triangulum is presented, and compared with previously observed transient sources. Preliminary evidence for the existence of X-ray bursts with duration less than 1 hour is offered.

  4. Long Term Leaching of Chlorinated Solvents from Source Zones in Low Permeability Settings with Fractures

    NASA Astrophysics Data System (ADS)

    Bjerg, P. L.; Chambon, J.; Troldborg, M.; Binning, P. J.; Broholm, M. M.; Lemming, G.; Damgaard, I.

    2008-12-01

    Groundwater contamination by chlorinated solvents, such as perchloroethylene (PCE), often occurs via leaching from complex sources located in low permeability sediments such as clayey tills overlying aquifers. Clayey tills are mostly fractured, and contamination migrating through the fractures spreads into the low permeability matrix by diffusion. This results in a long-term source of contamination due to back-diffusion. Leaching from such sources is further complicated by microbial degradation under anaerobic conditions, which sequentially forms the daughter products trichloroethylene, cis-dichloroethylene (cis-DCE), vinyl chloride (VC), and ethene. This process can be enhanced by addition of electron donors and/or bioaugmentation and is termed Enhanced Reductive Dechlorination (ERD). This work aims to improve our understanding of the physical, chemical, and microbial processes governing source behaviour under natural and enhanced conditions. That understanding is applied to risk assessment and to determining the relationship and time frames of source clean-up and plume response. To meet that aim, field and laboratory observations are coupled to state-of-the-art models incorporating new insights into contaminant behaviour. The long-term leaching of chlorinated ethenes from clay aquitards is currently being monitored at a number of Danish sites. The observed data are simulated using a coupled fracture flow and clay matrix diffusion model. Sequential degradation is represented by modified Monod kinetics accounting for competitive inhibition between the chlorinated ethenes. The model is constructed using Comsol Multiphysics, a generic finite-element partial differential equation solver. The model is applied at two field sites (Sortebrovej and Vadsbyvej) that are well characterised with respect to hydrogeology, fracture network, contaminant distribution, and microbial processes (laboratory and field experiments). At both sites, the source areas are situated in a clayey till with fractures and interbedded sand lenses, and both are highly contaminated with chlorinated ethenes that impact the underlying sand aquifer. Anaerobic dechlorination is taking place, and cis-DCE and VC have been found in significant amounts in the matrix. Full-scale remediation using ERD was implemented at Sortebrovej in 2006, and ERD has been suggested as a remedy at Vadsbyvej. Results reveal several interesting findings. The physical processes of matrix diffusion and advection in the fractures appear more important than the microbial degradation processes for estimating the time frames, and the distance between fractures is among the most sensitive model parameters. However, the inclusion of sequential degradation is crucial to determining the composition of contamination leaching into the underlying aquifer: degradation products like VC will peak at an earlier stage than the mother compound due to their higher mobility. The findings highlight a need for improved characterization of low permeability aquitards overlying aquifers used for water supply. The fracture network in aquitards is currently poorly described at larger depths (below 5-8 m) and the effect of sand lenses on leaching behaviour is not well understood. The microbial processes are assumed to be taking place in the fracture system, but the interaction with, and processes in, the matrix need to be further explored. Development of new methods for field site characterisation and integrated field and model expertise are crucial for the design of remedial actions and for risk assessment of contaminated sites in low permeability settings.
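
    The modified Monod kinetics with competitive inhibition described above can be sketched as a small ODE system for the dechlorination chain PCE -> TCE -> cis-DCE -> VC -> ethene. All rate and half-saturation parameters below are placeholders, not calibrated values from the Danish sites, and the inhibition form shown is one common variant.

      # Sketch: sequential reductive dechlorination with Monod kinetics and
      # competitive inhibition by parent compounds.
      import numpy as np
      from scipy.integrate import solve_ivp

      names = ["PCE", "TCE", "cDCE", "VC", "ethene"]
      vmax = np.array([2.0, 1.5, 1.0, 0.5])   # max degradation rates (placeholder, 1/d)
      Ks = np.array([5.0, 5.0, 5.0, 5.0])     # half-saturation constants (placeholder, uM)

      def rates(t, C):
          dC = np.zeros(5)
          for i in range(4):
              # Competitive inhibition: parents inhibit degradation of daughters.
              inhibition = sum(C[j] / Ks[j] for j in range(i))
              r = vmax[i] * C[i] / (Ks[i] * (1.0 + inhibition) + C[i])
              dC[i] -= r
              dC[i + 1] += r
          return dC

      sol = solve_ivp(rates, (0.0, 50.0), [20.0, 0, 0, 0, 0])
      print(dict(zip(names, sol.y[:, -1].round(2))))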

  5. A hybrid probabilistic/spectral model of scalar mixing

    NASA Astrophysics Data System (ADS)

    Vaithianathan, T.; Collins, Lance

    2002-11-01

    In the probability density function (PDF) description of a turbulent reacting flow, the local temperature and species concentrations are replaced by a high-dimensional joint probability that describes the distribution of states in the fluid. The PDF has the great advantage of rendering the chemical reaction source terms closed, independent of their complexity. However, molecular mixing, which involves two-point information, must be modeled. Indeed, the qualitative shape of the PDF is sensitive to this modeling; hence the reliability of the model in predicting even the closed chemical source terms rests heavily on the mixing model. We present a new closure for the mixing based on a spectral representation of the scalar field. The model is implemented as an ensemble of stochastic particles, each carrying scalar concentrations at different wavenumbers. Scalar exchanges within a given particle represent "transfer" while scalar exchanges between particles represent "mixing." The equations governing the scalar concentrations at each wavenumber are derived from the eddy damped quasi-normal Markovian (EDQNM) theory. The model correctly predicts the evolution of an initial double-delta-function PDF into a Gaussian, as seen in the numerical study by Eswaran & Pope (1988). Furthermore, the model predicts that the scalar gradient distribution (which is available in this representation) approaches log-normal at long times. Comparisons of the model with data derived from direct numerical simulations will be shown.

  6. Assessing risk of non-compliance of phosphorus standards for lakes in England and Wales

    NASA Astrophysics Data System (ADS)

    Duethmann, D.; Anthony, S.; Carvalho, L.; Spears, B.

    2009-04-01

    High population densities, use of inorganic fertilizer, and intensive livestock agriculture have increased phosphorus loads to lakes, and accelerated eutrophication is a major pressure for many lakes. The EC Water Framework Directive (WFD) requires that good chemical and ecological quality is restored in all surface water bodies by 2015. Total phosphorus (TP) standards for lakes in England and Wales have been agreed recently, and our aim was to estimate what percentage of lakes in England and Wales is at risk of failing these standards. With measured lake phosphorus concentrations only available for a small number of lakes, such an assessment had to be model-based. The study also makes a source apportionment of phosphorus inputs into lakes. Phosphorus loads were estimated from a range of sources including agricultural loads, sewage effluents, septic tanks, diffuse urban sources, atmospheric deposition, groundwater, and bank erosion. Lake phosphorus concentrations were predicted using the Vollenweider model, and the model framework was satisfactorily tested against available observed lake concentration data. Even though predictions for individual lakes remain uncertain, results for a population of lakes are considered sufficiently robust. A scenario analysis was carried out to investigate to what extent reductions in phosphorus loads would increase the number of lakes achieving good ecological status in terms of TP standards. Applying the model to all lakes in England and Wales greater than 1 ha, it was calculated that under current conditions roughly two thirds of the lakes would fail good ecological status with respect to phosphorus. According to our estimates, agricultural phosphorus loads are the most frequent dominant source for the majority of catchments, but diffuse urban runoff is also important for many lakes. Sewage effluents are the most frequent dominant source for large lake catchments greater than 100 km². Evaluation in terms of total load can therefore be misleading about which sources need to be tackled by catchment management: for example, sewage effluents are responsible for the majority of the total load but are the dominant source in only a small number of larger lake catchments. If loads from all sources were halved, the number of complying lakes would potentially increase to two thirds, but this would require substantial measures to reduce phosphorus inputs to lakes. For agriculture, the required changes would have to go beyond improvements in agricultural practice and would need to include reducing the intensity of land use. The time required for many lakes to respond to reduced nutrient loading is likely to extend beyond the current timelines of the WFD due to internal loading and biological resistance.
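
    The Vollenweider-type prediction step can be sketched in a few lines. The retention form used here, TP_lake = TP_in / (1 + sqrt(tau)), is the common OECD parameterization and is assumed; the study's exact formulation and coefficients may differ.

      # Sketch: in-lake total phosphorus from areal P load and hydraulic terms.
      def lake_tp(load_mg_per_m2_yr, mean_depth_m, residence_time_yr):
          """Predicted in-lake TP (mg/m3), Vollenweider/OECD-style retention."""
          inflow_tp = load_mg_per_m2_yr * residence_time_yr / mean_depth_m
          return inflow_tp / (1.0 + residence_time_yr ** 0.5)

      # Example: 500 mg P m^-2 yr^-1 load, 5 m mean depth, 0.5 yr residence time.
      print(round(lake_tp(500.0, 5.0, 0.5), 1), "mg m^-3")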

  7. Chern-Simons Term: Theory and Applications.

    NASA Astrophysics Data System (ADS)

    Gupta, Kumar Sankar

    1992-01-01

    We investigate the quantization and applications of Chern-Simons theories to several systems of interest. Elementary canonical methods are employed for the quantization of abelian and nonabelian Chern-Simons actions using ideas from gauge theories and quantum gravity. When the spatial slice is a disc, it yields quantum states at the edge of the disc carrying a representation of the Kac-Moody algebra. We next include sources in this model and their quantum states are shown to be those of a conformal family. Vertex operators for both abelian and nonabelian sources are constructed. The regularized abelian Wilson line is proved to be a vertex operator. The spin-statistics theorem is established for Chern-Simons dynamics using purely geometrical techniques. Chern-Simons action is associated with exotic spin and statistics in 2 + 1 dimensions. We study several systems in which the Chern-Simons action affects the spin and statistics. The first class of systems we study consist of G/H models. The solitons of these models are shown to obey anyonic statistics in the presence of a Chern-Simons term. The second system deals with the effect of the Chern-Simons term in a model for high temperature superconductivity. The coefficient of the Chern-Simons term is shown to be quantized, one of its possible values giving fermionic statistics to the solitons of this model. Finally, we study a system of spinning particles interacting with 2 + 1 gravity, the latter being described by an ISO(2,1) Chern-Simons term. An effective action for the particles is obtained by integrating out the gauge fields. Next we construct operators which exchange the particles. They are shown to satisfy the braid relations. There are ambiguities in the quantization of this system which can be exploited to give anyonic statistics to the particles. We also point out that at the level of the first quantized theory, the usual spin-statistics relation need not apply to these particles.
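
    For reference, the nonabelian Chern-Simons action discussed throughout has the standard form (sign and normalization conventions for the level k vary); in LaTeX notation:

      S_{CS} = \frac{k}{4\pi} \int_{M} \mathrm{Tr}\left( A \wedge \mathrm{d}A + \tfrac{2}{3}\, A \wedge A \wedge A \right)

    For the abelian theory the cubic term drops out, and coupling to matter adds a source term of the form A_\mu J^\mu, which is the mechanism behind the exotic spin and statistics described above.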

  8. Associations of Mortality with Long-Term Exposures to Fine and Ultrafine Particles, Species and Sources: Results from the California Teachers Study Cohort

    PubMed Central

    Hu, Jianlin; Goldberg, Debbie; Reynolds, Peggy; Hertz, Andrew; Bernstein, Leslie; Kleeman, Michael J.

    2015-01-01

    Background: Although several cohort studies report associations between chronic exposure to fine particles (PM2.5) and mortality, few have studied the effects of chronic exposure to ultrafine (UF) particles. In addition, few studies have estimated the effects of the constituents of either PM2.5 or UF particles. Methods: We used a statewide cohort of >100,000 women from the California Teachers Study who were followed from 2001 through 2007. Exposure data at the residential level were provided by a chemical transport model that computed pollutant concentrations from >900 sources in California. Besides particle mass, monthly concentrations of 11 species and 8 sources or primary particles were generated on 4-km grids. We used a Cox proportional hazards model to estimate the association between the pollutants and all-cause, cardiovascular, ischemic heart disease (IHD), and respiratory mortality. Results: We observed statistically significant (p < 0.05) associations of IHD with PM2.5 mass, nitrate, elemental carbon (EC), copper (Cu), and secondary organics, and with the sources gas- and diesel-fueled vehicles, meat cooking, and high-sulfur fuel combustion. The hazard ratio estimate of 1.19 (95% CI: 1.08, 1.31) for IHD in association with a 10-μg/m3 increase in PM2.5 is consistent with findings from the American Cancer Society cohort. We also observed significant positive associations between IHD and several UF components including EC, Cu, metals, and mobile sources. Conclusions: Using an emissions-based model with a 4-km spatial scale, we observed significant positive associations between IHD mortality and both fine and ultrafine particle species and sources. Our results suggest that the exposure model effectively measured local exposures and facilitated the examination of the relative toxicity of particle species. Citation: Ostro B, Hu J, Goldberg D, Reynolds P, Hertz A, Bernstein L, Kleeman MJ. 2015. Associations of mortality with long-term exposures to fine and ultrafine particles, species and sources: results from the California Teachers Study cohort. Environ Health Perspect 123:549-556; http://dx.doi.org/10.1289/ehp.1408565 PMID:25633926

  9. Fermi Large Area Telescope First Source Catalog

    DOE PAGES

    Abdo, A. A.; Ackermann, M.; Ajello, M.; ...

    2010-05-25

    Here, we present a catalog of high-energy gamma-ray sources detected by the Large Area Telescope (LAT), the primary science instrument on the Fermi Gamma-ray Space Telescope (Fermi), during the first 11 months of the science phase of the mission, which began on 2008 August 4. The First Fermi-LAT catalog (1FGL) contains 1451 sources detected and characterized in the 100 MeV to 100 GeV range. Source detection was based on the average flux over the 11 month period, and the threshold likelihood Test Statistic is 25, corresponding to a significance of just over 4σ. The 1FGL catalog includes source location regions, defined in terms of elliptical fits to the 95% confidence regions, and power-law spectral fits as well as flux measurements in five energy bands for each source. In addition, monthly light curves are provided. Using a protocol defined before launch, we have tested for several populations of gamma-ray sources among the sources in the catalog. For individual LAT-detected sources we provide firm identifications or plausible associations with sources in other astronomical catalogs. Identifications are based on correlated variability with counterparts at other wavelengths, or on spin or orbital periodicity. For the catalogs and association criteria that we have selected, 630 of the sources are unassociated. Care was taken to characterize the sensitivity of the results to the model of interstellar diffuse gamma-ray emission used to model the bright foreground, with the result that 161 sources at low Galactic latitudes and toward bright local interstellar clouds are flagged as having properties that are strongly dependent on the model or as potentially being due to incorrectly modeled structure in the Galactic diffuse emission.

  10. Class of self-limiting growth models in the presence of nonlinear diffusion

    NASA Astrophysics Data System (ADS)

    Kar, Sandip; Banik, Suman Kumar; Ray, Deb Shankar

    2002-06-01

    The source term in a reaction-diffusion system, in general, does not involve explicit time dependence. A class of self-limiting growth models dealing with animal and tumor growth and bacterial populations in culture, on the other hand, is described by kinetics with explicit functions of time. We analyze a reaction-diffusion system to study the propagation of spatial fronts for these models.
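
    A minimal numerical illustration of this class of models: explicit finite differences for u_t = D u_xx + r(t) u(1 - u), where the logistic source carries an explicit time dependence through r(t). The grid, parameters, and form of r(t) are all illustrative choices.

      # Sketch: 1-D front propagation with a time-dependent logistic source term.
      import numpy as np

      D, L, nx, dt, T = 1.0, 100.0, 401, 0.01, 20.0
      x = np.linspace(0.0, L, nx)
      dx = x[1] - x[0]
      u = np.where(x < 5.0, 1.0, 0.0)            # initial front at the left edge

      r = lambda t: 1.0 + 0.5 * np.sin(0.5 * t)  # explicit time dependence

      t = 0.0
      while t < T:
          lap = np.zeros_like(u)
          lap[1:-1] = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dx**2
          u += dt * (D * lap + r(t) * u * (1.0 - u))
          u[0], u[-1] = u[1], u[-2]              # no-flux boundaries
          t += dt

      print("front position ~", x[np.argmax(u < 0.5)])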

  11. A multi-scalar PDF approach for LES of turbulent spray combustion

    NASA Astrophysics Data System (ADS)

    Raman, Venkat; Heye, Colin

    2011-11-01

    A comprehensive joint-scalar probability density function (PDF) approach is proposed for large eddy simulation (LES) of turbulent spray combustion, and tests are conducted to analyze its validity and modeling requirements. The PDF method has the advantage that the chemical source term appears closed, but it requires models for the small-scale mixing process. A stable and consistent numerical algorithm for the LES/PDF approach is presented. To understand the modeling issues in the PDF method, direct numerical simulations of a spray flame at three different fuel-droplet Stokes numbers and of an equivalent gaseous flame are carried out. Assumptions in closing the subfilter conditional diffusion term in the filtered PDF transport equation are evaluated for various model forms. In addition, the validity of evaporation rate models in high Stokes number flows is analyzed.

  12. Lattice Boltzmann formulation for conjugate heat transfer in heterogeneous media.

    PubMed

    Karani, Hamid; Huber, Christian

    2015-02-01

    In this paper, we propose an approach for studying conjugate heat transfer using the lattice Boltzmann method (LBM). The approach is based on reformulating the lattice Boltzmann equation for solving the conservative form of the energy equation. This leads to the appearance of a source term, which introduces the jump conditions at the interface between two phases or components with different thermal properties. The proposed source term formulation conserves conductive and advective heat flux simultaneously, which makes it suitable for modeling conjugate heat transfer in general multiphase or multicomponent systems. The simple implementation of the source term approach avoids any correction of distribution functions neighboring the interface and provides an algorithm that is independent of the topology of the interface. Moreover, our approach is independent of the choice of lattice discretization and can be easily applied to different advection-diffusion LBM solvers. The model is tested against several benchmark problems including steady-state convection-diffusion within two fluid layers with parallel and normal interfaces with respect to the flow direction, unsteady conduction in a three-layer stratified domain, and steady conduction in a two-layer annulus. The LBM results are in excellent agreement with the analytical solutions. Error analysis shows that our model is first-order accurate in space, but an extension to a second-order scheme is straightforward. We apply our LBM model to heat transfer in a two-component heterogeneous medium with a random microstructure. This example highlights that the method we propose is independent of the topology of interfaces between the different phases and, as such, is ideally suited for complex natural heterogeneous media. We further validate the present LBM formulation with a study of natural convection in a porous enclosure. The results confirm the reliability of the model in simulating complex coupled fluid and thermal dynamics in complex geometries.
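
    To make the source-term idea concrete, here is a toy D1Q3 lattice Boltzmann solver for the 1-D diffusion equation with a volumetric source added at the collision step. It sketches the general mechanism only, not the paper's conjugate-heat-transfer formulation, and uses periodic boundaries and placeholder parameters.

      # Sketch: D1Q3 LBM for diffusion with a source term in the collision step.
      import numpy as np

      nx, nsteps, tau = 200, 2000, 0.8
      w = np.array([2/3, 1/6, 1/6])            # D1Q3 weights (rest, +1, -1)
      alpha = (tau - 0.5) / 3.0                # lattice diffusivity, c_s^2 = 1/3

      T = np.zeros(nx)
      f = w[:, None] * T[None, :]              # initialize at equilibrium
      Q = np.zeros(nx); Q[nx // 2] = 0.01      # point heat source at the center

      for _ in range(nsteps):
          T = f.sum(axis=0)
          feq = w[:, None] * T[None, :]
          f += -(f - feq) / tau + w[:, None] * Q[None, :]   # BGK collision + source
          f[1] = np.roll(f[1], 1)              # stream +1 direction
          f[2] = np.roll(f[2], -1)             # stream -1 direction

      T = f.sum(axis=0)
      print("lattice diffusivity:", alpha, "peak temperature:", T.max())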

  13. Isotropic source terms of San Jacinto fault zone earthquakes based on waveform inversions with a generalized CAP method

    NASA Astrophysics Data System (ADS)

    Ross, Z. E.; Ben-Zion, Y.; Zhu, L.

    2015-02-01

    We analyse source tensor properties of seven Mw > 4.2 earthquakes in the complex trifurcation area of the San Jacinto Fault Zone, CA, with a focus on isotropic radiation that may be produced by rock damage in the source volumes. The earthquake mechanisms are derived with generalized `Cut and Paste' (gCAP) inversions of three-component waveforms typically recorded by >70 stations at regional distances. The gCAP method includes parameters ζ and χ representing, respectively, the relative strength of the isotropic and CLVD source terms. The possible errors in the isotropic and CLVD components due to station variability are quantified with bootstrap resampling for each event. The results indicate statistically significant explosive isotropic components for at least six of the events, corresponding to ~0.4-8 per cent of the total potency/moment of the sources. In contrast, the CLVD components for most events are not found to be statistically significant. Trade-off and correlation between the isotropic and CLVD components are studied using synthetic tests with realistic station configurations. The associated uncertainties are found to be generally smaller than the observed isotropic components. Two different tests with velocity model perturbation are conducted to quantify the uncertainty due to inaccuracies in the Green's functions. Applications of the Mann-Whitney U test indicate statistically significant explosive isotropic terms for most events, consistent with brittle damage production at the source.

  14. CASSIA--a dynamic model for predicting intra-annual sink demand and interannual growth variation in Scots pine.

    PubMed

    Schiestl-Aalto, Pauliina; Kulmala, Liisa; Mäkinen, Harri; Nikinmaa, Eero; Mäkelä, Annikki

    2015-04-01

    Whether tree growth in a given environment is controlled by carbon sources or by carbon sinks remains unresolved, although it is widely studied. This study investigates growth of tree components and carbon sink-source dynamics at different temporal scales. We constructed a dynamic growth model, 'carbon allocation sink source interaction' (CASSIA), that calculates tree-level carbon balance from photosynthesis, respiration, phenology and temperature-driven potential structural growth of tree organs, and from the dynamics of stored nonstructural carbon (NSC) and its modifying influence on growth. With the model, we tested the hypotheses that sink demand explains the intra-annual growth dynamics of the meristems, and that source supply is further needed to explain year-to-year growth variation. The predicted intra-annual dimensional growth of shoots and needles and the number of cells in xylogenesis phases corresponded with measurements, whereas NSC hardly limited the growth, supporting the first hypothesis. A delayed influence of gross primary production (GPP) on potential growth was necessary for simulating the yearly growth variation, indicating at least an indirect source limitation. CASSIA combines seasonal growth and carbon balance dynamics with long-term source dynamics affecting growth and thus provides a first step towards understanding the complex processes regulating intra- and interannual growth and sink-source dynamics.

  15. Quantitative evaluation of an air-monitoring network using atmospheric transport modeling and frequency of detection methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rood, Arthur S.; Sondrup, A. Jeffrey; Ritter, Paul D.

    A methodology to quantify the performance of an air monitoring network in terms of frequency of detection has been developed. The methodology utilizes an atmospheric transport model to predict air concentrations of radionuclides at the samplers for a given release time and duration. Frequency of detection is defined as the fraction of "events" that result in a detection at either a single sampler or network of samplers. An "event" is defined as a release of finite duration that begins on a given day and hour of the year from a facility with the potential to emit airborne radionuclides. Another metric of interest is the network intensity, which is defined as the fraction of samplers in the network that have a positive detection for a given event. The frequency of detection methodology allows for evaluation of short-term releases that include effects of short-term variability in meteorological conditions. The methodology was tested using the U.S. Department of Energy Idaho National Laboratory (INL) Site ambient air monitoring network consisting of 37 low-volume air samplers in 31 different locations covering a 17,630 km² region. Releases from six major INL facilities distributed over an area of 1,435 km² were modeled and included three stack sources and eight ground-level sources. A Lagrangian puff air dispersion model (CALPUFF) was used to model atmospheric transport. The model was validated using historical ¹²⁵Sb releases and measurements. Relevant one-week release quantities from each emission source were calculated based on a dose of 1.9 × 10⁻⁴ mSv at a public receptor (0.01 mSv assuming release persists over a year). Important radionuclides considered include ²⁴¹Am, ¹³⁷Cs, ²³⁸Pu, ²³⁹Pu, ⁹⁰Sr, and tritium. Results show the detection frequency is over 97.5% for the entire network considering all sources and radionuclides. Network intensities ranged from 3.75% to 62.7%. Evaluation of individual samplers indicated some samplers were poorly situated and add little to the overall effectiveness of the network. As a result, using the frequency of detection methods, optimum sampler placements were simulated that could substantially improve the performance and efficiency of the network.
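
    The two network metrics defined above reduce to simple fractions over a boolean event-by-sampler detection matrix. A minimal sketch, assuming the matrix comes from upstream dispersion modelling such as the CALPUFF runs described (toy values; names hypothetical):

      import numpy as np

      # detections[i, j] = True if event i produces a concentration above the
      # detection limit at sampler j (computed upstream by a dispersion model).
      def detection_frequency(detections: np.ndarray) -> float:
          # Fraction of events detected by at least one sampler in the network.
          return detections.any(axis=1).mean()

      def network_intensity(detections: np.ndarray) -> np.ndarray:
          # Per-event fraction of samplers registering a detection.
          return detections.mean(axis=1)

      rng = np.random.default_rng(0)
      d = rng.random((8760, 37)) < 0.3     # toy stand-in for modeled detections
      print(detection_frequency(d), network_intensity(d).mean())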

  16. Computation of high Reynolds number internal/external flows

    NASA Technical Reports Server (NTRS)

    Cline, M. C.; Wilmoth, R. G.

    1981-01-01

    A general, user-oriented computer program, called VNAP2, has been developed to calculate high Reynolds number, internal/external flows. VNAP2 solves the two-dimensional, time-dependent Navier-Stokes equations. The turbulence is modeled with either a mixing-length, a one transport equation, or a two transport equation model. Interior grid points are computed using the explicit MacCormack scheme with special procedures to speed up the calculation in the fine grid. All boundary conditions are calculated using a reference plane characteristic scheme with the viscous terms treated as source terms. Several internal, external, and internal/external flow calculations are presented.

  17. Computation of high Reynolds number internal/external flows

    NASA Technical Reports Server (NTRS)

    Cline, M. C.; Wilmoth, R. G.

    1981-01-01

    A general, user-oriented computer program, called VNAP2, was developed to calculate high Reynolds number, internal/external flows. The VNAP2 program solves the two-dimensional, time-dependent Navier-Stokes equations. The turbulence is modeled with either a mixing-length, a one transport equation, or a two transport equation model. Interior grid points are computed using the explicit MacCormack scheme with special procedures to speed up the calculation in the fine grid. All boundary conditions are calculated using a reference plane characteristic scheme with the viscous terms treated as source terms. Several internal, external, and internal/external flow calculations are presented.

  18. Computation of high Reynolds number internal/external flows

    NASA Technical Reports Server (NTRS)

    Cline, M. C.; Wilmoth, R. G.

    1981-01-01

    A general, user-oriented computer program, called VNAP2, developed to calculate high Reynolds number internal/external flows is described. The program solves the two-dimensional, time-dependent Navier-Stokes equations. Turbulence is modeled with either a mixing-length, a one transport equation, or a two transport equation model. Interior grid points are computed using the explicit MacCormack scheme with special procedures to speed up the calculation in the fine grid. All boundary conditions are calculated using a reference plane characteristic scheme with the viscous terms treated as source terms. Several internal, external, and internal/external flow calculations are presented.

  19. Accuracy-preserving source term quadrature for third-order edge-based discretization

    NASA Astrophysics Data System (ADS)

    Nishikawa, Hiroaki; Liu, Yi

    2017-09-01

    In this paper, we derive a family of source term quadrature formulas for preserving third-order accuracy of the node-centered edge-based discretization for conservation laws with source terms on arbitrary simplex grids. A three-parameter family of source term quadrature formulas is derived, and as a subset, a one-parameter family of economical formulas is identified that does not require second derivatives of the source term. Among the economical formulas, a unique formula is then derived that does not require gradients of the source term at neighbor nodes, thus leading to a significantly smaller discretization stencil for source terms. All the formulas derived in this paper do not require a boundary closure, and therefore can be directly applied at boundary nodes. Numerical results are presented to demonstrate third-order accuracy at interior and boundary nodes for one-dimensional grids and linear triangular/tetrahedral grids over straight and curved geometries.

  1. High-Resolution 2D Lg and Pg Attenuation Models in the Basin and Range Region with Implications for Frequency-Dependent Q

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pyle, Moira L.; Walter, William R.; Pasyanos, Michael E.

    Here, we develop high-resolution, laterally varying attenuation models for the regional crustal phases of Pg and Lg in the area surrounding the Basin and Range Province in the western United States. The models are part of the characterization effort for the Source Physics Experiment (SPE), a series of chemical explosions at the Nevada National Security Site designed to improve our understanding of explosion source phenomenology. To aid in SPE modeling efforts, we focus on improving our ability to accurately predict amplitudes in a set of narrow frequency bands ranging from 0.5 to 16.0 Hz. To explore constraints at higher frequencies where data become more sparse, we test the robustness of the empirically observed power-law relationship between quality factor Q and frequency (Q = Q₀f^γ). Our methodology uses a staged approach to consider attenuation, physics-based source terms, site terms, and geometrical spreading contributions to amplitude measurements. Tomographic inversion results indicate that the frequency dependence is a reasonable assumption as attenuation varies laterally for this region through all frequency bands considered. Our 2D Pg and Lg attenuation models correlate with underlying physiographic provinces, with the highest Q located in the Sierra Nevada Mountains and the Colorado Plateau. Compared to a best-fitting 1D model for the region, the 2D model provides an 81% variance reduction overall for Lg residuals and a 75% reduction for Pg. These detailed attenuation maps at high frequencies will facilitate further study of local and regional distance P/S amplitude discriminants that are typically used to distinguish between earthquakes and underground explosions.
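
    The power law quoted above is straightforward to apply; the sketch below evaluates Q(f) = Q0*f**gamma and the corresponding path-attenuation factor exp(-pi*f*r/(Q(f)*v)) used in amplitude corrections (parameter values illustrative, not the paper's):

      import numpy as np

      def q_of_f(q0: float, gamma: float, freq):
          # Frequency-dependent quality factor Q(f) = Q0 * f**gamma.
          return q0 * np.asarray(freq) ** gamma

      # Attenuation term exp(-pi * f * r / (Q(f) * v)) for path length r (km)
      # and phase velocity v (km/s); all values illustrative.
      def path_attenuation(freq, r_km=200.0, v_km_s=3.5, q0=200.0, gamma=0.5):
          f = np.asarray(freq)
          return np.exp(-np.pi * f * r_km / (q_of_f(q0, gamma, f) * v_km_s))

      print(path_attenuation([0.5, 2.0, 8.0, 16.0]))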

  2. Linear Multivariable Regression Models for Prediction of Eddy Dissipation Rate from Available Meteorological Data

    NASA Technical Reports Server (NTRS)

    McKissick, Burnell T. (Technical Monitor); Plassman, Gerald E.; Mall, Gerald H.; Quagliano, John R.

    2005-01-01

    Linear multivariable regression models for predicting day and night Eddy Dissipation Rate (EDR) from available meteorological data sources are defined and validated. Model definition is based on a combination of 1997-2000 Dallas/Fort Worth (DFW) data sources, EDR from Aircraft Vortex Spacing System (AVOSS) deployment data, and regression variables primarily from corresponding Automated Surface Observation System (ASOS) data. Model validation is accomplished through EDR predictions on a similar combination of 1994-1995 Memphis (MEM) AVOSS and ASOS data. Model forms include an intercept plus a single term of fixed optimal power for each of these regression variables: 30-minute forward-averaged mean and variance of near-surface wind speed and temperature, variance of wind direction, and a discrete cloud cover metric. Distinct day and night models, regressing on EDR and the natural log of EDR respectively, yield the best performance and avoid model discontinuity over day/night data boundaries.
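
    The stated model form (an intercept plus a single fixed-power term per regressor, with night models fit to ln(EDR)) reduces to ordinary least squares once the powers are chosen. A sketch under those assumptions; the powers and toy data below are hypothetical:

      import numpy as np

      def fit_edr_model(X: np.ndarray, y: np.ndarray, powers):
          # Design matrix: intercept plus each regressor raised to its fixed power.
          cols = [np.ones(len(y))] + [X[:, i] ** p for i, p in enumerate(powers)]
          A = np.column_stack(cols)
          coef, *_ = np.linalg.lstsq(A, y, rcond=None)
          return coef

      # Toy regressors: wind-speed mean/variance and a cloud-cover metric.
      rng = np.random.default_rng(1)
      X = np.abs(rng.normal(5.0, 2.0, size=(500, 3)))
      edr = 1e-4 * X[:, 0] ** 1.5 + rng.normal(0.0, 1e-5, 500)
      coef_day = fit_edr_model(X, edr, powers=(1.5, 1.0, 0.5))            # day: EDR
      ln_edr = np.log(np.clip(edr, 1e-8, None))
      coef_night = fit_edr_model(X, ln_edr, powers=(1.5, 1.0, 0.5))       # night: ln(EDR)
      print(coef_day, coef_night)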

  3. Defense Coastal/Estuarine Research Program (DCERP) Baseline Monitoring Plan

    DTIC Science & Technology

    2007-09-19

    climatological stress (e.g., temperature, drought) and shorter-term air pollutant stress (oxidants and metals). Heavy metals of fine PM have been...speciation of the fine and coarse PM fractions will allow distinction between different PM sources such as wind-blown soil dust, including dust...emitting 12% of the total PM2.5 mass (U.S. EPA, 2004b). Source apportionment modeling of PM2.5 mass concentrations from 24 Speciation Defense Coastal

  4. Novel techniques for characterization of hydrocarbon emission sources in the Barnett Shale

    NASA Astrophysics Data System (ADS)

    Nathan, Brian Joseph

    Changes in ambient atmospheric hydrocarbon concentrations can have both short-term and long-term effects on the atmosphere and on human health. Thus, accurate characterization of emission sources is critically important. The recent boom in shale gas production has led to an increase in hydrocarbon emissions from associated processes, though the exact extent is uncertain. As an original quantification technique, a model airplane equipped with a specially designed, open-path methane sensor was flown multiple times over a natural gas compressor station in the Barnett Shale in October 2013. A linear optimization was introduced to a standard Gaussian plume model in an effort to determine the most probable emission rate coming from the station. This is shown to be a suitable approach given an ideal source with a single, central plume. Separately, an analysis was performed to characterize the nonmethane hydrocarbons in the Barnett during the same period. Starting with ambient hourly concentration measurements of forty-six hydrocarbon species, Lagrangian air parcel trajectories were implemented in a meteorological model to extend the resolution of these measurements and achieve domain-fillings of the region for the period of interest. A self-organizing map (a type of unsupervised classification) was then utilized to reduce the dimensionality of the total multivariate set of grids into characteristic one-dimensional signatures. By also introducing a self-organizing map classification of the contemporary wind measurements, the spatial hydrocarbon characterizations are analyzed for periods with similar wind conditions. The accuracy of the classification is verified through assessment of observed spatial mixing ratio enhancements of key species, through site comparisons with a related long-term study, and through a random forest analysis (an ensemble learning method of supervised classification) to determine the most important species for defining key classes. The hydrocarbon classification is shown to have performed very well in identifying expected signatures near and downwind of oil and gas facilities with active permits, which showcases this method's usefulness for future regional hydrocarbon source-apportionment analyses.
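
    Because plume concentrations scale linearly with the emission rate, the "most probable emission rate" given airborne samples has a closed-form least-squares solution. A sketch of that idea using the standard reflected Gaussian plume (a generic stand-in for the study's actual optimization; all values illustrative):

      import numpy as np

      def gaussian_plume(q, y, z, u, sig_y, sig_z, h):
          # Ground-reflected Gaussian plume: emission rate q (g/s), wind speed
          # u (m/s), crosswind/vertical spreads sig_y, sig_z (m), stack height h (m).
          return (q / (2 * np.pi * u * sig_y * sig_z)
                  * np.exp(-y**2 / (2 * sig_y**2))
                  * (np.exp(-(z - h)**2 / (2 * sig_z**2))
                     + np.exp(-(z + h)**2 / (2 * sig_z**2))))

      def fit_emission_rate(c_obs, g_unit):
          # Least squares for c_obs ~ q * g_unit, the unit-emission plume shape
          # evaluated at the aircraft sample locations.
          return float(g_unit @ c_obs / (g_unit @ g_unit))

      y = np.linspace(-60, 60, 25)                 # toy transect through the plume
      g = gaussian_plume(1.0, y, 40.0, 5.0, 30.0, 20.0, 10.0)
      print(fit_emission_rate(3.0 * g, g))         # recovers q = 3.0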

  5. Mathematical models of Neospora caninum infection in dairy cattle: transmission and options for control.

    PubMed

    French, N P; Clancy, D; Davison, H C; Trees, A J

    1999-10-01

    The transmission and control of Neospora caninum infection in dairy cattle were examined using deterministic and stochastic models. Parameter estimates were derived from recent studies conducted in the UK and from the published literature. Three routes of transmission were considered: maternal vertical transmission with a high probability (0.95), horizontal transmission from infected cattle within the herd, and horizontal transmission from an independent external source. Putative infection via pooled colostrum was used as an example of within-herd horizontal transmission, and the recent finding that the dog is a definitive host of N. caninum supported the inclusion of an external independent source of infection. The predicted amount of horizontal transmission required to maintain infection at levels commonly observed in field studies in the UK and elsewhere was consistent with that observed in studies of post-natal seroconversion (0.85-9.0 per 100 cow-years). A stochastic version of the model was used to simulate the spread of infection in herds of 100 cattle, with a mean infection prevalence similar to that observed in UK studies (around 20%). The distributions of infected and uninfected cattle corresponded closely to Normal distributions, with S.D.s of 6.3 and 7.0, respectively. Control measures were considered by altering birth, death and horizontal transmission parameters. A policy of annual culling of infected cattle very rapidly reduced the prevalence of infection, and was shown to be the most effective method of control in the short term. Not breeding replacements from infected cattle was also effective in the short term, particularly in herds with a higher turnover of cattle. However, the long-term effectiveness of these measures depended on the amount and source of horizontal infection. If the level of within-herd transmission was above a critical threshold, then a combination of reducing within-herd transmission and blocking external sources was required to eliminate infection permanently.
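
    The three transmission routes map onto a compact annual-step herd model; the deterministic sketch below is only a schematic of that structure (the 0.95 vertical-transmission probability is from the abstract, all other rates are hypothetical):

      # Annual-step herd model: S = susceptible cows, I = infected cows.
      # Routes: vertical (calves of infected dams infected with prob. p_vert),
      # within-herd horizontal (beta), and an external source (eps).
      def step(S, I, p_vert=0.95, beta=0.02, eps=0.01, cull=0.2):
          N = S + I
          replacements = cull * N                       # herd size held constant
          births_inf = replacements * (I / N) * p_vert  # vertical route
          births_sus = replacements - births_inf
          new_horiz = (beta * I / N + eps) * S          # within-herd + external
          S_next = S * (1 - cull) - new_horiz + births_sus
          I_next = I * (1 - cull) + new_horiz + births_inf
          return S_next, I_next

      S, I = 80.0, 20.0
      for _ in range(30):
          S, I = step(S, I)
      print(round(S, 1), round(I, 1))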

  6. Black Carbon and Sulfate Aerosols in the Arctic: Long-term Trends, Radiative Impacts, and Source Attributions

    NASA Astrophysics Data System (ADS)

    Wang, H.; Zhang, R.; Yang, Y.; Smith, S.; Rasch, P. J.

    2017-12-01

    The Arctic has warmed dramatically in recent decades. As important short-lived climate forcers, aerosols affect the Arctic radiative budget directly by interacting with radiation and indirectly by modifying clouds. Light-absorbing particles (e.g., black carbon) in snow/ice can reduce the surface albedo. The direct radiative impact of aerosols on the Arctic climate can be either warming or cooling, depending on their composition and location, which can further alter the poleward heat transport. Anthropogenic emissions, especially of BC and SO2, have changed drastically in low- and mid-latitude source regions in the past few decades. Arctic surface observations at some locations show that BC and sulfate aerosols had a decreasing trend in recent decades. In order to understand the impact of long-term emission changes on aerosols and their radiative effects, we use the Community Earth System Model (CESM) equipped with an explicit BC and sulfur source-tagging technique to quantify the source-receptor relationships and decadal trends of Arctic sulfate and BC and to identify variations in their atmospheric transport pathways from lower latitudes. The simulation was conducted for 36 years (1979-2014) with prescribed sea surface temperatures and sea ice concentrations. To minimize potential biases in modeled large-scale circulations, wind fields in the simulation are nudged toward an atmospheric reanalysis dataset, while atmospheric constituents including water vapor, clouds, and aerosols are allowed to evolve according to the model physics. Both anthropogenic and open fire emissions came from the newly released CMIP6 datasets, which show strong regional trends in BC and SO2 emissions during the simulation time period. Results show that emissions from East Asia and South Asia together have the largest contributions to Arctic sulfate and BC concentrations in the upper troposphere, which have an increasing trend. The strong decrease in emissions from Europe, Russia and North America contributed significantly to the overall decreasing trend in Arctic BC and sulfate, especially in the lower troposphere. The long-term changes in the spatial distributions of aerosols, their radiative impacts and source attributions, along with implications for the Arctic warming trend, will be discussed.

  7. Modelling the effects of trade-offs between long and short-term objectives in fisheries management.

    PubMed

    Mardle, Simon; Pascoe, Sean

    2002-05-01

    Fisheries management is typically a complex problem, from both an environmental and a political perspective. The main source of conflict occurs between the need for stock conservation and the need for fishing community well-being, which is typically measured by employment and income levels. For most fisheries, overexploitation of the stock requires a reduction in the level of fishing activity. While this may lead to long-term benefits (both conservation and economic), it also leads to a short-term reduction in employment and regional incomes. In regions that are heavily dependent on fisheries, the short-term consequences of conservation efforts may be considerable. The relatively high degree of scientific uncertainty with respect to the status of the stocks and the relatively short lengths of political terms of office generally give rise to the short-run view taking the highest priority when defining policy objectives. In this paper, a multi-objective model of the North Sea is developed that incorporates both long-term and short-term objectives. Optimal fleet sizes are estimated taking into consideration different preferences between the defined short-term and long-term objectives. The subsequent results from the model give the short-term and long-term equilibrium status of the fishery incorporating the effects of the short-term objectives. As would be expected, an optimal fleet from a short-term perspective is considerably larger than an optimal fleet from a long-run perspective. Conversely, stock sizes and sustainable yields are considerably lower in the long term if a short-term perspective is used in setting management policies. The model results highlight what is essentially a principal-agent problem, with the objectives of the policy makers not necessarily reflecting the objectives of society as a whole.
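
    The short- versus long-term trade-off can be illustrated with even the simplest surplus-production (Schaefer) model, where cumulative profit over a short horizon keeps rising with fleet size well past the level that maximizes long-run profit. A toy sketch (standard model form; parameter values hypothetical, not the North Sea model's):

      # Schaefer model: dX/dt = r*X*(1 - X/K) - q*E*X; profit = p*q*E*X - c*E.
      def simulate(E, years, r=0.5, K=1000.0, q=0.01, p=10.0, c=5.0, X0=500.0):
          X, profit = X0, 0.0
          for _ in range(years):
              X = max(X + r * X * (1 - X / K) - q * E * X, 0.0)
              profit += p * q * E * X - c * E
          return profit

      for E in (20, 40, 60, 80):   # fleet size (effort units)
          # cumulative profit: 3-year (short-term) vs 50-year (long-term) horizon
          print(E, round(simulate(E, 3)), round(simulate(E, 50)))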

  8. On the scale dependence of earthquake stress drop

    NASA Astrophysics Data System (ADS)

    Cocco, Massimo; Tinti, Elisa; Cirella, Antonella

    2016-10-01

    We discuss the debated issue of scale dependence in earthquake source mechanics with the goal of providing supporting evidence to foster the adoption of a coherent interpretative framework. We examine the heterogeneous distribution of source and constitutive parameters during individual ruptures and their scaling with earthquake size. We discuss evidence that slip, slip-weakening distance and breakdown work scale with seismic moment and are interpreted as scale-dependent parameters. We integrate our estimates of earthquake stress drop, computed through a pseudo-dynamic approach, with many others available in the literature for both point sources and finite fault models. We obtain a picture of the earthquake stress drop scaling with seismic moment over an exceptionally broad range of earthquake sizes (-8 < Mw < 9). Our results confirm that stress drop values are scattered over three orders of magnitude and emphasize the lack of corroborating evidence that stress drop scales with seismic moment. We discuss these results in terms of scale invariance of stress drop with source dimension to analyse the interpretation of this outcome in terms of self-similarity. Geophysicists are presently unable to provide physical explanations of dynamic self-similarity relying on deterministic descriptions of micro-scale processes. We conclude that the interpretation of the self-similar behaviour of stress drop scaling is strongly model dependent. We emphasize that it relies on a geometric description of source heterogeneity through the statistical properties of initial stress or fault-surface topography, in which only the latter is constrained by observations.
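
    As a quantitative anchor for this discussion: the textbook circular-crack relation gives stress drop from moment and source radius, and scale invariance follows whenever the radius grows as the cube root of the moment. A sketch using that standard relation (the scaling constant below is hypothetical):

      # Stress drop for a circular crack: delta_sigma = (7/16) * M0 / a**3,
      # with seismic moment M0 (N*m) and source radius a (m).
      def stress_drop(m0: float, radius: float) -> float:
          return 7.0 / 16.0 * m0 / radius**3

      # Self-similarity: if a ~ M0**(1/3), stress drop is scale invariant.
      for m0 in (1e13, 1e16, 1e19):            # roughly Mw 2.6, 4.6, 6.6
          a = (m0 / 1e6) ** (1.0 / 3.0)        # hypothetical scaling constant
          print(f"M0={m0:.0e}  a={a:,.0f} m  stress drop={stress_drop(m0, a)/1e6:.2f} MPa")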

  9. River Export of Plastic from Land to Sea: A Global Modeling Approach

    NASA Astrophysics Data System (ADS)

    Siegfried, Max; Gabbert, Silke; Koelmans, Albert A.; Kroeze, Carolien; Löhr, Ansje; Verburg, Charlotte

    2016-04-01

    Plastic is increasingly considered a serious cause of water pollution. It is a threat to aquatic ecosystems, including rivers, coastal waters and oceans. Rivers transport considerable amounts of plastic from land to sea. The quantity and its main sources, however, are not well known. Assessing the amount of macro- and microplastic transport from river to sea is, therefore, important for understanding the dimension and the patterns of plastic pollution of aquatic ecosystems. In addition, it is crucial for assessing short- and long-term impacts caused by plastic pollution. Here we present a global modelling approach to quantify river export of plastic from land to sea. Our approach accounts for different types of plastic, including both macro- and microplastics. Moreover, we distinguish point sources and diffuse sources of plastic in rivers. Our modelling approach is inspired by global nutrient models, which include more than 6000 river basins. In this paper, we present our modelling approach, as well as first model results for microplastic pollution in European rivers. Important sources of microplastics include personal care products, laundry, household dust and car tyre wear. We combine information on these sources with information on sewage management and plastic retention during river transport for the largest European rivers. Our modelling approach may help to better understand and prevent water pollution by plastic and, at the same time, serves as a 'proof of concept' for future application at the global scale.
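
    The bookkeeping implied above (per-capita source emissions, sewage removal, in-river retention) amounts to a per-basin mass balance. A schematic sketch; every number and field name below is hypothetical, not taken from the model itself:

      # Per-basin microplastic export (t/yr): per-capita source emissions,
      # reduced by sewage-treatment removal and by in-river retention.
      SOURCES_KG_PER_CAP = {"personal_care": 0.01, "laundry": 0.12,
                            "household_dust": 0.08, "tyre_wear": 0.9}  # hypothetical

      def river_export_tonnes(population, treated_frac, removal_eff, retention_frac):
          emitted = population * sum(SOURCES_KG_PER_CAP.values()) / 1000.0  # t/yr
          after_sewage = emitted * (1 - treated_frac * removal_eff)
          return after_sewage * (1 - retention_frac)

      # Example basin: 10 M people, 80% connected to treatment removing 90%,
      # half of the remaining load retained before reaching the river mouth.
      print(round(river_export_tonnes(10e6, 0.80, 0.90, 0.50), 1))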

  10. A DCM study of spectral asymmetries in feedforward and feedback connections between visual areas V1 and V4 in the monkey.

    PubMed

    Bastos, A M; Litvak, V; Moran, R; Bosman, C A; Fries, P; Friston, K J

    2015-03-01

    This paper reports a dynamic causal modeling study of electrocorticographic (ECoG) data that addresses functional asymmetries between forward and backward connections in the visual cortical hierarchy. Specifically, we ask whether forward connections employ gamma-band frequencies, while backward connections preferentially use lower (beta-band) frequencies. We addressed this question by modeling empirical cross spectra using a neural mass model equipped with superficial and deep pyramidal cell populations, which model the sources of forward and backward connections, respectively. This enabled us to reconstruct the transfer functions and associated spectra of specific subpopulations within cortical sources. We first established that Bayesian model comparison was able to discriminate between forward and backward connections, defined in terms of their cells of origin. We then confirmed that model selection was able to identify extrastriate (V4) sources as being hierarchically higher than early visual (V1) sources. Finally, an examination of the auto spectra and transfer functions associated with superficial and deep pyramidal cells confirmed that forward connections employed predominantly higher (gamma) frequencies, while backward connections were mediated by lower (alpha/beta) frequencies. We discuss these findings in relation to current views about alpha, beta, and gamma oscillations and predictive coding in the brain.

  11. Path spectra derived from inversion of source and site spectra for earthquakes in Southern California

    NASA Astrophysics Data System (ADS)

    Klimasewski, A.; Sahakian, V. J.; Baltay, A.; Boatwright, J.; Fletcher, J. B.; Baker, L. M.

    2017-12-01

    A large source of epistemic uncertainty in Ground Motion Prediction Equations (GMPEs) derives from the path term, currently represented as a simple geometric spreading and intrinsic attenuation term. Including additional physical relationships between the path properties and predicted ground motions would produce more accurate and precise, region-specific GMPEs by reclassifying some of the random, aleatory uncertainty as epistemic. This study focuses on regions of Southern California, using data from the Anza network and the Southern California Seismic Network to create a catalog of events of magnitude 2.5 and larger from 1998 to 2016. The catalog encompasses regions of varying geology and therefore varying path and site attenuation. Within this catalog of events, we investigate several collections of event region-to-station pairs, each of which share similar origin locations and stations so that all events have similar paths. Compared with a simple regional GMPE, these paths consistently have high or low residuals. By working with events that have the same path, we can isolate source and site effects, and focus on the remaining residual as path effects. We decompose the recordings into source and site spectra for each unique event and site in our greater Southern California regional database using the inversion method of Andrews (1986). This model represents each natural log record spectrum as the sum of its natural log event and site spectra, while constraining each record to a reference site or Brune source spectrum. We estimate a regional, path-specific anelastic attenuation (Q) and site attenuation (t*) from the inversion site spectra, and corner frequency from the inversion event spectra. We then compute the residuals between the observed record data and the inversion model prediction (event*site spectra). This residual is representative of path effects, likely anelastic attenuation along the path that varies from the regional median attenuation. We examine the residuals for our different sets independently to see how path terms differ between event-to-station collections. The path-specific information gained from this can inform the development of path terms for regional GMPEs, through understanding of these seismological phenomena.
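
    The Andrews (1986)-style decomposition described here is, per frequency band, a linear least-squares problem: each log record spectrum is the sum of an event term and a site term, with a reference-site constraint removing the inherent trade-off. A minimal sketch of that system (structure only; names hypothetical):

      import numpy as np

      def decompose(log_amp, ev_idx, st_idx, n_ev, n_st, ref_site=0):
          # log_amp[k] = event[ev_idx[k]] + site[st_idx[k]]; site[ref_site] pinned to 0.
          log_amp = np.asarray(log_amp, float)
          ev_idx, st_idx = np.asarray(ev_idx), np.asarray(st_idx)
          A = np.zeros((len(log_amp) + 1, n_ev + n_st))
          A[np.arange(len(log_amp)), ev_idx] = 1.0
          A[np.arange(len(log_amp)), n_ev + st_idx] = 1.0
          A[-1, n_ev + ref_site] = 1.0                  # reference-site constraint row
          b = np.append(log_amp, 0.0)
          x, *_ = np.linalg.lstsq(A, b, rcond=None)
          events, sites = x[:n_ev], x[n_ev:]
          path_resid = log_amp - events[ev_idx] - sites[st_idx]  # leftover path effect
          return events, sites, path_resid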

  12. The long-term intensity behavior of Centaurus X-3

    NASA Technical Reports Server (NTRS)

    Schreier, E. J.; Swartz, K.; Giacconi, R.; Fabbiano, G.; Morin, J.

    1976-01-01

    In three years of observation, the X-ray source Cen X-3 appears to alternate between 'high states', with an intensity of 150 counts/s (2-6 keV) or greater, and 'low states', where the source is barely detectable. The time scale of this behavior is of the order of months, and no apparent periodicity has been observed. Analysis of two transitions between these states is reported. During two weeks in July 1972, the source increased from about 20 counts/s to 150 counts/s. The detailed nature of this turn-on is interpreted in terms of a model in which the supergiant's stellar wind decreases in density. A second transition, a turnoff in February 1973, is similarly analyzed and found to be consistent with a simple decrease in accretion rate. The presence of absorption dips during transitions at orbital phases 0.4-0.5 as well as at phase 0.75 is discussed. The data are consistent with a stellar-wind accretion model and with different kinds of extended lows caused by increased wind density masking the X-ray emission or by decreased wind density lowering the accretion rate.

  13. Short-term variability of mineral dust, metals and carbon emission from road dust resuspension

    NASA Astrophysics Data System (ADS)

    Amato, Fulvio; Schaap, Martijn; Denier van der Gon, Hugo A. C.; Pandolfi, Marco; Alastuey, Andrés; Keuken, Menno; Querol, Xavier

    2013-08-01

    Particulate matter (PM) pollution in cities has a severe impact on the morbidity and mortality of their populations. In these cities, road dust resuspension contributes substantially to PM and airborne heavy metal concentrations. However, the short-term variation of emission through resuspension is not well described in air quality models, hampering a reliable description of air pollution and related health effects. In this study we experimentally show that the emission strength of resuspension varies widely among road dust components/sources. Our results offer the first experimental evidence of different emission rates for mineral dust, heavy metals and carbon fractions due to traffic-induced resuspension. Also, the same component (or source) recovers differently on a road in Barcelona (Spain) and on a road in Utrecht (The Netherlands). This finding has important implications for atmospheric pollution modelling, mostly for mineral dust, heavy metals and carbon species. After rain events, recoveries were generally faster in Barcelona than in Utrecht. The largest difference was found for the mineral dust (Al, Si, Ca). Tyre wear particles (organic carbon and zinc) recovered faster than other road dust particles in both cities. The source apportionment of road dust mass provides useful information for air quality management.

  14. On volume-source representations based on the representation theorem

    NASA Astrophysics Data System (ADS)

    Ichihara, Mie; Kusakabe, Tetsuya; Kame, Nobuki; Kumagai, Hiroyuki

    2016-01-01

    We discuss different ways to characterize a moment tensor associated with an actual volume change of ΔV_C, which has been represented in terms of either the stress glut or the corresponding stress-free volume change ΔV_T. Eshelby's virtual operation provides a conceptual model relating ΔV_C to ΔV_T and the stress glut, where non-elastic processes such as phase transitions allow ΔV_T to be introduced and subsequent elastic deformation of -ΔV_T is assumed to produce the stress glut. While it is true that ΔV_T correctly represents the moment tensor of an actual volume source with volume change ΔV_C, an explanation as to why such an operation relating ΔV_C to ΔV_T exists has not previously been given. This study presents a comprehensive explanation of the relationship between ΔV_C and ΔV_T based on the representation theorem. The displacement field is represented using Green's function, which consists of two integrals over the source surface: one for displacement and the other for traction. Both integrals are necessary for representing volumetric sources, whereas the representation of seismic faults includes only the first term, as the second integral over the two adjacent fault surfaces, across which the traction balances, always vanishes. Therefore, in a seismological framework, the contribution from the second term should be included as an additional surface displacement. We show that the seismic moment tensor of a volume source is directly obtained from the actual state of the displacement and stress at the source without considering any virtual non-elastic operations. A purely mathematical procedure based on the representation theorem enables us to specify the additional imaginary displacement necessary for representing a volume source only by the displacement term, which links ΔV_C to ΔV_T. It also specifies the additional imaginary stress necessary for representing a moment tensor solely by the traction term, which gives the "stress glut." The imaginary displacement-stress approach clarifies the mathematical background to the classical theory.

  15. Source apportionment of speciated PM2.5 and non-parametric regressions of PM2.5 and PM(coarse) mass concentrations from Denver and Greeley, Colorado, and construction and evaluation of dichotomous filter samplers

    NASA Astrophysics Data System (ADS)

    Piedrahita, Ricardo A.

    The Denver Aerosol Sources and Health study (DASH) was a long-term study of the relationship between the variability in fine particulate mass and chemical constituents (PM2.5, particulate matter less than 2.5 μm) and adverse health effects such as cardio-respiratory illnesses and mortality. Daily filter samples were chemically analyzed for multiple species. We present findings based on 2.8 years of DASH data, from 2003 to 2005. Multilinear Engine 2 (ME-2), a receptor-based source apportionment model, was applied to the data to estimate source contributions to PM2.5 mass concentrations. This study relied on two different ME-2 models: (1) a 2-way model that closely reflects PMF-2; and (2) an enhanced model with meteorological data that used additional temporal and meteorological factors. The Coarse Rural Urban Sources and Health study (CRUSH) is a long-term study of the relationship between the variability in coarse particulate mass (PMcoarse, particulate matter between 2.5 and 10 μm) and adverse health effects such as cardio-respiratory illnesses, pre-term births, and mortality. Hourly mass concentrations of PMcoarse and fine particulate matter (PM2.5) are measured using tapered element oscillating microbalances (TEOMs) with Filter Dynamics Measurement Systems (FDMS), at two rural and two urban sites. We present findings based on nine months of mass concentration data, including temporal trends and non-parametric regression (NPR) results, which were used to characterize the wind speed and wind direction relationships that might point to sources. As part of CRUSH, a 1-year coarse and fine mode particulate matter filter sampling network will allow us to characterize the chemical composition of the particulate matter collected and perform spatial comparisons. This work describes the construction and validation testing of four dichotomous filter samplers for this purpose. The use of dichotomous splitters with an approximate 2.5 μm cut point, coupled with a 10 μm cut diameter inlet head, allows us to collect the separated size fractions that the collocated TEOMs collect continuously. Chemical analysis of the filters will include inorganic ions, organic compounds, EC, OC, and biological analyses. Side-by-side testing showed the cut diameters were in agreement with each other, and with a well-characterized virtual impactor lent to the group by the University of Southern California. Error propagation was performed and uncertainty results were similar to the observed standard deviations.

  16. Estimation of the time-dependent radioactive source-term from the Fukushima nuclear power plant accident using atmospheric transport modelling

    NASA Astrophysics Data System (ADS)

    Schoeppner, M.; Plastino, W.; Budano, A.; De Vincenzi, M.; Ruggieri, F.

    2012-04-01

    Several nuclear reactors at the Fukushima Dai-ichi power plant were severely damaged by the Tōhoku earthquake and the subsequent tsunami in March 2011. Due to the extremely difficult on-site situation, it has not been possible to directly determine the emissions of radioactive material. However, during the following days and weeks, radionuclides including caesium-137 and iodine-131 were detected at monitoring stations throughout the world. Atmospheric transport models are able to simulate the worldwide dispersion of particles according to the location, time and meteorological conditions of the release. The Lagrangian atmospheric transport model Flexpart is used by many authorities and has been proven to make valid predictions in this regard. The Flexpart software was first ported to a local cluster computer at the Grid Lab of INFN and the Department of Physics of University of Roma Tre (Rome, Italy) and subsequently also to the European Mediterranean Grid (EUMEDGRID). With this computing power available, it has been possible to simulate the transport of particles originating from the Fukushima Dai-ichi plant site. Using the time series of the sampled concentration data and the assumption that the Fukushima accident was the only source of these radionuclides, it has been possible to estimate the time-dependent source-term for the fourteen days following the accident using the atmospheric transport model. A reasonable agreement has been obtained between the modelling results and the estimated radionuclide release rates from the Fukushima accident.
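
    The inversion relies on the linearity of atmospheric transport: one FLEXPART run per release interval yields a source-receptor matrix, after which the time-dependent release can be fit to the measured concentrations. The abstract does not state the exact estimation method, so the non-negative least-squares step below is an assumption (toy data; names hypothetical):

      import numpy as np
      from scipy.optimize import nnls

      # srm[i, j]: modeled concentration at sample i per unit release in interval j
      # (one transport-model run per release interval); c_obs: measurements.
      def estimate_release(srm: np.ndarray, c_obs: np.ndarray) -> np.ndarray:
          rates, _ = nnls(srm, c_obs)    # enforce non-negative release rates
          return rates

      rng = np.random.default_rng(2)
      true_rates = np.array([5.0, 3.0, 0.5, 0.1])   # Bq/s per interval (toy)
      srm = rng.random((40, 4))                     # toy source-receptor matrix
      c_obs = srm @ true_rates + rng.normal(0, 0.05, 40)
      print(estimate_release(srm, c_obs).round(2))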

  17. On the Development of Spray Submodels Based on Droplet Size Moments

    NASA Astrophysics Data System (ADS)

    Beck, J. C.; Watkins, A. P.

    2002-11-01

    Hitherto, all polydisperse spray models have been based on discretising the liquid flow field into groups of equally sized droplets. The authors have recently developed a spray model that captures the full polydisperse nature of the spray flow without using droplet size classes (Beck, 2000, Ph.D. thesis, UMIST; Beck and Watkins, 2001, Proc. R. Soc. London A). The parameters used to describe the distribution of droplet sizes are the moments of the droplet size distribution function. Transport equations are written for the two moments which represent the liquid mass and surface area, and two more moments, representing the sum of drop radii and the droplet number, are approximated via use of a presumed distribution function, which is allowed to vary in space and time. The velocities to be used in the two transport equations are obtained by defining moment-average quantities and constructing further transport equations for the relevant moment-average velocities. An equation for the energy of the liquid phase and standard gas phase equations, including a k-ɛ turbulence model, are also solved. All the equations are solved in an Eulerian framework using the finite-volume approach, and the phases are coupled through source terms. Effects such as interphase drag, droplet breakup, and droplet-droplet collisions are also captured through the use of source terms. The development of the submodels to describe these effects is the subject of this paper. All the source terms for the hydrodynamics of the spray are derived in this paper in terms of the four moments of the droplet size distribution in order to find the net effect on the whole spray flow field. The development of similar submodels to describe heat and mass transfer effects between the phases is the subject of a further paper (Beck and Watkins, 2001, J. Heat Fluid Flow). The model has been applied to a wide variety of different sprays, including high-pressure diesel sprays, wide-angle solid-cone water sprays, hollow-cone sprays, and evaporating sprays. The comparisons of the results with experimental data show that the model performs well. The interphase drag model, along with the model for the turbulent dispersion of the liquid, produces excellent agreement in the spray penetration results, and the moment-average velocity approach gives good radial distributions of droplet size, showing the capability of the model to predict polydisperse behaviour. Good submodel performance results in droplet breakup, collisions, and evaporation effects (see Beck and Watkins, 2001, J. Heat Fluid Flow) also being captured successfully.
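
    The presumed-distribution closure described above can be made concrete with a gamma number distribution, whose radial moments have closed form: two transported moments plus the presumed shape recover the remaining two. A sketch under that assumption (not necessarily the distribution used by the authors):

      from math import gamma as G

      # Gamma number distribution n(r) with shape k and scale theta:
      # moment M_j = integral r**j n(r) dr = N * theta**j * G(k + j) / G(k).
      def moment(N, k, theta, j):
          return N * theta**j * G(k + j) / G(k)

      # Given transported M2 (surface-area-like) and M3 (mass-like) with presumed
      # shape k, recover theta and N, then reconstruct the lower moments M0, M1.
      def reconstruct(m2, m3, k):
          theta = (m3 / m2) / (k + 2)          # since M3/M2 = theta * (k + 2)
          N = m2 / (theta**2 * G(k + 2) / G(k))
          return N, theta, moment(N, k, theta, 0), moment(N, k, theta, 1)

      N0, k0, th0 = 1e6, 3.0, 10e-6            # illustrative spray parameters
      print(reconstruct(moment(N0, k0, th0, 2), moment(N0, k0, th0, 3), k0))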

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Taylor, J.A.; Brasseur, G.P.; Zimmerman, P.R.

    Using the hydroxyl radical field calibrated to the methyl chloroform observations, the globally averaged release of methane and its spatial and temporal distribution were investigated. Two source function models of the spatial and temporal distribution of the flux of methane to the atmosphere were developed. The first model (SF1) was based on the assumption that methane is emitted as a proportion of net primary productivity (NPP). With the average hydroxyl radical concentration fixed, the methane source term was computed as ~623 Tg CH₄, giving an atmospheric lifetime for methane of ~8.3 years. The second model (SF2) identified source regions for methane from rice paddies, wetlands, enteric fermentation, termites, and biomass burning based on high-resolution land use data. This methane source distribution resulted in an estimate of the global total methane source of ~611 Tg CH₄, giving an atmospheric lifetime for methane of ~8.5 years. The most significant difference between the two models was in the predicted methane fluxes over China and South East Asia, the location of most of the world's rice paddies. Using a recent measurement of the reaction rate of hydroxyl radical and methane leads to estimates of the global total methane source for SF1 of ~524 Tg CH₄, giving an atmospheric lifetime of ~10.0 years, and for SF2 of ~514 Tg CH₄, yielding a lifetime of ~10.2 years.
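
    The quoted lifetimes are steady-state budget arithmetic, lifetime = burden/sink with sink approximately equal to source. The sketch below reproduces them approximately; the global burden value is an illustrative assumption, not from the record:

      # Steady-state budget: tau = burden / sink, with sink ~ source.
      def lifetime_years(burden_tg: float, source_tg_per_yr: float) -> float:
          return burden_tg / source_tg_per_yr

      burden = 5170.0   # Tg CH4, illustrative burden chosen to match the lifetimes
      for source in (623.0, 611.0, 524.0, 514.0):   # source terms quoted above
          print(source, round(lifetime_years(burden, source), 1))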

  1. Prediction of Down-Gradient Impacts of DNAPL Source Depletion Using Tracer Techniques

    NASA Astrophysics Data System (ADS)

    Basu, N. B.; Fure, A. D.; Jawitz, J. W.

    2006-12-01

    Four simplified DNAPL source depletion models that have been discussed in the literature recently are evaluated for the prediction of long-term effects of source depletion under natural gradient flow. These models are simple in form (a power function equation is an example) but are shown here to serve as mathematical analogs to complex multiphase flow and transport simulators. One of the source depletion models, the equilibrium streamtube model, is shown to be relatively easily parameterized using non-reactive and reactive tracers. Non-reactive tracers are used to characterize the aquifer heterogeneity while reactive tracers are used to describe the mean DNAPL mass and its distribution. This information is then used in a Lagrangian framework to predict source remediation performance. In a Lagrangian approach the source zone is conceptualized as a collection of non-interacting streamtubes with hydrodynamic and DNAPL heterogeneity represented by the variation of the travel time and DNAPL saturation among the streamtubes. The travel time statistics are estimated from the non-reactive tracer data while the DNAPL distribution statistics are estimated from the reactive tracer data. The combined statistics are used to define an analytical solution for contaminant dissolution under natural gradient flow. The tracer prediction technique compared favorably with results from a multiphase flow and transport simulator UTCHEM in domains with different hydrodynamic heterogeneity (variance of the log conductivity field = 0.2, 1 and 3).
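
    The "power function equation" mentioned as an example is commonly written C/C0 = (M/M0)^Γ, which closes the source mass balance as a simple ODE for depletion under natural gradient flow. A sketch under that assumption (Γ and the other values are illustrative):

      import numpy as np

      # Power-law source depletion: C(t)/C0 = (M(t)/M0)**gamma_exp, with mass
      # balance dM/dt = -Q * C(t), Q = water flux through the source zone.
      def deplete(M0, C0, Q, gamma_exp, dt, n_steps):
          M = np.empty(n_steps); M[0] = M0
          for i in range(1, n_steps):
              C = C0 * (M[i-1] / M0) ** gamma_exp
              M[i] = max(M[i-1] - Q * C * dt, 0.0)
          return M

      mass = deplete(M0=100.0, C0=0.05, Q=10.0, gamma_exp=1.5, dt=1.0, n_steps=50)
      print(mass[::10].round(2))   # gamma_exp > 1 produces long tailing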

  2. Implementation of the Leaching Environmental Assessment Framework

    EPA Science Inventory

    New leaching tests are available in the U.S. for developing more accurate source terms for use in fate and transport models. For beneficial use or disposal, the use of the leaching environmental assessment framework (LEAF) will provide leaching results that reflect field conditions...

  3. Toward a Redefinition of Sex and Gender.

    ERIC Educational Resources Information Center

    Unger, Rhoda Kesler

    1979-01-01

    Present psychological terminology facilitates biologically determinist models of sex differences, making it less likely that environmental sources of such differences will be explored. The term "gender," rather than "sex," should be used for those characteristics socioculturally considered appropriate to males and females.…

  4. Next Generation Air Measurements for Fugitive, Area Source, and Fence Line Applications

    EPA Science Inventory

    Next generation air measurements (NGAM) is an EPA term for the advancing field of air pollutant sensor technologies, data integration concepts, and geospatial modeling strategies. Ranging from personal sensors to satellite remote sensing, NGAM systems may provide revolutionary n...

  5. An alternative approach to probabilistic seismic hazard analysis in the Aegean region using Monte Carlo simulation

    NASA Astrophysics Data System (ADS)

    Weatherill, Graeme; Burton, Paul W.

    2010-09-01

    The Aegean is the most seismically active and tectonically complex region in Europe. Damaging earthquakes have occurred here throughout recorded history, often resulting in considerable loss of life. The Monte Carlo method of probabilistic seismic hazard analysis (PSHA) is used to determine the level of ground motion likely to be exceeded in a given time period. Multiple random simulations of seismicity are generated to calculate, directly, the ground motion for a given site. Within the seismic hazard analysis we explore the impact of different seismic source models, incorporating both uniform zones and distributed seismicity. A new, simplified, seismic source model, derived from seismotectonic interpretation, is presented for the Aegean region. This is combined into the epistemic uncertainty analysis alongside existing source models for the region, and models derived by a K-means cluster analysis approach. Seismic source models derived using the K-means approach offer a degree of objectivity and reproducibility into the otherwise subjective approach of delineating seismic sources using expert judgment. Similar review and analysis is undertaken for the selection of peak ground acceleration (PGA) attenuation models, incorporating into the epistemic analysis Greek-specific models, European models and a Next Generation Attenuation model. Hazard maps for PGA on a "rock" site with a 10% probability of being exceeded in 50 years are produced and different source and attenuation models are compared. These indicate that Greek-specific attenuation models, with their smaller aleatory variability terms, produce lower PGA hazard, whilst recent European models and the Next Generation Attenuation (NGA) model produce similar results. The Monte Carlo method is extended further to assimilate epistemic uncertainty into the hazard calculation, thus integrating across several appropriate source and PGA attenuation models. Site condition and fault type are also integrated into the hazard mapping calculations. These hazard maps are in general agreement with previous maps for the Aegean, recognising the highest hazard in the Ionian Islands, Gulf of Corinth and Hellenic Arc. Peak ground accelerations for some sites in these regions reach as high as 500-600 cm s⁻² using European/NGA attenuation models, and 400-500 cm s⁻² using Greek attenuation models.
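
    The Monte Carlo PSHA recipe described, simulating many catalogs and reading hazard off the empirical distribution of ground motion, condenses to a few steps. The sketch below uses a toy truncated Gutenberg-Richter source and a toy attenuation relation in place of the paper's models:

      import numpy as np

      rng = np.random.default_rng(3)

      def annual_max_pga(n_years, rate=20.0, b=1.0, mmin=4.0, mmax=7.5):
          maxima = np.zeros(n_years)
          for y in range(n_years):
              n = rng.poisson(rate)
              # Truncated Gutenberg-Richter magnitudes via inverse transform.
              u = rng.random(n)
              m = mmin - np.log10(1 - u * (1 - 10**(-b * (mmax - mmin)))) / b
              r = rng.uniform(10.0, 150.0, n)   # toy source-site distances, km
              ln_pga = -1.0 + 1.2*m - 1.3*np.log(r) + rng.normal(0, 0.6, n)  # toy GMPE
              maxima[y] = np.exp(ln_pga).max() if n else 0.0
          return maxima

      amax = annual_max_pga(20_000)
      # 10% in 50 years ~ 475-yr return period: the (1 - 1/475) annual quantile.
      print(np.quantile(amax, 1 - 1/475.0))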

  6. Feature extraction applied to agricultural crops as seen by LANDSAT

    NASA Technical Reports Server (NTRS)

    Kauth, R. J.; Lambeck, P. F.; Richardson, W.; Thomas, G. S.; Pentland, A. P. (Principal Investigator)

    1979-01-01

    The physical interpretation of the spectral-temporal structure of LANDSAT data can be conveniently described in terms of a graphic descriptive model called the Tasseled Cap. This model has been a source of development not only in crop-related feature extraction, but also for data screening and for haze effects correction. Following its qualitative description and an indication of its applications, the model is used to analyze several feature extraction algorithms.
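
    In practice the Tasseled Cap is a fixed linear transformation of the four MSS bands onto interpretable axes (brightness along the soil line, greenness toward vegetation). A minimal sketch follows; the coefficients are approximate Kauth-Thomas-style values and should be treated as illustrative, not authoritative.

        import numpy as np

        # Approximate Kauth-Thomas-style coefficients for four MSS bands
        # (illustrative values only, not an authoritative source).
        TASSELED_CAP = np.array([
            [ 0.433,  0.632,  0.586,  0.264],   # brightness (soil line)
            [-0.290, -0.562,  0.600,  0.491],   # greenness (vegetation)
        ])

        def tasseled_cap(mss_pixels):
            """Project (n, 4) MSS digital counts onto the two axes."""
            return mss_pixels @ TASSELED_CAP.T

        pixels = np.array([[30.0, 25.0, 60.0, 55.0],    # vegetated pixel
                           [40.0, 45.0, 50.0, 35.0]])   # bare-soil pixel
        print(tasseled_cap(pixels))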

  7. An Evaluation of the Hazard Prediction and Assessment Capability (HPAC) Software’s Ability to Model the Chornobyl Accident

    DTIC Science & Technology

    2002-03-01

    source term. Several publications provided a thorough accounting of the accident, including “Chernobyl Record” [Mould] and the NRC technical report “Report on the Accident at the Chernobyl Nuclear Power Station” [NUREG-1250]. The most comprehensive study of transport models to predict the...from the Chernobyl Accident: The ATMES Report” [Klug, et al.]. The Atmospheric Transport Model Evaluation Study (ATMES) report used data

  8. A kinetic model for the transport of electrons in a graphene layer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fermanian Kammerer, Clotilde, E-mail: Clotilde.Fermanian@u-pec.fr; Méhats, Florian, E-mail: florian.mehats@univ-rennes1.fr

    In this article, we propose a new numerical scheme for the computation of the transport of electrons in a graphene device. The underlying quantum model for graphene is a massless Dirac equation, whose eigenvalues display a conical singularity responsible for non-adiabatic transitions between the two modes. We first derive a kinetic model which takes the form of two Boltzmann equations coupled by a collision operator modeling the non-adiabatic transitions. This collision term includes a Landau–Zener transfer term and a jump operator whose presence is essential in order to ensure good energy conservation during the transitions. We propose an algorithmic realization of the semi-group solving the kinetic model, by a particle method. We give analytic justification of the model and propose a series of numerical experiments studying the influences of the various sources of errors between the quantum and the kinetic models.

  9. Simulation results for a multirate mass transfer model for immiscible displacement of two fluids in highly heterogeneous porous media

    NASA Astrophysics Data System (ADS)

    Tecklenburg, Jan; Neuweiler, Insa; Dentz, Marco; Carrera, Jesus; Geiger, Sebastian

    2013-04-01

    Flow processes in geotechnical applications often take place in highly heterogeneous porous media, such as fractured rock. Since classical modelling approaches are problematic in this type of media, flow and transport are often modelled using multi-continua approaches. From such approaches, multirate mass transfer (MRMT) models can be derived to describe the flow and transport in the "fast" or mobile zone of the medium. The porous medium is then modelled with one mobile zone and multiple immobile zones, where the immobile zones are connected to the mobile zone by single-rate mass transfer. We proceed from an MRMT model for immiscible displacement of two fluids, where the Buckley-Leverett equation is expanded by a sink-source term which is nonlocal in time. This sink-source term models exchange with an immobile zone, with mass transfer driven by capillary diffusion. This nonlinear diffusive mass transfer can be approximated for particular imbibition or drainage cases by a linear process. We present a numerical scheme for this model together with simulation results for a single-fracture test case. We solve the MRMT model with the finite volume method and explicit time integration. The sink-source term is transformed into multiple single-rate mass transfer processes, as shown by Carrera et al. (1998), to make it local in time. With numerical simulations we studied immiscible displacement in a single-fracture test case. To do this we calculated the flow parameters using information about the geometry and the integral solution for two-phase flow by McWhorter and Sunada (1990). Comparison to the results of the full two-dimensional two-phase flow model by Flemisch et al. (2011) shows good agreement of the saturation breakthrough curves. Carrera, J., Sanchez-Vila, X., Benet, I., Medina, A., Galarza, G., and Guimera, J.: On matrix diffusion: formulations, solution methods and qualitative effects, Hydrogeology Journal, 6, 178-190, 1998. Flemisch, B., Darcis, M., Erbertseder, K., Faigle, B., Lauser, A. et al.: Dumux: Dune for multi-{Phase, Component, Scale, Physics, ...} flow and transport in porous media, Advances in Water Resources, 34, 1102-1112, 2011. McWhorter, D. B., and Sunada, D. K.: Exact integral solutions for two-phase flow, Water Resources Research, 26(3), 399-413, 1990.
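
    Following Carrera et al. (1998), the time-nonlocal sink-source term can be made local by representing it as a sum of single-rate exchanges with notional immobile zones. The short sketch below shows that structure for one mobile cell; the rates, capacity ratios, and the helper name step are illustrative, not fitted values from the study.

        import numpy as np

        alpha = np.array([1e-2, 1e-3, 1e-4])  # immobile-zone exchange rates [1/s]
        beta  = np.array([0.2, 0.2, 0.1])     # immobile/mobile capacity ratios [-]

        def step(S_m, S_im, inflow, dt):
            """Advance mobile (S_m) and immobile (S_im) saturations by dt."""
            dS_im = alpha * (S_m - S_im)          # linear driving-force transfer
            S_im = S_im + dt * dS_im
            S_m = S_m + dt * (inflow - np.sum(beta * dS_im))
            return S_m, S_im

        S_m, S_im = 0.0, np.zeros(3)
        for _ in range(15_000):                   # explicit time integration
            S_m, S_im = step(S_m, S_im, inflow=1e-4, dt=0.5)
        print(S_m, S_im)   # immobile zones lag the mobile zone (back-diffusion)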

  10. Peak fitting and integration uncertainties for the Aerodyne Aerosol Mass Spectrometer

    NASA Astrophysics Data System (ADS)

    Corbin, J. C.; Othman, A.; Haskins, J. D.; Allan, J. D.; Sierau, B.; Worsnop, D. R.; Lohmann, U.; Mensah, A. A.

    2015-04-01

    The errors inherent in the fitting and integration of the pseudo-Gaussian ion peaks in Aerodyne High-Resolution Aerosol Mass Spectrometers (HR-AMS's) have not been previously addressed as a source of imprecision for these instruments. This manuscript evaluates the significance of these uncertainties and proposes a method for their estimation in routine data analysis. Peak-fitting uncertainties, the most complex source of integration uncertainties, are found to be dominated by errors in m/z calibration. These calibration errors comprise significant amounts of both imprecision and bias, and vary in magnitude from ion to ion. The magnitude of these m/z calibration errors is estimated for an exemplary data set, and used to construct a Monte Carlo model which reproduced well the observed trends in fits to the real data. The empirically-constrained model is used to show that the imprecision in the fitted height of isolated peaks scales linearly with the peak height (i.e., as n^1), thus contributing a constant-relative-imprecision term to the overall uncertainty. This constant relative imprecision term dominates the Poisson counting imprecision term (which scales as n^0.5) at high signals. The previous HR-AMS uncertainty model therefore underestimates the overall fitting imprecision. The constant relative imprecision in fitted peak height for isolated peaks in the exemplary data set was estimated as ~4% and the overall peak-integration imprecision was approximately 5%. We illustrate the importance of this constant relative imprecision term by performing Positive Matrix Factorization (PMF) on a synthetic HR-AMS data set with and without its inclusion. Finally, the ability of an empirically-constrained Monte Carlo approach to estimate the fitting imprecision for an arbitrary number of known overlapping peaks is demonstrated. Software is available upon request to estimate these error terms in new data sets.
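
    The combined error model implied here adds the two terms in quadrature: a Poisson counting term scaling as n^0.5 and a fitting term scaling as n^1. A minimal sketch, assuming the ~4% constant relative imprecision quoted for the exemplary data set (the function name is invented):

        import numpy as np

        def peak_height_sigma(n, c_rel=0.04):
            """Imprecision of a fitted peak height of n ions: Poisson term
            (n**0.5) plus constant-relative term (c_rel * n) in quadrature."""
            return np.sqrt(n + (c_rel * n) ** 2)

        for n in (1e2, 1e4, 1e6):
            sigma = peak_height_sigma(n)
            print(f"n = {n:8.0f}  sigma = {sigma:9.1f}  relative = {sigma/n:.2%}")

    At low signal the output is dominated by counting statistics; at high signal the relative error flattens near c_rel, which is why a Poisson-only model underestimates the imprecision there.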

  11. Deterministic Stress Modeling of Hot Gas Segregation in a Turbine

    NASA Technical Reports Server (NTRS)

    Busby, Judy; Sondak, Doug; Staubach, Brent; Davis, Roger

    1998-01-01

    Simulation of unsteady viscous turbomachinery flowfields is presently impractical as a design tool due to the long run times required. Designers rely predominantly on steady-state simulations, but these simulations do not account for some of the important unsteady flow physics. Unsteady flow effects can be modeled as source terms in the steady flow equations. These source terms, referred to as Lumped Deterministic Stresses (LDS), can be used to drive steady flow solution procedures to reproduce the time-average of an unsteady flow solution. The goal of this work is to investigate the feasibility of using inviscid lumped deterministic stresses to model unsteady combustion hot streak migration effects on the turbine blade tip and outer air seal heat loads using a steady computational approach. The LDS model is obtained from an unsteady inviscid calculation. The LDS model is then used with a steady viscous computation to simulate the time-averaged viscous solution. Both two-dimensional and three-dimensional applications are examined. The inviscid LDS model produces good results for the two-dimensional case and requires less than 10% of the CPU time of the unsteady viscous run. For the three-dimensional case, the LDS model does a good job of reproducing the time-averaged viscous temperature migration and separation as well as heat load on the outer air seal at a CPU cost that is 25% of that of an unsteady viscous computation.

  12. Towards resiliency with micro-grids: Portfolio optimization and investment under uncertainty

    NASA Astrophysics Data System (ADS)

    Gharieh, Kaveh

    Energy security and a sustained supply of power are critical for community welfare and economic growth. In the face of the increased frequency and intensity of extreme weather events that can result in power grid outages, the value of micro-grids in improving communities' power reliability and resiliency is becoming more important. The capability of micro-grids to operate in islanded mode under stressed conditions dramatically decreases the economic loss to critical infrastructure during power shortages. More widespread participation of micro-grids in the wholesale energy market in the near future makes the development of new investment models necessary. Market and price risks in the short and long term, along with risk factors' impacts, must be taken into consideration in the development of such models. This work proposes a set of models and tools to address different problems associated with micro-grid assets, including optimal portfolio selection, investment, and financing at both the community level and that of a sample critical infrastructure (a wastewater treatment plant). The models account for short-term operational volatilities and long-term market uncertainties. A number of analytical methodologies and financial concepts have been adopted to develop the aforementioned models, as follows. (1) Capital budgeting planning and portfolio optimization models with Monte Carlo stochastic scenario generation are applied to derive the optimal investment decision for a portfolio of micro-grid assets considering risk factors and multiple sources of uncertainty. (2) Real Option theory, Monte Carlo simulation and stochastic optimization techniques are applied to obtain optimal modularized investment decisions for hydrogen tri-generation systems in wastewater treatment facilities, considering multiple sources of uncertainty. (3) The Public-Private Partnership (PPP) financing concept, coupled with an investment horizon approach, is applied to estimate public and private parties' revenue shares from a community-level micro-grid project over the course of the assets' lifetime, considering their optimal operation under uncertainty.
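
    As a flavor of item (1), the sketch below generates Monte Carlo return scenarios for three hypothetical micro-grid assets and screens random candidate portfolios by a mean-to-risk score. Asset statistics and the screening rule are invented for illustration; the study's actual models are richer (cash flows, real options, PPP terms).

        import numpy as np

        rng = np.random.default_rng(3)

        mu = np.array([0.08, 0.06, 0.05])        # expected annual returns
        cov = np.array([[0.04, 0.01, 0.00],      # return covariance
                        [0.01, 0.02, 0.00],
                        [0.00, 0.00, 0.01]])
        scenarios = rng.multivariate_normal(mu, cov, size=5_000)

        weights = rng.dirichlet(np.ones(3), size=500)   # candidate portfolios
        port = scenarios @ weights.T                    # scenario returns
        score = port.mean(axis=0) / port.std(axis=0)    # mean-to-risk ratio
        print("selected weights:", weights[np.argmax(score)].round(2))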

  13. Hydrodynamic model for expansion and collisional relaxation of x-ray laser-excited multi-component nanoplasma

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Saxena, Vikrant, E-mail: vikrant.saxena@desy.de; Hamburg Center for Ultrafast Imaging, Luruper Chaussee 149, 22761 Hamburg; Ziaja, Beata, E-mail: ziaja@mail.desy.de

    The irradiation of an atomic cluster with a femtosecond x-ray free-electron laser pulse results in a nanoplasma formation. This typically occurs within a few hundred femtoseconds. By this time the x-ray pulse is over, and the direct photoinduced processes are no longer contributing. All created electrons within the nanoplasma are thermalized. The nanoplasma thus formed is a mixture of atoms, electrons, and ions of various charges. While expanding, it is undergoing electron impact ionization and three-body recombination. Below we present a hydrodynamic model to describe the dynamics of such multi-component nanoplasmas. The model equations are derived by taking the moments of the corresponding Boltzmann kinetic equations. We include the equations obtained, together with the source terms due to electron impact ionization and three-body recombination, in our hydrodynamic solver. Model predictions for a test case, an expanding spherical Ar nanoplasma, are obtained. With this model, we complete the two-step approach to simulating x-ray created nanoplasmas, enabling computationally efficient simulations of their picosecond dynamics. Moreover, the hydrodynamic framework including collisional processes can be easily extended with other source terms and then applied to follow the relaxation of any finite non-isothermal multi-component nanoplasma with its components relaxed into local thermodynamic equilibrium.

  14. Confronting effective models for deconfinement in dense quark matter with lattice data

    NASA Astrophysics Data System (ADS)

    Andersen, Jens O.; Brauner, Tomáš; Naylor, William R.

    2015-12-01

    Ab initio numerical simulations of the thermodynamics of dense quark matter remain a challenge. Apart from the infamous sign problem, lattice methods have to deal with finite volume and discretization effects as well as with the necessity to introduce sources for symmetry-breaking order parameters. We study these artifacts in the Polyakov-loop-extended Nambu-Jona-Lasinio (PNJL) model and compare its predictions to existing lattice data for cold and dense two-color matter with two flavors of Wilson quarks. To achieve even qualitative agreement with lattice data requires the introduction of two novel elements in the model: (i) explicit chiral symmetry breaking in the effective contact four-fermion interaction, referred to as the chiral twist, and (ii) renormalization of the Polyakov loop. The feedback of the dense medium to the gauge sector is modeled by a chemical-potential-dependent scale in the Polyakov-loop potential. In contrast to previously used analytical Ansätze, we determine its dependence on the chemical potential from lattice data for the expectation value of the Polyakov loop. Finally, we propose adding a two-derivative operator to our effective model. This term acts as an additional source of explicit chiral symmetry breaking, mimicking an analogous term in the lattice Wilson action.

  15. Macroscopic modeling for heat and water vapor transfer in dry snow by homogenization.

    PubMed

    Calonne, Neige; Geindreau, Christian; Flin, Frédéric

    2014-11-26

    Dry snow metamorphism, involved in several topics related to cryospheric sciences, is mainly linked to heat and water vapor transfers through snow including sublimation and deposition at the ice-pore interface. In this paper, the macroscopic equivalent modeling of heat and water vapor transfers through a snow layer was derived from the physics at the pore scale using the homogenization of multiple scale expansions. The microscopic phenomena under consideration are heat conduction, vapor diffusion, sublimation, and deposition. The obtained macroscopic equivalent model is described by two coupled transient diffusion equations including a source term arising from phase change at the pore scale. By dimensional analysis, it was shown that the influence of such source terms on the overall transfers can generally not be neglected, except typically under small temperature gradients. The precision and the robustness of the proposed macroscopic modeling were illustrated through 2D numerical simulations. Finally, the effective vapor diffusion tensor arising in the macroscopic modeling was computed on 3D images of snow. The self-consistent formula offers a good estimate of the effective diffusion coefficient with respect to the snow density, within an average relative error of 10%. Our results confirm recent work that the effective vapor diffusion is not enhanced in snow.
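
    Schematically, the macroscopic system has the structure of two coupled transient diffusion equations exchanging mass and latent heat through the phase-change rate. The LaTeX below is a hedged reconstruction from the abstract; the effective coefficients and the source term s are named illustratively and are not the paper's notation.

        % coupled heat and vapor transfer with phase-change source term s
        \begin{aligned}
        (\rho C)_{\mathrm{eff}}\,\partial_t T
          &= \nabla\cdot\left(\mathbf{k}_{\mathrm{eff}}\,\nabla T\right) + L\,s,\\
        \phi\,\partial_t \rho_v
          &= \nabla\cdot\left(\mathbf{D}_{\mathrm{eff}}\,\nabla \rho_v\right) - s,
        \end{aligned}

    where s is the sublimation/deposition rate per unit volume, L the latent heat of sublimation, and phi the porosity; per the paper's dimensional analysis, s can generally not be neglected except under small temperature gradients.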

  16. Emergent Constraints for Cloud Feedbacks and Climate Sensitivity

    DOE PAGES

    Klein, Stephen A.; Hall, Alex

    2015-10-26

    Emergent constraints are physically explainable empirical relationships between characteristics of the current climate and long-term climate prediction that emerge in collections of climate model simulations. With the prospect of constraining long-term climate prediction, scientists have recently uncovered several emergent constraints related to long-term cloud feedbacks. We review these proposed emergent constraints, many of which involve the behavior of low-level clouds, and discuss criteria to assess their credibility. With further research, some of the cases we review may eventually become confirmed emergent constraints, provided they are accompanied by credible physical explanations. Because confirmed emergent constraints identify a source of model error that projects onto climate predictions, they deserve extra attention from those developing climate models and climate observations. While a systematic bias cannot be ruled out, it is noteworthy that the promising emergent constraints suggest larger cloud feedback and hence climate sensitivity.

  17. Local tsunamis and earthquake source parameters

    USGS Publications Warehouse

    Geist, Eric L.; Dmowska, Renata; Saltzman, Barry

    1999-01-01

    This chapter establishes the relationship among earthquake source parameters and the generation, propagation, and run-up of local tsunamis. In general terms, displacement of the seafloor during the earthquake rupture is modeled using the elastic dislocation theory for which the displacement field is dependent on the slip distribution, fault geometry, and the elastic response and properties of the medium. Specifically, nonlinear long-wave theory governs the propagation and run-up of tsunamis. A parametric study is devised to examine the relative importance of individual earthquake source parameters on local tsunamis, because the physics that describes tsunamis from generation through run-up is complex. Analysis of the source parameters of various tsunamigenic earthquakes have indicated that the details of the earthquake source, namely, nonuniform distribution of slip along the fault plane, have a significant effect on the local tsunami run-up. Numerical methods have been developed to address the realistic bathymetric and shoreline conditions. The accuracy of determining the run-up on shore is directly dependent on the source parameters of the earthquake, which provide the initial conditions used for the hydrodynamic models.

  18. Open Source Paradigm: A Synopsis of The Cathedral and the Bazaar for Health and Social Care.

    PubMed

    Benson, Tim

    2016-07-04

    Open source software (OSS) is becoming more fashionable in health and social care, although the ideas are not new. However, progress has been slower than many had expected. The purpose is to summarise the Free/Libre Open Source Software (FLOSS) paradigm in terms of what it is, how it impacts users and software engineers, and how it can work as a business model in the health and social care sectors. Much of this paper is a synopsis of Eric Raymond's seminal book The Cathedral and the Bazaar, which was the first comprehensive description of the open source ecosystem, set out in three long essays. Direct quotes from the book are used liberally, without reference to specific passages. The first part contrasts open and closed source approaches to software development and support. The second part describes the culture and practices of the open source movement. The third part considers business models. A key benefit of open source is that users can access and collaborate on improving the software if they wish. Closed source code may be regarded as a strategic business risk that may be unacceptable if there is an open source alternative. The sharing culture of the open source movement fits well with that of health and social care.

  19. High-resolution observations of low-luminosity gigahertz-peaked spectrum and compact steep-spectrum sources

    NASA Astrophysics Data System (ADS)

    Collier, J. D.; Tingay, S. J.; Callingham, J. R.; Norris, R. P.; Filipović, M. D.; Galvin, T. J.; Huynh, M. T.; Intema, H. T.; Marvil, J.; O'Brien, A. N.; Roper, Q.; Sirothia, S.; Tothill, N. F. H.; Bell, M. E.; For, B.-Q.; Gaensler, B. M.; Hancock, P. J.; Hindson, L.; Hurley-Walker, N.; Johnston-Hollitt, M.; Kapińska, A. D.; Lenc, E.; Morgan, J.; Procopio, P.; Staveley-Smith, L.; Wayth, R. B.; Wu, C.; Zheng, Q.; Heywood, I.; Popping, A.

    2018-06-01

    We present very long baseline interferometry observations of a faint and low-luminosity (L_1.4 GHz < 10^27 W Hz^-1) gigahertz-peaked spectrum (GPS) and compact steep-spectrum (CSS) sample. We select eight sources from deep radio observations that have radio spectra characteristic of a GPS or CSS source and an angular size of θ ≲ 2 arcsec, and detect six of them with the Australian Long Baseline Array. We determine their linear sizes, and model their radio spectra using synchrotron self-absorption (SSA) and free-free absorption (FFA) models. We derive statistical model ages, based on a fitted scaling relation, and spectral ages, based on the radio spectrum, which are generally consistent with the hypothesis that GPS and CSS sources are young and evolving. We resolve the morphology of one CSS source with a radio luminosity of 10^25 W Hz^-1, and find what appear to be two hotspots spanning 1.7 kpc. We find that our sources follow the turnover-linear size relation, and that both homogeneous SSA and an inhomogeneous FFA model can account for the spectra with observable turnovers. All but one of the FFA models do not require a spectral break to account for the radio spectrum, while all but one of the alternative SSA and power-law models do require a spectral break to account for the radio spectrum. We conclude that our low-luminosity sample is similar to brighter samples in terms of their spectral shape, turnover frequencies, linear sizes, and ages, but cannot test for a difference in morphology.

  20. Developing a comprehensive time series of GDP per capita for 210 countries from 1950 to 2015

    PubMed Central

    2012-01-01

    Background Income has been extensively studied and utilized as a determinant of health. There are several sources of income expressed as gross domestic product (GDP) per capita, but there are no time series that are complete for the years between 1950 and 2015 for the 210 countries for which data exist. It is in the interest of population health research to establish a global time series that is complete from 1950 to 2015. Methods We collected GDP per capita estimates expressed in either constant US dollar terms or international dollar terms (corrected for purchasing power parity) from seven sources. We applied several stages of models, including ordinary least-squares regressions and mixed effects models, to complete each of the seven source series from 1950 to 2015. The three US dollar and four international dollar series were each averaged to produce two new GDP per capita series. Results and discussion Nine complete series from 1950 to 2015 for 210 countries are available for use. These series can serve various analytical purposes and can illustrate myriad economic trends and features. The derivation of the two new series allows for researchers to avoid any series-specific biases that may exist. The modeling approach used is flexible and will allow for yearly updating as new estimates are produced by the source series. Conclusion GDP per capita is a necessary tool in population health research, and our development and implementation of a new method has allowed for the most comprehensive known time series to date. PMID:22846561
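
    A minimal sketch of the completion step for a single country: regress log GDP per capita on year and use the fit to fill missing years. The real pipeline pools seven sources and adds mixed-effects structure across countries before averaging the completed series; the data below are invented.

        import numpy as np

        years = np.array([1960, 1970, 1980, 1990, 2000])
        gdp_pc = np.array([1200.0, 1900.0, 3100.0, 4800.0, 7600.0])  # constant US$

        slope, intercept = np.polyfit(years, np.log(gdp_pc), 1)  # log-linear OLS

        all_years = np.arange(1950, 2016)
        completed = np.exp(intercept + slope * all_years)
        print(completed[[0, -1]].round(0))    # extrapolated 1950 and 2015 values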

  1. Detection of a Moving Gas Source and Estimation of its Concentration Field with a Sensing Aerial Vehicle Integration of Theoretical Controls and Computational Fluids

    DTIC Science & Technology

    2016-07-21

    constants. The model (2.42) is popular for simulation of the UAV motion [60], [61], [62] due to the fact that it models the aircraft response to...inputs to the dynamic model (2.42). The concentration sensors onboard the UAV record concentration (simulated) data according to its spatial location...vehicle dynamics and guidance, and the onboard sensor modeling. Subject terms: state estimation; UAVs; mobile sensors; grid adaptation; plume

  2. BETR Global - A geographically explicit global-scale multimedia contaminant fate model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Macleod, M.; Waldow, H. von; Tay, P.

    2011-04-01

    We present two new software implementations of the BETR Global multimedia contaminant fate model. The model uses steady-state or non-steady-state mass-balance calculations to describe the fate and transport of persistent organic pollutants using a desktop computer. The global environment is described using a database of long-term average monthly conditions on a 15° × 15° grid. We demonstrate BETR Global by modeling the global sources, transport, and removal of decamethylcyclopentasiloxane (D5).
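
    The steady-state branch of such a mass-balance model reduces to a linear solve: with first-order loss and inter-compartment transfer rate constants assembled in a matrix K and continuous emissions in a vector s, the steady inventory m satisfies K m = s. The three-compartment system and all numbers below are invented for illustration; BETR Global itself uses many more connected regional compartments.

        import numpy as np

        # rows/cols: air, water, soil; diagonal = total loss rate [1/h],
        # off-diagonals = -transfer rate from the other compartment [1/h]
        K = np.array([
            [ 0.50, -0.01, -0.02],
            [-0.05,  0.20, -0.01],
            [-0.10, -0.02,  0.10],
        ])
        s = np.array([1.0, 0.1, 0.0])      # emissions [kg/h]

        m = np.linalg.solve(K, s)          # steady-state inventories [kg]
        print(dict(zip(["air", "water", "soil"], m.round(2))))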

  3. LS-APC v1.0: a tuning-free method for the linear inverse problem and its application to source-term determination

    NASA Astrophysics Data System (ADS)

    Tichý, Ondřej; Šmídl, Václav; Hofman, Radek; Stohl, Andreas

    2016-11-01

    Estimation of pollutant releases into the atmosphere is an important problem in the environmental sciences. It is typically formalized as an inverse problem using a linear model that can explain observable quantities (e.g., concentrations or deposition values) as a product of the source-receptor sensitivity (SRS) matrix obtained from an atmospheric transport model multiplied by the unknown source-term vector. Since this problem is typically ill-posed, current state-of-the-art methods are based on regularization of the problem and solution of a formulated optimization problem. This procedure depends on manual settings of uncertainties that are often very poorly quantified, effectively making them tuning parameters. We formulate a probabilistic model that has the same maximum likelihood solution as the conventional method using pre-specified uncertainties. Replacement of the maximum likelihood solution by full Bayesian estimation also allows estimation of all tuning parameters from the measurements. The estimation procedure is based on the variational Bayes approximation, which is evaluated by an iterative algorithm. The resulting method is thus very similar to the conventional approach, but with the possibility to also estimate all tuning parameters from the observations. The proposed algorithm is tested and compared with the standard methods on data from the European Tracer Experiment (ETEX), where advantages of the new method are demonstrated. A MATLAB implementation of the proposed algorithm is available for download.
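
    The underlying linear model is y = M x, with y the observations, M the SRS matrix, and x the unknown source term. The sketch below solves it with a fixed first-difference (smoothness) penalty; in LS-APC the analogous tuning parameters are learned from the data by variational Bayes rather than fixed by hand. All dimensions and data are synthetic.

        import numpy as np

        rng = np.random.default_rng(0)

        n_obs, n_src = 40, 12
        M = rng.uniform(0.0, 1.0, (n_obs, n_src))      # SRS matrix (synthetic)
        x_true = np.concatenate([np.zeros(4), [5.0, 9.0, 6.0], np.zeros(5)])
        y = M @ x_true + rng.normal(0.0, 0.2, n_obs)   # noisy observations

        D = np.diff(np.eye(n_src), axis=0)             # first-difference operator
        lam = 1.0                                      # fixed here; learned in LS-APC
        A = np.vstack([M, lam * D])
        b = np.concatenate([y, np.zeros(n_src - 1)])
        x_hat, *_ = np.linalg.lstsq(A, b, rcond=None)  # regularized solution
        print(np.round(x_hat, 2))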

  4. Effort-reward imbalance and its association with health among permanent and fixed-term workers

    PubMed Central

    2010-01-01

    Background In the past decade, the changing labor market seems to have rejected traditional standard employment and has begun to support a variety of non-standard forms of work in its place. The purpose of our study was to compare the degree of job stress, sources of job stress, and association of high job stress with health among permanent and fixed-term workers. Methods Our study subjects were 709 male workers aged 30 to 49 years in a suburb of Tokyo, Japan. In 2008, we conducted a cross-sectional study to compare job stress using an effort-reward imbalance (ERI) model questionnaire. Lifestyles, subjective symptoms, and body mass index were also observed from the 2008 health check-up data. Results The rate of job stress of the high-risk group measured by the ERI questionnaire was not different between permanent and fixed-term workers. However, the content of the ERI components differed. Permanent workers were distressed more by effort, overwork, or job demand, while fixed-term workers were distressed more by their job insecurity. Moreover, higher ERI was associated with the existence of subjective symptoms (OR = 2.07, 95% CI: 1.42-3.03) and obesity (OR = 2.84, 95% CI: 1.78-4.53) in fixed-term workers, while this tendency was not found in permanent workers. Conclusions Our study showed that workers with different employment types, permanent and fixed-term, have dissimilar sources of job stress even though their degree of job stress seems to be the same. High ERI was associated with existing subjective symptoms and obesity in fixed-term workers. Therefore, understanding the different sources of job stress and their association with health among permanent and fixed-term workers should be considered to prevent further health problems. PMID:21054838

  5. A novel integrated approach for the hazardous radioactive dust source terms estimation in future nuclear fusion power plants.

    PubMed

    Poggi, L A; Malizia, A; Ciparisse, J F; Gaudio, P

    2016-10-01

    An open issue still under investigation by several international entities working in the safety and security field for the foreseen nuclear fusion reactors is the estimation of source terms that are a hazard for operators and the public, and for the machine itself in terms of efficiency and integrity, in case of severe accident scenarios. Source term estimation is a crucial safety issue to be addressed in future reactors' safety assessments, and the estimates available at this time are not sufficiently satisfactory. The lack of neutronic data, along with the insufficiently accurate methodologies used until now, calls for an integrated methodology for source term estimation that can provide predictions with adequate accuracy. This work proposes a complete methodology to estimate dust source terms starting from a broad information gathering. The large number of parameters that can influence dust source term production is reduced with statistical tools using a combination of screening, sensitivity analysis, and uncertainty analysis. Finally, a preliminary and simplified methodology for dust source term production prediction for future devices is presented.

  6. Research on Geo-information Data Model for Preselected Areas of Geological Disposal of High-level Radioactive Waste

    NASA Astrophysics Data System (ADS)

    Gao, M.; Huang, S. T.; Wang, P.; Zhao, Y. A.; Wang, H. B.

    2016-11-01

    The geological disposal of high-level radioactive waste (hereinafter "geological disposal") is a long-term, complex, and systematic scientific project. The data and information resources produced during its research and development (hereinafter "R&D") provide significant support for the R&D of the geological disposal system and lay a foundation for the long-term stability and safety assessment of the repository site. However, the data related to research and engineering in the siting of geological disposal repositories are complicated (multi-source, multi-dimensional, and changeable), and the requirements for data accuracy and comprehensive application have become much higher than before, so the design of a geo-information data model for the disposal repository faces serious challenges. In this paper, the data resources of the pre-selected areas of the repository have been comprehensively surveyed and systematically analyzed. Based on a deep understanding of the application requirements, the research work solves the key technical problems, including a reasonable classification system for multi-source data entities, complex logical relations, and effective physical storage structures. The new solution breaks through the data classification and conventional spatial data organization models applied in the traditional industry, and realizes data organization and integration with data entities and spatial relationships as the basic units, which are independent, complete, and of significant application value in HLW geological disposal. Reasonable, feasible, and flexible conceptual, logical, and physical data models have been established to ensure the effective integration, and to facilitate the application development, of multi-source data in pre-selected areas for geological disposal.

  7. Towards A Topological Framework for Integrating Semantic Information Sources

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Joslyn, Cliff A.; Hogan, Emilie A.; Robinson, Michael

    2014-09-07

    In this position paper we argue for the role that topological modeling principles can play in providing a framework for sensor integration. While used successfully in standard (quantitative) sensors, we are developing this methodology in new directions to make it appropriate specifically for semantic information sources, including keyterms, ontology terms, and other general Boolean, categorical, ordinal, and partially-ordered data types. We illustrate the basics of the methodology in an extended use case/example, and discuss path forward.

  8. PDF-ECG in clinical practice: A model for long-term preservation of digital 12-lead ECG data.

    PubMed

    Sassi, Roberto; Bond, Raymond R; Cairns, Andrew; Finlay, Dewar D; Guldenring, Daniel; Libretti, Guido; Isola, Lamberto; Vaglio, Martino; Poeta, Roberto; Campana, Marco; Cuccia, Claudio; Badilini, Fabio

    In clinical practice, data archiving of resting 12-lead electrocardiograms (ECGs) is mainly achieved by storing a PDF report in the hospital electronic health record (EHR). When available, digital ECG source data (raw samples) are only retained within the ECG management system. The widespread availability of the ECG source data would undoubtedly permit successive analysis and facilitate longitudinal studies, with both scientific and diagnostic benefits. PDF-ECG is a hybrid archival format which allows storing in the same file both the standard graphical report of an ECG and its source data (waveforms). Using PDF-ECG as a model to address the challenge of ECG data portability, long-term archiving and documentation, a real-world proof-of-concept test was conducted in a hospital in northern Italy. A set of volunteers underwent a basic ECG using routine hospital equipment and the source data were captured. Using dedicated web services, PDF-ECG documents were then generated and seamlessly uploaded to the hospital EHR, replacing the standard PDF reports automatically generated at the time of acquisition. Finally, the PDF-ECG files could be successfully retrieved and re-analyzed. Adding PDF-ECG to an existing EHR had a minimal impact on the hospital's workflow, while preserving the ECG digital data. Copyright © 2017 Elsevier Inc. All rights reserved.

  9. Low Reynolds number k-epsilon modelling with the aid of direct simulation data

    NASA Technical Reports Server (NTRS)

    Rodi, W.; Mansour, N. N.

    1993-01-01

    The constant C_μ and the near-wall damping function f_μ in the eddy-viscosity relation of the k-epsilon model are evaluated from direct numerical simulation (DNS) data for developed channel and boundary layer flow at two Reynolds numbers each. Various existing f_μ model functions are compared with the DNS data, and a new function is fitted to the high-Reynolds-number channel flow data. The epsilon-budget is computed for the fully developed channel flow. The relative magnitude of the terms in the epsilon-equation is analyzed with the aid of scaling arguments, and the parameter governing this magnitude is established. Models for the sum of all source and sink terms in the epsilon-equation are tested against the DNS data, and an improved model is proposed.
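
    For orientation, the eddy-viscosity relation in question is nu_t = C_μ f_μ k²/ε, with f_μ → 1 away from walls. The snippet below evaluates it with a generic exponential damping function of the turbulence Reynolds number; the functional form and its constant are illustrative stand-ins, not the function fitted in the paper.

        import numpy as np

        C_MU = 0.09

        def nu_t(k, eps, nu):
            """Eddy viscosity nu_t = C_mu * f_mu * k^2 / eps."""
            re_t = k**2 / (nu * eps)              # turbulence Reynolds number
            f_mu = 1.0 - np.exp(-re_t / 50.0)     # illustrative damping function
            return C_MU * f_mu * k**2 / eps

        # sample near-wall values: damping visibly reduces nu_t
        print(nu_t(k=1e-3, eps=1e-3, nu=1.5e-5))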

  10. Efficiency calibration and minimum detectable activity concentration of a real-time UAV airborne sensor system with two gamma spectrometers.

    PubMed

    Tang, Xiao-Bin; Meng, Jia; Wang, Peng; Cao, Ye; Huang, Xi; Wen, Liang-Sheng; Chen, Da

    2016-04-01

    A small-sized UAV (NH-UAV) airborne system with two gamma spectrometers (LaBr3 detector and HPGe detector) was developed to monitor activity concentration in serious nuclear accidents, such as the Fukushima nuclear accident. The efficiency calibration and determination of minimum detectable activity concentration (MDAC) of the specific system were studied by MC simulations at different flight altitudes, different horizontal distances from the detection position to the source term center and different source term sizes. Both air and ground radiation were considered in the models. The results obtained may provide instructive suggestions for in-situ radioactivity measurements of NH-UAV. Copyright © 2016 Elsevier Ltd. All rights reserved.
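
    MDAC estimates of this kind are commonly built on the Currie detection limit, with the detector efficiency supplied by the Monte Carlo calibration at each flight geometry. A hedged sketch follows; the function name and all numbers are illustrative, not values from the study.

        import numpy as np

        def mda_bq(background_counts, efficiency, branching, live_time_s):
            """Currie-style MDA [Bq]: L_D = 2.71 + 4.65 * sqrt(B)."""
            l_d = 2.71 + 4.65 * np.sqrt(background_counts)
            return l_d / (efficiency * branching * live_time_s)

        # e.g. a Cs-137 line (662 keV, branching ~0.85), 300 s acquisition
        print(f"MDA = {mda_bq(500, 1e-3, 0.85, 300):.0f} Bq")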

  11. POI Summarization by Aesthetics Evaluation From Crowd Source Social Media.

    PubMed

    Qian, Xueming; Li, Cheng; Lan, Ke; Hou, Xingsong; Li, Zhetao; Han, Junwei

    2018-03-01

    Place-of-Interest (POI) summarization by aesthetics evaluation can recommend a set of POI images to the user, and it is significant in image retrieval. In this paper, we propose a system that summarizes a collection of POI images regarding both aesthetics and diversity of the distribution of cameras. First, we generate visual albums by a coarse-to-fine POI clustering approach and then generate 3D models for each album from the images collected from social media. Second, based on the 3D-to-2D projection relationship, we select candidate photos in terms of the proposed crowd-sourced saliency model. Third, in order to improve the performance of the aesthetic measurement model, we propose a crowd-sourced saliency detection approach by exploring the distribution of salient regions in the 3D model. Then, we measure the composition aesthetics of each image and explore crowd-sourced salient features to yield a saliency map, based on which we propose an adaptive image adoption approach. Finally, we combine the diversity and the aesthetics to recommend aesthetic pictures. Experimental results show that the proposed POI summarization approach can return images with diverse camera distributions and aesthetics.

  12. PyFLOWGO: An open-source platform for simulation of channelized lava thermo-rheological properties

    NASA Astrophysics Data System (ADS)

    Chevrel, Magdalena Oryaëlle; Labroquère, Jérémie; Harris, Andrew J. L.; Rowland, Scott K.

    2018-02-01

    Lava flow advance can be modeled by tracking the evolution of the thermo-rheological properties of a control volume of lava as it cools and crystallizes. An example of such a model was conceived by Harris and Rowland (2001), who developed a 1-D model, FLOWGO, in which the velocity of a control volume flowing down a channel depends on rheological properties computed following the thermal path estimated via a heat-balance box model. We provide here an updated version of FLOWGO written in Python, an open-source, modern and flexible language. Our software, named PyFLOWGO, allows selection of heat fluxes and rheological models of the user's choice to simulate the thermo-rheological evolution of the lava control volume. We describe its architecture, which offers more flexibility while reducing the risk of error when changing models in comparison to the previous FLOWGO version. Three cases are tested using actual data from channel-fed lava flow systems, and results are discussed in terms of model validation and convergence. PyFLOWGO is open-source and packaged in a Python library to be imported and reused in any Python program (https://github.com/pyflowgo/pyflowgo).
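
    The FLOWGO-style march is simple to sketch: step a control volume down-channel, cool it through a heat balance, stiffen its rheology, and recompute its velocity until it effectively stops. The toy below is not PyFLOWGO's API; it assumes a Newtonian Jeffreys velocity, radiation as the only heat loss, and an invented Arrhenius-type viscosity fit, all purely illustrative.

        import numpy as np

        SIGMA = 5.67e-8                    # Stefan-Boltzmann [W m-2 K-4]
        RHO, CP, H = 2600.0, 1150.0, 2.0   # density, heat capacity, flow depth
        G, THETA = 9.81, np.radians(5.0)   # gravity, channel slope
        EPS_RAD = 0.95                     # surface emissivity

        def viscosity(T):
            """Illustrative Arrhenius-type stiffening with cooling [Pa s]."""
            return 1.0e3 * np.exp(0.04 * (1400.0 - T))

        T, x, dx = 1400.0, 0.0, 100.0      # core temperature [K], distance [m]
        for _ in range(10_000):
            v = RHO * G * H**2 * np.sin(THETA) / (3.0 * viscosity(T))  # Jeffreys
            if v < 1e-3:                   # control volume effectively stopped
                break
            q_rad = EPS_RAD * SIGMA * T**4            # radiative loss [W m-2]
            T -= q_rad / (RHO * CP * v * H) * dx      # heat balance over dx
            x += dx
        print(f"modelled run-out distance ~ {x / 1000.0:.1f} km")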

  13. Relationship between source clean-up and mass flux of chlorinated solvents in low permeability settings with fractures

    NASA Astrophysics Data System (ADS)

    Bjerg, P. L.; Chambon, J. C.; Christiansen, C. M.; Broholm, M. M.; Binning, P. J.

    2009-04-01

    Groundwater contamination by chlorinated solvents, such as perchloroethylene (PCE), often occurs via leaching from complex sources located in low permeability sediments such as clayey tills overlying aquifers. Clayey tills are mostly fractured, and contamination migrating through the fractures spreads to the low permeability matrix by diffusion. This results in a long-term source of contamination due to back-diffusion. Leaching from such sources is further complicated by microbial degradation under anaerobic conditions to sequentially form the daughter products trichloroethylene, cis-dichloroethylene (cis-DCE), vinyl chloride (VC) and ethene. This process can be enhanced by addition of electron donors and/or bioaugmentation and is termed Enhanced Reductive Dechlorination (ERD). This work aims to improve our understanding of the physical, chemical and microbial processes governing source behaviour under natural and enhanced conditions. That understanding is applied to risk assessment, and to determine the relationship and time frames of source clean-up and plume response. To meet that aim, field and laboratory observations are coupled to state-of-the-art models incorporating new insights into contaminant behaviour. The long-term leaching of chlorinated ethenes from clay aquitards is currently being monitored at a number of Danish sites. The observed data are simulated using a coupled fracture flow and clay matrix diffusion model. Sequential degradation is represented by modified Monod kinetics accounting for competitive inhibition between the chlorinated ethenes. The model is constructed using Comsol Multiphysics, a generic finite-element partial differential equation solver. The model is applied at field sites well characterised with respect to hydrogeology, fracture network, contaminant distribution and microbial processes (lab and field experiments). At one of the study sites (Sortebrovej), the source areas are situated in a clayey till with fractures and interbedded sand lenses. The site is highly contaminated with chlorinated ethenes which impact the underlying sand aquifer. Full-scale remediation using ERD was implemented at Sortebrovej in 2006. Anaerobic dechlorination is taking place, and cis-DCE and VC have been found in significant amounts in monitoring wells and to some degree in sediment cores representing the clayey till matrix. Model results reveal several interesting findings. The physical processes of matrix diffusion and advection in the fractures seem to be more important than the microbial degradation processes for estimation of the time frames, and the distance between fractures is amongst the most sensitive model parameters. However, the inclusion of sequential degradation is crucial to determining the composition of contamination leaching into the underlying aquifer. Degradation products like VC will peak at an earlier stage than the parent compound due to a higher mobility. These model results are supported by actual findings at the Sortebrovej site. The findings highlight a need for improved characterization of low permeability aquitards lying above aquifers used for water supply. The fracture network in aquitards is currently poorly described at larger depths (below 5-8 m) and the effect of sand lenses on leaching behaviour is not well understood. The microbial processes are assumed to be taking place in the fracture system, but the interaction with and processes in the matrix need to be further explored. Development of new methods for field site characterisation and integrated field and model expertise are crucial for the design of remedial actions and for risk assessment of contaminated sites in low permeability settings.
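
    The sequential degradation scheme described here (modified Monod kinetics with competitive inhibition among the chlorinated ethenes) can be sketched compactly. In the toy below each parent's rate is v_max,i (C_i/K_s,i) / (1 + Σ_j C_j/K_s,j), and every product gains its parent's flux; all parameter values are invented for illustration.

        import numpy as np

        names = ["PCE", "TCE", "cDCE", "VC", "ethene"]
        vmax = np.array([2.0, 1.5, 1.0, 0.5])   # max rates, umol/L/day
        Ks   = np.array([5.0, 4.0, 3.0, 3.0])   # half-saturation, umol/L

        C = np.array([50.0, 0.0, 0.0, 0.0, 0.0])  # concentrations, umol/L
        dt = 0.01                                  # days
        for _ in range(6000):                      # simulate 60 days
            parents = C[:4]
            denom = 1.0 + np.sum(parents / Ks)     # competitive inhibition
            rates = vmax * (parents / Ks) / denom  # degradation of each parent
            dC = np.zeros(5)
            dC[:4] -= rates
            dC[1:] += rates                        # product gains parent's flux
            C += dt * dC
        print(dict(zip(names, C.round(2))))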

  14. Evaluation of the source area of rooftop scalar measurements in London, UK using wind tunnel and modelling approaches.

    NASA Astrophysics Data System (ADS)

    Brocklehurst, Aidan; Boon, Alex; Barlow, Janet; Hayden, Paul; Robins, Alan

    2014-05-01

    The source area of an instrument is an estimate of the area of ground over which the measurement is generated. Quantification of the source area of a measurement site provides crucial context for analysis and interpretation of the data. A range of computational models exists to calculate the source area of an instrument, but these are usually based on assumptions which do not hold for instruments positioned very close to the surface, particularly those surrounded by heterogeneous terrain, i.e., urban areas. Although positioning instrumentation at higher elevations (i.e., on masts) is ideal in urban areas, this can be costly in terms of installation and maintenance, and it is logistically difficult to position instruments in the ideal geographical location. Therefore, in many studies, experimentalists turn to rooftops to position instrumentation. Experimental validations of source area models for these situations are very limited. In this study, a controlled tracer gas experiment was conducted in a wind tunnel based on a 1:200 scale model of a measurement site used in previous experimental work in central London. The detector was set at the location of the rooftop site as the tracer was released at a range of locations within the surrounding streets and rooftops. Concentration measurements are presented for a range of wind angles, with the spread of concentration measurements indicative of the source area distribution. Clear evidence of wind channeling by streets is seen, with the shape of the source area strongly influenced by buildings upwind of the measurement point. The results of the wind tunnel study are compared to scalar concentration source areas generated by modelling approaches based on meteorological data from the central London experimental site and used in the interpretation of continuous carbon dioxide (CO2) concentration data. Initial conclusions will be drawn as to how to apply scalar concentration source area models to rooftop measurement sites, with suggestions for their improvement to incorporate effects such as channeling.

  15. Evaluation of an 18-year CMAQ simulation: Seasonal variations and long-term temporal changes in sulfate and nitrate

    NASA Astrophysics Data System (ADS)

    Civerolo, Kevin; Hogrefe, Christian; Zalewsky, Eric; Hao, Winston; Sistla, Gopal; Lynn, Barry; Rosenzweig, Cynthia; Kinney, Patrick L.

    2010-10-01

    This paper compares spatial and seasonal variations and temporal trends in modeled and measured concentrations of sulfur and nitrogen compounds in wet and dry deposition over an 18-year period (1988-2005) over a portion of the northeastern United States. Substantial emissions reduction programs occurred over this time period, including Title IV of the Clean Air Act Amendments of 1990, which primarily resulted in large decreases in sulfur dioxide (SO2) emissions by 1995, and nitrogen oxide (NOx) trading programs, which resulted in large decreases in warm season NOx emissions by 2004. Additionally, NOx emissions from mobile sources declined more gradually over this period. The results presented here illustrate the use of both operational and dynamic model evaluation and suggest that the modeling system largely captures the seasonal and long-term changes in sulfur compounds. The modeling system generally captures the long-term trends in nitrogen compounds, but does not reproduce the average seasonal variation or spatial patterns in nitrate.

  16. Adjoint Sensitivity Method to Determine Optimal Set of Stations for Tsunami Source Inversion

    NASA Astrophysics Data System (ADS)

    Gusman, A. R.; Hossen, M. J.; Cummins, P. R.; Satake, K.

    2017-12-01

    We applied the adjoint sensitivity technique in tsunami science for the first time to determine an optimal set of stations for a tsunami source inversion. The adjoint sensitivity (AS) method has been used in numerical weather prediction to find optimal locations for adaptive observations. We applied this technique to Green's Function based Time Reverse Imaging (GFTRI), which has recently been used in tsunami source inversion to reconstruct the initial sea surface displacement, known as the tsunami source model. This method has the same source representation as the traditional least-squares (LSQ) source inversion method, where a tsunami source is represented by dividing the source region into a regular grid of "point" sources. For each of these, a Green's function (GF) is computed using a basis function for initial sea surface displacement whose amplitude is concentrated near the grid point. We applied the AS method to the 2009 Samoa earthquake tsunami that occurred on 29 September 2009 in the southwest Pacific, near the Tonga trench. Many studies show that this earthquake is a doublet associated with both normal faulting in the outer-rise region and thrust faulting on the subduction interface. To estimate the tsunami source model for this complex event, we initially considered 11 observations consisting of 5 tide gauges and 6 DART buoys. After implementing the AS method, we found an optimal set of 8 stations. Inversion with this optimal set provides better results in terms of waveform fitting and a source model that shows both sub-events associated with normal and thrust faulting.
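
    In GFTRI, as in the least-squares formulation it mirrors, the recorded waveforms are a linear superposition of precomputed unit-source Green's functions, so source recovery is a linear inverse problem. A synthetic sketch (all dimensions and data invented):

        import numpy as np

        rng = np.random.default_rng(1)

        n_time, n_sta, n_src = 200, 8, 16
        G = rng.normal(size=(n_sta * n_time, n_src))   # unit-source waveforms
        m_true = rng.uniform(0.0, 1.0, n_src)          # initial displacement field
        d = G @ m_true + rng.normal(0.0, 0.1, n_sta * n_time)

        m_hat, *_ = np.linalg.lstsq(G, d, rcond=None)
        print("max recovery error:", float(np.abs(m_hat - m_true).max()))

    Station selection then amounts to asking which subset of rows of G (i.e., which stations) best constrains m; the adjoint sensitivity method supplies that ranking without brute-force re-inversion.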

  17. Environmental impact and risk assessments and key factors contributing to the overall uncertainties.

    PubMed

    Salbu, Brit

    2016-01-01

    There is a significant number of nuclear and radiological sources that have contributed, are still contributing, or have the potential to contribute to radioactive contamination of the environment in the future. To protect the environment from radioactive contamination, impact and risk assessments are performed prior to or during a release event, short or long term after deposition, or before and after implementation of countermeasures. When environmental impact and risks are assessed, however, a series of factors will contribute to the overall uncertainties. To provide environmental impact and risk assessments, information on processes, kinetics and a series of input variables is needed. When problems such as variability, questionable assumptions, gaps in knowledge, extrapolations and poor conceptual model structures are added, a series of factors contribute to large and often unacceptable uncertainties in impact and risk assessments. Information on the source term and the release scenario is an essential starting point in impact and risk models; the source determines activity concentrations and atom ratios of radionuclides released, while the release scenario determines the physico-chemical forms of released radionuclides such as particle size distribution, structure and density. Releases will most often contain other contaminants such as metals, and due to interactions, contaminated sites should be assessed as a multiple stressor scenario. Following deposition, a series of stressors, interactions and processes will influence the ecosystem transfer of radionuclide species and thereby influence biological uptake (toxicokinetics) and responses (toxicodynamics) in exposed organisms. Due to the variety of biological species, extrapolation is frequently needed to fill gaps in knowledge, e.g., from effects to no effects, from effects in one organism to others, from one stressor to mixtures. Most toxicity tests are, however, performed as short-term exposures of adult organisms, ignoring sensitive life history stages of organisms and transgenerational effects. To link sources, ecosystem transfer and biological effects to future impact and risks, a series of models are usually interfaced, while uncertainty estimates are seldom given. The model predictions are, however, only valid within the boundaries of the overall uncertainties. Furthermore, the model predictions are only useful and relevant when uncertainties are estimated, communicated and understood. Among the key factors contributing most to uncertainties, the present paper focuses especially on structural uncertainties (model bias or discrepancies), as aspects such as particle releases, ecosystem dynamics, mixed exposure, sensitive life history stages and transgenerational effects are usually ignored in assessment models. Research focus on these aspects should significantly reduce the overall uncertainties in the impact and risk assessment of radioactively contaminated ecosystems. Copyright © 2015 Elsevier Ltd. All rights reserved.

  18. EVALUATING THE POTENTIAL FOR CHLORINATED SOLVENT DEGRADATION FROM HYDROGEN CONCENTRATIONS

    EPA Science Inventory

    Long-term monitoring of a large trichioroethylene (TCE) and 1,1,1-trichloroethane (TCA) ground water plume in Minnesota indicated that these contaminants attenuated with distance from the source. Mathematical modelling indicated that sufficient time had passed for the plume to fu...

  19. Maximizing the spatial representativeness of NO2 monitoring data using a combination of local wind-based sectoral division and seasonal and diurnal correction factors.

    PubMed

    Donnelly, Aoife; Naughton, Owen; Misstear, Bruce; Broderick, Brian

    2016-10-14

    This article describes a new methodology for increasing the spatial representativeness of individual monitoring sites. Air pollution levels at a given point are influenced by emission sources in the immediate vicinity. Since emission sources are rarely uniformly distributed around a site, concentration levels will inevitably be most affected by the sources in the prevailing upwind direction. The methodology provides a means of capturing this effect and providing additional information regarding source/pollution relationships. The methodology allows for the division of the air quality data from a given monitoring site into a number of sectors or wedges based on wind direction and estimation of annual mean values for each sector, thus optimising the information that can be obtained from a single monitoring station. The method corrects for short-term data, diurnal and seasonal variations in concentrations (which can produce uneven weighting of data within each sector) and uneven frequency of wind directions. Significant improvements in correlations between the air quality data and the spatial air quality indicators were obtained after application of the correction factors. This suggests the application of these techniques would be of significant benefit in land-use regression modelling studies. Furthermore, the method was found to be very useful for estimating long-term mean values and wind direction sector values using only short-term monitoring data. The methods presented in this article can result in cost savings through minimising the number of monitoring sites required for air quality studies while also capturing a greater degree of variability in spatial characteristics. In this way, more reliable, but also more expensive monitoring techniques can be used in preference to a higher number of low-cost but less reliable techniques. The methods described in this article have applications in local air quality management, source receptor analysis, land-use regression mapping and modelling and population exposure studies.
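
    The sector-division idea reduces to binning concentrations by wind direction and re-weighting by sector frequency so that an uneven wind climatology does not bias the annual mean. A synthetic sketch with eight 45° sectors follows; the data and the south-westerly source are invented, and the published method adds diurnal and seasonal correction factors on top.

        import numpy as np

        rng = np.random.default_rng(7)

        hours = 8760
        wd = rng.uniform(0.0, 360.0, hours)               # wind direction [deg]
        no2 = (20.0 + 15.0 * np.cos(np.radians(wd - 225.0))
               + rng.normal(0.0, 5.0, hours))             # source to the SW

        edges = np.arange(0.0, 361.0, 45.0)               # eight 45-deg sectors
        sector = np.digitize(wd, edges) - 1
        means = np.array([no2[sector == s].mean() for s in range(8)])
        freq = np.bincount(sector, minlength=8) / hours

        print("sector means:", means.round(1))
        print("frequency-weighted annual mean:", round(float(means @ freq), 1))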

  20. Quantifying Transmission of Clostridium difficile within and outside Healthcare Settings

    PubMed Central

    Olsen, Margaret A.; Dubberke, Erik R.; Galvani, Alison P.; Townsend, Jeffrey P.

    2016-01-01

    To quantify the effect of hospital and community-based transmission and control measures on Clostridium difficile infection (CDI), we constructed a transmission model within and between hospital, community, and long-term care-facility settings. By parameterizing the model from national databases and calibrating it to C. difficile prevalence and CDI incidence, we found that hospitalized patients with CDI transmit C. difficile at a rate 15 (95% CI 7.2–32) times that of asymptomatic patients. Long-term care facility residents transmit at a rate of 27% (95% CI 13%–51%) that of hospitalized patients, and persons in the community at a rate of 0.1% (95% CI 0.062%–0.2%) that of hospitalized patients. Despite lower transmission rates for asymptomatic carriers and community sources, these transmission routes have a substantial effect on hospital-onset CDI because of the larger reservoir of hospitalized carriers and persons in the community. Asymptomatic carriers and community sources should be accounted for when designing and evaluating control interventions. PMID:26982504

  1. On the application of ENO scheme with subcell resolution to conservation laws with stiff source terms

    NASA Technical Reports Server (NTRS)

    Chang, Shih-Hung

    1991-01-01

    Two approaches are used to extend the essentially non-oscillatory (ENO) schemes to treat conservation laws with stiff source terms. One approach is the application of the Strang time-splitting method. Here the basic ENO scheme and the Harten modification using subcell resolution (SR), ENO/SR scheme, are extended this way. The other approach is a direct method and a modification of the ENO/SR. Here the technique of ENO reconstruction with subcell resolution is used to locate the discontinuity within a cell and the time evolution is then accomplished by solving the differential equation along characteristics locally and advancing in the characteristic direction. This scheme is denoted ENO/SRCD (subcell resolution - characteristic direction). All the schemes are tested on the equation of LeVeque and Yee (NASA-TM-100075, 1988) modeling reacting flow problems. Numerical results show that these schemes handle this intriguing model problem very well, especially with ENO/SRCD which produces perfect resolution at the discontinuity.
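
    A minimal sketch of the time-splitting approach on the LeVeque-Yee model problem u_t + u_x = -mu*u*(u-1)*(u-1/2), using first-order upwind advection in place of ENO to keep the code short. With stiff mu on a coarse grid, this naive splitting exhibits the well-known spurious front-speed pathology that the ENO/SR and ENO/SRCD schemes are designed to cure:

      import numpy as np

      mu = 1000.0                          # stiffness parameter
      N = 200
      x = np.linspace(0.0, 1.0, N, endpoint=False)
      dx = x[1] - x[0]
      dt = 0.9 * dx                        # CFL 0.9, advection speed 1
      u = np.where(x < 0.3, 1.0, 0.0)      # step initial data

      def source_half_step(u, t):
          # Sub-cycled explicit integration of the stiff reaction ODE.
          m = max(1, int(np.ceil(4.0 * mu * t)))
          h = t / m
          for _ in range(m):
              u = u - h * mu * u * (u - 1.0) * (u - 0.5)
          return u

      for _ in range(int(0.3 / dt)):
          u = source_half_step(u, 0.5 * dt)        # half reaction step
          u = u - dt / dx * (u - np.roll(u, 1))    # upwind advection, periodic
          u = source_half_step(u, 0.5 * dt)        # half reaction step

      print("falling edge near x =", x[np.argmin(np.diff(u))])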

  2. Slicer Method Comparison Using Open-source 3D Printer

    NASA Astrophysics Data System (ADS)

    Ariffin, M. K. A. Mohd; Sukindar, N. A.; Baharudin, B. T. H. T.; Jaafar, C. N. A.; Ismail, M. I. S.

    2018-01-01

    Open-source 3D printers have become one of the popular choices for fabricating 3D models. This technology is easily accessible and low in cost. However, several studies have sought to improve the performance of this low-cost technology in terms of the accuracy of the part finish. This study focuses on the choice of slicer between CuraEngine and Slic3r. The effect of the slicer has been observed in terms of accuracy and surface visualization. The results show that if accuracy is the top priority, CuraEngine is the better option, as it gives higher accuracy and requires less filament than Slic3r. Slic3r may be very useful for complicated parts, such as overhanging structures, because its excess material acts as support material. The study provides a basic platform for users to decide which option to use in fabricating a 3D model.

  3. Airport-Noise Levels and Annoyance Model (ALAMO) system's reference manual

    NASA Technical Reports Server (NTRS)

    Deloach, R.; Donaldson, J. L.; Johnson, M. J.

    1986-01-01

    The airport-noise levels and annoyance model (ALAMO) is described in terms of its constituent modules, the execution of the ALAMO procedure files necessary for system execution, and the source code documentation associated with code development at Langley Research Center. The modules constituting ALAMO are presented both in flow graph form and through a description of the subroutines and functions that comprise them.

  4. Cosmological implications of scalar field dark energy models in f(T,𝒯 ) gravity

    NASA Astrophysics Data System (ADS)

    Salako, Ines G.; Jawad, Abdul; Moradpour, Hooman

    After reviewing f(T,𝒯 ) gravity, in which T is the torsion scalar and 𝒯 is the trace of the energy-momentum tensor, we refer to two cosmological models of this theory that are in agreement with observational data. Thereafter, we consider a flat Friedmann-Robertson-Walker (FRW) universe filled by a pressureless source and treat the terms other than the Einstein terms in the corresponding Friedmann equations as the dark energy (DE) candidate. In addition, some cosmological features of the models, including their equations of state and deceleration parameters, are addressed, which helps us obtain the accelerated expansion of the universe in the quintessence era. Finally, we extract the scalar field as well as the potential of the quintessence, tachyon, K-essence and dilatonic fields for both f(T,𝒯 ) models. It is observed that the dynamics of the scalar fields as well as the scalar potentials of these models indicate an accelerated expanding universe.

  5. Density-dependent microbial turnover improves soil carbon model predictions of long-term litter manipulations

    NASA Astrophysics Data System (ADS)

    Georgiou, Katerina; Abramoff, Rose; Harte, John; Riley, William; Torn, Margaret

    2017-04-01

    Climatic, atmospheric, and land-use changes all have the potential to alter soil microbial activity via abiotic effects on soil or mediated by changes in plant inputs. Recently, many promising microbial models of soil organic carbon (SOC) decomposition have been proposed to advance understanding and prediction of climate and carbon (C) feedbacks. Most of these models, however, exhibit unrealistic oscillatory behavior and SOC insensitivity to long-term changes in C inputs. Here we diagnose the sources of instability in four models that span the range of complexity of these recent microbial models, by sequentially adding complexity to a simple model to include microbial physiology, a mineral sorption isotherm, and enzyme dynamics. We propose a formulation that introduces density-dependence of microbial turnover, which acts to limit population sizes and reduce oscillations. We compare these models to results from 24 long-term C-input field manipulations, including the Detritus Input and Removal Treatment (DIRT) experiments, to show that there are clear metrics that can be used to distinguish and validate the inherent dynamics of each model structure. We find that widely used first-order models and microbial models without density-dependence cannot readily capture the range of long-term responses observed across the DIRT experiments as a direct consequence of their model structures. The proposed formulation improves predictions of long-term C-input changes, and implies greater SOC storage associated with CO2-fertilization-driven increases in C inputs over the coming century compared to common microbial models. Finally, we discuss our findings in the context of improving microbial model behavior for inclusion in Earth System Models.
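
    The density-dependence idea can be illustrated with a two-pool microbial model in which turnover scales as B^beta; beta = 1 recovers conventional linear turnover and beta > 1 the proposed density-dependent form. Parameters below are illustrative stand-ins, not the paper's calibrated values:

      import numpy as np
      from scipy.integrate import solve_ivp

      Vmax, Km, eps, tau = 8.0, 200.0, 0.4, 0.02     # illustrative parameters

      def rhs(t, y, beta, I):
          C, B = y                                   # substrate C, microbial biomass B
          U = Vmax * B * C / (Km + C)                # Michaelis-Menten uptake
          D = tau * B**beta                          # turnover: beta=1 linear, beta>1 density-dependent
          return [I - U + D, eps * U - D]            # dead biomass returns to the C pool

      for beta in (1.0, 2.0):
          # Response to a step change (doubled inputs, I = 2): density-dependent
          # turnover damps oscillations and yields a different biomass equilibrium.
          sol = solve_ivp(rhs, (0.0, 400.0), [300.0, 5.0], args=(beta, 2.0))
          print(f"beta = {beta}: C -> {sol.y[0, -1]:.1f}, B -> {sol.y[1, -1]:.2f}")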

  6. The importance of quadrupole sources in prediction of transonic tip speed propeller noise

    NASA Technical Reports Server (NTRS)

    Hanson, D. B.; Fink, M. R.

    1978-01-01

    A theoretical analysis is presented for the harmonic noise of high speed, open rotors. Far field acoustic radiation equations based on the Ffowcs-Williams/Hawkings theory are derived for a static rotor with thin blades and zero lift. Near the plane of rotation, the dominant sources are the volume displacement and the ρu² quadrupole, where u is the disturbance velocity component in the direction of blade motion. These sources are compared in both the time domain and the frequency domain using two-dimensional airfoil theories valid in the subsonic, transonic, and supersonic speed ranges. For nonlifting parabolic arc blades, the two sources are equally important at speeds between the section critical Mach number and a Mach number of one. However, for moderately subsonic or fully supersonic flow over thin blade sections, the quadrupole term is negligible. It is concluded for thin blades that significant quadrupole noise radiation is strictly a transonic phenomenon and that it can be suppressed with blade sweep. Noise calculations are presented for two rotors, one simulating a helicopter main rotor and the other a model propeller. For the latter, agreement with test data was substantially improved by including the quadrupole source term.

  7. Identifying fine sediment sources to alleviate flood risk caused by fine sediments through catchment connectivity analysis

    NASA Astrophysics Data System (ADS)

    Twohig, Sarah; Pattison, Ian; Sander, Graham

    2017-04-01

    Fine sediment poses a significant threat to UK river systems in terms of vegetation, aquatic habitats and morphology. Deposition of fine sediment onto the river bed reduces channel capacity resulting in decreased volume to contain high flow events. Once the in channel problem has been identified managers are under pressure to sustainably mitigate flood risk. With climate change and land use adaptations increasing future pressures on river catchments it is important to consider the connectivity of fine sediment throughout the river catchment and its influence on channel capacity, particularly in systems experiencing long term aggradation. Fine sediment erosion is a continuing concern in the River Eye, Leicestershire. The predominately rural catchment has a history of flooding within the town of Melton Mowbray. Fine sediment from agricultural fields has been identified as a major contributor of sediment delivery into the channel. Current mitigation measures are not sustainable or successful in preventing the continuum of sediment throughout the catchment. Identifying the potential sources and connections of fine sediment would provide insight into targeted catchment management. 'Sensitive Catchment Integrated Modelling Analysis Platforms' (SCIMAP) is a tool often used by UK catchment managers to identify potential sources and routes of sediment within a catchment. SCIMAP is a risk based model that combines hydrological (rainfall) and geomorphic controls (slope, land cover) to identify the risk of fine sediment being transported from source into the channel. A desktop version of SCIMAP was run for the River Eye at a catchment scale using 5m terrain, rainfall and land cover data. A series of SCIMAP model runs were conducted changing individual parameters to determine the sensitivity of the model. Climate Change prediction data for the catchment was used to identify potential areas of future connectivity and erosion risk for catchment managers. The results have been subjected to field validation as part of a wider research project which provides an indication of the robustness of widespread models as effective management tools.

  8. #nowplaying Madonna: a large-scale evaluation on estimating similarities between music artists and between movies from microblogs.

    PubMed

    Schedl, Markus

    2012-01-01

    Different term weighting techniques such as TF-IDF or BM25 have been used intensely for manifold text-based information retrieval tasks. Their use for modeling term profiles for named entities and the subsequent calculation of similarities between these named entities have been studied to a much smaller extent. The recent trend of microblogging has made available massive amounts of information about almost every topic around the world. Therefore, microblogs represent a valuable source for text-based named entity modeling. In this paper, we present a systematic and comprehensive evaluation of different term weighting measures, normalization techniques, query schemes, index term sets, and similarity functions for the task of inferring similarities between named entities, based on data extracted from microblog posts. We analyze several thousand combinations of choices for the above mentioned dimensions, which influence the similarity calculation process, and we investigate in which way they impact the quality of the similarity estimates. Evaluation is performed using three real-world data sets: two collections of microblogs related to music artists and one related to movies. For the music collections, we present results of genre classification experiments using as benchmark genre information from allmusic.com. For the movie collection, we present results of multi-class classification experiments using as benchmark categories from IMDb. We show that microblogs can indeed be exploited to model named entity similarity with remarkable accuracy, provided the correct settings for the analyzed aspects are used. We further compare the results to those obtained when using Web pages as data source.
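
    The underlying pipeline (concatenate an entity's microblog posts into one pseudo-document per entity, weight the terms, compare profiles with a similarity function) can be sketched with one of the evaluated combinations, TF-IDF weighting with cosine similarity; the artist profiles here are toy stand-ins:

      from sklearn.feature_extraction.text import TfidfVectorizer
      from sklearn.metrics.pairwise import cosine_similarity

      # One pseudo-document per artist, built from microblog posts (toy data).
      profiles = {
          "artist_a": "synth pop dance electro beat club remix",
          "artist_b": "guitar rock riff amp live tour drums",
          "artist_c": "dance club remix beat pop vocals",
      }
      names = list(profiles)
      X = TfidfVectorizer().fit_transform([profiles[n] for n in names])
      S = cosine_similarity(X)                   # pairwise artist similarities

      for i, a in enumerate(names):
          for j, b in enumerate(names):
              if j > i:
                  print(f"sim({a}, {b}) = {S[i, j]:.2f}")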

  9. Relevance analysis and short-term prediction of PM2.5 concentrations in Beijing based on multi-source data

    NASA Astrophysics Data System (ADS)

    Ni, X. Y.; Huang, H.; Du, W. P.

    2017-02-01

    The PM2.5 problem is proving to be a major public crisis and is of great public concern, requiring an urgent response. Information about, and prediction of, PM2.5 from the perspective of atmospheric dynamic theory is still limited due to the complexity of the formation and development of PM2.5. In this paper, we attempted the relevance analysis and short-term prediction of PM2.5 concentrations in Beijing, China, using multi-source data mining. A correlation analysis model relating PM2.5 to physical data (meteorological data, including regional average rainfall, daily mean temperature, average relative humidity, average wind speed, and maximum wind speed, and other pollutant concentration data, including CO, NO2, SO2, and PM10) and social media data (microblog data) was proposed, based on the multivariate statistical analysis method. The study found that, among these factors, the average wind speed, the concentrations of CO, NO2, and PM10, and the daily number of microblog entries with the key words 'Beijing; Air pollution' show high mathematical correlation with PM2.5 concentrations. The correlation analysis was further studied using a machine learning model, the Back Propagation Neural Network (BPNN). It was found that the BPNN method performs better in correlation mining. Finally, an Autoregressive Integrated Moving Average (ARIMA) time series model was applied to explore the prediction of PM2.5 in the short-term time series. The predicted results were in good agreement with the observed data. This study is useful for helping realize real-time monitoring, analysis and pre-warning of PM2.5, and it also helps to broaden the application of big data and multi-source data mining methods.
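
    A minimal sketch of the ARIMA step on a synthetic PM2.5 series, using statsmodels; the model order and data here are illustrative, not the paper's Beijing observations or fitted configuration:

      import numpy as np
      from statsmodels.tsa.arima.model import ARIMA

      rng = np.random.default_rng(1)
      pm25 = 80 + np.cumsum(rng.normal(0, 5, 200))   # synthetic daily PM2.5 series

      fit = ARIMA(pm25, order=(1, 1, 1)).fit()       # (p, d, q) chosen for illustration
      print(fit.forecast(steps=3))                   # three-step-ahead prediction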

  10. Spinoza's error: memory for truth and falsity.

    PubMed

    Nadarevic, Lena; Erdfelder, Edgar

    2013-02-01

    Two theoretical frameworks have been proposed to account for the representation of truth and falsity in human memory: the Cartesian model and the Spinozan model. Both models presume that during information processing a mental representation of the information is stored along with a tag indicating its truth value. However, the two models disagree on the nature of these tags. According to the Cartesian model, true information receives a "true" tag and false information receives a "false" tag. In contrast, the Spinozan model claims that only false information receives a "false" tag, whereas untagged information is automatically accepted as true. To test the Cartesian and Spinozan models, we conducted two source memory experiments in which participants studied true and false trivia statements from three different sources differing in credibility (i.e., presenting 100% true, 50% true and 50% false, or 100% false statements). In Experiment 1, half of the participants were informed about the source credibility prior to the study phase. As compared to a control group, this precue group showed improved source memory for both true and false statements, but not for statements with an uncertain validity status. Moreover, memory did not differ for truth and falsity in the precue group. As Experiment 2 revealed, this finding is replicated even when using a 1-week rather than a 20-min retention interval between study and test phases. The results of both experiments clearly contradict the Spinozan model but can be explained in terms of the Cartesian model.

  11. Sound source localization and segregation with internally coupled ears: the treefrog model

    PubMed Central

    Christensen-Dalsgaard, Jakob

    2016-01-01

    Acoustic signaling plays key roles in mediating many of the reproductive and social behaviors of anurans (frogs and toads). Moreover, acoustic signaling often occurs at night, in structurally complex habitats, such as densely vegetated ponds, and in dense breeding choruses characterized by high levels of background noise and acoustic clutter. Fundamental to anuran behavior is the ability of the auditory system to determine accurately the location from where sounds originate in space (sound source localization) and to assign specific sounds in the complex acoustic milieu of a chorus to their correct sources (sound source segregation). Here, we review anatomical, biophysical, neurophysiological, and behavioral studies aimed at identifying how the internally coupled ears of frogs contribute to sound source localization and segregation. Our review focuses on treefrogs in the genus Hyla, as they are the most thoroughly studied frogs in terms of sound source localization and segregation. They also represent promising model systems for future work aimed at understanding better how internally coupled ears contribute to sound source localization and segregation. We conclude our review by enumerating directions for future research on these animals that will require the collaborative efforts of biologists, physicists, and roboticists. PMID:27730384

  12. Predicting the performance of a power amplifier using large-signal circuit simulations of an AlGaN/GaN HFET model

    NASA Astrophysics Data System (ADS)

    Bilbro, Griff L.; Hou, Danqiong; Yin, Hong; Trew, Robert J.

    2009-02-01

    We have quantitatively modeled the conduction current and charge storage of an HFET in terms of its physical dimensions and material properties. For DC or small-signal RF operation, no adjustable parameters are necessary to predict the terminal characteristics of the device. Linear performance measures such as small-signal gain and input admittance can be predicted directly from the geometric structure and material properties assumed for the device design. We have validated our model at low frequency against experimental I-V measurements and against two-dimensional device simulations. We discuss our recent extension of the model to include a larger class of electron velocity-field curves. We also discuss the recent reformulation of the model to facilitate its implementation in commercial large-signal high-frequency circuit simulators. Large-signal RF operation is more complex. First, the highest CW microwave power is fundamentally bounded by a brief, reversible channel breakdown in each RF cycle. Second, the highest experimental measurements of efficiency, power, or linearity always require harmonic load pull and possibly also harmonic source pull. Presently, our model accounts for these facts with an adjustable breakdown voltage and with adjustable load and source impedances for the fundamental frequency and its harmonics. This has allowed us to validate our model for large-signal RF conditions by simultaneously fitting experimental measurements of output power, gain, and power added efficiency of real devices. We show that the resulting model can be used to compare alternative device designs in terms of their large-signal performance, such as their output power at 1 dB gain compression or their third-order intercept points. In addition, the model provides insight into new device physics features enabled by the unprecedented current and voltage levels of AlGaN/GaN HFETs, including non-ohmic resistance in the source access regions and partial depletion of the 2DEG in the drain access region.

  13. Assessment of macroseismic intensity in the Nile basin, Egypt

    NASA Astrophysics Data System (ADS)

    Fergany, Elsayed

    2018-01-01

    This work assesses deterministic seismic hazard and risk in terms of a maximum expected intensity map for the Egyptian Nile basin sector. A seismic source zone model of Egypt was delineated based on an updated, compatible earthquake catalog (2015), focal mechanisms, and the common tectonic elements. Four effective seismic source zones were identified along the Nile basin. The observed macroseismic intensity data along the basin were used to develop an intensity prediction equation defined in terms of moment magnitude. A maximum expected intensity map was then produced based on the developed intensity prediction equation, the identified effective seismic source zones, and the maximum expected magnitude for each zone along the basin. The earthquake hazard and risk were discussed and analyzed in view of the maximum expected moment magnitude and the maximum expected intensity values for each effective source zone. Moderate expected magnitudes are likely to pose a high risk to the Cairo and Aswan regions. The results of this study could serve as a recommendation for the planners in charge of mitigating seismic risk in these strategic zones of Egypt.
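
    The abstract does not give the fitted coefficients, but intensity prediction equations of this kind typically take a form such as I = c1 + c2*Mw - c3*ln(R) - c4*R in moment magnitude Mw and distance R; a purely hypothetical sketch with made-up coefficients:

      import numpy as np

      # Hypothetical IPE coefficients (not the paper's fitted values).
      c1, c2, c3, c4 = 1.5, 1.7, 1.2, 0.002

      def intensity(mw, r_km):
          return c1 + c2 * mw - c3 * np.log(r_km) - c4 * r_km

      for r in (10.0, 50.0, 100.0):
          print(f"Mw 6.0 at {r:5.1f} km: expected intensity {intensity(6.0, r):.1f}")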

  14. Outer heliospheric radio emissions. II - Foreshock source models

    NASA Technical Reports Server (NTRS)

    Cairns, Iver H.; Kurth, William S.; Gurnett, Donald A.

    1992-01-01

    Observations of LF radio emissions in the range 2-3 kHz by the Voyager spacecraft during the intervals 1983-1987 and 1989 to the present while at heliocentric distances greater than 11 AU are reported. New analyses of the wave data are presented, and the characteristics of the radiation are reviewed and discussed. Two classes of events are distinguished: transient events with varying starting frequencies that drift upward in frequency and a relatively continuous component that remains near 2 kHz. Evidence for multiple transient sources and for extension of the 2-kHz component above the 2.4-kHz interference signal is presented. The transient emissions are interpreted in terms of radiation generated at multiples of the plasma frequency when solar wind density enhancements enter one or more regions of a foreshock sunward of the inner heliospheric shock. Solar wind density enhancements by factors of 4-10 are observed. Propagation effects, the number of radiation sources, and the time variability, frequency drift, and varying starting frequencies of the transient events are discussed in terms of foreshock sources.

  15. Development of atmospheric N2O isotopomers model based on a chemistry-coupled atmospheric general circulation model

    NASA Astrophysics Data System (ADS)

    Ishijima, K.; Toyoda, S.; Sudo, K.; Yoshikawa, C.; Nanbu, S.; Aoki, S.; Nakazawa, T.; Yoshida, N.

    2009-12-01

    It is well known that isotopic information is useful for qualitatively understanding cycles and constraining sources of some atmospheric species, but so far there has been no study modeling N2O isotopomers throughout the atmosphere, from the troposphere to the stratosphere, that includes realistic surface N2O isotopomer emissions. We have started to develop a model to simulate spatiotemporal variations of the atmospheric N2O isotopomers in both the troposphere and the stratosphere, based on a chemistry-coupled atmospheric general circulation model, in order to obtain a more accurate quantitative understanding of the global N2O cycle. For surface emissions of the isotopomers, a combination of EDGAR-based anthropogenic and soil fluxes and monthly varying GEIA oceanic fluxes is used, with isotopic values of the global total sources estimated from the long-term trend of atmospheric N2O isotopomers derived from firn-air analyses. Isotopic fractionation in chemical reactions is considered for photolysis and photo-oxidation of N2O in the stratosphere. The isotopic fractionation coefficients have been taken from laboratory experiments, but we will also test coefficients determined by theoretical calculations. In terms of the global N2O isotopomer budgets, precise quantification of the sources is quite challenging, because even the spatiotemporal variabilities of the N2O sources have never been adequately estimated. Therefore, we have first started validating the simulated isotopomer results in the stratosphere, using isotopomer profiles obtained by balloon observations. N2O concentration profiles are mostly well reproduced, partly because dynamical processes are realistically reproduced by nudging with reanalysis meteorological data. However, the concentration in the polar vortex tends to be overestimated, probably due to the relatively coarse wavelength resolution of the photolysis calculation. Such model features also appear in the isotopomer results, which are generally underestimated relative to the balloon observations even though the concentration is well simulated. This tendency has been somewhat improved by incorporating another photolysis scheme with slightly higher wavelength resolution into the model. From another point of view, these facts indicate that N2O isotopomers can be used to validate the stratospheric photochemical calculations in the model, because the isotopomer ratio values are highly sensitive to settings such as the wavelength resolution of the photochemical scheme. N2O isotopomer modeling therefore seems useful not only for validating the fractionation coefficients and the isotopic characterization of sources, but may also serve as an index of the precision of the stratospheric photolysis calculation in the model.

  16. A nonequilibrium model for a moderate pressure hydrogen microwave discharge plasma

    NASA Technical Reports Server (NTRS)

    Scott, Carl D.

    1993-01-01

    This document describes a simple nonequilibrium energy exchange and chemical reaction model to be used in a computational fluid dynamics calculation for a hydrogen plasma excited by microwaves. The model takes into account the exchange between the electrons and excited states of molecular and atomic hydrogen. Specifically, electron-translation, electron-vibration, translation-vibration, ionization, and dissociation are included. The model assumes three temperatures, translational/rotational, vibrational, and electron, each describing a Boltzmann distribution for its respective energy mode. The energy from the microwave source is coupled to the energy equation via a source term that depends on an effective electric field which must be calculated outside the present model. This electric field must be found by coupling the results of the fluid dynamics and kinetics solution with a solution to Maxwell's equations that includes the effects of the plasma permittivity. The solution to Maxwell's equations is not within the scope of this present paper.

  17. Hydropower Modeling Challenges

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stoll, Brady; Andrade, Juan; Cohen, Stuart

    Hydropower facilities are important assets for the electric power sector and represent a key source of flexibility for electric grids with large amounts of variable generation. As variable renewable generation sources expand, understanding the capabilities and limitations of the flexibility from hydropower resources is important for grid planning. Appropriately modeling these resources, however, is difficult because of the wide variety of constraints these plants face that other generators do not. These constraints can be broadly categorized as environmental, operational, and regulatory. This report highlights several key issues involved in incorporating these constraints when modeling hydropower operations in production cost and capacity expansion models. Many of these challenges involve a lack of data to adequately represent the constraints or issues of model complexity and run time. We present several potential methods for improving the accuracy of hydropower representation in these models to allow for a better understanding of hydropower's capabilities.

  18. The effect of nonlinear propagation on heating of tissue: A numerical model of diagnostic ultrasound beams

    NASA Astrophysics Data System (ADS)

    Cahill, Mark D.; Humphrey, Victor F.; Doody, Claire

    2000-07-01

    Thermal safety indices for diagnostic ultrasound beams are calculated under the assumption that the sound propagates under linear conditions. A non-axisymmetric finite difference model is used to solve the KZK equation, and so to model the beam of a diagnostic scanner in pulsed Doppler mode. Beams from both a uniform focused rectangular source and a linear array are considered. Calculations are performed in water, and in attenuating media with tissue-like characteristics. Attenuating media are found to exhibit significant nonlinear effects for finite-amplitude beams. The resulting loss of intensity by the beam is then used as the source term in a model of tissue heating to estimate the maximum temperature rises. These are compared with the thermal indices, derived from the properties of the water-propagated beams.
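
    The coupling described (a nonlinear acoustics solver supplying a heat-deposition source term, commonly written q = 2*alpha*I for intensity I and pressure attenuation alpha, to a tissue heating model) can be sketched in one dimension; this is a generic explicit conduction solver with illustrative tissue constants, not the authors' KZK-based model:

      import numpy as np

      # 1-D explicit conduction solver with an acoustic heating source q = 2*alpha*I.
      k, rho, cp = 0.5, 1050.0, 3600.0               # tissue conductivity, density, heat capacity
      alpha = 5.0                                    # absorption coefficient (Np/m), illustrative
      nx, dx, dt = 200, 5e-4, 0.01
      x = np.arange(nx) * dx
      I = 3e4 * np.exp(-((x - 0.05) / 0.01) ** 2)    # focal intensity profile (W/m^2)
      q = 2 * alpha * I                              # heat deposition rate (W/m^3)

      T = np.full(nx, 37.0)
      for _ in range(int(5.0 / dt)):                 # 5 s exposure
          lap = (np.roll(T, 1) - 2 * T + np.roll(T, -1)) / dx**2
          lap[0] = lap[-1] = 0.0                     # crude insulated ends
          T += dt * (k * lap + q) / (rho * cp)

      print(f"peak temperature rise after 5 s: {T.max() - 37.0:.2f} K")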

  19. Issues and Methods Concerning the Evaluation of Hypersingular and Near-Hypersingular Integrals in BEM Formulations

    NASA Technical Reports Server (NTRS)

    Fink, P. W.; Khayat, M. A.; Wilton, D. R.

    2005-01-01

    It is known that higher order modeling of the sources and the geometry in Boundary Element Modeling (BEM) formulations is essential to highly efficient computational electromagnetics. However, in order to achieve the benefits of higher order basis and geometry modeling, the singular and near-singular terms arising in BEM formulations must be integrated accurately. In particular, the accurate integration of near-singular terms, which occur when observation points are near but not on source regions of the scattering object, has been considered one of the remaining limitations on the computational efficiency of integral equation methods. The method of singularity subtraction has been used extensively for the evaluation of singular and near-singular terms. Piecewise integration of the source terms in this manner, while manageable for bases of constant and linear orders, becomes unwieldy and prone to error for bases of higher order. Furthermore, we find that the singularity subtraction method is not conducive to object-oriented programming practices, particularly in the context of multiple operators. To extend the capabilities, accuracy, and maintainability of general-purpose codes, the subtraction method is being replaced in favor of purely numerical quadrature schemes. These schemes employ singularity cancellation methods in which a change of variables is chosen such that the Jacobian of the transformation cancels the singularity. An example of the singularity cancellation approach is the Duffy method, which has two major drawbacks: 1) in the resulting integrand, it produces an angular variation about the singular point that becomes nearly singular for observation points close to an edge of the parent element, and 2) it appears not to work well when applied to nearly-singular integrals. Recently, the authors have introduced the transformation u(x′) = sinh⁻¹(x′/√(y′² + z²)) for integrating functions of the form I = ∫_D Λ(r′) e^(−jkR)/(4πR) dD, where Λ(r′) is a vector or scalar basis function and R = √(x′² + y′² + z²) is the distance between the source and observation points. This scheme has all of the advantages of the Duffy method while avoiding the disadvantages listed above. In this presentation we survey similar approaches for handling singular and near-singular terms for kernels with 1/R² type behavior, addressing potential pitfalls and offering techniques to efficiently handle special cases.
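
    A one-dimensional illustration of the sinh⁻¹ cancellation idea: substituting x = z·sinh(u) into a 1/R kernel makes the Jacobian cancel the near-singularity exactly, so a low-order Gauss rule that fails badly on the raw integrand becomes exact after the transformation. Values are illustrative, and this simplifies the surface-integral setting of the paper to a line integral:

      import numpy as np

      z = 1e-3                                       # small offset: nearly singular kernel
      f = lambda x: 1.0 / np.sqrt(x**2 + z**2)       # 1/R along the element
      exact = np.arcsinh(1.0 / z) - np.arcsinh(-1.0 / z)

      nodes, wts = np.polynomial.legendre.leggauss(8)
      naive = np.sum(wts * f(nodes))                 # Gauss-Legendre on x in [-1, 1]

      # sinh^-1 cancellation: with x = z*sinh(u) the Jacobian z*cosh(u) cancels
      # the 1/R factor, leaving a perfectly smooth (here constant) integrand.
      a, b = np.arcsinh(-1.0 / z), np.arcsinh(1.0 / z)
      u = 0.5 * (b - a) * nodes + 0.5 * (a + b)
      sinh_rule = 0.5 * (b - a) * np.sum(wts * f(z * np.sinh(u)) * z * np.cosh(u))

      print(f"exact {exact:.6f}  naive {naive:.6f}  sinh-transformed {sinh_rule:.6f}")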

  20. Sources and contents of air pollution affecting term low birth weight in Los Angeles County, California, 2001-2008.

    PubMed

    Laurent, Olivier; Hu, Jianlin; Li, Lianfa; Cockburn, Myles; Escobedo, Loraine; Kleeman, Michael J; Wu, Jun

    2014-10-01

    Low birth weight (LBW, <2500 g) has been associated with exposure to air pollution, but it is still unclear which sources or components of air pollution might be in play. The association between ultrafine particles and LBW has never been studied. The objective was to study the relationships between LBW in term-born infants and exposure to particles by size fraction, source, and chemical composition, and to complementary components of air pollution, in Los Angeles County (California, USA) over the period 2001-2008. Birth certificates (n=960,945) were geocoded to the maternal residence. Primary particulate matter (PM) concentrations by source and composition were modeled. Measured fine PM, nitrogen dioxide, and ozone concentrations were interpolated using empirical Bayesian kriging. Traffic indices were estimated. Associations between LBW and air pollution metrics were examined using generalized additive models, adjusting for maternal age, parity, race/ethnicity, education, neighborhood income, gestational age, and infant sex. Increased LBW risks were associated with the mass of primary fine and ultrafine PM, with several major sources of primary PM (especially gasoline, wood burning, and commercial meat cooking), and with chemical species in primary PM (elemental and organic carbon, potassium, iron, chromium, nickel, and titanium, but not lead or arsenic). Increased LBW risks were also associated with total fine PM mass, nitrogen dioxide, and local traffic indices (especially within 50 m of the home), but not with ozone. Stronger associations were observed in infants born to women with low socioeconomic status, chronic hypertension, diabetes, or a high body mass index. This study supports previously reported associations between traffic-related pollutants and LBW and suggests other pollution sources and components, including ultrafine particles, as possible risk factors.

  1. A critical review of principal traffic noise models: Strategies and implications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Garg, Naveen, E-mail: ngarg@mail.nplindia.ernet.in; Department of Mechanical, Production and Industrial Engineering, Delhi Technological University, Delhi 110042; Maji, Sagar

    2014-04-01

    The paper presents an exhaustive comparison of principal traffic noise models adopted in recent years in developed nations. The comparison is drawn on the basis of technical attributes including source modelling and sound propagation algorithms. Although the characterization of the source in terms of rolling and propulsion noise in conjunction with advanced numerical methods for sound propagation has significantly reduced the uncertainty in traffic noise predictions, the approach followed is quite complex and requires specialized mathematical skills for predictions, which is sometimes quite cumbersome for town planners. Also, it is sometimes difficult to follow the best approach when a variety of solutions have been proposed. This paper critically reviews all these aspects pertaining to the recent models developed and adapted in some countries and also discusses the strategies followed and implications of these models. - Highlights: • Principal traffic noise models developed are reviewed. • Sound propagation algorithms used in traffic noise models are compared. • Implications of models are discussed.

  2. A Source-Term Based Boundary Layer Bleed/Effusion Model for Passive Shock Control

    NASA Technical Reports Server (NTRS)

    Baurle, Robert A.; Norris, Andrew T.

    2011-01-01

    A modeling framework for boundary layer effusion has been developed based on the use of source (or sink) terms instead of the usual practice of specifying bleed directly as a boundary condition. This framework allows the surface boundary condition (i.e. isothermal wall, adiabatic wall, slip wall, etc.) to remain unaltered in the presence of bleed. This approach also lends itself to easily permit the addition of empirical models for second order effects that are not easily accounted for by simply defining effective transpiration values. Two effusion models formulated for supersonic flows have been implemented into this framework; the Doerffer/Bohning law and the Slater formulation. These models were applied to unit problems that contain key aspects of the flow physics applicable to bleed systems designed for hypersonic air-breathing propulsion systems. The ability of each model to predict bulk bleed properties was assessed, as well as the response of the boundary layer as it passes through and downstream of a porous bleed system. The model assessment was performed with and without the presence of shock waves. Three-dimensional CFD simulations that included the geometric details of the porous plate bleed systems were also carried out to supplement the experimental data, and provide additional insights into the bleed flow physics. Overall, both bleed formulations fared well for the tests performed in this study. However, the sample of test problems considered in this effort was not large enough to permit a comprehensive validation of the models.
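
    The framework idea, bleed entering the discretized equations as a volumetric sink term over the porous region rather than as a modified wall boundary condition, can be sketched with a toy 1-D continuity equation; this omits the Doerffer/Bohning and Slater closures entirely and uses made-up values:

      import numpy as np

      nx, dx, dt = 100, 0.01, 1e-4
      rho = np.ones(nx)                              # density
      u = 50.0                                       # fixed convection speed (m/s)
      bleed = np.zeros(nx)
      bleed[40:60] = 20.0                            # sink strength (kg/m^3/s) over porous region

      for _ in range(2000):                          # march to steady state
          flux = rho * u
          div = (flux - np.roll(flux, 1)) / dx       # upwind flux divergence
          rho[1:] += dt * (-div[1:] - bleed[1:])     # bleed enters like any source term
          rho[0] = 1.0                               # inflow condition left unaltered

      print(f"density drop across bleed region: {1.0 - rho[-1]:.3f}")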

  3. Multi-Fidelity Uncertainty Propagation for Cardiovascular Modeling

    NASA Astrophysics Data System (ADS)

    Fleeter, Casey; Geraci, Gianluca; Schiavazzi, Daniele; Kahn, Andrew; Marsden, Alison

    2017-11-01

    Hemodynamic models are successfully employed in the diagnosis and treatment of cardiovascular disease with increasing frequency. However, their widespread adoption is hindered by our inability to account for uncertainty stemming from multiple sources, including boundary conditions, vessel material properties, and model geometry. In this study, we propose a stochastic framework which leverages three cardiovascular model fidelities: 3D, 1D and 0D models. 3D models are generated from patient-specific medical imaging (CT and MRI) of aortic and coronary anatomies using the SimVascular open-source platform, with fluid structure interaction simulations and Windkessel boundary conditions. 1D models consist of a simplified geometry automatically extracted from the 3D model, while 0D models are obtained from equivalent circuit representations of blood flow in deformable vessels. Multi-level and multi-fidelity estimators from Sandia's open-source DAKOTA toolkit are leveraged to reduce the variance in our estimated output quantities of interest while maintaining a reasonable computational cost. The performance of these estimators in terms of computational cost reductions is investigated for a variety of output quantities of interest, including global and local hemodynamic indicators. Sandia National Labs is a multimission laboratory managed and operated by NTESS, LLC, for the U.S. DOE under contract DE-NA0003525. Funding for this project provided by NIH-NIBIB R01 EB018302.
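
    The variance-reduction mechanism can be sketched with a two-fidelity control-variate estimator of the general kind that DAKOTA's multilevel/multifidelity methods implement; both "models" below are cheap analytic stand-ins, not cardiovascular solvers:

      import numpy as np

      rng = np.random.default_rng(2)
      hf = lambda x: np.sin(x) + 0.05 * x**2         # expensive "high-fidelity" stand-in
      lf = lambda x: np.sin(x)                       # cheap "low-fidelity" stand-in

      x_hf = rng.normal(size=50)                     # few expensive samples
      x_lf = rng.normal(size=5000)                   # many cheap samples

      y_hf, y_lf = hf(x_hf), lf(x_hf)                # paired evaluations on shared inputs
      alpha = np.cov(y_hf, y_lf)[0, 1] / np.var(y_lf, ddof=1)  # control-variate weight

      est_mc = y_hf.mean()                           # plain Monte Carlo, HF only
      est_mf = est_mc + alpha * (lf(x_lf).mean() - y_lf.mean())
      print(f"HF-only: {est_mc:.4f}  multifidelity: {est_mf:.4f}")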

  4. Impact of Emissions and Long-Range Transport on Multi-Decadal Aerosol Trends: Implications for Air Quality and Climate

    NASA Technical Reports Server (NTRS)

    Chin, Mian

    2012-01-01

    We present a global model analysis of the impact of long-range transport and anthropogenic emissions on the aerosol trends in the major pollution regions in the northern hemisphere and in the Arctic in the past three decades. We will use the Goddard Chemistry Aerosol Radiation and Transport (GOCART) model to analyze the multi-spatial and temporal scale data, including observations from Terra, Aqua, and CALIPSO satellites and from the long-term surface monitoring stations. We will analyze the source attribution (SA) and source-receptor (SR) relationships in North America, Europe, East Asia, South Asia, and the Arctic at the surface and free troposphere and establish the quantitative linkages between emissions from different source regions. We will discuss the implications for regional air quality and climate change.

  5. Groundwater vulnerability and risk mapping using GIS, modeling and a fuzzy logic tool.

    PubMed

    Nobre, R C M; Rotunno Filho, O C; Mansur, W J; Nobre, M M M; Cosenza, C A N

    2007-12-07

    A groundwater vulnerability and risk mapping assessment, based on a source-pathway-receptor approach, is presented for an urban coastal aquifer in northeastern Brazil. A modified version of the DRASTIC methodology was used to map the intrinsic and specific groundwater vulnerability of a 292 km(2) study area. A fuzzy hierarchy methodology was adopted to evaluate the potential contaminant source index, including diffuse and point sources. Numerical modeling was performed for delineation of well capture zones, using MODFLOW and MODPATH. The integration of these elements provided the mechanism to assess groundwater pollution risks and identify areas that must be prioritized in terms of groundwater monitoring and restriction on use. A groundwater quality index based on nitrate and chloride concentrations was calculated, which had a positive correlation with the specific vulnerability index.
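
    For reference, the unmodified DRASTIC index is simply a weighted sum of seven hydrogeologic ratings; the sketch below uses the standard DRASTIC weights, not the modified version developed in the paper:

      # Standard DRASTIC weights for the seven parameters: Depth to water,
      # net Recharge, Aquifer media, Soil media, Topography, Impact of the
      # vadose zone, and hydraulic Conductivity.
      WEIGHTS = {"D": 5, "R": 4, "A": 3, "S": 2, "T": 1, "I": 5, "C": 3}

      def drastic_index(ratings):
          """Weighted sum of 1-10 ratings; ranges from 23 to 230."""
          return sum(WEIGHTS[p] * r for p, r in ratings.items())

      cell = {"D": 9, "R": 8, "A": 6, "S": 7, "T": 10, "I": 8, "C": 6}
      print(drastic_index(cell))                     # higher index = more vulnerable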

  6. Documentation of volume 3 of the 1978 Energy Information Administration annual report to congress

    NASA Astrophysics Data System (ADS)

    1980-02-01

    In a preliminary overview of the projection process, the relationship between energy prices, supply, and demand is addressed. Topics treated in detail include a description of energy economic interactions, assumptions regarding world oil prices, and energy modeling in the long term beyond 1995. Subsequent sections present the general approach and methodology underlying the forecasts, and define and describe the alternative projection series and their associated assumptions. Short term forecasting, midterm forecasting, long term forecasting of petroleum, coal, and gas supplies are included. The role of nuclear power as an energy source is also discussed.

  7. REVIEW OF VOLATILE ORGANIC COMPOUND SOURCE APPORTIONMENT BY CHEMICAL MASS BALANCE. (R826237)

    EPA Science Inventory

    The chemical mass balance (CMB) receptor model has apportioned volatile organic compounds (VOCs) in more than 20 urban areas, mostly in the United States. These applications differ in terms of the total fraction apportioned, the calculation method, the chemical compounds used ...
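
    In its simplest form the CMB receptor model solves a least-squares mass balance c ≈ F·s for nonnegative source contributions s; real CMB applications use effective-variance weighting, which this illustrative sketch omits:

      import numpy as np
      from scipy.optimize import nnls

      # F holds the fraction of each species in each source profile (made-up numbers).
      F = np.array([[0.30, 0.05],                    # species 1 in sources A, B
                    [0.10, 0.40],                    # species 2
                    [0.20, 0.20]])                   # species 3
      c = np.array([2.0, 3.1, 2.4])                  # measured ambient concentrations

      s, resid = nnls(F, c)                          # nonnegative source contributions
      print(f"source contributions: {s}, residual norm: {resid:.3f}")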

  8. The Current Status of Behaviorism and Neurofeedback

    ERIC Educational Resources Information Center

    Fultz, Dwight E.

    2009-01-01

    There appears to be no dominant conceptual model for the process and outcomes of neurofeedback among practitioners or manufacturers. Behaviorists are well-positioned to develop a neuroscience-based source code in which neural activity is described in behavioral terms, providing a basis for behavioral conceptualization and education of…

  9. Parametrized energy spectrum of cosmic-ray protons with kinetic energies down to 1 GeV

    NASA Technical Reports Server (NTRS)

    Tan, L. C.

    1985-01-01

    A new estimation of the interstellar proton spectrum is made in which the source term of primary protons is taken from shock acceleration theory and the cosmic ray propagation calculation is based on a proposed nonuniform galactic disk model.

  10. Dynamical model for the toroidal sporadic meteors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pokorný, Petr; Vokrouhlický, David; Nesvorný, David

    More than a decade of radar operations by the Canadian Meteor Orbit Radar have allowed both young and moderately old streams to be distinguished from the dispersed sporadic background component. The latter has been categorized according to broad radiant regions visible to Earth-based observers into three broad classes: the helion and anti-helion source, the north and south apex sources, and the north and south toroidal sources (and a related arc structure). The first two are populated mainly by dust released from Jupiter-family comets and new comets. Proper modeling of the toroidal sources has not to date been accomplished. Here, we develop a steady-state model for the toroidal source of the sporadic meteoroid complex, compare our model with the available radar measurements, and investigate a contribution of dust particles from our model to the whole population of sporadic meteoroids. We find that the long-term stable part of the toroidal particles is mainly fed by dust released by Halley type (long period) comets (HTCs). Our synthetic model reproduces most of the observed features of the toroidal particles, including the most troublesome low-eccentricity component, which is due to a combination of two effects: particles' ability to decouple from Jupiter and circularize by the Poynting-Robertson effect, and large collision probability for orbits similar to that of the Earth. Our calibrated model also allows us to estimate the total mass of the HTC-released dust in space and check the flux necessary to maintain the cloud in a steady state.

  11. Numerical models analysis of energy conversion process in air-breathing laser propulsion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hong Yanji; Song Junling; Cui Cunyan

    The energy source was considered the key element in describing the energy conversion process in air-breathing laser propulsion. Some secondary factors were ignored when three independent modules, a ray transmission module, an energy source term module, and a fluid dynamic module, were established by coupling the laser radiation transport equation with the fluid mechanics equations. The incident laser beam was simulated using a ray tracing method. The calculated results were in good agreement with those of theoretical analysis and experiments.

  12. Lean Six Sigma Project - Defense Logistics Agency/Honeywell Long-Term Contract Model Using One-Pass Pricing for Sole-Source Spare Parts

    DTIC Science & Technology

    2011-02-18

    Excerpt from the project's process-control table (item; measure; owner; review point; control limits; reaction plan):

    1. Complaints from other suppliers (synopsis, award); owner: SCG; during the award process; identify sole-source ... parts; control limits 0.0 / 1.0 / 0.0; reaction plan: evaluate the complaint and, if valid, remove the item from the contract.
    2. Tracking timeline for procurement/reviews; owner: SCG; during the pre-award process ... review solicitation; 100.0; reaction plan: determine where the document stands in the approval process, then adjust milestones and follow up.
    3. FAR/DPAP guidance; owner: SCG.

  13. Role of large scale energy systems models in R and D planning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lamontagne, J.

    1980-11-01

    Long-term energy policy deals with the problem of finite supplies of convenient energy sources becoming more costly as they are depleted. The development of alternative technologies to provide new sources of energy and extend the lives of current ones is an attractive option available to government. Thus, one aspect of long-term energy policy involves investment in R and D. The importance of the problems addressed by R and D to the future of society (especially with regard to energy) dictates adoption of a cogent approach to resource allocation and to the designation of priorities for R and D. It is hoped that energy systems models, when properly used, can provide useful inputs to this process. The influence of model results on energy policy makers who are not knowledgeable about flaws or uncertainties in the models, errors in assumptions in model inputs which can result in faulty forecasts, the overall usefulness of energy system models, and model limitations are discussed. It is suggested that the large scale energy systems models currently used for assessing a broad spectrum of policy issues need to be replaced with reasonably simple models capable of dealing with uncertainty in a straightforward manner, and their methodologies and the meaning of their results should be transparent, especially to those removed from the modeling process. Energy models should be clearly related to specific issues. Methodologies should be clearly related to specific decisions, and should allow adjustments to be easily made for alternative assumptions and for additional knowledge gained during the evolution of the energy system. (LCL)

  14. A framework for emissions source apportionment in industrial areas: MM5/CALPUFF in a near-field application.

    PubMed

    Ghannam, K; El-Fadel, M

    2013-02-01

    This paper examines the relative source contribution to ground-level concentrations of carbon monoxide (CO), nitrogen dioxide (NO2), and PM10 (particulate matter with an aerodynamic diameter < 10 μm) in a coastal urban area due to emissions from an industrial complex with multiple stacks, quarrying activities, and a nearby highway. For this purpose, an inventory of CO, oxides of nitrogen (NOx), and PM10 emissions was coupled with the non-steady-state Mesoscale Model 5/California Puff (MM5/CALPUFF) dispersion modeling system to simulate individual source contributions under several spatial and temporal scales. As the contribution of a particular source to ground-level concentrations can be evaluated either by simulating that source's emissions alone or by simulating total emissions except that source, a set of emission sensitivity simulations was designed to examine whether CALPUFF maintains a linear relationship between emission rates and predicted concentrations in cases where emitted plumes overlap and chemical transformations are simulated. Source apportionment revealed that ground-level releases (i.e., the highway and quarries) extending over large areas dominated the contribution to exposure levels over elevated point sources, despite the fact that cumulative emissions from point sources are higher. Sensitivity analysis indicated that chemical transformations of NOx are insignificant, possibly due to short-range plume transport, with CALPUFF exhibiting a linear response to changes in emission rate. The current paper points to the significance of ground-level emissions in contributing to urban air pollution exposure and questions the viability of the prevailing paradigm of point-source emission reduction, especially since the incremental improvement in air quality associated with this common abatement strategy may not accomplish the desired benefit in terms of lower exposure, despite costly emissions capping. The application of atmospheric dispersion models for source apportionment helps in identifying major contributors to regional air pollution. In industrial urban areas where multiple sources with different geometries contribute to emissions, ground-level releases extended over large areas, such as roads and quarries, often dominate the contribution to ground-level air pollution. Industrial emissions released at elevated stack heights may experience significant dilution, resulting in a minor contribution to exposure at ground level. In such contexts, emission reduction, which is invariably the abatement strategy targeting industries at a significant investment in control equipment or process change, may result in minimal return on investment in terms of improvement in air quality at sensitive receptors.

  15. 77 FR 19740 - Water Sources for Long-Term Recirculation Cooling Following a Loss-of-Coolant Accident

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-04-02

    ... NUCLEAR REGULATORY COMMISSION [NRC-2010-0249] Water Sources for Long-Term Recirculation Cooling... Regulatory Guide (RG) 1.82, ``Water Sources for Long-Term Recirculation Cooling Following a Loss-of-Coolant... regarding the sumps and suppression pools that provide water sources for emergency core cooling, containment...

  16. User's guide for RAM. Volume II. Data preparation and listings

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Turner, D.B.; Novak, J.H.

    1978-11-01

    The information presented in this user's guide is directed to air pollution scientists having an interest in applying air quality simulation models. RAM is a method of estimating short-term dispersion using the Gaussian steady-state model. These algorithms can be used for estimating air quality concentrations of relatively nonreactive pollutants for averaging times from an hour to a day from point and area sources. The algorithms are applicable for locations with level or gently rolling terrain where a single wind vector for each hour is a good approximation to the flow over the source area considered. Calculations are performed for each hour. Hourly meteorological data required are wind direction, wind speed, temperature, stability class, and mixing height. Emission information required of point sources consists of source coordinates, emission rate, physical height, stack diameter, stack gas exit velocity, and stack gas temperature. Emission information required of area sources consists of southwest corner coordinates, source side length, total area emission rate and effective area source-height. Computation time is kept to a minimum by the manner in which concentrations from area sources are estimated using a narrow plume hypothesis and using the area source squares as given rather than breaking down all sources into an area of uniform elements. Options are available to the user to allow use of three different types of receptor locations: (1) those whose coordinates are input by the user, (2) those whose coordinates are determined by the model and are downwind of significant point and area sources where maxima are likely to occur, and (3) those whose coordinates are determined by the model to give good area coverage of a specific portion of the region. Computation time is also decreased by keeping the number of receptors to a minimum. Volume II presents RAM example outputs, typical run streams, variable glossaries, and Fortran source codes.
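
    The hourly point-source calculation in a Gaussian steady-state model of this kind follows the textbook plume formula with a ground-reflection image term; the sigma parameterisation below is a crude single-stability-class power law for illustration only, not RAM's actual coefficients:

      import numpy as np

      def conc(Q, u, H, x, y, z):
          """Gaussian plume concentration (g/m^3) at receptor (x, y, z)."""
          sy, sz = 0.08 * x**0.9, 0.06 * x**0.85     # crude sigma power laws (m)
          cross = np.exp(-y**2 / (2 * sy**2))
          vert = (np.exp(-(z - H)**2 / (2 * sz**2)) +
                  np.exp(-(z + H)**2 / (2 * sz**2)))  # ground-reflection image term
          return Q / (2 * np.pi * u * sy * sz) * cross * vert

      # 100 g/s source, 4 m/s wind, 50 m effective height, receptor 1 km downwind
      c = conc(100.0, 4.0, 50.0, 1000.0, 0.0, 1.5)
      print(f"{c * 1e6:.0f} ug/m^3")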

  17. Factors affecting stream nutrient loads: A synthesis of regional SPARROW model results for the continental United States

    USGS Publications Warehouse

    Preston, Stephen D.; Alexander, Richard B.; Schwarz, Gregory E.; Crawford, Charles G.

    2011-01-01

    We compared the results of 12 recently calibrated regional SPARROW (SPAtially Referenced Regressions On Watershed attributes) models covering most of the continental United States to evaluate the consistency and regional differences in factors affecting stream nutrient loads. The models - 6 for total nitrogen and 6 for total phosphorus - all provide similar levels of prediction accuracy, but those for major river basins in the eastern half of the country were somewhat more accurate. The models simulate long-term mean annual stream nutrient loads as a function of a wide range of known sources and climatic (precipitation, temperature), landscape (e.g., soils, geology), and aquatic factors affecting nutrient fate and transport. The results confirm the dominant effects of urban and agricultural sources on stream nutrient loads nationally and regionally, but reveal considerable spatial variability in the specific types of sources that control water quality. These include regional differences in the relative importance of different types of urban (municipal and industrial point vs. diffuse urban runoff) and agriculture (crop cultivation vs. animal waste) sources, as well as the effects of atmospheric deposition, mining, and background (e.g., soil phosphorus) sources on stream nutrients. Overall, we found that the SPARROW model results provide a consistent set of information for identifying the major sources and environmental factors affecting nutrient fate and transport in United States watersheds at regional and subregional scales. © 2011 American Water Resources Association. This article is a U.S. Government work and is in the public domain in the USA.

  18. Bubble dynamics in viscoelastic soft tissue in high-intensity focal ultrasound thermal therapy.

    PubMed

    Zilonova, E; Solovchuk, M; Sheu, T W H

    2018-01-01

    The present study investigates bubble dynamics in soft tissue subjected to a continuous harmonic HIFU pulse by introducing a viscoelastic cavitation model. After a comparison of several existing cavitation models, the Gilmore-Akulichev model was chosen. This cavitation model is coupled with the Zener viscoelastic model in order to simulate soft-tissue features such as elasticity and relaxation time. The proposed Gilmore-Akulichev-Zener model was investigated to explore cavitation dynamics. The parametric study led to the conclusion that elasticity and viscosity both damp bubble oscillations, whereas the relaxation effect depends mainly on the period of the ultrasound wave. A similar influence of elasticity, viscosity, and relaxation time on the temperature inside the bubble can be observed. Cavitation heat source terms (corresponding to viscous damping and to the pressure wave radiated by bubble collapse) were obtained from the proposed model to examine the significance of cavitation during the treatment process; their maximum values both dominate the acoustic ultrasound term in HIFU applications. Elasticity was revealed to damp a certain amount of the deposited heat for both cavitation terms.
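
    For a flavour of the bubble dynamics involved, a Rayleigh-Plesset model (a much simpler stand-in for the Gilmore-Akulichev-Zener model, with no tissue elasticity or relaxation) driven by a harmonic ultrasound field can be integrated directly; all parameter values are illustrative:

      import numpy as np
      from scipy.integrate import solve_ivp

      rho, mu_v, S = 1000.0, 0.015, 0.056            # density, viscosity, surface tension
      p0, R0, kappa = 101325.0, 2e-6, 1.4            # ambient pressure, rest radius, polytropic index
      pa, f = 2e5, 1e6                               # drive amplitude (Pa) and frequency (Hz)

      def rp(t, y):
          R, Rdot = y
          p_gas = (p0 + 2 * S / R0) * (R0 / R) ** (3 * kappa)
          p_inf = p0 + pa * np.sin(2 * np.pi * f * t)
          Rddot = ((p_gas - p_inf - 4 * mu_v * Rdot / R - 2 * S / R) / (rho * R)
                   - 1.5 * Rdot**2 / R)
          return [Rdot, Rddot]

      sol = solve_ivp(rp, (0.0, 5.0 / f), [R0, 0.0], rtol=1e-8, max_step=1.0 / (200 * f))
      print(f"max radius over 5 cycles: {sol.y[0].max() / R0:.2f} R0")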

  19. Dilatation-dissipation corrections for advanced turbulence models

    NASA Technical Reports Server (NTRS)

    Wilcox, David C.

    1992-01-01

    This paper analyzes dilatation-dissipation based compressibility corrections for advanced turbulence models. Numerical computations verify that the dilatation-dissipation corrections devised by Sarkar and Zeman greatly improve the effect of Mach number on spreading rate predicted by both the k-omega and k-epsilon models. However, computations with the k-omega model also show that the Sarkar/Zeman terms cause an undesired reduction in skin friction for the compressible flat-plate boundary layer. A perturbation solution for the compressible wall layer shows that the Sarkar and Zeman terms reduce the effective von Karman constant in the law of the wall. This is the source of the inaccurate k-omega model skin-friction predictions for the flat-plate boundary layer. The perturbation solution also shows that the k-epsilon model has an inherent flaw for compressible boundary layers that is not compensated for by the dilatation-dissipation corrections. A compressibility modification for k-omega and k-epsilon models is proposed that is similar to those of Sarkar and Zeman. The new compressibility term permits accurate predictions for the compressible mixing layer, flat-plate boundary layer, and a shock-separated flow with the same values for all closure coefficients.
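
    For reference, the closures at issue can be sketched as follows; the dissipation split and Sarkar's algebraic form are standard, while the threshold variant is a paraphrase of the type of modification proposed here, with M_t0 an assumed cutoff turbulence Mach number.

      % Split of the dissipation rate and the turbulence Mach number:
      \varepsilon = \varepsilon_s + \varepsilon_d , \qquad M_t^2 \equiv \frac{2k}{a^2}
      % Sarkar-type algebraic closure (coefficient \xi^* of order one):
      \varepsilon_d = \xi^* M_t^2 \, \varepsilon_s
      % Threshold variant, inactive below a cutoff M_{t0}:
      \varepsilon_d = \xi^* \left[ M_t^2 - M_{t0}^2 \right] \mathcal{H}(M_t - M_{t0}) \, \varepsilon_s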

  20. A method for the development of disease-specific reference standards vocabularies from textual biomedical literature resources

    PubMed Central

    Wang, Liqin; Bray, Bruce E.; Shi, Jianlin; Fiol, Guilherme Del; Haug, Peter J.

    2017-01-01

    Objective Disease-specific vocabularies are fundamental to many knowledge-based intelligent systems and applications such as text annotation, cohort selection, disease diagnostic modeling, and therapy recommendation. Reference standards are critical in the development and validation of automated methods for disease-specific vocabularies. The goal of the present study is to design and test a generalizable method for the development of vocabulary reference standards from expert-curated, disease-specific biomedical literature resources. Methods We formed disease-specific corpora from literature resources such as textbooks, evidence-based synthesized online sources, clinical practice guidelines, and journal articles. Medical experts annotated and adjudicated disease-specific terms in four classes (i.e., causes or risk factors, signs or symptoms, diagnostic tests or results, and treatment). Annotations were mapped to UMLS concepts. We assessed source variation, the contribution of each source to building disease-specific vocabularies, the saturation of the vocabularies with respect to the number of sources used, and the generalizability of the method across different diseases. Results The study resulted in 2588 string-unique annotations for heart failure in four classes, and 193 and 425, respectively, for pulmonary embolism and rheumatoid arthritis in the treatment class. Approximately 80% of the annotations were mapped to UMLS concepts. The agreement among heart failure sources ranged between 0.28 and 0.46. The contribution of these sources to the final vocabulary ranged between 18% and 49%. With the sources explored, the heart failure vocabulary reached near saturation in all four classes with the inclusion of a minimum of six sources (or between four and seven sources if counting only terms that occurred in two or more sources). Fewer sources were needed to reach near saturation for the other two diseases in the treatment class. Conclusions We developed a method for the development of disease-specific reference vocabularies. Expert-curated biomedical literature resources are a substantial means of acquiring disease-specific medical knowledge. It is feasible to reach near saturation in a disease-specific vocabulary using a relatively small number of literature sources. PMID:26971304
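
    A minimal sketch of the saturation analysis described above: terms are accumulated source by source and each source's marginal contribution is reported. The term sets below are hypothetical stand-ins for the study's UMLS-mapped annotations.

      # Saturation is approached when the marginal gain of an added source
      # tends to zero; the sources and terms here are illustrative only.
      sources = {
          "textbook_A": {"dyspnea", "edema", "ACE inhibitor", "BNP"},
          "guideline_B": {"dyspnea", "beta blocker", "BNP", "echocardiogram"},
          "review_C": {"edema", "ACE inhibitor", "beta blocker", "orthopnea"},
      }

      vocab = set()
      for name, terms in sources.items():
          new = terms - vocab
          vocab |= terms
          print(f"{name}: +{len(new)} new terms, cumulative vocabulary = {len(vocab)}")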

  1. Norovirus Dynamics in Wastewater Discharges and in the Recipient Drinking Water Source: Long-Term Monitoring and Hydrodynamic Modeling.

    PubMed

    Dienus, Olaf; Sokolova, Ekaterina; Nyström, Fredrik; Matussek, Andreas; Löfgren, Sture; Blom, Lena; Pettersson, Thomas J R; Lindgren, Per-Eric

    2016-10-04

    Norovirus (NoV) that enters drinking water sources with wastewater discharges is a common cause of waterborne outbreaks. The impact of wastewater treatment plants (WWTPs) on the river Göta älv (Sweden) was studied using monitoring and hydrodynamic modeling. The concentrations of NoV genogroups (GG) I and II in samples collected at WWTPs and drinking water intakes (source water) during one year were quantified using duplex real-time reverse-transcription polymerase chain reaction. The mean (standard deviation) NoV GGI and GGII genome concentrations were 6.2 (1.4) and 6.8 (1.8) log10 genome equivalents (g.e.) L⁻¹ in incoming wastewater, and 5.3 (1.4) and 5.9 (1.4) log10 g.e. L⁻¹ in treated wastewater, respectively. The reduction at the WWTPs varied between 0.4 and 1.1 log10 units. In source water, the concentration ranged from below the detection limit to 3.8 log10 g.e. L⁻¹. NoV GGII was detected in both wastewater and source water more frequently during the cold than the warm period of the year. The spread of NoV in the river was simulated using a three-dimensional hydrodynamic model. The modeling results indicated that the NoV GGI and GGII genome concentrations in source water may occasionally be up to 2.8 and 1.9 log10 units higher, respectively, than the concentrations measured during the monitoring project.
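
    Worked arithmetic for the reported reductions, assuming the WWTP reduction is simply the difference of mean log10 concentrations between incoming and treated wastewater (values quoted above):

      incoming = {"GGI": 6.2, "GGII": 6.8}   # log10 g.e. per litre, incoming
      treated = {"GGI": 5.3, "GGII": 5.9}    # log10 g.e. per litre, treated
      for gg in incoming:
          print(f"NoV {gg}: {incoming[gg] - treated[gg]:.1f} log10 units removed")
      # both means give 0.9, inside the reported 0.4-1.1 log10 range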

  2. A Laboratory Study of River Discharges into Shallow Seas

    NASA Astrophysics Data System (ADS)

    Crawford, T. J.; Linden, P. F.

    2016-02-01

    We present an experimental study that aims to simulate the buoyancy-driven coastal currents produced by estuarine freshwater discharges into the ocean. The currents are generated inside a rotating tank filled with saltwater by the continuous release of buoyant freshwater from a source structure located at the fluid surface. The freshwater is discharged horizontally from a finite-depth source, giving rise to significant momentum-flux effects and a non-zero potential vorticity. We perform a parametric study in which we vary the rotation rate, the freshwater discharge magnitude, the density difference, and the source cross-sectional area. The parameter values are chosen to match the regimes appropriate to the River Rhine and River Elbe where they enter the North Sea. Persistent features of an anticyclonic outflow vortex and a propagating boundary current were identified and their properties quantified. We also present a finite potential vorticity, geostrophic model that provides theoretical predictions for the current height, width, and velocity as functions of the experimental parameters. The experiments and model are compared with each other in terms of a set of non-dimensional parameters identified in the theoretical analysis of the problem. Good agreement between the model and the experimental data is found. The effect of mixing in the turbulent ocean is also addressed with the addition of an oscillating grid to the experimental setup. The grid generates turbulence in the saltwater ambient that is designed to represent the mixing effects of wind, tides, and bathymetry in a shallow shelf sea. The impact of the addition of turbulence is discussed in terms of the experimental data and through modifications to the theoretical model to include mixing. Once again, good agreement is seen between the experiments and the model.

  3. Technical Note: Procedure for the calibration and validation of kilo-voltage cone-beam CT models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vilches-Freixas, Gloria; Létang, Jean Michel; Rit,

    2016-09-15

    Purpose: The aim of this work is to propose a general and simple procedure for the calibration and validation of kilo-voltage cone-beam CT (kV CBCT) models against experimental data. Methods: The calibration and validation of the CT model is a two-step procedure: the source model, then the detector model. The source is described by the direction-dependent photon energy spectrum at each voltage, while the detector is described by the pixel intensity value as a function of the direction and the energy of incident photons. The measurements for the source consist of a series of dose measurements in air performed at each voltage with varying filter thicknesses and materials in front of the x-ray tube. The measurements for the detector are acquisitions of projection images using the same filters and several tube voltages. The proposed procedure has been applied to calibrate and assess the accuracy of simple models of the source and the detector of three commercial kV CBCT units. If the CBCT system models had been calibrated differently, the current procedure would have been used exclusively to validate the models. Several high-purity attenuation filters of aluminum, copper, and silver, combined with a dosimeter sensitive to the range of voltages of interest, were used. A sensitivity analysis of the model has also been conducted for each parameter of the source and the detector models. Results: Average deviations between experimental and theoretical dose values are below 1.5% after calibration for the three x-ray sources. The predicted energy deposited in the detector agrees with experimental data within 4% for all imaging systems. Conclusions: The authors developed and applied an experimental procedure to calibrate and validate any model of the source and the detector of a CBCT unit. The present protocol has been successfully applied to three x-ray imaging systems. The minimum requirements in terms of material and equipment would make its implementation suitable in most clinical environments.

  4. Modeling radon daughter deposition rates for low background detectors

    NASA Astrophysics Data System (ADS)

    Westerdale, S.; Guiseppe, V. E.; Rielage, K.; Elliot, S. R.; Hime, A.

    2009-10-01

    Detectors such as those searching for dark matter and those working to detect neutrinoless double-beta decay require record low levels of background radiation. One major source of background radiation is radon daughters produced by the decay of airborne radon. In particular, ²²²Rn decay products may be deposited on any detector materials that are exposed to environmental radon. Long-lived daughters, especially ²¹⁰Pb, can pose a long-term background radiation source that interferes with the detectors' measurements by emitting alpha particles into sensitive parts of the detectors. A better understanding of this radon daughter deposition will allow preventative actions to be taken to minimize the amount of noise from this source. A test stand has therefore been set up to study the impact of various environmental factors on the rate of radon daughter deposition so that a model can be constructed. Results from the test stand and a model of radon daughter deposition will be presented.

  5. Spurious Behavior of Shock-Capturing Methods: Problems Containing Stiff Source Terms and Discontinuities

    NASA Technical Reports Server (NTRS)

    Yee, Helen M. C.; Kotov, D. V.; Wang, Wei; Shu, Chi-Wang

    2013-01-01

    The goal of this paper is to relate the numerical dissipation inherent in high order shock-capturing schemes to the onset of wrong propagation speeds of discontinuities. For pointwise evaluation of the source term, previous studies indicated that the phenomenon of wrong propagation speed of discontinuities is connected with the smearing of the discontinuity caused by the discretization of the advection term. The smearing introduces a nonequilibrium state into the calculation. Thus, as soon as a nonequilibrium value is introduced in this manner, the source term turns on and immediately restores equilibrium, while at the same time shifting the discontinuity to a cell boundary. The present study shows that the degree of wrong propagation speed of discontinuities is highly dependent on the accuracy of the numerical method. The manner in which the smearing of discontinuities is contained by the numerical method and the overall amount of numerical dissipation employed play major roles. Moreover, employing finite time steps and grid spacings that are below the standard Courant-Friedrichs-Lewy (CFL) limit in shock-capturing methods for compressible Euler and Navier-Stokes equations containing stiff reacting source terms and discontinuities reveals surprising, counter-intuitive results. Unlike non-reacting flows, for stiff reactions with discontinuities, employing a time step and grid spacing that are below the CFL limit (based on the homogeneous or non-reacting part of the governing equations) does not guarantee a correct solution of the chosen governing equations. Instead, depending on the numerical method, time step, and grid spacing, the numerical simulation may lead to (a) the correct solution (within the truncation error of the scheme), (b) a divergent solution, (c) a solution with a wrong propagation speed of discontinuities, or (d) other spurious solutions that solve the discretized counterparts but are not solutions of the governing equations. The present investigation of three very different stiff systems confirms some of the findings of Lafon & Yee (1996) and LeVeque & Yee (1990) for a model scalar PDE. The findings might shed some light on the reported difficulties in numerical combustion and, more generally, on problems with stiff nonlinear (homogeneous) source terms and discontinuities.
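
    The effect is easy to reproduce on the scalar model problem of LeVeque & Yee (1990), u_t + u_x = -mu u(u-1)(u-1/2), whose exact front travels at unit speed. In the sketch below (parameters illustrative), first-order upwind advection plus a sub-stepped pointwise stiff source makes the captured front advance roughly one grid cell per time step instead.

      import numpy as np

      nx, cfl, mu = 200, 0.6, 1.0e4
      dx = 1.0 / nx
      dt = cfl * dx
      x = (np.arange(nx) + 0.5) * dx
      u = np.where(x < 0.3, 1.0, 0.0)        # step initial data, front at x = 0.3

      for _ in range(int(round(0.3 / dt))):   # advance to t = 0.3
          u[1:] -= cfl * (u[1:] - u[:-1])     # first-order upwind advection (inflow u = 1)
          for _ in range(20):                 # sub-stepped explicit pointwise source
              u -= (dt / 20) * mu * u * (u - 1.0) * (u - 0.5)

      print(f"numerical front at x = {x[np.argmax(u < 0.5)]:.2f} (exact: 0.60)")

    The stiff source pushes every smeared value to the nearest stable equilibrium (0 or 1), so the front location is set by the grid rather than by the PDE.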

  6. Development of Accommodation Models for Soldiers in Vehicles: Squad

    DTIC Science & Technology

    2014-09-01

    Data from a previous study of Soldier posture and position were analyzed to develop statistical ... range of seat height and seat back angle. All of the models include the effects of body armor and body-borne gear. Subject terms: Anthropometry.

  7. Numerical and experimental study of the fundamental flow characteristics of a 3D gully box under drainage.

    PubMed

    Lopes, Pedro; Carvalho, Rita F; Leandro, Jorge

    2017-05-01

    Numerical studies regarding the influence of entrapped air on the hydraulic performance of gullies are nonexistent, owing to the lack of a model that simulates the air-entrainment phenomena and, consequently, the entrapped air. In this work, we used experimental data to validate an air-entrainment model that uses a Volume-of-Fluid based method to detect the interface, together with the Shear-stress transport k-ω turbulence model. The air is detected at a sub-grid scale, generated by a source term, and transported using a slip velocity formulation. Results are shown in terms of free-surface elevation, velocity profiles, turbulent kinetic energy, and discharge coefficients. The air-entrainment model allied to the turbulence model showed good accuracy in predicting the zones of the gully where the air is most concentrated.

  8. Multi-criteria analysis for PM10 planning

    NASA Astrophysics Data System (ADS)

    Pisoni, Enrico; Carnevale, Claudio; Volta, Marialuisa

    To implement sound air quality policies, regulatory agencies require tools to evaluate the outcomes and costs associated with different emission reduction strategies. These tools are even more useful for atmospheric PM10 concentrations because of the complex nonlinear processes that affect production and accumulation of the secondary fraction of this pollutant. The approaches presented in the literature (Integrated Assessment Modeling) are mainly cost-benefit and cost-effectiveness analyses. In this work, the formulation of a multi-objective problem to control particulate matter is proposed. The methodology defines: (a) the control objectives (the air quality indicator and the emission reduction cost functions); (b) the decision variables (precursor emission reductions); and (c) the problem constraints (maximum feasible technology reductions). The cause-effect relations between air quality indicators and decision variables are identified by tuning nonlinear source-receptor models. The multi-objective problem solution provides the decision maker with a set of non-dominated scenarios representing the efficient trade-off between the air quality benefit and the internal costs (emission reduction technology costs). The methodology has been implemented for Northern Italy, which is often affected by high long-term exposure to PM10. The source-receptor models used in the multi-objective analysis are identified by processing long-term simulations of the GAMES multiphase modeling system, performed in the framework of the CAFE-Citydelta project.
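
    A minimal sketch of the multi-objective formulation: enumerate feasible precursor reductions and keep the non-dominated (air-quality indicator, cost) pairs. Both objective functions below are hypothetical surrogates, not the identified source-receptor models.

      import numpy as np

      def aq(x1, x2):    # surrogate nonlinear PM10 indicator (lower is better)
          return 50.0 - 25.0 * (1.0 - np.exp(-2.0 * x1)) - 15.0 * x2

      def cost(x1, x2):  # surrogate emission-reduction technology cost
          return 40.0 * x1 ** 2 + 10.0 * x2 ** 2

      grid = np.linspace(0.0, 1.0, 21)          # feasible reductions in [0, 1]
      cands = [(aq(a, b), cost(a, b)) for a in grid for b in grid]
      # a scenario is non-dominated if no other is at least as good in both objectives
      pareto = sorted(p for p in cands
                      if not any(q[0] <= p[0] and q[1] <= p[1] and q != p
                                 for q in cands))
      print(f"{len(pareto)} non-dominated scenarios out of {len(cands)}")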

  9. Application of the DG-1199 methodology to the ESBWR and ABWR.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kalinich, Donald A.; Gauntt, Randall O.; Walton, Fotini

    2010-09-01

    Appendix A-5 of Draft Regulatory Guide DG-1199, 'Alternative Radiological Source Term for Evaluating Design Basis Accidents at Nuclear Power Reactors,' provides guidance - applicable to RADTRAD MSIV leakage models - for scaling containment aerosol concentration to the expected steam dome concentration in order to preserve the simplified use of the Accident Source Term (AST) in assessing containment performance under assumed design basis accident (DBA) conditions. In this study, Economic and Safe Boiling Water Reactor (ESBWR) and Advanced Boiling Water Reactor (ABWR) RADTRAD models are developed using the DG-1199, Appendix A-5 guidance. The models were run using RADTRAD v3.03. Low Population Zone (LPZ), control room (CR), and worst-case 2-hr Exclusion Area Boundary (EAB) doses were calculated and compared to the relevant accident dose criteria in 10 CFR 50.67. For the ESBWR, the dose results were all lower than the MSIV leakage doses calculated by General Electric/Hitachi (GEH) in their licensing technical report. There are no comparable ABWR MSIV leakage doses; however, it should be noted that the ABWR doses are lower than the ESBWR doses. In addition, sensitivity cases were evaluated to ascertain the influence/importance of key input parameters/features of the models.

  10. Mathematical modeling of enzyme production using Trichoderma harzianum P49P11 and sugarcane bagasse as carbon source.

    PubMed

    Gelain, Lucas; da Cruz Pradella, José Geraldo; da Costa, Aline Carvalho

    2015-12-01

    A mathematical model describing the kinetics of enzyme production by the filamentous fungus Trichoderma harzianum P49P11 was developed using a low-cost substrate (pretreated sugarcane bagasse) as the main carbon source. The model describes cell growth, the variation of substrate concentration, and the production of three kinds of enzymes (cellulases, beta-glucosidase, and xylanase) at different sugarcane bagasse concentrations (5, 10, 20, 30, and 40 g L⁻¹). The 10 g L⁻¹ concentration was used to validate the model, and the others were used for parameter estimation. The model for enzyme production has terms implicitly representing induction and repression. Substrate variation was represented by a simple degradation rate. The models represent the kinetics well, with a good fit for the majority of the assays. Validation results indicate that the models are adequate to represent the kinetics of a biotechnological process. Copyright © 2015 Elsevier Ltd. All rights reserved.
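
    A minimal sketch of this kind of kinetic model: logistic cell growth, first-order substrate degradation, and growth-associated, substrate-induced enzyme production (a catabolite-repression factor could multiply the production term similarly). The structure and parameter values are illustrative, not the authors' fitted model.

      import numpy as np
      from scipy.integrate import solve_ivp

      mu_max, Xmax, kd = 0.08, 12.0, 0.02   # growth rate (1/h), capacity (g/L), substrate decay (1/h)
      alpha, Ki = 25.0, 2.0                 # production yield (FPU/g), induction half-saturation (g/L)

      def rhs(t, y):
          X, S, E = y                        # biomass, substrate, cellulase activity
          dX = mu_max * X * (1.0 - X / Xmax) # logistic growth
          dS = -kd * S                       # simple substrate degradation rate
          dE = alpha * dX * S / (S + Ki)     # growth-associated, substrate-induced production
          return [dX, dS, dE]

      sol = solve_ivp(rhs, (0.0, 120.0), [0.2, 20.0, 0.0])
      X, S, E = sol.y[:, -1]
      print(f"t=120 h: biomass {X:.1f} g/L, substrate {S:.1f} g/L, cellulase {E:.1f} FPU/L")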

  11. Balancing practicality and hydrologic realism: a parsimonious approach for simulating rapid groundwater recharge via unsaturated-zone preferential flow

    USGS Publications Warehouse

    Mirus, Benjamin B.; Nimmo, J.R.

    2013-01-01

    The impact of preferential flow on recharge and contaminant transport poses a considerable challenge to water-resources management. Typical hydrologic models require extensive site characterization, but can underestimate fluxes when preferential flow is significant. A recently developed source-responsive model incorporates film-flow theory with conservation of mass to estimate unsaturated-zone preferential fluxes with readily available data. The term source-responsive describes the sensitivity of preferential flow in response to water availability at the source of input. We present the first rigorous tests of a parsimonious formulation for simulating water table fluctuations using two case studies, both in arid regions with thick unsaturated zones of fractured volcanic rock. Diffuse flow theory cannot adequately capture the observed water table responses at both sites; the source-responsive model is a viable alternative. We treat the active area fraction of preferential flow paths as a scaled function of water inputs at the land surface, then calibrate the macropore density to fit observed water table rises. Unlike previous applications, we allow the characteristic film-flow velocity to vary, reflecting the lag time between source and deep water table responses. Analysis of model performance and parameter sensitivity for the two case studies underscores the importance of identifying thresholds for the initiation of film flow in unsaturated rocks, and suggests that this parsimonious approach is potentially of great practical value.

  12. Measurement of erosion in helicon plasma thrusters using the VASIMR® VX-CR device

    NASA Astrophysics Data System (ADS)

    Del Valle Gamboa, Juan Ignacio; Castro-Nieto, Jose; Squire, Jared; Carter, Mark; Chang-Diaz, Franklin

    2015-09-01

    The helicon plasma source is one of the principal stages of the high-power VASIMR® electric propulsion system. The VASIMR® VX-CR experiment focuses solely on this stage, exploring the erosion and long-term operation effects of the VASIMR helicon source. We report on the design and operational parameters of the VX-CR experiment, and the development of modeling tools and characterization techniques allowing the study of erosion phenomena in helicon plasma sources in general, and stand-alone helicon plasma thrusters (HPTs) in particular. A thorough understanding of the erosion phenomena within HPTs will enable better predictions of their behavior as well as more accurate estimations of their expected lifetime. We present a simplified model of the plasma-wall interactions within HPTs based on current models of the plasma density distributions in helicon discharges. Results from this modeling tool are used to predict the erosion within the plasma-facing components of the VX-CR device. Experimental techniques to measure actual erosion, including the use of coordinate-measuring machines and microscopy, will be discussed.

  13. Survey of current situation in radiation belt modeling

    NASA Technical Reports Server (NTRS)

    Fung, Shing F.

    2004-01-01

    The study of Earth's radiation belts is one of the oldest subjects in space physics. Despite the tremendous progress made in the last four decades, we still lack a complete understanding of the radiation belts in terms of their configurations, dynamics, and detailed physical accounts of their sources and sinks. The static nature of early empirical trapped radiation models, for example the NASA AP-8 and AE-8 models, renders those models inappropriate for predicting short-term radiation belt behaviors associated with geomagnetic storms and substorms. Due to incomplete data coverage, these models are also inaccurate at low altitudes (e.g., <1000 km) where many robotic and human space flights occur. The availability of radiation data from modern space missions and advancement in physical modeling and data management techniques have now allowed the development of new empirical and physical radiation belt models. In this paper, we will review the status of modern radiation belt modeling. Published by Elsevier Ltd on behalf of COSPAR.

  14. Finite element solution to passive scalar transport behind line sources under neutral and unstable stratification

    NASA Astrophysics Data System (ADS)

    Liu, Chun-Ho; Leung, Dennis Y. C.

    2006-02-01

    This study employed a direct numerical simulation (DNS) technique to contrast the plume behaviours and mixing of a passive scalar emitted from line sources (aligned with the spanwise direction) in neutrally and unstably stratified open-channel flows. The DNS model was developed using the Galerkin finite element method (FEM), employing trilinear brick elements with equal-order interpolating polynomials, to solve the momentum and continuity equations together with the conservation of energy and mass equations in incompressible flow. The second-order accurate fractional-step method was used to handle the implicit velocity-pressure coupling in incompressible flow. It also segregated the solution of the advection and diffusion terms, which were then integrated in time by the explicit third-order accurate Runge-Kutta method and the implicit second-order accurate Crank-Nicolson method, respectively. The buoyancy term under unstable stratification was integrated in time explicitly by the first-order accurate Euler method. The DNS FEM model calculated the scalar-plume development and the mean plume path. In particular, it calculated the plume meandering in the wall-normal direction under unstable stratification, which agreed well with laboratory and field measurements, as well as with previous modelling results available in the literature.
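
    A minimal 1D sketch of the mixed explicit/implicit time integration described above: explicit third-order Runge-Kutta (SSP form) for the advection term and implicit Crank-Nicolson for the diffusion term. Finite differences on a periodic domain stand in for the finite elements of the paper; parameters are illustrative.

      import numpy as np

      nx, c, nu = 128, 1.0, 0.01
      dx = 2 * np.pi / nx
      dt = 0.4 * dx / c
      u = np.sin(np.arange(nx) * dx)

      def adv(u):  # central-difference advection term, -c du/dx
          return -c * (np.roll(u, -1) - np.roll(u, 1)) / (2 * dx)

      # Crank-Nicolson matrices for the diffusion term nu * d2u/dx2 (periodic)
      lap = (np.roll(np.eye(nx), 1, axis=1) - 2 * np.eye(nx)
             + np.roll(np.eye(nx), -1, axis=1)) / dx ** 2
      A = np.eye(nx) - 0.5 * dt * nu * lap
      B = np.eye(nx) + 0.5 * dt * nu * lap

      for _ in range(200):
          u1 = u + dt * adv(u)                       # explicit SSP-RK3 stages
          u2 = 0.75 * u + 0.25 * (u1 + dt * adv(u1))
          ua = u / 3 + 2 / 3 * (u2 + dt * adv(u2))
          u = np.linalg.solve(A, B @ ua)             # implicit Crank-Nicolson step

      print(f"amplitude after 200 steps: {np.abs(u).max():.3f}")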

  15. A goal-based angular adaptivity method for thermal radiation modelling in non grey media

    NASA Astrophysics Data System (ADS)

    Soucasse, Laurent; Dargaville, Steven; Buchan, Andrew G.; Pain, Christopher C.

    2017-10-01

    This paper investigates for the first time a goal-based angular adaptivity method for thermal radiation transport, suitable for non-grey media when the radiation field is coupled with an unsteady flow field through an energy balance. Anisotropic angular adaptivity is achieved by using a Haar wavelet finite element expansion that forms a hierarchical angular basis with compact support and does not require any angular interpolation in space. The novelty of this work lies in (1) the definition of a target functional to compute the goal-based error measure equal to the radiative source term of the energy balance, which is the quantity of interest in the context of coupled flow-radiation calculations; and (2) the use of different optimal angular resolutions for each absorption coefficient class, built from a global model of the radiative properties of the medium. The accuracy and efficiency of the goal-based angular adaptivity method are assessed in a coupled flow-radiation problem relevant to air pollution modelling in street canyons. Compared to a uniform Haar wavelet expansion, the adapted resolution uses 5 times fewer angular basis functions and is 6.5 times quicker, given the same accuracy in the radiative source term.

  16. pyBadlands: A framework to simulate sediment transport, landscape dynamics and basin stratigraphic evolution through space and time

    PubMed Central

    2018-01-01

    Understanding Earth surface responses, in terms of sediment dynamics, to climatic variability and tectonic forcing is hindered by the limited ability of current models to simulate the long-term evolution of sediment transfer and associated morphological changes. This paper presents pyBadlands, an open-source python-based framework which computes over geological time (1) sediment transport from landmasses to coasts, (2) reworking of marine sediments by longshore currents, and (3) development of coral reef systems. pyBadlands is cross-platform, distributed under the GPLv3 license, and available on GitHub (http://github.com/badlands-model). Here, we describe the underlying physical assumptions behind the simulated processes and the main options already available in the numerical framework. Along with the source code, a list of hands-on examples is provided that illustrates the model's capabilities. In addition, pre- and post-processing classes have been built and are accessible as a companion toolbox comprising a series of workflows to efficiently build, quantify, and explore simulation input and output files. While the framework has been primarily designed for research, its simplicity of use and portability make it a great tool for teaching purposes. PMID:29649301

  17. Modeling greenhouse gas emissions from dairy farms

    USDA-ARS?s Scientific Manuscript database

    Dairy farms have been identified as an important source of greenhouse gas emissions. Within the farm, important emissions include enteric methane (CH4) from the animals, CH4 and nitrous oxide (N2O) from manure in housing facilities, during long-term storage and during field application, and N2O from...

  18. Modeling and Mapping of Human Source Data

    DTIC Science & Technology

    2011-03-08

    interest is sometimes termed "gamification". Initial experiments are described by McGill [26]. On a final note, it is possible to leverage other... Society Annual Meeting Proceedings, (5), pp 433-437, 2010. [26] W. McGill, "The Gamification of Risk Management", internet blog at http

  19. LIGHT NONAQUEOUS-PHASE LIQUID HYDROCARBON WEATHERING AT SOME JP-4 FUEL RELEASE SITES

    EPA Science Inventory

    A fuel weathering study was conducted for database entries to estimate natural light nonaqueous-phase liquid weathering and source-term reduction rates for use in natural attenuation models. A range of BTEX weathering rates from mobile LNAPL plumes at eight field sites with...

  20. Deciphering acoustic emission signals in drought stressed branches: the missing link between source and sensor.

    PubMed

    Vergeynst, Lidewei L; Sause, Markus G R; Hamstad, Marvin A; Steppe, Kathy

    2015-01-01

    When drought occurs in plants, acoustic emission (AE) signals can be detected, but the actual causes of these signals are still unknown. By analyzing the waveforms of the measured signals, it should, however, be possible to trace the characteristics of the AE source and obtain information about the underlying physiological processes. A problem encountered during this analysis is that the waveform changes significantly from source to sensor, and a lack of knowledge of wave propagation impedes research progress in this field. We used finite element modeling and the well-known pencil lead break source to investigate wave propagation in a branch. A cylindrical rod of polyvinyl chloride was first used to identify the theoretical propagation modes. Two wave propagation modes could be distinguished, and we used the finite element model to interpret their behavior in terms of source position for both the PVC rod and a wooden rod. Both wave propagation modes were also identified in drying-induced signals from woody branches, and we used the obtained insights to provide recommendations for further AE research in plant science.

  1. On the effect of using the Shapiro filter to smooth winds on a sphere

    NASA Technical Reports Server (NTRS)

    Takacs, L. L.; Balgovind, R. C.

    1984-01-01

    Spatial differencing schemes which are neither enstrophy-conserving nor implicitly damping require global filtering of short waves to eliminate the build-up of energy in the shortest wavelengths due to aliasing. Takacs and Balgovind (1983) have shown that filtering on a sphere with a latitude-dependent damping function will cause spurious vorticity and divergence source terms to occur if care is not taken to ensure the irrotationality of the gradients of the stream function and velocity potential. Using a shallow water model with fourth-order energy-conserving spatial differencing, it is found that applying a 16th-order Shapiro (1979) filter to the winds and heights to control nonlinear instability also creates spurious source terms when the winds are filtered in the meridional direction.
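
    For context, a sketch of the standard damping property assumed for this family of filters: a Shapiro filter of order 2n attenuates a Fourier mode exp(ikx) on a grid of spacing Δx per application by the response below (for the 16th-order filter used here, n = 8).

      % Per-application amplitude response of a Shapiro filter of order 2n:
      R_{2n}(k) = 1 - \sin^{2n}\!\left( \frac{k \, \Delta x}{2} \right)
      % k \Delta x = \pi (the two-grid-interval wave) gives R = 0 (removed exactly);
      % k \Delta x \ll 1 gives R \approx 1 (well-resolved waves nearly untouched).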

  2. Aeromicrobiology/air quality

    USGS Publications Warehouse

    Andersen, Gary L.; Frisch, A.S.; Kellogg, Christina A.; Levetin, E.; Lighthart, Bruce; Paterno, D.

    2009-01-01

    The most prevalent microorganisms (viruses, bacteria, and fungi) are introduced into the atmosphere from many anthropogenic sources, such as agricultural, industrial, and urban activities (termed microbial air pollution, MAP), and from natural sources. The latter include soil, vegetation, and ocean surfaces that have been disturbed by atmospheric turbulence. The airborne concentrations range from nil to great numbers and change as functions of time of day, season, location, and upwind sources. While airborne, the organisms may settle out immediately or be transported great distances. Further, most viable airborne cells can be rendered nonviable by temperature effects, dehydration or rehydration, UV radiation, and/or air pollution effects. Mathematical microbial survival models that simulate these effects have been developed.

  3. Coupled Hydrodynamic and Wave Propagation Modeling for the Source Physics Experiment: Study of Rg Wave Sources for SPE and DAG series.

    NASA Astrophysics Data System (ADS)

    Larmat, C. S.; Delorey, A.; Rougier, E.; Knight, E. E.; Steedman, D. W.; Bradley, C. R.

    2017-12-01

    This presentation reports numerical modeling efforts to improve knowledge of the processes that affect seismic wave generation and propagation from underground explosions, with a focus on Rg waves. The numerical model is based on the coupling of hydrodynamic simulation codes (Abaqus, CASH, and HOSS) with a 3D full waveform propagation code, SPECFEM3D. Validation datasets are provided by the Source Physics Experiment (SPE), a series of highly instrumented chemical explosions at the Nevada National Security Site with yields from 100 kg to 5000 kg. A first series of explosions in a granite emplacement has just been completed, and a second series in an alluvium emplacement is planned for 2018. The long-term goal of this research is to review and improve existing seismic source models (e.g., Mueller & Murphy, 1971; Denny & Johnson, 1991) by providing first-principles calculations through the coupled-codes capability. The hydrodynamic codes, Abaqus, CASH, and HOSS, model the shocked, hydrodynamic region via equations of state for the explosive, borehole stemming, and jointed/weathered granite. A new material model for unconsolidated alluvium materials has been developed and validated with past nuclear explosions, including the 10 kT 1965 Merlin event (Perret, 1971; Perret and Bass, 1975). We use the efficient Spectral Element Method code, SPECFEM3D (e.g., Komatitsch, 1998; 2002), and Geologic Framework Models to model the evolution of the wavefield as it propagates across 3D complex structures. The coupling interface is a series of grid points of the SEM mesh situated at the edge of the hydrodynamic code domain. We will present validation tests and waveforms modeled for several SPE tests, which provide evidence that the damage processes occurring in the vicinity of the explosions create secondary seismic sources. These sources interfere with the original explosion moment and reduce the apparent seismic moment at the origin of Rg waves by up to 20%.

  4. Parallel Tracks as Quasi-steady States for the Magnetic Boundary Layers in Neutron-star Low-mass X-Ray Binaries

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Erkut, M. Hakan; Çatmabacak, Onur, E-mail: mherkut@gmail.com

    The neutron stars in low-mass X-ray binaries (LMXBs) are usually thought to be weakly magnetized objects accreting matter from their low-mass companions in the form of a disk. Albeit weak compared to those in young neutron-star systems, the neutron-star magnetospheres in LMXBs can play an important role in determining the correlations between spectral and temporal properties. Parallel tracks appearing in the kilohertz (kHz) quasi-periodic oscillation (QPO) frequency versus X-ray flux plane can be used as a tool to study the magnetosphere-disk interaction in neutron-star LMXBs. For dynamically important weak fields, the formation of a non-Keplerian magnetic boundary layer at the innermost disk truncated near the surface of the neutron star is highly likely. Such a boundary region may harbor oscillatory modes of frequencies in the kHz range. We generate parallel tracks using the boundary region model of kHz QPOs. We also present the direct application of our model to the reproduction of the observed parallel tracks of individual sources such as 4U 1608-52, 4U 1636-53, and Aql X-1. We reveal how the radial width of the boundary layer must vary in the long-term flux evolution of each source to regenerate the parallel tracks. The run of the radial width looks similar for different sources and can be fitted by a generic model function describing the average steady behavior of the boundary region over the long term. The parallel tracks then correspond to the possible quasi-steady states the source can occupy around the average trend.

  5. Parallel Tracks as Quasi-steady States for the Magnetic Boundary Layers in Neutron-star Low-mass X-Ray Binaries

    NASA Astrophysics Data System (ADS)

    Erkut, M. Hakan; Çatmabacak, Onur

    2017-11-01

    The neutron stars in low-mass X-ray binaries (LMXBs) are usually thought to be weakly magnetized objects accreting matter from their low-mass companions in the form of a disk. Albeit weak compared to those in young neutron-star systems, the neutron-star magnetospheres in LMXBs can play an important role in determining the correlations between spectral and temporal properties. Parallel tracks appearing in the kilohertz (kHz) quasi-periodic oscillation (QPO) frequency versus X-ray flux plane can be used as a tool to study the magnetosphere-disk interaction in neutron-star LMXBs. For dynamically important weak fields, the formation of a non-Keplerian magnetic boundary layer at the innermost disk truncated near the surface of the neutron star is highly likely. Such a boundary region may harbor oscillatory modes of frequencies in the kHz range. We generate parallel tracks using the boundary region model of kHz QPOs. We also present the direct application of our model to the reproduction of the observed parallel tracks of individual sources such as 4U 1608-52, 4U 1636-53, and Aql X-1. We reveal how the radial width of the boundary layer must vary in the long-term flux evolution of each source to regenerate the parallel tracks. The run of the radial width looks similar for different sources and can be fitted by a generic model function describing the average steady behavior of the boundary region over the long term. The parallel tracks then correspond to the possible quasi-steady states the source can occupy around the average trend.

  6. An Assessment of Some Design Constraints on Heat Production of a 3D Conceptual EGS Model Using an Open-Source Geothermal Reservoir Simulation Code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yidong Xia; Mitch Plummer; Robert Podgorney

    2016-02-01

    Performance of the heat production process over a 30-year period is assessed in a conceptual EGS model with a geothermal gradient of 65 K per km depth in the reservoir. Water is circulated through a pair of parallel wells connected by a set of single large wing fractures. The results indicate that the desirable output electric power rate and lifespan could be obtained under suitable material properties and system parameters. A sensitivity analysis on some design constraints and operation parameters indicates that 1) the horizontal fracture spacing has a profound effect on the long-term performance of heat production, 2) the downward deviation angle of the parallel doublet wells may help overcome the difficulty of vertical drilling to reach a favorable production temperature, and 3) the thermal energy production rate and lifespan depend closely on the water mass flow rate. The results also indicate that heat production can be improved when the horizontal fracture spacing, well deviation angle, and production flow rate are under reasonable conditions. To conduct the reservoir modeling and simulations, an open-source, finite element based, fully implicit, fully coupled hydrothermal code, namely FALCON, has been developed and used in this work. Compared with most other existing codes in this area, which are either closed-source or commercially available, this new open-source code demonstrates a code development strategy that aims to provide unparalleled ease of user-customization and multi-physics coupling. Test results have shown that the FALCON code is able to complete the long-term tests efficiently and accurately, thanks to the state-of-the-art nonlinear and linear solver algorithms implemented in the code.

  7. Radiological analysis of plutonium glass batches with natural/enriched boron

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rainisch, R.

    2000-06-22

    The disposition of surplus plutonium inventories by the US Department of Energy (DOE) includes the immobilization of certain plutonium materials in a borosilicate glass matrix, also referred to as vitrification. This paper addresses source terms of plutonium masses immobilized in a borosilicate glass matrix where the glass components include both natural boron and enriched boron. The calculated source terms pertain to neutron and gamma source strength (particles per second) and source spectrum changes. The calculated source terms corresponding to natural boron and enriched boron are compared to determine the benefits (decrease in radiation source terms) of using enriched boron. The analysis of plutonium glass source terms shows that a large component of the neutron source terms is due to (α, n) reactions. The americium-241 and plutonium present in the glass emit alpha particles (α). These alpha particles interact with low-Z nuclides such as B-11, B-10, and O-17 in the glass to produce neutrons; the low-Z nuclides are referred to as target particles. The reference glass contains 9.4 wt percent B2O3. Boron-11 was found to strongly support the (α, n) reactions in the glass matrix. B-11 has a natural abundance of over 80 percent. The (α, n) reaction rates for B-10 are lower than for B-11, and the analysis shows that the plutonium glass neutron source terms can be reduced by artificially enriching natural boron with B-10. The natural abundance of B-10 is 19.9 percent. Boron enriched to 96 wt percent B-10 or above can be obtained commercially. Since lower source terms imply lower dose rates to radiation workers handling the plutonium glass materials, it is important to know the achievable decrease in source terms as a result of boron enrichment. Plutonium materials are normally handled in glove boxes with shielded glass windows, and the work entails both extremity and whole-body exposures. Lowering the source terms of the plutonium batches will make the handling of these materials less difficult and will reduce radiation exposure to operating workers.

  8. Low-frequency source parameters of twelve large earthquakes. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Harabaglia, Paolo

    1993-01-01

    A global survey of the low-frequency (1-21 mHz) source characteristics of large events is presented. We are particularly interested in events unusually enriched in low-frequency energy and in events with a short-term precursor. We model the source time functions of 12 large earthquakes using teleseismic data at low frequency. For each event we retrieve the source amplitude spectrum in the frequency range between 1 and 21 mHz with the Silver and Jordan method, and the phase-shift spectrum in the frequency range between 1 and 11 mHz with the Riedesel and Jordan method. We then model the source time function by fitting the two spectra. Two of these events, the 1980 Irpinia, Italy, and the 1983 Akita-Oki, Japan, earthquakes, are shallow-depth complex events that took place on multiple faults. In both cases the source time function has a length of about 100 seconds. By comparison, Westaway and Jackson find 45 seconds for the Irpinia event, and Houston and Kanamori about 50 seconds for the Akita-Oki earthquake. The three deep events and four of the seven intermediate-depth events are fast-rupturing earthquakes; a single pulse is sufficient to model the source spectra in the frequency range of our interest. Two other intermediate-depth events have slower rupturing processes, characterized by a continuous energy release lasting for about 40 seconds. The last event is the intermediate-depth 1983 Peru-Ecuador earthquake. It was first recognized as a precursive event by Jordan. We model it with a smooth rupturing process starting about 2 minutes before the high-frequency origin time, superimposed on an impulsive source.

  9. Analyzing the contribution of climate change to long-term variations in sediment nitrogen sources for reservoirs/lakes.

    PubMed

    Xia, Xinghui; Wu, Qiong; Zhu, Baotong; Zhao, Pujun; Zhang, Shangwei; Yang, Lingyan

    2015-08-01

    We applied a mixing model based on stable isotope δ¹³C and δ¹⁵N values and C:N ratios to estimate the contributions of multiple sources to sediment nitrogen. We also developed a conceptual model describing and analyzing the impacts of climate change on nitrogen enrichment. These two models were applied to Miyun Reservoir to analyze the contribution of climate change to the variations in sediment nitrogen sources, based on two ²¹⁰Pb- and ¹³⁷Cs-dated sediment cores. The results showed that during the past 50 years, the average contributions of soil and fertilizer, submerged macrophytes, N2-fixing phytoplankton, and non-N2-fixing phytoplankton were 40.7%, 40.3%, 11.8%, and 7.2%, respectively. In addition, total nitrogen (TN) contents in sediment showed significant increasing trends from 1960 to 2010, and sediment nitrogen from both submerged macrophyte and phytoplankton sources exhibited significant increasing trends during the past 50 years. In contrast, soil and fertilizer sources showed a significant decreasing trend from 1990 to 2010. According to the changing trend of N2-fixing phytoplankton, changes in temperature and sunshine duration accounted for at least 43% of the trend in sediment nitrogen enrichment over the past 50 years. Regression analysis of the climatic factors on nitrogen sources showed that the contributions of precipitation, temperature, and sunshine duration to the variations in sediment nitrogen sources ranged from 18.5% to 60.3%. The study demonstrates that the mixing model provides a robust method for calculating the contribution of multiple nitrogen sources in sediment, and it also suggests that N2-fixing phytoplankton could be regarded as an important response factor for assessing the impacts of climate change on nitrogen enrichment. Copyright © 2015 Elsevier B.V. All rights reserved.
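
    A minimal sketch of a three-tracer mixing model of this kind: source fractions f ≥ 0 satisfy the tracer mass balances plus a sum-to-one constraint, solved here by nonnegative least squares. The endmember signatures and the sample are hypothetical, not the study's measured values, and nnls stands in for the study's mixing-model machinery.

      import numpy as np
      from scipy.optimize import nnls

      #                  d13C   d15N   C:N
      srcs = np.array([[-26.0,  4.0, 12.0],   # soil and fertilizer
                       [-14.0,  8.0, 18.0],   # submerged macrophytes
                       [-22.0,  0.5,  7.0],   # N2-fixing phytoplankton
                       [-28.0,  7.0,  7.5]])  # non-N2-fixing phytoplankton
      sample = np.array([-22.5, 5.0, 12.0])   # hypothetical sediment signature

      A = np.vstack([srcs.T, 100.0 * np.ones(4)])  # sum row heavily weighted ...
      b = np.append(sample, 100.0)                 # ... so fractions sum to ~1
      f, _ = nnls(A, b)
      names = ["soil/fertilizer", "macrophytes", "N2-fixers", "non-N2-fixers"]
      for name, frac in zip(names, f):
          print(f"{name}: {100.0 * frac:.0f}%")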

  10. Numerical Modeling of Poroelastic-Fluid Systems Using High-Resolution Finite Volume Methods

    NASA Astrophysics Data System (ADS)

    Lemoine, Grady

    Poroelasticity theory models the mechanics of porous, fluid-saturated, deformable solids. It was originally developed by Maurice Biot to model geophysical problems, such as seismic waves in oil reservoirs, but has also been applied to modeling living bone and other porous media. Poroelastic media often interact with fluids, such as in ocean bottom acoustics or propagation of waves from soft tissue into bone. This thesis describes the development and testing of high-resolution finite volume numerical methods, and simulation codes implementing these methods, for modeling systems of poroelastic media and fluids in two and three dimensions. These methods operate on both rectilinear grids and logically rectangular mapped grids. To allow the use of these methods, Biot's equations of poroelasticity are formulated as a first-order hyperbolic system with a source term; this source term is incorporated using operator splitting. Some modifications are required to the classical high-resolution finite volume method. Obtaining correct solutions at interfaces between poroelastic media and fluids requires a novel transverse propagation scheme and the removal of the classical second-order correction term at the interface, and in three dimensions a new wave limiting algorithm is also needed to correctly limit shear waves. The accuracy and convergence rates of the methods of this thesis are examined for a variety of analytical solutions, including simple plane waves, reflection and transmission of waves at an interface between different media, and scattering of acoustic waves by a poroelastic cylinder. Solutions are also computed for a variety of test problems from the computational poroelasticity literature, as well as some original test problems designed to mimic possible applications for the simulation code.
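
    The splitting step can be sketched as follows, assuming the usual first-order (Godunov) convention for a balance law q_t + f(q)_x + g(q)_y = ψ(q); Strang splitting reorders the same two sub-steps for second-order accuracy.

      % One time step of first-order (Godunov) operator splitting:
      q^{*}   = \mathcal{H}(\Delta t) \, q^{n}
                \quad \text{(homogeneous hyperbolic step: } q_t + f(q)_x + g(q)_y = 0 \text{)}
      q^{n+1} = \mathcal{S}(\Delta t) \, q^{*}
                \quad \text{(source ODE step: } q_t = \psi(q) \text{)}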

  11. Testing a model-driven Geographical Information System for risk assessment during an effusive volcanic crisis

    NASA Astrophysics Data System (ADS)

    Harris, Andrew; Latutrie, Benjamin; Andredakis, Ioannis; De Groeve, Tom; Langlois, Eric; van Wyk de Vries, Benjamin; Del Negro, Ciro; Favalli, Massimiliano; Fujita, Eisuke; Kelfoun, Karim; Rongo, Rocco

    2016-04-01

    RED-SEED stands for Risk Evaluation, Detection and Simulation during Effusive Eruption Disasters, and combines stakeholders from the remote sensing, modeling, and response communities with experience in tracking effusive volcanic events. It is an informal working group that has evolved around the philosophy of combining global scientific resources, in the realms of physical volcanology, remote sensing, and modeling, to better define and limit uncertainty. The group first met during a three-day workshop held in Clermont-Ferrand (France) between 28 and 30 May 2013. The main recommendation of the workshop in terms of modeling was that there is a pressing need for "real-time input of reliable Time-Averaged Discharge Rate (TADR) data with regular updates of Digital Elevation Models (DEMs) if modeling is to be effective; the DEMs can be provided by the radar/photogrammetry community." We thus set up a test to explore (i) which model source terms are needed, (ii) how they can be provided and updated, and (iii) how models can be run and applied in an ensemble approach. The test used two hypothetical effusive events in the Chaîne des Puys (Auvergne, France), for which a prototype Geographical Information System (GIS) was set up to allow loss assessment during an effusive crisis. This system drew on all immediately available data for population, land use, communications, utilities, and building type. After defining the lava flow model source terms (vent location, effusion rate, lava chemistry, temperature, crystallinity, and vesicularity), five operational lava flow emplacement models were run (DOWNFLOW, FLOWGO, LAVASIM, MAGFLOW and VOLCFLOW) to produce a projection of the likelihood of impact for all pixels within the area covered by the GIS, based on agreement between models. The test thus aimed not to assess the model output, but instead to examine overlapping output. Next, inundation maps and damage reports for impacted zones were produced. The exercise identified several shortcomings of the modeling systems, but indicates that generation of a global response system for effusive crises that uses rapid-response model projections of lava inundation driven by real-time satellite hot spot detection - and open-access data sets - is within the current capabilities of the community.

  12. Dynamic power balance analysis in JET

    NASA Astrophysics Data System (ADS)

    Matthews, G. F.; Silburn, S. A.; Challis, C. D.; Eich, T.; Iglesias, D.; King, D.; Sieglin, B.; Contributors, JET

    2017-12-01

    The full-scale realisation of nuclear fusion as an energy source requires a detailed understanding of power and energy balance in current experimental devices. In this work we explore whether a global power balance model, in which some of the calibration factors applied to the source or sink terms are fitted to the data, can provide insight into possible causes of any discrepancies in power and energy balance seen in the JET tokamak. We show that the dynamics of the power balance can only be properly reproduced by including the changes in the thermal stored energy, which therefore provides an additional opportunity to cross-calibrate other terms in the power balance equation. Although the results are inconclusive with respect to the original goal of identifying the source of the discrepancies in the energy balance, we do find that with optimised parameters an extremely good prediction of the total power measured at the outer divertor target can be obtained over a wide range of pulses, with time resolution up to ∼25 ms.
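
    A sketch of the balance implied here, assuming the conventional tokamak bookkeeping; the calibration factors c_i multiplying the fitted terms are hypothetical notation, not the paper's.

      % Global power balance with fitted calibration factors c_i:
      c_{in} \, P_{in}(t) = \frac{dW_{th}}{dt} + c_{rad} \, P_{rad}(t)
                            + c_{targ} \, P_{targ}(t) + P_{other}(t)
      % P_in: ohmic plus auxiliary input power; W_th: thermal stored energy;
      % P_rad: radiated power; P_targ: power reaching the divertor targets.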

  13. Model falsifiability and climate slow modes

    NASA Astrophysics Data System (ADS)

    Essex, Christopher; Tsonis, Anastasios A.

    2018-07-01

    The most advanced climate models are actually modified meteorological models attempting to capture climate in meteorological terms. This seems a straightforward matter of raw computing power applied to large enough sources of current data. Some believe that models have succeeded in capturing climate in this manner. But have they? This paper outlines difficulties with this picture that derive from the finite representation of numbers in our computers and from the fundamental unavailability of future data. It suggests that alternative windows onto the multi-decadal timescales are necessary in order to overcome the issues raised for practical problems of prediction.

  14. Sources, Transport, and Climate Impacts of Biomass Burning Aerosols

    NASA Technical Reports Server (NTRS)

    Chin, Mian

    2010-01-01

    In this presentation, I will first discuss the fundamentals of modeling biomass burning emissions of aerosols, then show results for GOCART model simulated biomass burning aerosols. I will compare the model results with satellite and ground-based network observations in terms of total aerosol optical depth, aerosol absorption optical depth, and vertical distributions. Finally, the long-range transport of biomass burning aerosols and their climate effects will be addressed. I will also discuss the uncertainties associated with modeling and observations of biomass burning aerosols.

  15. Sensitivity of WRF-chem predictions to dust source function specification in West Asia

    NASA Astrophysics Data System (ADS)

    Nabavi, Seyed Omid; Haimberger, Leopold; Samimi, Cyrus

    2017-02-01

    Dust storms tend to form in sparsely populated areas covered by only a few observations. Dust source maps, known as source functions, are used in dust models to allocate a certain potential of dust release to each place. Recent research showed that the well-known Ginoux source function (GSF), currently used in the Weather Research and Forecasting model coupled with Chemistry (WRF-chem), exhibits large errors over some regions in West Asia, particularly near the Iraq/Syria border. This study aims to improve the specification of this critical part of dust forecasts. A new source function based on a multi-year analysis of satellite observations, called the West Asia source function (WASF), is therefore proposed to raise the quality of WRF-chem predictions in the region. WASF has been implemented in three dust schemes of WRF-chem. Remotely sensed and ground-based observations have been used to verify the horizontal and vertical extent and location of the simulated dust clouds. Results indicate that WRF-chem performance is significantly improved in many areas after the implementation of WASF. The modified runs (long-term simulations over the summers 2008-2012, using nudging) yielded an average increase in the Spearman correlation between observed and forecast aerosol optical thickness of 12-16 percentage points compared to control runs with standard source functions. They even outperform MACC and DREAM dust simulations over many dust source regions. However, the quality of the forecasts decreased with distance from the sources, probably due to deficiencies in the transport and deposition characteristics of the forecast model in these areas.

  16. Gamma-ray burst theory: Back to the drawing board

    NASA Technical Reports Server (NTRS)

    Harding, Alice K.

    1994-01-01

    Gamma-ray bursts have always been intriguing sources to study in terms of particle acceleration, but not since their discovery two decades ago has the theory of these objects been in such turmoil. Prior to the launch of the Compton Gamma-Ray Observatory and observations by the Burst and Transient Source Experiment (BATSE), there was strong evidence pointing to magnetized Galactic neutron stars as the sources of gamma-ray bursts. However, since BATSE the observational picture has changed dramatically, requiring much more distant and possibly cosmological sources. I review the history of gamma-ray burst theory from the era of growing consensus for nearby neutron stars to the recent explosion of halo and cosmological models, and the impact of the present confusion on the particle acceleration problem.

  17. Joint optimization of source, mask, and pupil in optical lithography

    NASA Astrophysics Data System (ADS)

    Li, Jia; Lam, Edmund Y.

    2014-03-01

    Mask topography effects need to be taken into consideration for more advanced resolution enhancement techniques in optical lithography. However, rigorous 3D mask model achieves high accuracy at a large computational cost. This work develops a combined source, mask and pupil optimization (SMPO) approach by taking advantage of the fact that pupil phase manipulation is capable of partially compensating for mask topography effects. We first design the pupil wavefront function by incorporating primary and secondary spherical aberration through the coefficients of the Zernike polynomials, and achieve optimal source-mask pair under the condition of aberrated pupil. Evaluations against conventional source mask optimization (SMO) without incorporating pupil aberrations show that SMPO provides improved performance in terms of pattern fidelity and process window sizes.

  18. Correlation of recent fission product release data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kress, T.S.; Lorenz, R.A.; Nakamura, T.

    For the calculation of source terms associated with severe accidents, it is necessary to model the release of fission products from fuel as it heats and melts. Perhaps the most definitive model for fission product release is that of the FASTGRASS computer code developed at Argonne National Laboratory. There is persuasive evidence that these processes, as well as additional chemical and gas phase mass transport processes, are important in the release of fission products from fuel. Nevertheless, it has been found convenient to have simplified fission product release correlations that may not be as definitive as models like FASTGRASS but which attempt in some simple way to capture the essence of the mechanisms. One of the most widely used such correlations is CORSOR-M, which is the present fission product/aerosol release model used in the NRC Source Term Code Package. CORSOR has been criticized as having too much uncertainty in the calculated releases and as not accurately reproducing some experimental data. It is currently believed that these discrepancies between CORSOR and the more recent data result from the better time resolution of the more recent data compared to the data base that went into the CORSOR correlation. This document discusses a simple correlational model for use in connection with NUREG risk uncertainty exercises. 8 refs., 4 figs., 1 tab.
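
    The correlation form at issue is Arrhenius-type: a temperature-dependent fractional release rate k(T) = k0 exp(-Q/RT). A minimal sketch follows; the coefficients are illustrative placeholders, not the published CORSOR-M values for any particular fission product.

      # Sketch of an Arrhenius-type fractional-release correlation in the
      # spirit of CORSOR-M; k0 and Q below are assumed, not published values.
      import numpy as np

      R_GAS = 1.987e-3  # gas constant, kcal/(mol*K)

      def release_rate(T_kelvin, k0=2.0e5, Q=63.8):
          """Fractional release rate k (1/min) = k0 * exp(-Q / (R*T))."""
          return k0 * np.exp(-Q / (R_GAS * T_kelvin))

      def release_fraction(T_kelvin, minutes):
          """Cumulative release fraction at constant temperature: F = 1 - exp(-k*t)."""
          return 1.0 - np.exp(-release_rate(T_kelvin) * minutes)

      for T in (1800.0, 2200.0, 2600.0):
          print(f"T = {T:.0f} K: F(30 min) = {release_fraction(T, 30.0):.3f}")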

  19. A New Characterization of the Compton Process in the ULX Spectra

    NASA Astrophysics Data System (ADS)

    Kobayashi, S.; Nakazawa, K.; Makishima, K.

    2015-07-01

    Ultraluminous X-ray sources (ULXs) are unusually luminous point sources located in the arms of spiral galaxies, and are candidates for intermediate-mass black holes (Makishima+2000). Their spectra make transitions between power-law shapes (PL state) and convex shapes (disk-like state). The latter state can be explained with either the multi-color disk (MCD)+thermal Comptonization (THC) model or a slim disk model (Watari+2000). We adopt the former modeling, because it generally gives physically more reasonable parameters (Miyawaki+2009). To characterize ULX spectra in a unified way, we applied the MCD+THC model to several datasets of ULXs obtained by Suzaku, XMM-Newton, and NuSTAR. The model explains all the spectra well, in terms of a cool disk (T_in ~ 0.2 keV) and a cool, thick (T_e ~ 2 keV, τ ~ 10) corona. The derived parameters can be characterized by two new parameters. One is Q ≡ T_e/T_in, which describes the balance between Compton cooling and gravitational heating of the corona, while the other is f ≡ L_raw/L_tot, namely the directly visible (without Comptonization) fraction of the MCD luminosity. The PL state spectra are found to show Q ~ 10 and f ~ 0.7, while those of the disk-like state show Q ~ 3 and f ≤ 0.01. Thus, the two states are clearly separated in terms of Q and f.
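
    A small sketch of the two diagnostics defined above, Q = T_e/T_in and f = L_raw/L_tot; the classification threshold is an illustrative cut between the reported state values, not a boundary from the paper.

      # Compute the two state diagnostics and apply an assumed dividing cut.
      def classify_ulx(T_e_keV, T_in_keV, L_raw, L_tot):
          Q = T_e_keV / T_in_keV   # Compton cooling vs. gravitational heating
          f = L_raw / L_tot        # directly visible (un-Comptonized) MCD light
          state = "PL state" if Q > 6 and f > 0.1 else "disk-like state"
          return Q, f, state

      print(classify_ulx(T_e_keV=2.0, T_in_keV=0.2, L_raw=0.70, L_tot=1.0))
      print(classify_ulx(T_e_keV=0.6, T_in_keV=0.2, L_raw=0.005, L_tot=1.0))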

  20. Multiple-relaxation-time color-gradient lattice Boltzmann model for simulating two-phase flows with high density ratio

    NASA Astrophysics Data System (ADS)

    Ba, Yan; Liu, Haihu; Li, Qing; Kang, Qinjun; Sun, Jinju

    2016-08-01

    In this paper we propose a color-gradient lattice Boltzmann (LB) model for simulating two-phase flows with high density ratio and high Reynolds number. The model applies a multirelaxation-time (MRT) collision operator to enhance the stability of the simulation. A source term, which is derived by the Chapman-Enskog analysis, is added into the MRT LB equation so that the Navier-Stokes equations can be exactly recovered. Also, a form of the equilibrium density distribution function is used to simplify the source term. To validate the proposed model, steady flows of a static droplet and the layered channel flow are first simulated with density ratios up to 1000. Small values of spurious velocities and interfacial tension errors are found in the static droplet test, and improved profiles of velocity are obtained by the present model in simulating channel flows. Then, two cases of unsteady flows, Rayleigh-Taylor instability and droplet splashing on a thin film, are simulated. In the former case, the density ratio of 3 and Reynolds numbers of 256 and 2048 are considered. The interface shapes and spike and bubble positions are in good agreement with the results of previous studies. In the latter case, the droplet spreading radius is found to obey the power law proposed in previous studies for the density ratio of 100 and Reynolds number up to 500.

  1. Fission Product Appearance Rate Coefficients in Design Basis Source Term Determinations - Past and Present

    NASA Astrophysics Data System (ADS)

    Perez, Pedro B.; Hamawi, John N.

    2017-09-01

    Nuclear power plant radiation protection design features are based on radionuclide source terms derived from conservative assumptions that envelope expected operating experience. Two parameters that significantly affect the radionuclide concentrations in the source term are the failed fuel fraction and the effective fission product appearance rate coefficients. The failed fuel fraction may be a regulatory based assumption, such as in the U.S. Appearance rate coefficients are not specified in regulatory requirements, but have been referenced to experimental data that are over 50 years old. No doubt the source terms are conservative, as demonstrated by operating experience that has included failed fuel, but they may be too conservative, leading, for example, to over-designed shielding for normal operations. Design basis source term methodologies for normal operations had not advanced until EPRI published an updated ANSI/ANS 18.1 source term basis document in 2015. Our paper revisits the fission product appearance rate coefficients as applied in the derivation of source terms following the original U.S. NRC NUREG-0017 methodology. New coefficients have been calculated based on recent EPRI results, which demonstrate the conservatism in nuclear power plant shielding design.
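
    To make the role of an appearance rate coefficient concrete, a hedged sketch of a single-nuclide coolant activity balance, dA/dt = R - (lambda + beta)A, where R is the appearance rate, lambda the decay constant, and beta a cleanup/removal rate. All numbers are assumed for illustration; they are not NUREG-0017 or EPRI coefficients.

      # Steady-state coolant activity for one nuclide: A_eq = R / (lambda + beta).
      import numpy as np

      def equilibrium_activity(R, lam, beta):
          """R: appearance rate (Bq/s); lam: decay constant (1/s); beta: cleanup (1/s)."""
          return R / (lam + beta)

      lam = np.log(2) / (8.02 * 24 * 3600)  # I-131 decay constant, 1/s
      beta = 2.0e-5                         # assumed letdown/cleanup rate, 1/s
      R = 1.0e7                             # assumed appearance rate, Bq/s
      print(f"A_eq = {equilibrium_activity(R, lam, beta):.3e} Bq")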

  2. Part 2. Development of Enhanced Statistical Methods for Assessing Health Effects Associated with an Unknown Number of Major Sources of Multiple Air Pollutants.

    PubMed

    Park, Eun Sug; Symanski, Elaine; Han, Daikwon; Spiegelman, Clifford

    2015-06-01

    A major difficulty with assessing source-specific health effects is that source-specific exposures cannot be measured directly; rather, they need to be estimated by a source-apportionment method such as multivariate receptor modeling. The uncertainty in source apportionment (uncertainty in source-specific exposure estimates and model uncertainty due to the unknown number of sources and identifiability conditions) has been largely ignored in previous studies. Also, spatial dependence of multipollutant data collected from multiple monitoring sites has not yet been incorporated into multivariate receptor modeling. The objectives of this project are (1) to develop a multipollutant approach that incorporates both sources of uncertainty in source-apportionment into the assessment of source-specific health effects and (2) to develop enhanced multivariate receptor models that can account for spatial correlations in the multipollutant data collected from multiple sites. We employed a Bayesian hierarchical modeling framework consisting of multivariate receptor models, health-effects models, and a hierarchical model on latent source contributions. For the health model, we focused on the time-series design in this project. Each combination of number of sources and identifiability conditions (additional constraints on model parameters) defines a different model. We built a set of plausible models with extensive exploratory data analyses and with information from previous studies, and then computed posterior model probability to estimate model uncertainty. Parameter estimation and model uncertainty estimation were implemented simultaneously by Markov chain Monte Carlo (MCMC) methods. We validated the methods using simulated data. We illustrated the methods using PM2.5 (particulate matter ≤ 2.5 μm in aerodynamic diameter) speciation data and mortality data from Phoenix, Arizona, and Houston, Texas. The Phoenix data included counts of cardiovascular deaths and daily PM2.5 speciation data from 1995-1997. The Houston data included respiratory mortality data and 24-hour PM2.5 speciation data sampled every six days from a region near the Houston Ship Channel in years 2002-2005. We also developed a Bayesian spatial multivariate receptor modeling approach that, while simultaneously dealing with the unknown number of sources and identifiability conditions, incorporated spatial correlations in the multipollutant data collected from multiple sites into the estimation of source profiles and contributions based on the discrete process convolution model for multivariate spatial processes. This new modeling approach was applied to 24-hour ambient air concentrations of 17 volatile organic compounds (VOCs) measured at nine monitoring sites in Harris County, Texas, during years 2000 to 2005. Simulation results indicated that our methods were accurate in identifying the true model and estimated parameters were close to the true values. The results from our methods agreed in general with previous studies on the source apportionment of the Phoenix data in terms of estimated source profiles and contributions. However, we had a greater number of statistically insignificant findings, which was likely a natural consequence of incorporating uncertainty in the estimated source contributions into the health-effects parameter estimation.
For the Houston data, a model with five sources (that seemed to be Sulfate-Rich Secondary Aerosol, Motor Vehicles, Industrial Combustion, Soil/Crustal Matter, and Sea Salt) showed the highest posterior model probability among the candidate models considered when fitted simultaneously to the PM2.5 and mortality data. There was a statistically significant positive association between respiratory mortality and same-day PM2.5 concentrations attributed to one of the sources (probably industrial combustion). The Bayesian spatial multivariate receptor modeling approach applied to the VOC data led to the highest posterior model probability for a model with five sources (that seemed to be refinery, petrochemical production, gasoline evaporation, natural gas, and vehicular exhaust) among several candidate models, with the number of sources varying between three and seven and with different identifiability conditions. Our multipollutant approach to assessing source-specific health effects is more advantageous than a single-pollutant approach in that it can estimate total health effects from multiple pollutants and can also identify emission sources that are responsible for adverse health effects. Our Bayesian approach can incorporate not only uncertainty in the estimated source contributions, but also model uncertainty, which has not been addressed in previous studies on assessing source-specific health effects. The new Bayesian spatial multivariate receptor modeling approach enables predictions of source contributions at unmonitored sites, minimizing exposure misclassification and providing improved exposure estimates along with their uncertainty estimates, as well as accounting for uncertainty in the number of sources and identifiability conditions.
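
    As a much-simplified illustration of the receptor-modeling building block (not the authors' Bayesian MCMC framework, which also treats the profiles and number of sources as unknown): with source profiles assumed known, daily source contributions can be estimated from multipollutant concentrations by non-negative least squares. All numbers are invented.

      # Simple receptor-model step: solve profiles @ contrib ≈ x with contrib >= 0.
      import numpy as np
      from scipy.optimize import nnls

      rng = np.random.default_rng(1)
      # Assumed profiles: rows = chemical species, columns = sources
      # (each column sums to 1).
      profiles = np.array([[0.6, 0.1],
                           [0.3, 0.2],
                           [0.1, 0.7]])
      true_contrib = np.array([5.0, 2.0])                     # true contributions
      x = profiles @ true_contrib + rng.normal(0, 0.05, 3)    # observed concentrations

      contrib, residual = nnls(profiles, x)
      print("estimated contributions:", np.round(contrib, 2))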

  3. Identification of metapopulation dynamics among Northern Goshawks of the Alexander Archipelago, Alaska, and Coastal British Columbia

    USGS Publications Warehouse

    Sonsthagen, Sarah A.; McClaren, Erica L.; Doyle, Frank I.; Titus, K.; Sage, George K.; Wilson, Robert E.; Gust, Judy R.; Talbot, Sandra L.

    2012-01-01

    Northern Goshawks occupying the Alexander Archipelago, Alaska, and coastal British Columbia nest primarily in old-growth and mature forest, which results in spatial heterogeneity in the distribution of individuals across the landscape. We used microsatellite and mitochondrial data to infer genetic structure, gene flow, and fluctuations in population demography through evolutionary time. Patterns in the genetic signatures were used to assess predictions associated with three population models: panmixia, metapopulation, and isolated populations. Population genetic structure was observed, along with asymmetry in gene flow estimates that changed directionality at different temporal scales, consistent with metapopulation model predictions. Therefore, Northern Goshawk assemblages located in the Alexander Archipelago and coastal British Columbia interact within a metapopulation framework, though they may not fit the classic model of a metapopulation. Long-term population sources (coastal mainland British Columbia) and sinks (Revillagigedo and Vancouver islands) were identified. However, there was no trend through evolutionary time in the directionality of dispersal among the remaining assemblages, suggestive of a rescue-effect dynamic. The Admiralty, Douglas, and Chichagof island complex appears to be an evolutionarily recent source population in the Alexander Archipelago. In addition, the Kupreanof island complex and Kispiox Forest District populations have high dispersal rates to populations in close geographic proximity and potentially serve as local source populations. The metapopulation dynamics of Northern Goshawks in the Alexander Archipelago and coastal British Columbia highlight the importance of both occupied and unoccupied habitats to the long-term persistence of goshawks in this region.

  4. Optimization of urban water supply portfolios combining infrastructure capacity expansion and water use decisions

    NASA Astrophysics Data System (ADS)

    Medellin-Azuara, J.; Fraga, C. C. S.; Marques, G.; Mendes, C. A.

    2015-12-01

    The expansion and operation of urban water supply systems under rapidly growing demands, hydrologic uncertainty, and scarce water supplies require a strategic combination of supply sources for added reliability, reduced costs, and improved operational flexibility. The design and operation of such a portfolio of water supply sources involve decisions on what to expand and when, and how much of each available source to use, accounting for interest rates, economies of scale, and hydrologic variability. The present research provides a framework and an integrated methodology that optimizes the expansion of various water supply alternatives using dynamic programming, combining both short-term and long-term optimization of water use with simulation of water allocation. A case study in the Rio dos Sinos basin in southern Brazil is presented. The framework couples a quadratic programming optimization model written in GAMS with WEAP, a rainfall-runoff simulation model that hosts the water supply infrastructure features and hydrologic conditions. Results allow (a) identification of trade-offs between cost and reliability of different expansion paths and water use decisions and (b) evaluation of potential gains from reducing water system losses as a portfolio component. The latter is critical in several developing countries, where water supply system losses are high and often neglected in favor of further system expansion. Results also highlight the potential of various water supply alternatives over time, including conservation, groundwater, and infrastructure enhancements. The framework demonstrates its usefulness for planning and its transferability to similarly urbanized systems.
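
    A toy version of the expansion/operation trade-off (a sketch under invented costs and capacities, not the GAMS/WEAP model): choose capacity expansions x_i and water use y_i for two sources to meet demand at minimum capital plus operating cost, as a linear program.

      # Joint capacity-expansion and water-use LP with scipy's linprog.
      from scipy.optimize import linprog

      capex = [100.0, 40.0]    # cost per unit of new capacity (source 1, source 2)
      opex = [1.0, 3.0]        # cost per unit of water used
      existing = [50.0, 20.0]  # existing capacities
      demand = 90.0

      # Variables: [x1, x2, y1, y2]; minimize capex.x + opex.y
      c = capex + opex
      # Use cannot exceed existing + new capacity: y_i - x_i <= existing_i
      A_ub = [[-1, 0, 1, 0],
              [0, -1, 0, 1]]
      b_ub = existing
      # Meet demand exactly: y1 + y2 = demand
      A_eq = [[0, 0, 1, 1]]
      b_eq = [demand]

      res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
      print("expansion:", res.x[:2], "use:", res.x[2:], "cost:", res.fun)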

  5. The spatiotemporal MEG covariance matrix modeled as a sum of Kronecker products.

    PubMed

    Bijma, Fetsje; de Munck, Jan C; Heethaar, Rob M

    2005-08-15

    The single Kronecker product (KP) model for the spatiotemporal covariance of MEG residuals is extended to a sum of Kronecker products. This sum of KPs is estimated such that it best approximates the spatiotemporal sample covariance in matrix norm. Contrary to the single KP, this extension allows for describing multiple, independent phenomena in the ongoing background activity. Whereas the single KP model can be interpreted by assuming that background activity is generated by randomly distributed dipoles with certain spatial and temporal characteristics, the sum model can be physiologically interpreted by assuming a composite of such processes. Taking enough terms into account, the spatiotemporal sample covariance matrix can be described exactly by this extended model. In the estimation of the sum of KP model, it appears that the sum of the first two KPs describes between 67% and 93% of the sample covariance in matrix norm. Moreover, these first two terms describe two physiological processes in the background activity: focal, frequency-specific alpha activity, and more widespread non-frequency-specific activity. Furthermore, temporal nonstationarities due to trial-to-trial variations are not clearly visible in the first two terms and, hence, play only a minor role in the sample covariance matrix in terms of matrix power. Considering dipole localization, the single KP model appears to describe around 80% of the noise and therefore seems adequate. The emphasis of further improvement of localization accuracy should be on improving the source model rather than the covariance model.
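
    One standard way to fit a sum-of-Kronecker-products model C ≈ Σ_k A_k ⊗ B_k in Frobenius norm is the Van Loan-Pitsianis rearrangement: blocks of C are vectorized into a matrix whose truncated SVD yields the A_k and B_k. The sketch below uses synthetic dimensions and data; it is not the paper's estimator.

      # Fit C (s*t x s*t) by a sum of n_terms Kronecker products A_k (s x s) ⊗ B_k (t x t).
      import numpy as np

      def sum_of_kron(C, s, t, n_terms):
          # Rearrange: row (i*s + j) of R holds block C[i*t:(i+1)*t, j*t:(j+1)*t].
          R = np.empty((s * s, t * t))
          for i in range(s):
              for j in range(s):
                  R[i * s + j] = C[i*t:(i+1)*t, j*t:(j+1)*t].ravel()
          U, sig, Vt = np.linalg.svd(R, full_matrices=False)
          return [(np.sqrt(sig[k]) * U[:, k].reshape(s, s),
                   np.sqrt(sig[k]) * Vt[k].reshape(t, t)) for k in range(n_terms)]

      # Synthetic "sample covariance": two independent spatiotemporal processes.
      rng = np.random.default_rng(2)
      s, t = 4, 5
      def rand_cov(n):
          X = rng.normal(size=(n, n))
          return X @ X.T
      C = np.kron(rand_cov(s), rand_cov(t)) + 0.3 * np.kron(rand_cov(s), rand_cov(t))

      terms = sum_of_kron(C, s, t, n_terms=2)
      approx = sum(np.kron(A, B) for A, B in terms)
      print("relative error:", np.linalg.norm(C - approx) / np.linalg.norm(C))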

  6. Modeling mesoscale eddies

    NASA Astrophysics Data System (ADS)

    Canuto, V. M.; Dubovikov, M. S.

    Mesoscale eddies are not resolved in coarse resolution ocean models and must be modeled. They affect both mean momentum and scalars. At present, no generally accepted model exists for the former; in the latter case, mesoscales are modeled with a bolus velocity u∗ to represent a sink of mean potential energy. However, comparison of u∗(model) vs. u∗(eddy resolving code, [J. Phys. Ocean. 29 (1999) 2442]) has shown that u∗(model) is incomplete and that additional terms, "unrelated to thickness source or sinks", are required. Thus far, no form of the additional terms has been suggested. To describe mesoscale eddies, we employ the Navier-Stokes and scalar equations and a turbulence model to treat the non-linear interactions. We then show that the problem reduces to an eigenvalue problem for the mesoscale Bernoulli potential. The solution, which we derive in analytic form, is used to construct the momentum and thickness fluxes. In the latter case, the bolus velocity u∗ is found to contain two types of terms: the first type entails the gradient of the mean potential vorticity and represents a positive contribution to the production of mesoscale potential energy; the second type of terms, which is new, entails the velocity of the mean flow and represents a negative contribution to the production of mesoscale potential energy, or equivalently, a backscatter process whereby a fraction of the mesoscale potential energy is returned to the original reservoir of mean potential energy. This type of terms satisfies the physical description of the additional terms given by [J. Phys. Ocean. 29 (1999) 2442]. The mesoscale flux that enters the momentum equations is also contributed by two types of terms of the same physical nature as those entering the thickness flux. The potential vorticity flux is also shown to contain two types of terms: the first is of the gradient-type while the other terms entail the velocity of the mean flow. An expression is derived for the mesoscale diffusivity κM and for the mesoscale kinetic energy K in terms of the large-scale fields. The predicted κM(z) agrees with that of heuristic models. The complete mesoscale model in isopycnal coordinates is presented in Appendix D and can be used in coarse resolution ocean global circulation models.

  7. Modeling of reverberant room responses for two-dimensional spatial sound field analysis and synthesis.

    PubMed

    Bai, Mingsian R; Li, Yi; Chiang, Yi-Hao

    2017-10-01

    A unified framework is proposed for the analysis and synthesis of two-dimensional spatial sound fields in reverberant environments. In the sound field analysis (SFA) phase, an unbaffled 24-element circular microphone array is utilized to encode the sound field based on plane-wave decomposition. Depending on the sparsity of the sound sources, the SFA stage can be implemented in two manners. For sparse-source scenarios, a one-stage algorithm based on compressive sensing is utilized. Alternatively, a two-stage algorithm can be used, where the minimum power distortionless response beamformer is used to localize the sources and a Tikhonov regularization algorithm is used to extract the source amplitudes. In the sound field synthesis (SFS) phase, a 32-element rectangular loudspeaker array is employed to decode the target sound field using the pressure-matching technique. To establish the room response model, as required in the pressure-matching step of the SFS phase, an SFA technique for nonsparse-source scenarios is utilized. The choice of regularization parameters is vital to the reproduced sound field. In the SFS phase, three SFS approaches are compared in terms of localization performance and voice reproduction quality. Experimental results obtained in a reverberant room are presented and reveal that an accurate room response model is vital to immersive rendering of the reproduced sound field.
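
    A minimal pressure-matching sketch with Tikhonov regularization, assuming free-field monopole transfer functions (the paper instead builds a measured reverberant room response model). The loudspeaker weights w solve min ||G w - p||² + λ||w||², i.e. w = (GᴴG + λI)⁻¹ Gᴴ p. Geometry and the regularization rule are assumptions of this sketch.

      # Tikhonov-regularized pressure matching over a set of control points.
      import numpy as np

      rng = np.random.default_rng(3)
      k = 2 * np.pi * 500 / 343.0                    # wavenumber at 500 Hz

      def greens(src, rcv):
          """Free-field 3D monopole transfer functions between point sets (2D coords)."""
          r = np.linalg.norm(rcv[:, None, :] - src[None, :, :], axis=2)
          return np.exp(-1j * k * r) / (4 * np.pi * r)

      speakers = rng.uniform(-2, 2, size=(32, 2))    # 32-element array positions
      control = rng.uniform(-0.5, 0.5, size=(60, 2)) # control points in listening area
      target_src = np.array([[3.0, 0.0]])            # virtual source to reproduce

      G = greens(speakers, control)
      p = greens(target_src, control)[:, 0]

      lam = 1e-3 * np.linalg.norm(G, 2) ** 2         # assumed regularization choice
      w = np.linalg.solve(G.conj().T @ G + lam * np.eye(32), G.conj().T @ p)
      print("reproduction NMSE:", np.linalg.norm(G @ w - p)**2 / np.linalg.norm(p)**2)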

  8. Using a Simple Knowledge Organization System to facilitate Catalogue and Search for the ESA CCI Open Data Portal

    NASA Astrophysics Data System (ADS)

    Wilson, Antony; Bennett, Victoria; Donegan, Steve; Juckes, Martin; Kershaw, Philip; Petrie, Ruth; Stephens, Ag; Waterfall, Alison

    2016-04-01

    The ESA Climate Change Initiative (CCI) is a €75m programme running from 2009 to 2016, with the goal of providing stable, long-term, satellite-based essential climate variable (ECV) data products for climate modellers and researchers. As part of the CCI, ESA have funded the Open Data Portal project to establish a central repository that brings together the data from these multiple sources and makes it available in a consistent way, in order to maximise its dissemination amongst the international user community. Search capabilities are a critical component of attaining this goal. To this end, the project is providing dataset-level metadata in the form of ISO 19115 records served via a standard OGC CSW interface. In addition, the Open Data Portal is re-using the search system from the Earth System Grid Federation (ESGF), successfully applied to support CMIP5 (5th Coupled Model Intercomparison Project) and obs4MIPs. This uses a tightly defined controlled vocabulary of metadata terms, the DRS (Data Reference Syntax), which encompasses different aspects of the data. This system has facilitated the construction of a powerful faceted search interface that enables users to discover data at the individual file level of granularity through ESGF's web portal frontend. The use of a consistent set of model experiments for CMIP5 allowed the definition of a uniform DRS for all model data served from ESGF. For CCI, however, there are thirteen ECVs, each of which is derived from multiple sources and different science communities, resulting in highly heterogeneous metadata. An analysis has been undertaken of the concepts in use, with the aim of producing a CCI DRS that could provide a single authoritative source for cataloguing and searching the CCI data for the Open Data Portal. SKOS (Simple Knowledge Organization System) and OWL (Web Ontology Language) are a natural fit for representing the DRS, providing controlled vocabularies as well as a way to represent relationships between similar terms used in different ECVs. An iterative approach has been adopted for the model development, working closely with domain experts and drawing on practical experience with content in the input datasets. Tooling has been developed to enable the definition of vocabulary terms via a simple spreadsheet format, which can then be automatically converted into Turtle notation and uploaded to the CCI DRS vocabulary service. With a baseline model established, work is underway to develop an ingestion pipeline to import validated search metadata into the ESGF and OGC CSW search services. In addition to the search terms indexed into the ESGF search system, ISO 19115 records will also be tagged during this process with search terms from the data model. In this way it will be possible to construct a faceted search user interface for the Portal that can yield linked search results for data at both the file and dataset levels of granularity. It is hoped that this will also provide a rich range of content for third-party organisations wishing to incorporate access to CCI data in their own applications and services.
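
    A sketch of how DRS-style facet terms can be encoded as SKOS concepts with the rdflib library and serialized to Turtle. The namespace URI and term names below are hypothetical, not the actual CCI vocabulary.

      # Encode one concept scheme and one concept with preferred/alternative labels.
      from rdflib import Graph, Literal, Namespace
      from rdflib.namespace import RDF, SKOS

      CCI = Namespace("http://example.org/cci/drs/")   # hypothetical vocabulary base

      g = Graph()
      g.bind("skos", SKOS)
      g.bind("cci", CCI)

      scheme = CCI["ecv"]
      g.add((scheme, RDF.type, SKOS.ConceptScheme))
      g.add((scheme, SKOS.prefLabel, Literal("Essential Climate Variable", lang="en")))

      sst = CCI["ecv/sea_surface_temperature"]
      g.add((sst, RDF.type, SKOS.Concept))
      g.add((sst, SKOS.inScheme, scheme))
      g.add((sst, SKOS.prefLabel, Literal("sea surface temperature", lang="en")))
      g.add((sst, SKOS.altLabel, Literal("SST", lang="en")))  # relate similar terms

      print(g.serialize(format="turtle"))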

  9. SU-E-T-554: Monte Carlo Calculation of Source Terms and Attenuation Lengths for Neutrons Produced by 50–200 MeV Protons On Brass

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ramos-Mendez, J; Faddegon, B; Paganetti, H

    2015-06-15

    Purpose: We used TOPAS (TOPAS wraps and extends Geant4 for medical physicists) to compare Geant4 physics models with published data for neutron shielding calculations. Subsequently, we calculated the source terms and attenuation lengths (shielding data) of the total ambient dose equivalent (TADE) in concrete for neutrons produced by protons in brass. Methods: Stage 1: The Bertini and Binary nuclear models available in Geant4 were compared with published attenuation at depth of the TADE in concrete and iron. Stage 2: Shielding data of the TADE in concrete were calculated for 50-200 MeV proton beams on brass. Stage 3: Shielding data from Stage 2 were extrapolated to 235 MeV proton beams. These data were used in a point-line-source analytical model to calculate the ambient dose per unit therapeutic dose at two locations inside one treatment room at the Francis H Burr Proton Therapy Center. Finally, we compared these results with experimental data and full TOPAS simulations. Results: At larger angles (~130°) the TADE in concrete calculated with the Bertini model was about 9 times larger than that calculated with the Binary model. The attenuation length in concrete calculated with the Binary model agreed with published data within 7%±0.4% (statistical uncertainty) for the deepest regions and 5%±0.1% for shallower regions. For iron the agreement was within 3%±0.1%. The ambient dose per therapeutic dose calculated with the Binary model, relative to the experimental data, was a ratio of 0.93±0.16 and 1.23±0.24 at the two locations. The analytical model overestimated the dose by four orders of magnitude. These differences are attributed to the complexity of the geometry. Conclusion: The Binary and Bertini models gave comparable results, with the Binary model giving the best agreement with published data at large angles. The shielding data we calculated using the Binary model are useful for fast shielding calculations with other analytical models. This work was supported by National Cancer Institute Grant R01CA140735.
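
    To show how a source term H0 and attenuation length λ feed a fast analytical estimate, here is a point-source version of the standard shielding formula, H = H0(θ) exp(-d/λ(θ)) / r²; the numbers are assumed, and the paper's model is the more elaborate point-line-source variant.

      # Ambient dose per therapeutic dose behind a concrete wall (illustrative values).
      import numpy as np

      def ambient_dose(H0, d_cm, lam_cm, r_m):
          """H0: source term (Sv/Gy at 1 m); d: slab depth; lam: attenuation length."""
          return H0 * np.exp(-d_cm / lam_cm) / r_m**2

      H0 = 2.0e-6   # assumed source term at this angle, Sv per treatment Gy at 1 m
      lam = 45.0    # assumed attenuation length in concrete, cm
      for d in (100.0, 150.0, 200.0):
          print(f"{d:.0f} cm wall: {ambient_dose(H0, d, lam, r_m=5.0):.2e} Sv/Gy")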

  10. Dynamic SPARROW Modeling of Nitrogen Flux with Climate and MODIS Vegetation Indices as Drivers

    NASA Astrophysics Data System (ADS)

    Smith, R. A.; Brakebill, J.; Schwarz, G.; Alexander, R. B.; Hirsch, R. M.; Nolin, A. W.; Macauley, M.; Zhang, Q.; Shih, J.; Wang, W.; Sproles, E.

    2011-12-01

    SPARROW models are widely used to identify and quantify the sources of contaminants in watersheds and to predict their flux and concentration at specified locations downstream. Conventional SPARROW models are statistically calibrated and describe the average relationship between sources and stream conditions based on long-term water quality monitoring data and spatially referenced explanatory information. But many watershed management issues stem from intra- and inter-annual changes in contaminant sources, hydrologic forcing, or other environmental conditions, which cause a temporary imbalance between inputs and stream water quality. Dynamic behavior of the system relating to changes in watershed storage and processing then becomes important. In this study, we describe a dynamically calibrated SPARROW model of total nitrogen flux in the Potomac River Basin based on seasonal water quality and watershed input data for 80 monitoring stations over the period 2000 to 2008. One challenge in dynamic modeling of reactive nitrogen is obtaining frequently reported, spatially detailed input data on the phenology of agricultural production and terrestrial vegetation. In this NASA-funded research, we use the Enhanced Vegetation Index (EVI) and gross primary productivity data from the Terra satellite-borne MODIS sensor to parameterize seasonal uptake and release of nitrogen. The spatial reference frame of the model is a 16,000-reach, 1:100,000-scale stream network, and the computational time step is seasonal. Precipitation and temperature data are from PRISM. The model formulation allows for separate storage compartments for nonpoint sources including fertilized cropland, pasture, urban land, and atmospheric deposition. Removal of nitrogen from watershed storage to stream channels and to "permanent" sinks (deep groundwater and the atmosphere) occurs as parallel first-order processes. We use the model to explore an important issue in nutrient management in the Potomac and other basins: the long-term response of total nitrogen flux to changing climate. We model the nitrogen flux response to projected seasonal and inter-annual changes in temperature and precipitation, but under current seasonal nitrogen inputs, as indicated by MODIS measures of productivity. Under these constant inter-annual inputs, changing temperature and precipitation are predicted to lead to flux changes as temporary basin stores of nitrogen either grow or shrink due to changing relative rates of nitrogen removal to the atmosphere and release to streams.
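
    A toy version of the storage formulation described above (a sketch with invented rate constants, not the calibrated SPARROW model): seasonal nitrogen input enters a watershed store, which loses mass to streams and to permanent sinks as parallel first-order processes.

      # Seasonal first-order storage/release bookkeeping for one watershed store.
      import numpy as np

      k_stream, k_sink = 0.30, 0.10    # assumed seasonal first-order rate constants
      seasons = 40
      inputs = 100.0 + 30.0 * np.sin(2 * np.pi * np.arange(seasons) / 4)  # seasonal N input

      storage, stream_flux = 0.0, []
      for u in inputs:
          available = storage + u
          to_stream = k_stream * available      # release to stream channels
          to_sink = k_sink * available          # loss to atmosphere/deep groundwater
          storage = available - to_stream - to_sink
          stream_flux.append(to_stream)
      print("stream flux approaches ~", round(stream_flux[-1], 1))
      # Climate change enters through the rate constants (and the inputs); the flux
      # then drifts as the store grows or shrinks toward a new balance.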

  11. Unsupervised Segmentation of Head Tissues from Multi-modal MR Images for EEG Source Localization.

    PubMed

    Mahmood, Qaiser; Chodorowski, Artur; Mehnert, Andrew; Gellermann, Johanna; Persson, Mikael

    2015-08-01

    In this paper, we present and evaluate an automatic unsupervised segmentation method, the hierarchical segmentation approach (HSA)-Bayesian-based adaptive mean shift (BAMS), for use in the construction of a patient-specific head conductivity model for electroencephalography (EEG) source localization. It is based on an HSA and BAMS for segmenting the tissues from multi-modal magnetic resonance (MR) head images. The evaluation of the proposed method was done both directly, in terms of segmentation accuracy, and indirectly, in terms of source localization accuracy. The direct evaluation was performed relative to a commonly used reference method, brain extraction tool (BET)-FMRIB's automated segmentation tool (FAST), and four variants of the HSA, using both synthetic data and real data from ten subjects. The synthetic data include multiple realizations of four different noise levels and several realizations of typical noise with a 20% bias field level. The Dice index and Hausdorff distance were used to measure the segmentation accuracy. The indirect evaluation was performed relative to the reference method BET-FAST using synthetic two-dimensional (2D) multimodal MR data with 3% noise and synthetic EEG (generated for a prescribed source). The source localization accuracy was determined in terms of localization error and relative error of potential. The experimental results demonstrate the efficacy of HSA-BAMS, its robustness to noise and the bias field, and that it provides better segmentation accuracy than the reference method and the variants of the HSA. They also show that it leads to higher source localization accuracy than the commonly used reference method and suggest that it has potential as a surrogate for expert manual segmentation for the EEG source localization problem.
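
    A standalone sketch of the two segmentation metrics named above, computed on synthetic masks (this is not the paper's evaluation code).

      # Dice index and symmetric Hausdorff distance between two boolean masks.
      import numpy as np
      from scipy.spatial.distance import directed_hausdorff

      def dice(a, b):
          """Dice index: 2|A ∩ B| / (|A| + |B|)."""
          return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

      seg = np.zeros((64, 64), dtype=bool); seg[10:40, 10:40] = True
      ref = np.zeros((64, 64), dtype=bool); ref[12:42, 12:42] = True

      h = max(directed_hausdorff(np.argwhere(seg), np.argwhere(ref))[0],
              directed_hausdorff(np.argwhere(ref), np.argwhere(seg))[0])
      print(f"Dice = {dice(seg, ref):.3f}, Hausdorff = {h:.1f} px")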

  12. Multi-Decadal Change of Atmospheric Aerosols and Their Effect on Surface Radiation

    NASA Technical Reports Server (NTRS)

    Chin, Mian; Diehl, Thomas; Tan, Qian; Wild, Martin; Qian, Yun; Yu, Hongbin; Bian, Huisheng; Wang, Weiguo

    2012-01-01

    We present an investigation of multi-decadal changes of atmospheric aerosols and their effects on surface radiation using a global chemistry transport model along with near-term to long-term data records. We focus on the 28-year period of the satellite era from 1980 to 2007, during which a suite of aerosol data from satellite observations and ground-based remote sensing and in-situ measurements has become available. We analyze the long-term global and regional aerosol optical depth and concentration trends and their relationship to the changes of emissions, and assess the role aerosols play in the multi-decadal change of solar radiation reaching the surface (known as "dimming" or "brightening") in different regions of the world, including the major anthropogenic source regions (North America, Europe, Asia) that have been experiencing considerable changes of emissions, dust and biomass burning regions that have large interannual variabilities, downwind regions that are directly affected by the changes in the source areas, and remote regions that are considered to represent "background" conditions.

  13. Patterns formation in ferrofluids and solid dissolutions using stochastic models with dissipative dynamics

    NASA Astrophysics Data System (ADS)

    Morales, Marco A.; Fernández-Cervantes, Irving; Agustín-Serrano, Ricardo; Anzo, Andrés; Sampedro, Mercedes P.

    2016-08-01

    A coarse-grained functional with short-range and long-range interactions is proposed. This functional yields models with dissipative dynamics A and B and the stochastic Swift-Hohenberg equation. Furthermore, terms associated with a multiplicative noise source are added to these models. The models are solved numerically using the fast Fourier transform method. The resulting spatio-temporal dynamics show similarity to the pattern behaviour of ferrofluid phases subject to external fields (magnetic, electric, and temperature), as well as to the nucleation and growth phenomena present in some solid dissolutions. The multiplicative noise effect on the dynamics reproduces microstructures formed during solid phase changes in binary alloys of Pb-Sn, Fe-C, and Cu-Ni, as well as in a NiAl-Cr(Mo) eutectic composite material. Model A for active particles with a non-potential term in the form of a quadratic gradient explains the formation of nanostructured particles of silver phosphate. With these models it is shown that the underlying mechanisms of pattern formation in all these systems depend on: (a) the dissipative dynamics; (b) the short-range and long-range interactions; and (c) the appropriate combination of quadratic and multiplicative noise terms.
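
    For concreteness, a minimal 1D pseudospectral sketch of a stochastic Swift-Hohenberg equation with multiplicative noise, du/dt = r u - (1 + d²/dx²)² u - u³ + ε u ξ, stepped semi-implicitly with FFTs. The 1D restriction and all parameters are assumptions of this sketch; the paper's models are richer.

      # Semi-implicit pseudospectral integration of stochastic Swift-Hohenberg.
      import numpy as np

      n, L, dt, r, eps = 256, 64 * np.pi, 0.05, 0.2, 0.01
      k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)
      lin = r - (1 - k**2) ** 2                 # linear operator in Fourier space

      rng = np.random.default_rng(4)
      u = 0.01 * rng.standard_normal(n)         # small random initial condition
      for _ in range(4000):
          noise = eps * u * rng.standard_normal(n)     # multiplicative noise term
          rhs = np.fft.fft(-u**3 + noise)              # explicit nonlinear part
          u_hat = (np.fft.fft(u) + dt * rhs) / (1.0 - dt * lin)  # implicit linear part
          u = np.real(np.fft.ifft(u_hat))
      print("pattern amplitude:", np.abs(u).max())     # stripes of wavenumber ~1 emerge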

  14. Source emission and model evaluation of formaldehyde from composite and solid wood furniture in a full-scale chamber

    NASA Astrophysics Data System (ADS)

    Liu, Xiaoyu; Mason, Mark A.; Guo, Zhishi; Krebs, Kenneth A.; Roache, Nancy F.

    2015-12-01

    This paper describes the measurement and model evaluation of formaldehyde source emissions from composite and solid wood furniture in a full-scale chamber at different ventilation rates for up to 4000 h using ASTM D 6670-01 (2007). Tests were performed on four types of furniture constructed of different materials and from different manufacturers. The data were used to evaluate two empirical emission models: a first-order decay model and a power-law decay model. The experimental results showed that some furniture tested in this study, made only of solid wood and with less surface area, had low formaldehyde source emissions. The effect of ventilation rate on formaldehyde emissions was also examined. Model simulation results indicated that the power-law decay model showed better agreement with the data collected from the tests than the first-order decay model, especially for long-term emissions. This research was limited to a laboratory study with only four types of furniture products tested. It was not intended to comprehensively test or compare the large number of furniture products available in the marketplace. Therefore, care should be taken when applying the test results to real-world scenarios. It was also beyond the scope of this study to link the emissions to human exposure and potential health risks.
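
    A sketch of fitting the two empirical forms compared above, first-order decay E(t) = E0 exp(-k t) and power-law decay E(t) = a t^(-b), with scipy; the chamber data below are invented for illustration.

      # Fit both emission models to synthetic emission-rate data and compare RMSE.
      import numpy as np
      from scipy.optimize import curve_fit

      first_order = lambda t, E0, k: E0 * np.exp(-k * t)
      power_law = lambda t, a, b: a * t ** (-b)

      t = np.array([24, 100, 300, 700, 1500, 2500, 4000.0])   # elapsed hours
      E = np.array([95, 60, 38, 26, 18, 14, 11.0])            # emission rate, ug/m2/h

      for name, model, p0 in [("first-order", first_order, (100, 1e-3)),
                              ("power-law", power_law, (300, 0.4))]:
          popt, _ = curve_fit(model, t, E, p0=p0)
          rmse = np.sqrt(np.mean((model(t, *popt) - E) ** 2))
          print(f"{name}: params = {np.round(popt, 4)}, RMSE = {rmse:.1f}")
      # The slower power-law tail typically tracks long-term emissions better,
      # consistent with the finding reported above.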

  15. Quantifying sediment source contributions in coastal catchments impacted by the Fukushima nuclear accident with carbon and nitrogen elemental concentrations and stable isotope ratios

    NASA Astrophysics Data System (ADS)

    Laceby, J. Patrick; Huon, Sylvain; Onda, Yuichi; Evrard, Olivier

    2016-04-01

    The accidental release of radioactive contaminants from the Fukushima Dai-ichi Nuclear Power Plant resulted in significant fallout of radiocesium over several coastal catchments of the Fukushima Prefecture. Radiocesium, considered to be the greatest risk to the short- and long-term health of the local community, is rapidly bound to fine soil particles and thus is mobilized and transported during soil erosion and runoff processes. As there has been broad-scale decontamination of rice paddy fields and rural residential areas in the contaminated region, one important long-term question is whether there is, or may be, a downstream transfer of radiocesium from the forests that cover over 65% of the most contaminated region. Accordingly, carbon and nitrogen elemental concentrations and stable isotope ratios are used to determine the relative contributions of forests and rice paddies to transported sediment in three contaminated coastal catchments. Samples were taken from the three main identified sources: cultivated soils (rice paddies and fields, n = 30), forest soils (n = 45), and subsoils (channel bank and decontaminated soils, n = 25). Lag deposit sediment samples were obtained from five sampling campaigns that targeted the main hydrological events from October 2011 to October 2014. In total, 86 samples of deposited sediment were analyzed for particulate organic matter elemental concentrations and isotope ratios: 24 from the Mano catchment, 44 from the Niida catchment, and 18 from the Ota catchment. Mann-Whitney U-tests were used to examine the source discrimination potential of this tracing suite and to select the appropriate tracers for modelling. The discriminant tracers were modelled with a concentration-dependent distribution mixing model. Preliminary results indicate that cultivated sources (predominantly rice paddies) contribute disproportionately more sediment per unit area than forested regions in these contaminated catchments. Future research will examine whether there are particular areas where forest sources have elevated concentrations and may require attention in the decontamination and monitoring of potential downstream radiocesium transfers.
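
    As a simplified illustration of the un-mixing step (a fixed-proportion least-squares model, not the concentration-dependent distribution model used in the study): source proportions are estimated from the C and N properties under the constraints p ≥ 0 and Σp = 1. All tracer values are invented.

      # Constrained least-squares source apportionment for three sources.
      import numpy as np
      from scipy.optimize import minimize

      # Rows: tracer properties (e.g., TOC %, TN %, d13C, d15N); columns: sources
      # (cultivated, forest, subsoil). Values are illustrative only.
      sources = np.array([[2.1, 5.5, 0.8],
                          [0.20, 0.35, 0.05],
                          [-26.0, -28.5, -24.0],
                          [4.0, 1.5, 5.5]])
      sediment = np.array([2.8, 0.22, -26.4, 3.6])   # measured on deposited sediment

      loss = lambda p: np.sum((sources @ p - sediment) ** 2)
      res = minimize(loss, x0=np.full(3, 1 / 3), bounds=[(0, 1)] * 3,
                     constraints={"type": "eq", "fun": lambda p: p.sum() - 1})
      print("estimated source proportions:", np.round(res.x, 2))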

  16. High energy variability of 3C 273 during the AGILE multiwavelength campaign of December 2007-January 2008

    NASA Astrophysics Data System (ADS)

    Pacciani, L.; Donnarumma, I.; Vittorini, V.; D'Ammando, F.; Fiocchi, M. T.; Impiombato, D.; Stratta, G.; Verrecchia, F.; Bulgarelli, A.; Chen, A. W.; Giuliani, A.; Longo, F.; Pucella, G.; Vercellone, S.; Tavani, M.; Argan, A.; Barbiellini, G.; Boffelli, F.; Caraveo, P. A.; Cattaneo, P. W.; Cocco, V.; Costa, E.; Del Monte, E.; Di Cocco, G.; Evangelista, Y.; Feroci, M.; Froysland, T.; Fuschino, F.; Galli, M.; Gianotti, F.; Labanti, C.; Lapshov, I.; Lazzarotto, F.; Lipari, P.; Marisaldi, M.; Mereghetti, S.; Morselli, A.; Pellizzoni, A.; Perotti, F.; Picozza, P.; Prest, M.; Rapisarda, M.; Soffitta, P.; Trifoglio, M.; Tosti, G.; Trois, A.; Vallazza, E.; Zanello, D.; Antonelli, L. A.; Colafrancesco, S.; Cutini, S.; Gasparrini, D.; Giommi, P.; Pittori, C.; Salotti, L.

    2009-01-01

    Context: We report the results of a 3-week multi-wavelength campaign targeting the flat spectrum radio quasar 3C 273, carried out with the AGILE gamma-ray mission (covering the 30 MeV-50 GeV and 18-60 keV bands), the REM observatory (near-IR and optical), Swift (near-UV/optical, 0.2-10 keV and 15-50 keV), INTEGRAL (3-200 keV), and Rossi XTE (2-12 keV). This is the first observational campaign including gamma-ray data since the last EGRET observations, more than 8 years ago. Aims: The campaign was organized by the AGILE team with the aim of observing, studying, and modelling the broad-band energy spectrum of the source and its variability on a week timescale, testing the emission models describing the spectral energy distribution of this source. Methods: Our study was carried out using simultaneous light curves of the source flux from all the involved instruments in the different energy ranges, to search for correlated variability. A time-resolved spectral energy distribution was then used for detailed physical modelling of the emission mechanisms. Results: The source was detected in gamma-rays only in the second week of our campaign, with a flux comparable to the level detected by EGRET in June 1991. We found an indication of a possible anti-correlation between the emission at gamma-rays and at soft and hard X-rays, supported by the complete set of instruments. Optical data, instead, do not show short-term variability, as expected for this source. Only in two preceding EGRET observations (in 1993 and 1997) did 3C 273 show intra-observation variability in gamma-rays. In the 1997 observation, the flux variation in gamma-rays was associated with a synchrotron flare. The energy-density spectrum with almost simultaneous data partially covers the regions of synchrotron emission, the big blue bump, and the inverse Compton. We adopted a leptonic model to explain the hard X/gamma-ray emission, although from our analysis hadronic models cannot be ruled out. In the adopted model, the soft X-ray emission is consistent with combined synchrotron self-Compton and external Compton mechanisms, while hard X and gamma-ray emissions are compatible with external Compton scattering of thermal photons from the disk. Under this model, the time evolution of the spectral energy distribution is well interpreted and modelled in terms of an acceleration episode of the electron population, leading to a shift of the inverse Compton peak towards higher energies.

  17. Carbon allocation, source-sink relations and plant growth: do we need to revise our carbon centric concepts?

    NASA Astrophysics Data System (ADS)

    Körner, Christian

    2014-05-01

    Since the discovery 215 years ago that plants 'eat air', carbon supply has been considered the largely unquestioned top driver of plant growth. The ease with which CO2 uptake (C source activity) can be measured, and the elegant algorithms that describe the responses of photosynthesis to light, temperature and CO2 concentration, explain why carbon-driven growth and productivity became the starting point of all process-based vegetation models. Most of these models nowadays adopt other environmental drivers, such as nutrient availability, as modulating co-controls, but the carbon priority is retained. Yet, if we believe in the basic rules of stoichiometry of all life, there is an inevitable need for 25-30 elements other than carbon, oxygen and hydrogen to build a healthy plant body. Plants compete for most of these elements, and their availability (except for N) is finite per unit land area. Hence, by pure plausibility, it is highly unlikely that carbon plays the rate-limiting role in growth under natural conditions, except in deep shade or on exceptionally fertile soils. Furthermore, water shortage and low temperature both act directly upon tissue formation (meristems) long before photosynthetic limitations come into play. Hence, plants will incorporate C only to the extent other environmental drivers permit. In the case of nutrients and mature ecosystems, this sink control of plant growth may be masked in the short term by a tight, almost closed nutrient cycle or by a widening of the ratio of C to other elements. Because source and sink activity must match in the long term, it is not possible to identify the hierarchy of growth controls without manipulating the environment. Dry matter allocation to C-rich structures and reserves may provide some stoichiometric leeway or periodic escapes from the more fundamental, long-term environmental controls of growth and productivity. I will explain why carbon-centric explanations of growth are limited or arrive at plausible answers for the wrong reason. Suggested reading: Fatichi, Leuzinger, Körner (2013) Moving beyond photosynthesis: from carbon source to sink-driven vegetation modeling. New Phytologist. Körner C (2013) Growth controls photosynthesis - mostly. Nova Acta Leopoldina 391:273-283.

  18. Isotopic composition and neutronics of the Okelobondo natural reactor

    NASA Astrophysics Data System (ADS)

    Palenik, Christopher Samuel

    The Oklo-Okelobondo and Bangombe uranium deposits in Gabon, Africa, host Earth's only known natural nuclear fission reactors. These 2-billion-year-old reactors represent a unique opportunity to study used nuclear fuel over geologic periods of time. The reactors in these deposits have been studied as a means by which to constrain the source term of fission product concentrations produced during reactor operation. The source term depends on the neutronic parameters, which include reactor operation duration, neutron flux, and the neutron energy spectrum. Reactor operation has been modeled using a point-source computer simulation (the Oak Ridge Isotope Generation and Depletion, ORIGEN, code) for a light water reactor. Model results have been constrained using secondary ion mass spectrometry (SIMS) isotopic measurements of the fission products Nd and Te, as well as U, in uraninite from samples collected in the Okelobondo reactor zone. Based upon the constraints on the operating conditions, the pre-reactor concentrations of Nd (150 ppm +/- 75 ppm) and Te (<1 ppm) in uraninite were estimated. Related to the burnup measured in Okelobondo samples (0.7 to 13.8 GWd/MTU), the final fission product inventories of Nd (90 to 1200 ppm) and Te (10 to 110 ppm) were calculated. By the same means, the ranges of all other fission products and actinides produced during reactor operation were calculated as a function of burnup. These results provide a source term against which the present elemental and decay abundances at the fission reactor can be compared. Furthermore, they provide new insights into the extent to which a "fossil" nuclear reactor can be characterized on the basis of its isotopic signatures. In addition, results from the study of two other natural systems related to radionuclide and fission product transport are included. A detailed mineralogical characterization of the uranyl mineralogy at the Bangombe uranium deposit in Gabon, Africa, was completed to improve geochemical models of the solubility-limiting phase. A study of the competing effects of radiation damage and annealing in a U-bearing crystal of zircon shows that low-temperature annealing in actinide-bearing phases is significant in the annealing of radiation damage.

  19. 26 CFR 1.737-3 - Basis adjustments; Recovery rules.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... Properties A1, A2, and A3 is long-term, U.S.-source capital gain or loss. The character of gain on Property A4 is long-term, foreign-source capital gain. B contributes Property B, nondepreciable real property...-term, foreign-source capital gain ($3,000 total gain under section 737 × $2,000 net long-term, foreign...

  20. 26 CFR 1.737-3 - Basis adjustments; Recovery rules.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... Properties A1, A2, and A3 is long-term, U.S.-source capital gain or loss. The character of gain on Property A4 is long-term, foreign-source capital gain. B contributes Property B, nondepreciable real property...-term, foreign-source capital gain ($3,000 total gain under section 737 × $2,000 net long-term, foreign...

  1. 26 CFR 1.737-3 - Basis adjustments; Recovery rules.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... Properties A1, A2, and A3 is long-term, U.S.-source capital gain or loss. The character of gain on Property A4 is long-term, foreign-source capital gain. B contributes Property B, nondepreciable real property...-term, foreign-source capital gain ($3,000 total gain under section 737 × $2,000 net long-term, foreign...

  2. 26 CFR 1.737-3 - Basis adjustments; Recovery rules.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... Properties A1, A2, and A3 is long-term, U.S.-source capital gain or loss. The character of gain on Property A4 is long-term, foreign-source capital gain. B contributes Property B, nondepreciable real property...-term, foreign-source capital gain ($3,000 total gain under section 737 × $2,000 net long-term, foreign...

  3. 26 CFR 1.737-3 - Basis adjustments; Recovery rules.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... Properties A1, A2, and A3 is long-term, U.S.-source capital gain or loss. The character of gain on Property A4 is long-term, foreign-source capital gain. B contributes Property B, nondepreciable real property...-term, foreign-source capital gain ($3,000 total gain under section 737 × $2,000 net long-term, foreign...

  4. A Composite Source Model With Fractal Subevent Size Distribution

    NASA Astrophysics Data System (ADS)

    Burjanek, J.; Zahradnik, J.

    A composite source model, incorporating different sized subevents, provides a possible description of complex rupture processes during earthquakes. The number of subevents with characteristic dimension greater than R is proportional to R^-2. The subevents do not overlap with each other, and the sum of their areas equals the area of the target event (e.g., the mainshock). The subevents are distributed randomly over the fault. Each subevent is modeled as a finite source, using a kinematic approach (radial rupture propagation, constant rupture velocity, boxcar slip-velocity function with constant rise time on the subevent). The final slip at each subevent is related to its characteristic dimension, using constant stress-drop scaling. Variation of rise time with subevent size is a free parameter of the modeling. The nucleation point of each subevent is taken as the point closest to the mainshock hypocentre. The synthetic Green's functions are calculated by the discrete-wavenumber method in a 1D horizontally layered crustal model in a relatively coarse grid of points covering the fault plane. The Green's functions needed for the kinematic model in a fine grid are obtained by cubic spline interpolation. As different frequencies may be efficiently calculated with different sampling, the interpolation simplifies and speeds up the procedure significantly. The composite source model described above allows interpretation in terms of a kinematic model with non-uniform final slip and rupture velocity spatial distributions. The 1994 Northridge earthquake (Mw = 6.7) is used as a validation event. The strong-ground motion modeling of the 1999 Athens earthquake (Mw = 5.9) is also performed.
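
    A sketch of drawing subevent sizes consistent with N(>R) ∝ R^-2 (a Pareto tail with exponent 2) by inverse-CDF sampling, then filling the target fault area; the placement and overlap checks of the full model are omitted, and the numbers are assumed.

      # Sample fractal subevent radii R = r_min * U^(-1/2) until the area is filled.
      import numpy as np

      rng = np.random.default_rng(5)

      def sample_subevents(target_area, r_min):
          radii = []
          while sum(np.pi * r**2 for r in radii) < target_area:
              radii.append(r_min * rng.uniform() ** -0.5)  # inverse CDF of N(>R) ∝ R^-2
          return np.array(radii)

      radii = sample_subevents(target_area=400.0, r_min=0.5)   # km^2, km (assumed)
      print(f"{radii.size} subevents, largest R = {radii.max():.1f} km,"
            f" area filled = {np.sum(np.pi * radii**2):.0f} km^2")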

  5. Accounting for multiple sources of uncertainty in impact assessments: The example of the BRACE study

    NASA Astrophysics Data System (ADS)

    O'Neill, B. C.

    2015-12-01

    Assessing climate change impacts often requires the use of multiple scenarios, types of models, and data sources, leading to a large number of potential sources of uncertainty. For example, a single study might require a choice of a forcing scenario, climate model, bias correction and/or downscaling method, societal development scenario, model (typically several) for quantifying elements of societal development such as economic and population growth, biophysical model (such as for crop yields or hydrology), and societal impact model (e.g. economic or health model). Some sources of uncertainty are reduced or eliminated by the framing of the question. For example, it may be useful to ask what an impact outcome would be conditional on a given societal development pathway, forcing scenario, or policy. However, many sources of uncertainty remain, and it is rare for all or even most of these sources to be accounted for. I use the example of a recent integrated project on the Benefits of Reduced Anthropogenic Climate changE (BRACE) to explore useful approaches to uncertainty across multiple components of an impact assessment. BRACE comprises 23 papers that assess the differences in impacts between two alternative climate futures: those associated with Representative Concentration Pathways (RCPs) 4.5 and 8.5. It quantifies differences in impacts in terms of extreme events, health, agriculture, tropical cyclones, and sea level rise. Methodologically, it includes climate modeling, statistical analysis, integrated assessment modeling, and sector-specific impact modeling. It employs alternative scenarios of both radiative forcing and societal development, but generally uses a single climate model (CESM), partially accounting for climate uncertainty by drawing heavily on large initial-condition ensembles. Strengths and weaknesses of the approach to uncertainty in BRACE are assessed. Options under consideration for improving the approach include the use of perturbed-physics ensembles of CESM, employing results from multiple climate models, and combining the results from single impact models with statistical representations of uncertainty across multiple models. A key consideration is the relationship between the question being addressed and the uncertainty approach.

  6. The acoustic field of a point source in a uniform boundary layer over an impedance plane

    NASA Technical Reports Server (NTRS)

    Zorumski, W. E.; Willshire, W. L., Jr.

    1986-01-01

    The acoustic field of a point source in a boundary layer above an impedance plane is investigated analytically using Obukhov quasi-potential functions, extending the normal-mode theory of Chunchuzov (1984) to account for the effects of finite ground-plane impedance and source height. The solution is found to be asymptotic to the surface-wave term studied by Wenzel (1974) in the limit of vanishing wind speed, suggesting that normal-mode theory can be used to model the effects of an atmospheric boundary layer on infrasonic sound radiation. Model predictions are derived for noise-generation data obtained by Willshire (1985) at the Medicine Bow wind-turbine facility. Long-range downwind propagation is found to behave as a cylindrical wave, with attenuation proportional to the wind speed, the boundary-layer displacement thickness, the real part of the ground admittance, and the square of the frequency.

  7. History and theory in "applied ethics".

    PubMed

    Beauchamp, Tom L

    2007-03-01

    Robert Baker and Laurence McCullough argue that the "applied ethics model" is deficient and in need of a replacement model. However, they supply no clear meaning to "applied ethics" and miss most of what is important in the literature on methodology that treats this question. The Baker-McCullough account of medical and applied ethics is a straw man that has had no influence in these fields or in philosophical ethics. The authors are also on shaky historical grounds in dealing with two problems: (1) the historical source of the notion of "practical ethics" and (2) the historical source of and the assimilation of the term "autonomy" into applied philosophy and professional ethics. They mistakenly hold (1) that the expression "practical ethics" was first used in a publication by Thomas Percival and (2) that Kant is the primary historical source of the notion of autonomy as that notion is used in contemporary applied ethics.

  8. Dispersion modeling of polycyclic aromatic hydrocarbons from combustion of biomass and fossil fuels and production of coke in Tianjin, China.

    PubMed

    Tao, Shu; Li, Xinrong; Yang, Yu; Coveney, Raymond M; Lu, Xiaoxia; Chen, Haitao; Shen, Weiran

    2006-08-01

    A USEPA procedure, ISCLT3 (Industrial Source Complex Long-Term), was applied to model the spatial distribution of polycyclic aromatic hydrocarbons (PAHs) emitted from various sources including coal, petroleum, natural gas, and biomass into the atmosphere of Tianjin, China. Benzo[a]pyrene equivalent concentrations (BaPeq) were calculated for risk assessment. Model results were provisionally validated for concentrations and profiles based on the observed data at two monitoring stations. The dominant emission sources in the area were domestic coal combustion, coke production, and biomass burning. Mainly because of differences in emission heights, the contributions of the various sources to the average concentrations at receptors differ from the proportions emitted. The share of domestic coal increased from approximately 43% at the sources to 56% at the receptors, while the contribution of the coking industry decreased from approximately 23% at the sources to 7% at the receptors. The spatial distributions of gaseous and particulate PAHs were similar, with higher concentrations occurring within urban districts because of domestic coal combustion. With relatively smaller contributions, the other minor sources had limited influence on the overall spatial distribution. The calculated average BaPeq value in air was 2.54 ± 2.87 ng/m3 on an annual basis. Although only 2.3% of the area of Tianjin exceeded the national standard of 10 ng/m3, 41% of the entire population lives within this area.
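
    The BaPeq metric referenced above is a toxic-equivalency-factor (TEF) weighted sum over the measured PAH species. Below is a minimal sketch; the TEF dictionary and sample concentrations are illustrative placeholders (TEF values differ between published schemes), not values from the Tianjin study.

      # Illustrative TEFs (BaP defined as 1.0; values vary between schemes)
      # and made-up sample concentrations in ng/m3, not Tianjin data.
      TEF = {"BaP": 1.0, "DahA": 1.0, "BaA": 0.1, "BbF": 0.1,
             "Chr": 0.01, "Phe": 0.001}

      def bap_equivalent(conc_ng_m3):
          """Sum TEF-weighted PAH concentrations into a single BaPeq value."""
          return sum(TEF[s] * c for s, c in conc_ng_m3.items())

      sample = {"BaP": 1.2, "DahA": 0.3, "BaA": 2.0, "BbF": 1.5,
                "Chr": 4.0, "Phe": 20.0}
      print(f"BaPeq = {bap_equivalent(sample):.2f} ng/m3")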

  9. Short-term dynamics of indoor and outdoor endotoxin exposure: Case of Santiago, Chile, 2012.

    PubMed

    Barraza, Francisco; Jorquera, Héctor; Heyer, Johanna; Palma, Wilfredo; Edwards, Ana María; Muñoz, Marcelo; Valdivia, Gonzalo; Montoya, Lupita D

    2016-01-01

    Indoor and outdoor endotoxin in PM2.5 was measured for the first time in Santiago, Chile, in spring 2012. Average endotoxin concentrations were 0.099 and 0.094 EU/m3 for indoor (N=44) and outdoor (N=41) samples, respectively; the indoor-outdoor correlation (log-transformed concentrations) was low, R=-0.06, 95% CI: (-0.35 to 0.24), likely owing to outdoor spatial variability. A linear regression model explained 68% of the variability in outdoor endotoxin, using as predictors elemental carbon (a proxy of traffic emissions), chlorine (a tracer of marine air masses reaching the city), and relative humidity (a modulator of surface emissions of dust, vegetation, and garbage debris). In this study, a potential source contribution function (PSCF) was applied to outdoor endotoxin measurements for the first time. Wind trajectory analysis identified upwind agricultural sources as contributors to the short-term outdoor endotoxin variability. Our results confirm an association between combustion particles from traffic and outdoor endotoxin concentrations. For indoor endotoxin, a predictive model was developed, but it explained only 44% of the variability; the significant predictors were tracers of indoor PM2.5 dust (Si, Ca), the number of external windows, and the number of hours with internal doors open. Results suggest that short-term indoor endotoxin variability may be driven by household dust/garbage production and handling. This would explain the modest predictive performance of published models that use answers to household surveys as predictors. One feasible alternative is to increase the sampling period so that household features would emerge as significant predictors of long-term airborne endotoxin levels.
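
    A minimal version of the outdoor regression described above can be written as an ordinary least-squares fit of log-transformed endotoxin on the three reported predictors. The data here are synthetic stand-ins and the coefficients are assumptions for illustration only.

      import numpy as np

      rng = np.random.default_rng(1)
      n = 41  # outdoor sample count reported in the study

      # Synthetic stand-ins for the reported predictors (arbitrary units):
      ec = rng.normal(1.0, 0.3, n)     # elemental carbon (traffic proxy)
      cl = rng.normal(0.5, 0.2, n)     # chlorine (marine air tracer)
      rh = rng.normal(60.0, 10.0, n)   # relative humidity
      log_endo = 0.8 * ec + 0.5 * cl + 0.01 * rh + rng.normal(0, 0.25, n)

      X = np.column_stack([np.ones(n), ec, cl, rh])
      beta, *_ = np.linalg.lstsq(X, log_endo, rcond=None)
      pred = X @ beta
      r2 = 1 - ((log_endo - pred) ** 2).sum() / \
               ((log_endo - log_endo.mean()) ** 2).sum()
      print(f"coefficients: {beta.round(3)}, R^2 = {r2:.2f}")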

  10. Funding long-term care: applications of the trade-off principle in both public and private sectors.

    PubMed

    Chen, Yung-Ping

    2003-02-01

    The uncertain need for long-term care services is a risk best protected against by insurance. However, current funding relies heavily on personal payment and public welfare, and only lightly on social and private insurance. This method, akin to sitting on a two-legged stool, is unlikely to be sustainable. To incorporate insurance as a key component of funding and to mobilize public and private resources more effectively, we propose a three-legged-stool funding model under which social insurance would provide basic protection, to be supplemented by private insurance and personal payment. When these sources do not provide sufficient protection for some individuals, Medicaid as public welfare would serve as a safety net. This article (a) discusses how to implement this funding model by using the trade-off principle in both the public and private sectors when resources for long-term care are scarce, and (b) analyzes several objections to this model from cognitive psychology/behavioral economics.

  11. Integrated watershed- and farm-scale modeling framework for targeting critical source areas while maintaining farm economic viability.

    PubMed

    Ghebremichael, Lula T; Veith, Tamie L; Hamlett, James M

    2013-01-15

    Quantitative risk assessments of pollution and data related to the effectiveness of mitigating best management practices (BMPs) are important aspects of nonpoint source pollution control efforts, particularly those driven by specific water quality objectives and by measurable improvement goals, such as total maximum daily load (TMDL) requirements. Targeting critical source areas (CSAs) that generate disproportionately high pollutant loads within a watershed is a crucial step in successfully controlling nonpoint source pollution. The importance of watershed simulation models in assisting with quantitative assessments of CSAs of pollution (relative to their magnitudes and extents) and of the effectiveness of associated BMPs is well recognized. However, because of the distinct disconnect between the hydrological scale at which these models conduct their evaluation and the farm scale at which feasible BMPs are actually selected and implemented, and because of the difficulty and uncertainty involved in transferring watershed model data to farm fields, there are limited practical applications of these tools by conservation specialists in current nonpoint source pollution control efforts for delineating CSAs and planning targeting measures. There are also few approaches that can assess the impacts of CSA-targeted BMPs on farm productivity and profitability together with the water quality improvements expected from applying these measures. This study developed a modeling framework that integrates farm economics and environmental aspects (such as identification and mitigation of CSAs) through joint use of watershed- and farm-scale models in a closed feedback loop, as sketched below. The integration of models in a closed feedback loop provides a way for environmental changes to be evaluated with regard to their impact on the practical aspects of farm management and economics, adjusted or reformulated as necessary, and re-evaluated with respect to the effectiveness of environmental mitigation at the farm and watershed levels. This paper also outlines the steps needed to extract important CSA-related information from a watershed model to help inform targeting decisions at the farm scale. The modeling framework is demonstrated with two unique case studies in the northeastern United States (New York and Vermont), with supporting data from numerous published, location-specific studies at both the watershed and farm scales. Using the integrated modeling framework, it becomes possible to compare the costs (in terms of changes required in farm system components or financial compensation for retiring crop lands) and benefits (in terms of measurable water quality improvement goals) of implementing targeted BMPs. This multi-scale modeling approach can be used in the multi-objective task of mitigating CSAs of pollution to meet water quality goals while maintaining farm-level economic viability.
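
    The closed feedback loop can be summarized in a few lines of control logic: run the watershed model, flag critical source areas, let the farm model pick an economically viable BMP, and re-run. The sketch below is schematic; every function, load value, and BMP effect is a hypothetical placeholder rather than an interface of any real watershed or farm model.

      # Schematic closed loop between a watershed model and a farm model.
      # Everything here is a hypothetical placeholder, not a real model API.
      FIELD_LOADS = {"field_A": 3.2, "field_B": 0.8, "field_C": 2.1}  # kg P/ha/yr
      BMP_EFFECT = 0.3   # assumed load multiplier where a BMP is placed
      LOAD_TARGET = 1.0  # assumed field-scale water-quality goal

      def watershed_simulate(bmps):
          """Stand-in for a watershed-scale run returning per-field loads."""
          return {f: load * (BMP_EFFECT if f in bmps else 1.0)
                  for f, load in FIELD_LOADS.items()}

      def farm_cheapest_viable_bmp(field):
          """Stand-in for the farm-scale economic screen."""
          return {"practice": "cover crop", "cost": 40.0}  # $/ha, illustrative

      bmps = {}
      for iteration in range(10):
          loads = watershed_simulate(bmps)                         # watershed scale
          csas = [f for f, v in loads.items() if v > LOAD_TARGET]  # flag CSAs
          if not csas:
              break                                                # goal met
          for field in csas:
              bmps[field] = farm_cheapest_viable_bmp(field)        # farm scale
      print(bmps, watershed_simulate(bmps))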

  12. Sources and Fate of Reactive Carbon over North America

    NASA Astrophysics Data System (ADS)

    Chen, X.; Millet, D. B.; Singh, H. B.; Wisthaler, A.

    2016-12-01

    We apply a high-resolution chemical transport model (GEOS-Chem CTM at 0.25°×0.3125°) to generate a comprehensive gas-phase reactive carbon budget over North America. Based on state-of-science source inventories and known chemistry, we find in the model that biogenic sources dominate the overall reactive carbon budget, with 49, 15, 4, and 39 TgC introduced to the North American atmosphere in 2013 from the biosphere, anthropogenic sources, fires, and methane oxidation, respectively. Biogenic and anthropogenic non-methane volatile organic compounds contribute 60% and 10%, respectively, to the total OH reactivity over the Southeast US, along with other contributions from methane and inorganics. Oxidation to CO and CO2 then represents the overwhelming fate of that reactive carbon, with 65, 15, 7, and 5 TgC, respectively, oxidized to produce CO/CO2, dry deposited, wet deposited, and transported (net) out of North America. We confront this simulation with an ensemble of recent airborne measurements over North America (SEAC4RS, SENEX, DISCOVER-AQ, DC3) and interpret the model-measurement comparisons in terms of their implications for current understanding of atmospheric reactive carbon and the processes driving its distribution.

  13. Upper and lower bounds of ground-motion variabilities: implication for source properties

    NASA Astrophysics Data System (ADS)

    Cotton, Fabrice; Reddy-Kotha, Sreeram; Bora, Sanjay; Bindi, Dino

    2017-04-01

    One of the key challenges of seismology is to be able to analyse the physical factors that control earthquake and ground-motion variabilities. Such analysis is particularly important for calibrating physics-based simulations and seismic hazard estimations at high frequencies. Within the framework of ground-motion prediction equation (GMPE) development, ground-motion residuals (differences between recorded ground motions and the values predicted by a GMPE) are computed. The exponential growth of seismological near-source records and modern GMPE analysis techniques allow these residuals to be partitioned into between-event and within-event components. In particular, the between-event term quantifies all those repeatable source effects (e.g. related to stress-drop or kappa-source variability) which have not been accounted for by the magnitude-dependent term of the model. In this presentation, we first discuss the between-event variabilities computed both in the Fourier and response spectra domains, using recent high-quality global accelerometric datasets (e.g. NGA-West2, RESORCE, KiK-net). These analyses lead to the assessment of upper bounds for the ground-motion variability. Then, we compare these upper bounds with lower bounds estimated by analysing seismic sequences which occurred on specific fault systems (e.g., located in Central Italy or in Japan). We show that the lower bounds of between-event variabilities are surprisingly large, which indicates a large variability of earthquake dynamic properties even within the same fault system. Finally, these upper and lower bounds of ground-shaking variability are discussed in terms of the variability of earthquake physical properties (e.g., stress-drop and kappa-source).
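
    A simple moment-based version of the residual partition described above treats the event mean of the total residuals as the between-event term and the remainder as the within-event term (production analyses typically use mixed-effects regression instead). The data below are synthetic; the sigma values are assumptions.

      import numpy as np
      import pandas as pd

      # Synthetic total residuals (ln units) for 50 events x 20 records each;
      # real analyses use GMPE residuals from datasets such as NGA-West2.
      rng = np.random.default_rng(2)
      event = np.repeat(np.arange(50), 20)
      d_between = rng.normal(0.0, 0.35, 50)[event]   # repeatable source effects
      d_within = rng.normal(0.0, 0.55, event.size)   # record-to-record scatter
      df = pd.DataFrame({"event": event, "resid": d_between + d_within})

      # Moment-style partition: the event mean approximates the between-event
      # term; the remainder is the within-event component.
      df["dB"] = df.groupby("event")["resid"].transform("mean")
      df["dW"] = df["resid"] - df["dB"]
      tau = df.drop_duplicates("event")["dB"].std(ddof=1)  # between-event sigma
      phi = df["dW"].std(ddof=1)                           # within-event sigma
      print(f"tau ~ {tau:.2f}, phi ~ {phi:.2f}")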

  14. Toward Better Intraseasonal and Seasonal Prediction: Verification and Evaluation of the NOGAPS Model Forecasts

    DTIC Science & Technology

    2013-09-30

    Circulation (HC) in terms of the meridional streamfunction. The interannual variability of the Atlantic HC in boreal summer was examined using the EOF... large-scale circulations in the NAVGEM model and the source of predictability for the seasonal variation of the Atlantic TCs. We have been working... [Figure: EOF analysis of Meridional Circulation (JAS); (a) the leading mode (M1); (b) variance explained by the first 10 modes.]

  15. ENERGY AND OUR ENVIRONMENT: A SYSTEMS AND LIFE ...

    EPA Pesticide Factsheets

    This is a presentation to the North Carolina BREATE Conference on March 28, 2017. The presentation provides an overview of energy modeling capabilities in ORD and includes examples related to scenario development, the water-energy nexus, bioenergy, etc. The focus is on systems approaches as well as life cycle assessment data and tools. It provides an overview of systems and life cycle approaches to modeling medium- to long-term changes in the drivers of emissions sources.

  16. Intraseasonal Variability in the Atmosphere-Ocean Climate System. Second Edition

    NASA Technical Reports Server (NTRS)

    Lau, William K. M.; Waliser, Duane E.

    2011-01-01

    Understanding and predicting the intraseasonal variability (ISV) of the ocean and atmosphere is crucial to improving long-range environmental forecasts and the reliability of climate change projections through climate models. This updated, comprehensive and authoritative second edition has a balance of observation, theory and modeling and provides a single source of reference for all those interested in this important multi-faceted natural phenomenon and its relation to major short-term climatic variations.

  17. Performance evaluation of a permanent ring magnet based helicon plasma source for negative ion source research

    NASA Astrophysics Data System (ADS)

    Pandey, Arun; Bandyopadhyay, M.; Sudhir, Dass; Chakraborty, A.

    2017-10-01

    Helicon wave heated plasmas are much more efficient in terms of ionization per unit power consumed. A permanent magnet based compact helicon wave heated plasma source has been developed at the Institute for Plasma Research after carefully optimizing the geometry, the frequency of the RF power, and the magnetic field conditions. The HELicon Experiment for Negative ion-I source is a single-driver helicon plasma source that is being studied for the development of a large, multi-driver negative hydrogen ion source. In this paper, the details of the single-driver machine and the results from the characterization of the device are presented. A parametric study at different pressures and magnetic field values using a 13.56 MHz RF source has been carried out in argon plasma as an initial step towards source characterization. A theoretical model is also presented for the particle and power balance in the plasma. The ambipolar diffusion process taking place in a magnetized helicon plasma is also discussed.
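
    A particle balance of the kind mentioned above can be sketched as a 0-D (global) model: volume ionization balances Bohm-speed loss to the walls, which fixes the electron temperature. The geometry, neutral density, and Arrhenius-type argon rate fit below are illustrative assumptions, not parameters of the device described here.

      import math

      # 0-D (global) particle balance for an argon discharge. All constants
      # below are illustrative assumptions, not values from the paper.
      E, M = 1.602e-19, 6.63e-26          # electron charge [C], Ar ion mass [kg]
      R, L = 0.05, 0.4                    # source radius and length [m] (assumed)
      NG = 3.2e19                         # neutral density [m^-3] (~1 Pa, 300 K)
      V = math.pi * R**2 * L              # plasma volume
      A_EFF = 2 * math.pi * R * (R + L) * 0.4   # crude effective loss area

      def k_iz(te):   # Arrhenius-type ionization rate fit for Ar [m^3/s]
          return 2.3e-14 * te**0.59 * math.exp(-17.4 / te)

      def balance(te):  # volume ionization minus Bohm loss to the walls
          u_bohm = math.sqrt(E * te / M)
          return NG * k_iz(te) * V - u_bohm * A_EFF

      lo, hi = 0.5, 10.0                  # bisect for the electron temperature
      for _ in range(60):
          mid = 0.5 * (lo + hi)
          lo, hi = (mid, hi) if balance(mid) < 0 else (lo, mid)
      print(f"electron temperature ~ {mid:.2f} eV")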

  18. Performance evaluation of a permanent ring magnet based helicon plasma source for negative ion source research.

    PubMed

    Pandey, Arun; Bandyopadhyay, M; Sudhir, Dass; Chakraborty, A

    2017-10-01

    Helicon wave heated plasmas are much more efficient in terms of ionization per unit power consumed. A permanent magnet based compact helicon wave heated plasma source has been developed at the Institute for Plasma Research after carefully optimizing the geometry, the frequency of the RF power, and the magnetic field conditions. The HELicon Experiment for Negative ion-I source is a single-driver helicon plasma source that is being studied for the development of a large, multi-driver negative hydrogen ion source. In this paper, the details of the single-driver machine and the results from the characterization of the device are presented. A parametric study at different pressures and magnetic field values using a 13.56 MHz RF source has been carried out in argon plasma as an initial step towards source characterization. A theoretical model is also presented for the particle and power balance in the plasma. The ambipolar diffusion process taking place in a magnetized helicon plasma is also discussed.

  19. Theoretical and numerical study of axisymmetric lattice Boltzmann models

    NASA Astrophysics Data System (ADS)

    Huang, Haibo; Lu, Xi-Yun

    2009-07-01

    The forcing term in the lattice Boltzmann equation (LBE) is usually used to mimic the Navier-Stokes equations with a body force. To derive an axisymmetric model, forcing terms are incorporated into the two-dimensional (2D) LBE to mimic the additional axisymmetric contributions in the 2D Navier-Stokes equations in cylindrical coordinates. Many axisymmetric lattice Boltzmann D2Q9 models have been obtained through the Chapman-Enskog expansion to recover the 2D Navier-Stokes equations in cylindrical coordinates [I. Halliday, Phys. Rev. E 64, 011208 (2001); K. N. Premnath and J. Abraham, Phys. Rev. E 71, 056706 (2005); T. S. Lee, H. Huang, and C. Shu, Int. J. Mod. Phys. C 17, 645 (2006); T. Reis and T. N. Phillips, Phys. Rev. E 75, 056703 (2007); J. G. Zhou, Phys. Rev. E 78, 036701 (2008)]. The theoretical differences between them are discussed in detail. Numerical studies were also carried out by simulating two different flows to compare the models' accuracy and τ sensitivity. It is found that all these models obtain accurate results with second-order spatial accuracy. However, model C [J. G. Zhou, Phys. Rev. E 78, 036701 (2008)] is the most stable in terms of τ sensitivity. It is also found that if the fluid density is defined in its usual way and is not directly involved in the source terms, the lattice Boltzmann model seems more stable.
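
    As a concrete reminder of how a forcing/source term enters the D2Q9 update, the sketch below drives a periodic channel with a constant body force added to the collision step via the simple w_i 3(e_i · F) scheme. It is a generic illustration, not an implementation of any of the axisymmetric models cited above.

      import numpy as np

      # Generic D2Q9 BGK channel flow driven by a constant body force FX,
      # showing where a forcing/source term enters the collision step.
      NX, NY, TAU, FX = 32, 32, 0.8, 1e-6
      e = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
                    [1, 1], [-1, 1], [-1, -1], [1, -1]])
      w = np.array([4/9] + [1/9] * 4 + [1/36] * 4)
      opp = [0, 3, 4, 1, 2, 7, 8, 5, 6]               # opposite directions
      f = np.ones((9, NX, NY)) * w[:, None, None]     # rho = 1, fluid at rest

      for step in range(2000):
          rho = f.sum(axis=0)
          ux = (f * e[:, 0, None, None]).sum(axis=0) / rho
          uy = (f * e[:, 1, None, None]).sum(axis=0) / rho
          u2 = ux**2 + uy**2
          for i in range(9):
              eu = e[i, 0] * ux + e[i, 1] * uy
              feq = w[i] * rho * (1 + 3*eu + 4.5*eu**2 - 1.5*u2)
              src = 3.0 * w[i] * e[i, 0] * FX          # simple forcing term
              f[i] += -(f[i] - feq) / TAU + src        # BGK collision + source
              f[i] = np.roll(np.roll(f[i], e[i, 0], axis=0), e[i, 1], axis=1)
          fb = f[opp]                                  # full-way bounce-back
          f[:, :, 0], f[:, :, -1] = fb[:, :, 0], fb[:, :, -1]

      print("mean centerline velocity:", ux[:, NY // 2].mean())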

  20. Verification and Validation of the k-kL Turbulence Model in FUN3D and CFL3D Codes

    NASA Technical Reports Server (NTRS)

    Abdol-Hamid, Khaled S.; Carlson, Jan-Renee; Rumsey, Christopher L.

    2015-01-01

    The implementation of the k-kL turbulence model using multiple computational fluid dynamics (CFD) codes is reported herein. The k-kL model is a two-equation turbulence model based on Abdol-Hamid's closure and Menter's modification to Rotta's two-equation model. Rotta shows that a reliable transport equation can be formed from the turbulent length scale L and the turbulent kinetic energy k. Rotta's equation is well suited for term-by-term modeling and displays useful features compared to other two-equation models. An important difference is that this formulation leads to the inclusion of higher-order velocity derivatives in the source terms of the scale equations. This can enhance the ability of Reynolds-averaged Navier-Stokes (RANS) solvers to simulate unsteady flows. The present report documents the formulation of the model as implemented in the CFD codes FUN3D and CFL3D. Methodology, verification, and validation examples are shown. Attached and separated flow cases are documented and compared with experimental data. The results show generally very good comparisons with canonical and experimental data, as well as matching results code-to-code. The results from this formulation are similar to or better than results using the SST turbulence model.
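
    Schematically, both transported variables obey a convection-diffusion equation of the generic two-equation form below; the distinctive feature noted above is that, for the length-scale variable, the source term contains higher-order (second) derivatives of the mean velocity. This is a skeleton for orientation, not the exact FUN3D/CFL3D formulation:

      \frac{\partial(\rho\phi)}{\partial t}
        + \frac{\partial(\rho u_j \phi)}{\partial x_j}
        = \frac{\partial}{\partial x_j}\left[\left(\mu + \frac{\mu_t}{\sigma_\phi}\right)
          \frac{\partial\phi}{\partial x_j}\right] + S_\phi,
        \qquad \phi \in \{k,\ kL\}.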

  1. Assessments of a Turbulence Model Based on Menter's Modification to Rotta's Two-Equation Model

    NASA Technical Reports Server (NTRS)

    Abdol-Hamid, Khaled S.

    2013-01-01

    The main objective of this paper is to construct a turbulence model with a more reliable second equation simulating the length scale. In the present paper, we assess the length scale equation based on Menter's modification to Rotta's two-equation model. Rotta shows that a reliable second equation can be formed in an exact transport equation from the turbulent length scale L and kinetic energy. Rotta's equation is well suited for term-by-term modeling and shows some interesting features compared to other approaches. The most important difference is that the formulation leads to a natural inclusion of higher-order velocity derivatives into the source terms of the scale equation, which has the potential to enhance the capability of Reynolds-averaged Navier-Stokes (RANS) to simulate unsteady flows. The model is implemented in the PAB3D solver with complete formulation, usage methodology, and validation examples to demonstrate its capabilities. The detailed studies include grid convergence. Near-wall and shear flow cases are documented and compared with experimental and Large Eddy Simulation (LES) data. The results from this formulation are as good as or better than the well-known SST turbulence model and much better than k-epsilon results. Overall, the study provides useful insights into the model's capability in predicting attached and separated flows.

  2. Spatio-temporal modeling of chronic PM10 exposure for the Nurses' Health Study

    NASA Astrophysics Data System (ADS)

    Yanosky, Jeff D.; Paciorek, Christopher J.; Schwartz, Joel; Laden, Francine; Puett, Robin; Suh, Helen H.

    2008-06-01

    Chronic epidemiological studies of airborne particulate matter (PM) have typically characterized the chronic PM exposures of their study populations using city- or county-wide ambient concentrations, which limit the studies to areas where nearby monitoring data are available and which ignore within-city spatial gradients in ambient PM concentrations. To provide more spatially refined and precise chronic exposure measures, we used a Geographic Information System (GIS)-based spatial smoothing model to predict monthly outdoor PM10 concentrations in the northeastern and midwestern United States. This model included monthly smooth spatial terms and smooth regression terms of GIS-derived and meteorological predictors. Using cross-validation and other pre-specified selection criteria, terms for distance to road by road class, urban land use, block group and county population density, point- and area-source PM10 emissions, elevation, wind speed, and precipitation were found to be important determinants of PM10 concentrations and were included in the final model. Final model performance was strong (cross-validation R2=0.62), with little bias (-0.4 μg m-3) and high precision (6.4 μg m-3). The final model (with monthly spatial terms) performed better than a model with seasonal spatial terms (cross-validation R2=0.54). The addition of GIS-derived and meteorological predictors improved predictive performance over spatial smoothing (cross-validation R2=0.51) or inverse distance weighted interpolation (cross-validation R2=0.29) methods alone and increased the spatial resolution of predictions. The model performed well in both rural and urban areas, across seasons, and across the entire time period. The strong model performance demonstrates its suitability as a means to estimate individual-specific chronic PM10 exposures for large populations.
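
    The three performance summaries quoted above (cross-validation R2, bias, precision) can be computed from held-out predictions as below; the definition of R2 shown is one common choice, and the example data are synthetic, tuned only to echo the reported bias and precision.

      import numpy as np

      def cv_metrics(obs, pred):
          """Cross-validation R^2 (one common definition), mean bias
          (pred - obs), and precision as the RMS of the residuals."""
          obs, pred = np.asarray(obs, float), np.asarray(pred, float)
          resid = pred - obs
          r2 = 1 - (resid ** 2).sum() / ((obs - obs.mean()) ** 2).sum()
          return r2, resid.mean(), np.sqrt((resid ** 2).mean())

      # Illustrative values, not the study's data:
      rng = np.random.default_rng(0)
      truth = rng.gamma(4.0, 6.0, 500)                 # monthly PM10, ug/m3
      model = truth + rng.normal(-0.4, 6.4, 500)       # bias/precision as reported
      print("R2=%.2f  bias=%.1f  precision=%.1f" % cv_metrics(truth, model))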

  3. Implementation issues of the nearfield equivalent source imaging microphone array

    NASA Astrophysics Data System (ADS)

    Bai, Mingsian R.; Lin, Jia-Hong; Tseng, Chih-Wen

    2011-01-01

    This paper revisits a nearfield microphone array technique termed nearfield equivalent source imaging (NESI), proposed previously. In particular, various issues concerning the implementation of the NESI algorithm are examined. NESI can be implemented in both the time domain and the frequency domain. Acoustical variables including sound pressure, particle velocity, active intensity, and sound power are calculated by using multichannel inverse filters. Issues concerning sensor deployment are also investigated for the nearfield array. The uniform array outperformed a random array previously optimized for far-field imaging, which contradicts the conventional wisdom of far-field arrays. For applications in which only a patch array with scarce sensors is available, a virtual microphone approach is employed to ameliorate edge effects using extrapolation and to improve imaging resolution using interpolation. To enhance the processing efficiency of the time-domain NESI, an eigensystem realization algorithm (ERA) is developed. Several filtering methods are compared in terms of computational complexity. Significant savings in computation can be achieved using the ERA and the frequency-domain NESI, as compared to the traditional method. The NESI technique was also experimentally validated using practical sources, including a 125 cc scooter and a wooden box model with a loudspeaker fitted inside. The NESI technique proved effective in identifying the broadband and non-stationary noise produced by these sources.
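
    The core inverse step can be sketched as a regularized frequency-domain inversion of a propagation matrix of free-field Green's functions. The geometry, frequency, and Tikhonov parameter below are assumptions for illustration, not the paper's configuration.

      import numpy as np

      # Estimate equivalent source strengths q from measured pressures p by
      # regularized inversion of the propagation matrix G.
      rng = np.random.default_rng(3)
      k = 2 * np.pi * 1000 / 343                    # wavenumber at 1 kHz
      mics = rng.uniform(0, 0.3, (16, 3))           # 16 sensors (assumed layout)
      mics[:, 2] = 0.05                             # 5 cm standoff
      srcs = rng.uniform(0, 0.3, (25, 3))           # 25 equivalent sources
      srcs[:, 2] = 0.0                              # on the source plane
      r = np.linalg.norm(mics[:, None, :] - srcs[None, :, :], axis=2)
      G = np.exp(-1j * k * r) / (4 * np.pi * r)     # free-field Green's functions

      q_true = np.zeros(25, complex)
      q_true[12] = 1.0                              # one active source
      p = G @ q_true + 1e-4 * rng.standard_normal(16)

      lam = 1e-8                                    # Tikhonov parameter (assumed)
      q = np.linalg.solve(G.conj().T @ G + lam * np.eye(25), G.conj().T @ p)
      print("peak source index:", np.argmax(np.abs(q)))   # ideally 12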

  4. The integration of familiarity and recollection information in short-term recognition: modeling speed-accuracy trade-off functions.

    PubMed

    Göthe, Katrin; Oberauer, Klaus

    2008-05-01

    Dual process models postulate familiarity and recollection as the basis of the recognition process. We investigated the time-course of integration of the two information sources to one recognition judgment in a working memory task. We tested 24 subjects with a response signal variant of the modified Sternberg recognition task (Oberauer, 2001) to isolate the time course of three different probe types indicating different combinations of familiarity and source information. We compared two mathematical models implementing different ways of integrating familiarity and recollection. Within each model, we tested three assumptions about the nature of the familiarity signal, with familiarity having (a) only positive values, indicating similarity of the probe with the memory list, (b) only negative values, indicating novelty, or (c) both positive and negative values. Both models provided good fits to the data. A model combining the outputs of both processes additively (Integration Model) gave an overall better fit to the data than a model based on a continuous familiarity signal and a probabilistic all-or-none recollection process (Dominance Model).
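
    The two model classes can be caricatured in a few lines: the Integration Model sums the two continuous signals before a single decision, while the Dominance Model lets an all-or-none recollection outcome override familiarity. The parameter values and link functions below are illustrative only, not the paper's fitted models.

      import math

      def p_yes_integration(familiarity, recollection, w=1.5, bias=0.0):
          # Additive combination of the two signals through a logistic link.
          evidence = familiarity + w * recollection - bias
          return 1.0 / (1.0 + math.exp(-evidence))

      def p_yes_dominance(familiarity, p_recollect, recollected_yes):
          # All-or-none recollection dominates when it occurs; otherwise the
          # judgment falls back on continuous familiarity.
          p_fam = 1.0 / (1.0 + math.exp(-familiarity))
          p_rec = 1.0 if recollected_yes else 0.0
          return p_recollect * p_rec + (1.0 - p_recollect) * p_fam

      print(p_yes_integration(0.5, 1.0))       # familiar and recollected probe
      print(p_yes_dominance(0.5, 0.7, True))   # recollection succeeds 70% of trials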

  5. Long-Term Stability of Radio Sources in VLBI Analysis

    NASA Technical Reports Server (NTRS)

    Engelhardt, Gerald; Thorandt, Volkmar

    2010-01-01

    Positional stability of radio sources is an important requirement for modeling a single source position over the complete span of VLBI data, presently more than 20 years. The stability of radio sources can be verified by analyzing time series of radio source coordinates. One approach is a statistical test for the normal distribution of the residuals to the weighted mean for each radio source component of the time series. Systematic phenomena in the time series can thus be detected. Nevertheless, an inspection of rate estimates and weighted root-mean-square (WRMS) variations about the mean is also necessary. On the basis of the time series computed by the BKG group in the frame of the ICRF2 working group, 226 stable radio sources with an axis stability of 10 μas could be identified. They include 100 ICRF2 axes-defining sources which were determined independently of the method applied in the ICRF2 working group. 29 stable radio sources with a source structure index of less than 3.0 can also be used to augment the 295 ICRF2 defining sources.
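
    The per-component checks described above (weighted mean, WRMS about the mean, rate estimate, and a normality test of the residuals) can be sketched as follows; the BKG selection criteria are more involved, and the series below is synthetic.

      import numpy as np
      from scipy import stats

      def stability_summary(t, x, sigma):
          """Weighted mean, WRMS about the mean, weighted linear rate, and a
          normality test of residuals for one coordinate component."""
          w = 1.0 / sigma**2
          mean = np.sum(w * x) / np.sum(w)
          resid = x - mean
          wrms = np.sqrt(np.sum(w * resid**2) / np.sum(w))
          rate = np.polyfit(t, x, 1, w=np.sqrt(w))[0]      # weighted slope
          p_normal = stats.shapiro(resid / sigma).pvalue   # normalized residuals
          return mean, wrms, rate, p_normal

      rng = np.random.default_rng(4)
      t = np.linspace(1990, 2010, 120)
      sig = rng.uniform(0.05, 0.2, t.size)                 # formal errors
      x = rng.normal(0.0, sig)                             # a "stable" source
      print("mean %.3f  wrms %.3f  rate %.4f  p(normal) %.2f"
            % stability_summary(t, x, sig))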

  6. Part 1 of a Computational Study of a Drop-Laden Mixing Layer

    NASA Technical Reports Server (NTRS)

    Okong'o, Nora A.; Bellan, Josette

    2004-01-01

    This first of three reports on a computational study of a drop-laden temporal mixing layer presents the results of direct numerical simulations (DNS) of well-resolved flow fields and the derivation of the large-eddy simulation (LES) equations that would govern the larger scales of a turbulent flow field. The mixing layer consisted of two counterflowing gas streams, one of which was initially laden with evaporating liquid drops. The gas phase was composed of two perfect gas species, the carrier gas and the vapor emanating from the drops, and was computed in an Eulerian reference frame, whereas each drop was tracked individually in a Lagrangian manner. The flow perturbations that were initially imposed on the layer caused mixing and eventual transition to turbulence. The DNS database obtained included transitional states for layers with various liquid mass loadings. For the DNS, the gas-phase equations were the compressible Navier-Stokes equations for conservation of momentum and additional conservation equations for total energy and species mass. These equations included source terms representing the effect of the drops on the mass, momentum, and energy of the gas phase. From the DNS equations, the expression for the irreversible entropy production (dissipation) was derived and used to determine the dissipation due to the source terms. The LES equations were derived by spatially filtering the DNS set and the magnitudes of the terms were computed at transitional states, leading to a hierarchy of terms to guide simplification of the LES equations. It was concluded that effort should be devoted to the accurate modeling of both the subgrid-scale fluxes and the filtered source terms, which were the dominant unclosed terms appearing in the LES equations.
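
    In schematic notation, the filtered momentum equation contains both the subgrid-scale flux and a filtered drop source term, the two dominant unclosed terms identified above (the notation is generic, not a transcription of the report's equations):

      \frac{\partial(\bar{\rho}\tilde{u}_i)}{\partial t}
      + \frac{\partial(\bar{\rho}\tilde{u}_i\tilde{u}_j)}{\partial x_j}
      = -\frac{\partial\bar{p}}{\partial x_i}
      + \frac{\partial\tilde{\sigma}_{ij}}{\partial x_j}
      - \frac{\partial\tau_{ij}}{\partial x_j}
      + \bar{S}_i,
      \qquad
      \tau_{ij} = \bar{\rho}\left(\widetilde{u_i u_j} - \tilde{u}_i\tilde{u}_j\right),

    where overbars denote spatial filtering, tildes Favre (density-weighted) filtering, tau_ij the subgrid-scale flux, and S-bar_i the filtered drop momentum source.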

  7. A Physical Basis for Ms-Yield Scaling in Hard Rock and Implications for Late-Time Damage of the Source Medium

    DOE PAGES

    Patton, Howard John

    2016-04-11

    Surface wave magnitude Ms for a compilation of 72 nuclear tests detonated in hard rock media, for which yields and burial depths have been reported in the literature, is shown to scale with yield W as a + b × log[W], where a = 2.50 ± 0.08 and b = 0.80 ± 0.05. While the exponent b is consistent with an Ms scaling model for fully coupled, normal containment-depth explosions, the intercept a is offset 0.45 magnitude units lower than the model. The cause of the offset is important to understand in terms of the explosion source. Hard rock explosions conducted in extensional and compressional stress regimes show similar offsets, an indication that the tectonic setting in which an explosion occurs plays no role in causing the offset. The scaling model accounts for the effects of source medium material properties on the generation of 20-s period Rayleigh wave amplitudes. Aided by thorough characterizations of the explosion and tectonic release sources, an extensive analysis of the 1963 October 26 Shoal nuclear test, detonated in granite 27 miles southeast of Fallon, NV, shows that the offset is consistent with the predictions of a material damage source model related to non-linear stress wave interactions with the free surface. This source emits Rayleigh waves with polarity opposite to waves emitted by the explosion. The Shoal results were extended to analyse surface waves from the 1962 February 15 Hardhat nuclear test, the 1988 September 14 Soviet Joint Verification Experiment, and the anomalous 1979 August 18 northeast Balapan explosion, which exhibits an opposite-polarity, azimuth-independent source component U1 compared to an explosion. Modelling these tests shows that Rayleigh wave amplitudes generated by the damage source are nearly as large as or larger than amplitudes from the explosion. As such, destructive interference can be drastic, introducing metastable conditions due to the sensitivity of reduced amplitudes to the Rayleigh wave initial phase angles of the explosion and damage sources. This metastability is a likely source of scatter in Ms-yield scaling observations. The agreement of the observed scaling exponent b with the model suggests that the damage source strength does not vary much with yield, in contrast to explosions conducted in weak media, where Ms scaling rates are greater than the model predicts and the yield dependence of the damage source strength is significant. This difference in scaling behaviour is a consequence of source medium material properties.
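
    The reported scaling form is easy to exercise numerically: fit Ms = a + b log10(W) and invert it for yield. The fit below runs on synthetic data generated with the paper's published coefficients; the scatter level is an assumption.

      import numpy as np

      # Synthetic Ms values built from the reported scaling (a=2.50, b=0.80);
      # the 0.15-unit scatter is illustrative, not the data set's.
      rng = np.random.default_rng(5)
      logW = rng.uniform(0, 3, 72)                 # 72 tests, ~1 to 1000 kt
      ms = 2.50 + 0.80 * logW + rng.normal(0, 0.15, 72)

      b, a = np.polyfit(logW, ms, 1)
      print(f"a = {a:.2f}, b = {b:.2f}")           # recovers ~2.50, ~0.80

      def yield_from_ms(m, a=2.50, b=0.80):
          """Invert the scaling to estimate yield (kt) from Ms."""
          return 10 ** ((m - a) / b)

      print(f"Ms 4.9 -> ~{yield_from_ms(4.9):.0f} kt")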

  8. A Physical Basis for Ms-Yield Scaling in Hard Rock and Implications for Late-Time Damage of the Source Medium

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Patton, Howard John

    Surface wave magnitude Ms for a compilation of 72 nuclear tests detonated in hard rock media, for which yields and burial depths have been reported in the literature, is shown to scale with yield W as a + b × log[W], where a = 2.50 ± 0.08 and b = 0.80 ± 0.05. While the exponent b is consistent with an Ms scaling model for fully coupled, normal containment-depth explosions, the intercept a is offset 0.45 magnitude units lower than the model. The cause of the offset is important to understand in terms of the explosion source. Hard rock explosions conducted in extensional and compressional stress regimes show similar offsets, an indication that the tectonic setting in which an explosion occurs plays no role in causing the offset. The scaling model accounts for the effects of source medium material properties on the generation of 20-s period Rayleigh wave amplitudes. Aided by thorough characterizations of the explosion and tectonic release sources, an extensive analysis of the 1963 October 26 Shoal nuclear test, detonated in granite 27 miles southeast of Fallon, NV, shows that the offset is consistent with the predictions of a material damage source model related to non-linear stress wave interactions with the free surface. This source emits Rayleigh waves with polarity opposite to waves emitted by the explosion. The Shoal results were extended to analyse surface waves from the 1962 February 15 Hardhat nuclear test, the 1988 September 14 Soviet Joint Verification Experiment, and the anomalous 1979 August 18 northeast Balapan explosion, which exhibits an opposite-polarity, azimuth-independent source component U1 compared to an explosion. Modelling these tests shows that Rayleigh wave amplitudes generated by the damage source are nearly as large as or larger than amplitudes from the explosion. As such, destructive interference can be drastic, introducing metastable conditions due to the sensitivity of reduced amplitudes to the Rayleigh wave initial phase angles of the explosion and damage sources. This metastability is a likely source of scatter in Ms-yield scaling observations. The agreement of the observed scaling exponent b with the model suggests that the damage source strength does not vary much with yield, in contrast to explosions conducted in weak media, where Ms scaling rates are greater than the model predicts and the yield dependence of the damage source strength is significant. This difference in scaling behaviour is a consequence of source medium material properties.

  9. Source contributions of fine particulate matter during one winter haze episode in Xi'an, China

    NASA Astrophysics Data System (ADS)

    Yang, X.; Wu, Q.

    2017-12-01

    Long-term exposure to high levels of fine particulate matter (PM2.5) is associated with adverse effects on human health, the ecological environment, and climate. Identifying the major source regions of fine particulate matter is essential to proposing proper joint prevention and control strategies for heavy haze mitigation. In this work, the Comprehensive Air Quality Model with extensions (CAMx), together with the Particulate Source Apportionment Technology (PSAT) and the Weather Research and Forecasting (WRF) model, has been applied to analyze the major source regions of PM2.5 in Xi'an during the heavy haze episodes in winter (29 December 2016 - 5 January 2017). First, based on the model evaluation of the daily PM2.5 concentrations for the two months, the model performs well: the fraction of predictions within a factor of 2 of the observations (FAC2) is 84%, and the correlation coefficient (R) is 0.80 in Xi'an. Using PSAT in the CAMx model, a detailed source region contribution matrix is derived for all points within the Xi'an region, its six surrounding areas, and long-range regional transport. The results show that local emissions in Xi'an are the main source in the downtown area, contributing 72.9%, and that the contribution of transport between adjacent areas depends on wind direction. Meanwhile, three different suburban areas were selected for detailed analysis of fine-particle sources. Compared with the downtown area, the suburban areas have more diverse sources, with transport contributing 40%-82%. In the suburban areas, regional inflows play an important role in fine-particle concentrations, indicating a strong need for regional joint emission control efforts. The results enhance the quantitative understanding of PM2.5 source regions and provide a basis for policymaking to advance the control of pollution in Xi'an, China.
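
    The two evaluation statistics quoted above are straightforward to compute; the sketch below uses made-up concentration series, not the Xi'an observations.

      import numpy as np

      def fac2(obs, pred):
          """Fraction of predictions within a factor of 2 of the observations."""
          ratio = np.asarray(pred, float) / np.asarray(obs, float)
          return np.mean((ratio >= 0.5) & (ratio <= 2.0))

      def corr(obs, pred):
          """Pearson correlation coefficient R."""
          return np.corrcoef(obs, pred)[0, 1]

      # Illustrative daily PM2.5 series (ug/m3), not the Xi'an data:
      rng = np.random.default_rng(6)
      obs = rng.lognormal(4.5, 0.5, 60)
      pred = obs * rng.lognormal(0.0, 0.35, 60)
      print(f"FAC2 = {fac2(obs, pred):.2f}, R = {corr(obs, pred):.2f}")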

  10. Comparison of debris flux models

    NASA Astrophysics Data System (ADS)

    Sdunnus, H.; Beltrami, P.; Klinkrad, H.; Matney, M.; Nazarenko, A.; Wegener, P.

    The availability of models to estimate the impact risk from the man-made space debris and the natural meteoroid environment is essential for both manned and unmanned satellite missions. Various independent tools based on different approaches have been developed in recent years. Due to increased knowledge of the debris environment and its sources, e.g. from improved measurement capabilities, these models can be updated regularly, providing more detailed and more reliable simulations. This paper addresses an in-depth, quantitative comparison of widely distributed debris flux models which were recently updated, namely ESA's MASTER 2001 model, NASA's ORDEM 2000, and the Russian SDPA 2000 model. The comparison was performed in the frame of the work of the 20th Interagency Debris Coordination (IADC) meeting held in Surrey, UK.

    ORDEM 2000: ORDEM 2000 uses careful empirical estimates of the orbit populations based on three primary data sources - the US Space Command Catalog, the Haystack Radar, and the Long Duration Exposure Facility spacecraft returned surfaces. Further data (e.g. HAX and Goldstone radars, impacts on Shuttle windows and radiators, and others) were used to adjust these populations for regions in time, size, and space not covered by the primary data sets. Some interpolation and extrapolation to regions with no data (such as projections into the future) was provided by the EVOLVE model.

    MASTER 2001: The ESA MASTER model offers a full three-dimensional description of the terrestrial debris distribution reaching from LEO up to the GEO region. Flux results relative to an orbiting target or to an inertial volume can be resolved into source terms, impactor characteristics and orbit, as well as impact velocity and direction. All relevant debris source terms are considered by the MASTER model. For each simulated source, a corresponding debris generation model in terms of mass/diameter distribution, additional velocities, and directional spreading has been developed. A comprehensive perturbation model was used to propagate all objects to a reference epoch.

    SDPA 2000: The Russian Space Debris Prediction and Analysis (SDPA) model is a semi-analytical stochastic tool for medium- and long-term forecasts of the man-made debris environment (with sizes larger than 1 mm), for construction of spatial density and velocity distributions in LEO and GEO, as well as for risk evaluation. The latest version, SDPA 2000, consists of ten individual modules related to the aforementioned tasks. The total characteristics of space debris of different sizes are considered (without partitioning these characteristics into specific sources). The current space debris environment is characterised (a) by the dependence of spatial density on the altitude and latitude of a point, as well as on object size, and (b) by a statistical distribution of the magnitude and direction of space object velocities in an inertial geocentric coordinate system. These characteristics are constructed on the basis of the complex application of the accessible measurement information and a series of a priori data.

    The comparison is performed by applying the models to a large number of target orbits specified by a grid in terms of impactor size (6 gridpoints), target orbit perigee altitude (16 gridpoints), and target orbit inclination (15 gridpoints). These results provide a characteristic diagram of integral fluxes for all models, which will be compared. Further to this, the models are applied to orbits of particular interest, namely the ISS orbit and a sun-synchronous orbit. For these cases, the comparison will include flux directionality and velocity.

    References: 1. Liou, J.-C., M. J. Matney, P. D. Anz-Meador, D. Kessler, M. Jansen, and J. R. Theall, 2001, "The New NASA Orbital Debris Engineering Model ORDEM2000", NASA/TP-2002-210780. 2. P. Wegener, J. Bendisch, K. D. Bunte, H. Sdunnus, "Upgrade of the ESA MASTER Model", Final Report of ESOC/TOS-GMA contract 12318/97/D/IM, May 2000. 3. A. I. Nazarenko, I. L. Menchikov, "Engineering Model of Space Debris Environment", Third European Conference on Space Debris, Darmstadt, Germany, March 2001.
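
    As a rough illustration of the comparison protocol, the sketch below enumerates the 6 x 16 x 15 grid of impactor sizes, perigee altitudes, and inclinations and queries each model for a flux. The flux_model function is a hypothetical placeholder; none of MASTER, ORDEM, or SDPA exposes such a Python interface.

      from itertools import product

      # Schematic of the IADC comparison grid; all values are assumptions.
      SIZES_M = [1e-4, 1e-3, 1e-2, 0.1, 1.0, 10.0]       # 6 impactor sizes
      PERIGEES_KM = [300 + 100 * i for i in range(16)]   # 16 perigee altitudes
      INCLINATIONS = [i * 12 for i in range(15)]         # 15 inclinations

      def flux_model(name, size, perigee, incl):
          """Hypothetical placeholder returning a flux in 1/(m^2 yr)."""
          return 1e-5 / size / (1 + abs(perigee - 800) / 500) * (1 + incl / 180)

      results = {}
      for size, per, inc in product(SIZES_M, PERIGEES_KM, INCLINATIONS):
          results[(size, per, inc)] = {
              m: flux_model(m, size, per, inc)
              for m in ("MASTER 2001", "ORDEM 2000", "SDPA 2000")}
      print(len(results), "grid points evaluated")       # 6*16*15 = 1440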

  11. Acoustic constituents of prosodic typology

    NASA Astrophysics Data System (ADS)

    Komatsu, Masahiko

    Different languages sound different, and a considerable part of this derives from typological differences in prosody. Although such differences are often described in terms of lexical accent types (stress accent, pitch accent, and tone; e.g. English, Japanese, and Chinese, respectively) and rhythm types (stress-, syllable-, and mora-timed rhythms; e.g. English, Spanish, and Japanese, respectively), it is unclear whether these types are determined in terms of acoustic properties. The thesis intends to provide a potential basis for the description of prosody in terms of acoustics. It argues, through several experimental-phonetic studies, for the hypothesis that the source component of the source-filter model (acoustic features) approximately corresponds to prosody (linguistic features). The study consists of four parts. (1) Preliminary experiment: Perceptual language identification tests were performed using English and Japanese speech samples whose frequency spectral information (i.e. the non-source component) is heavily reduced. The results indicated that humans can discriminate languages with such signals. (2) Discussion of the linguistic information that the source component contains: This part constitutes the foundation of the argument of the thesis. Perception tests of consonants with the source signal indicated that the source component carries information on broad categories of phonemes that contributes to the creation of rhythm. (3) Acoustic analysis: Speech samples of Chinese, English, Japanese, and Spanish, differing in prosodic types, were analyzed. These languages showed differences in the acoustic characteristics of the source component. (4) Perceptual experiment: A language identification test for the above four languages was performed using the source signal with its acoustic features parameterized. It revealed that humans can discriminate prosodic types solely from the source features and that the discrimination becomes easier as acoustic information increases. This series of studies showed the correspondence of the source component to prosodic features. In linguistics, prosodic types have not been discussed purely in terms of acoustics; they are usually related to the function of prosody or to phonological units such as phonemes. The present thesis focuses on acoustics and makes a contribution to establishing a crosslinguistic description system for prosody.
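
    The source/filter split that the thesis relies on can be approximated with linear-prediction analysis: fit LPC coefficients per frame and keep the prediction residual as the "source" signal. The sketch below uses the autocorrelation method with an assumed model order; it is an analysis stand-in, not the thesis's exact procedure.

      import numpy as np
      from scipy.signal import lfilter

      def lpc_residual(frame, order=16):
          """Split a frame into an LPC spectral envelope (the 'filter') and a
          prediction residual (a rough stand-in for the 'source' component)."""
          r = np.correlate(frame, frame, "full")[len(frame) - 1:][:order + 1]
          a = np.zeros(order + 1)
          a[0], err = 1.0, r[0]
          for i in range(1, order + 1):          # Levinson-Durbin recursion
              k = -(r[i] + np.dot(a[1:i], r[i - 1:0:-1])) / err
              a[1:i + 1] += k * a[i - 1::-1]     # reflection update (a[i] = k)
              err *= 1.0 - k * k
          return lfilter(a, [1.0], frame)        # residual e[n] = A(z) x[n]

      rng = np.random.default_rng(7)
      frame = np.sin(2 * np.pi * 120 * np.arange(400) / 8000)  # toy voiced frame
      frame += 0.01 * rng.standard_normal(frame.size)          # avoid degeneracy
      print("residual RMS:", np.sqrt(np.mean(lpc_residual(frame) ** 2)))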

  12. The Life of Meaning: A Model of the Positive Contributions to Well-Being from Veterinary Work.

    PubMed

    Cake, Martin A; Bell, Melinda A; Bickley, Naomi; Bartram, David J

    2015-01-01

    We present a veterinary model of work-derived well-being, and argue that educators should not only present a (potentially self-fulfilling) stress management model of future wellness, but also balance this with a positive psychology-based approach depicting a veterinary career as a richly generative source of satisfaction and fulfillment. A review of known sources of satisfaction for veterinarians finds them to be based mostly in meaningful purpose, relationships, and personal growth. This positions veterinary well-being within the tradition of eudaimonia, an ancient concept of achieving one's best possible self, and a term increasingly employed to describe well-being derived from living a life that is engaging, meaningful, and deeply fulfilling. The theory of eudaimonia for workplace well-being should inform development of personal resources that foster resilience in undergraduate and graduate veterinarians.

  13. Estimates of ground level TSP, SO2 and HCl for a municipal waste incinerator to be located at Tynes Bay - Bermuda

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kent Simmons, J.A.; Knap, A.H.

    1991-04-01

    The computer model Industrial Source Complex Short Term (ISCST) was used to study the stack emissions from a refuse incinerator proposed for the island of Bermuda. The model predicts that the highest ground level pollutant concentrations will occur near Prospect, 800 m to 1,000 m due south of the stack. The authors installed a portable laboratory and instruments at Prospect to begin making air quality baseline measurements. By comparing the model's estimates of the incinerator contribution to the background levels measured at the site, they predicted that stack emissions would not cause an increase in TSP or SO2. The incinerator will be a significant source of HCl in Bermuda's air, with ambient levels approaching air quality guidelines.
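
    ISC-type models evaluate a steady-state Gaussian plume; for a ground-level receptor with full ground reflection, the working formula reduces to the function below. All input values are illustrative assumptions, not the Tynes Bay stack parameters.

      import math

      def ground_conc(q_g_s, u_m_s, sigma_y, sigma_z, y, h_eff):
          """Ground-level concentration (g/m^3) from a steady Gaussian plume
          with total ground reflection:
          C = Q/(pi*u*sy*sz) * exp(-y^2/2sy^2) * exp(-H^2/2sz^2)."""
          return (q_g_s / (math.pi * u_m_s * sigma_y * sigma_z)
                  * math.exp(-y**2 / (2 * sigma_y**2))
                  * math.exp(-h_eff**2 / (2 * sigma_z**2)))

      # Illustrative numbers only: 2 g/s source, 5 m/s wind, dispersion
      # coefficients typical of ~1 km downwind, 40 m effective stack height.
      c = ground_conc(2.0, 5.0, sigma_y=70.0, sigma_z=35.0, y=0.0, h_eff=40.0)
      print(f"centerline concentration ~ {c * 1e6:.1f} ug/m3")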

  14. Physical Education and Reading: A Winning Team.

    ERIC Educational Resources Information Center

    Florida State Dept. of Education, Tallahassee.

    The purposes of this booklet are to acquaint physical education teachers with the meanings of some terms used in reading that are related to physical education, to acquaint physical education teachers with reading skills that can be taught or reinforced through physical education activities, to provide a source or model of such activities, and to…

  15. Openness, Web 2.0 Technology, and Open Science

    ERIC Educational Resources Information Center

    Peters, Michael A.

    2010-01-01

    Open science is a term that is being used in the literature to designate a form of science based on open source models or that utilizes principles of open access, open archiving and open publishing to promote scientific communication. Open science increasingly also refers to open governance and more democratized engagement and control of science…

  16. A simulated approach to estimating PM10 and PM2.5 concentrations downwind from cotton gins

    USDA-ARS?s Scientific Manuscript database

    Cotton gins are required to obtain operating permits from state air pollution regulatory agencies (SAPRA), which regulate the amount of particulate matter that can be emitted. Industrial Source Complex Short Term version 3 (ISCST3) is the Gaussian dispersion model currently used by some SAPRAs to pr...

  17. 10 CFR 50.67 - Accident source term.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... occupancy of the control room under accident conditions without personnel receiving radiation exposures in... 10 Energy 1 2014-01-01 2014-01-01 false Accident source term. 50.67 Section 50.67 Energy NUCLEAR... Conditions of Licenses and Construction Permits § 50.67 Accident source term. (a) Applicability. The...

  18. 10 CFR 50.67 - Accident source term.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... occupancy of the control room under accident conditions without personnel receiving radiation exposures in... 10 Energy 1 2012-01-01 2012-01-01 false Accident source term. 50.67 Section 50.67 Energy NUCLEAR... Conditions of Licenses and Construction Permits § 50.67 Accident source term. (a) Applicability. The...

  19. 10 CFR 50.67 - Accident source term.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... occupancy of the control room under accident conditions without personnel receiving radiation exposures in... 10 Energy 1 2010-01-01 2010-01-01 false Accident source term. 50.67 Section 50.67 Energy NUCLEAR... Conditions of Licenses and Construction Permits § 50.67 Accident source term. (a) Applicability. The...

  20. 10 CFR 50.67 - Accident source term.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... occupancy of the control room under accident conditions without personnel receiving radiation exposures in... 10 Energy 1 2013-01-01 2013-01-01 false Accident source term. 50.67 Section 50.67 Energy NUCLEAR... Conditions of Licenses and Construction Permits § 50.67 Accident source term. (a) Applicability. The...
